Smith Number
23 Jun, 2022

Given a number n, the task is to find out whether this number is a Smith number or not. A Smith Number is a composite number whose sum of digits equals the sum of the digits of its prime factors (counted with multiplicity).

Examples:

Input : n = 4
Output : Yes
Prime factorization = 2, 2 and 2 + 2 = 4
Therefore, 4 is a Smith number

Input : n = 6
Output : No
Prime factorization = 2, 3 and 2 + 3 is not 6.
Therefore, 6 is not a Smith number

Input : n = 666
Output : Yes
Prime factorization = 2, 3, 3, 37 and
2 + 3 + 3 + (3 + 7) = 6 + 6 + 6 = 18
Therefore, 666 is a Smith number

Input : n = 13
Output : No
Prime factorization = 13 and 13 = 13,
but 13 is not a Smith number because it is not a composite number

The idea is to first find all prime numbers below a limit using the Sieve of Sundaram (this is especially useful when we want to check multiple numbers for the Smith property). Then, for every input to be checked, we walk through its prime factors and add up the digits of every prime factor. We also compute the sum of digits of the number itself. Finally, we compare the two sums: if they are equal, we return true.

C++

// C++ program to check whether a number is
// a Smith Number or not.
#include <bits/stdc++.h>
using namespace std;
const int MAX = 10000;

// stores all primes less than or equal to MAX
vector<int> primes;

// utility function for the Sieve of Sundaram
void sieveSundaram()
{
    // In general, the Sieve of Sundaram produces primes smaller
    // than (2*x + 2) for a given number x. Since we want primes
    // smaller than MAX, we reduce MAX to half. This array is used
    // to separate numbers of the form i + j + 2ij from the others,
    // where 1 <= i <= j.
    bool marked[MAX / 2 + 100] = { 0 };

    // Main logic of Sundaram: mark all numbers that
    // do not generate a prime number via 2*i + 1.
    for (int i = 1; i <= (sqrt(MAX) - 1) / 2; i++)
        for (int j = (i * (i + 1)) << 1; j <= MAX / 2; j = j + 2 * i + 1)
            marked[j] = true;

    // Since 2 is a prime number
    primes.push_back(2);

    // Remaining primes are of the form 2*i + 1
    // such that marked[i] is false.
    for (int i = 1; i <= MAX / 2; i++)
        if (marked[i] == false)
            primes.push_back(2 * i + 1);
}

// Returns true if n is a Smith number, else false.
bool isSmith(int n)
{
    int original_no = n;

    // Find the sum of the digits of the prime factors of n
    int pDigitSum = 0;
    for (int i = 0; primes[i] <= n / 2; i++) {
        while (n % primes[i] == 0) {
            // primes[i] is a prime factor;
            // add its digits to pDigitSum.
            int p = primes[i];
            n = n / p;
            while (p > 0) {
                pDigitSum += (p % 10);
                p = p / 10;
            }
        }
    }

    // If n != 1, one prime factor still remains to be summed up
    if (n != 1 && n != original_no) {
        while (n > 0) {
            pDigitSum = pDigitSum + n % 10;
            n = n / 10;
        }
    }

    // All prime factor digits have been summed up;
    // now sum the digits of the original number
    int sumDigits = 0;
    while (original_no > 0) {
        sumDigits = sumDigits + original_no % 10;
        original_no = original_no / 10;
    }

    // If the digit sum of the prime factors equals the
    // digit sum of the original number, return true.
    return (pDigitSum == sumDigits);
}

// Driver code
int main()
{
    // Find all prime numbers below the limit; these
    // are used to find prime factors.
    sieveSundaram();

    cout << "Printing first few Smith Numbers"
            " using isSmith()\n";
    for (int i = 1; i < 500; i++)
        if (isSmith(i))
            cout << i << " ";
    return 0;
}

Java

// Java program to check whether a number is
// a Smith Number or not.
import java.util.Vector;

class Test
{
    static int MAX = 10000;

    // stores all primes less than or equal to MAX
    static Vector<Integer> primes = new Vector<>();

    // utility function for the Sieve of Sundaram
    static void sieveSundaram()
    {
        // This array is used to separate numbers of the form
        // i + j + 2ij from the others, where 1 <= i <= j.
        boolean marked[] = new boolean[MAX / 2 + 100];

        // Mark all numbers that do not generate
        // a prime number via 2*i + 1.
        for (int i = 1; i <= (Math.sqrt(MAX) - 1) / 2; i++)
            for (int j = (i * (i + 1)) << 1; j <= MAX / 2; j = j + 2 * i + 1)
                marked[j] = true;

        // Since 2 is a prime number
        primes.addElement(2);

        // Remaining primes are of the form 2*i + 1
        // such that marked[i] is false.
        for (int i = 1; i <= MAX / 2; i++)
            if (marked[i] == false)
                primes.addElement(2 * i + 1);
    }

    // Returns true if n is a Smith number, else false.
    static boolean isSmith(int n)
    {
        int original_no = n;

        // Find the sum of the digits of the prime factors of n
        int pDigitSum = 0;
        for (int i = 0; primes.get(i) <= n / 2; i++) {
            while (n % primes.get(i) == 0) {
                int p = primes.get(i);
                n = n / p;
                while (p > 0) {
                    pDigitSum += (p % 10);
                    p = p / 10;
                }
            }
        }

        // If n != 1, one prime factor still remains to be summed up
        if (n != 1 && n != original_no) {
            while (n > 0) {
                pDigitSum = pDigitSum + n % 10;
                n = n / 10;
            }
        }

        // Now sum the digits of the original number
        int sumDigits = 0;
        while (original_no > 0) {
            sumDigits = sumDigits + original_no % 10;
            original_no = original_no / 10;
        }

        return (pDigitSum == sumDigits);
    }

    // Driver method
    public static void main(String[] args)
    {
        // Find all prime numbers below the limit; these
        // are used to find prime factors.
        sieveSundaram();

        System.out.println("Printing first few Smith Numbers"
                           + " using isSmith()");
        for (int i = 1; i < 500; i++)
            if (isSmith(i))
                System.out.print(i + " ");
    }
}

Python

# Python (2.x) program to check whether a number is
# a Smith Number or not.
import math

MAX = 10000

# stores all primes less than or equal to MAX
primes = []

# utility function for the Sieve of Sundaram
def sieveSundaram():
    # This list is used to separate numbers of the form
    # i + j + 2ij from the others, where 1 <= i <= j.
    marked = [0] * ((MAX / 2) + 100)

    # Mark all numbers that do not generate
    # a prime number via 2*i + 1.
    i = 1
    while i <= ((math.sqrt(MAX) - 1) / 2):
        j = (i * (i + 1)) << 1
        while j <= MAX / 2:
            marked[j] = 1
            j = j + 2 * i + 1
        i = i + 1

    # Since 2 is a prime number
    primes.append(2)

    # Remaining primes are of the form 2*i + 1
    # such that marked[i] is false.
    i = 1
    while i <= MAX / 2:
        if marked[i] == 0:
            primes.append(2 * i + 1)
        i = i + 1

# Returns true if n is a Smith number, else false.
def isSmith(n):
    original_no = n

    # Find the sum of the digits of the prime factors of n
    pDigitSum = 0
    i = 0
    while primes[i] <= n / 2:
        while n % primes[i] == 0:
            p = primes[i]
            n = n / p
            while p > 0:
                pDigitSum += (p % 10)
                p = p / 10
        i = i + 1

    # If n != 1, one prime factor still remains to be summed up
    if not n == 1 and not n == original_no:
        while n > 0:
            pDigitSum = pDigitSum + n % 10
            n = n / 10

    # Now sum the digits of the original number
    sumDigits = 0
    while original_no > 0:
        sumDigits = sumDigits + original_no % 10
        original_no = original_no / 10

    return pDigitSum == sumDigits

# Driver code
# Find all prime numbers below the limit; these
# are used to find prime factors.
sieveSundaram()
print "Printing first few Smith Numbers using isSmith()"
i = 1
while i < 500:
    if isSmith(i):
        print i,
    i = i + 1

# This code is contributed by Nikita Tiwari

C#

// C# program to check whether a number is
// a Smith Number or not.
using System;
using System.Collections;

class Test
{
    static int MAX = 10000;

    // stores all primes less than or equal to MAX
    static ArrayList primes = new ArrayList(10);

    // utility function for the Sieve of Sundaram
    static void sieveSundaram()
    {
        // This array is used to separate numbers of the form
        // i + j + 2ij from the others, where 1 <= i <= j.
        bool[] marked = new bool[MAX / 2 + 100];

        // Mark all numbers that do not generate
        // a prime number via 2*i + 1.
        for (int i = 1; i <= (Math.Sqrt(MAX) - 1) / 2; i++)
            for (int j = (i * (i + 1)) << 1; j <= MAX / 2; j = j + 2 * i + 1)
                marked[j] = true;

        // Since 2 is a prime number
        primes.Add(2);

        // Remaining primes are of the form 2*i + 1
        // such that marked[i] is false.
        for (int i = 1; i <= MAX / 2; i++)
            if (marked[i] == false)
                primes.Add(2 * i + 1);
    }

    // Returns true if n is a Smith number, else false.
    static bool isSmith(int n)
    {
        int original_no = n;

        // Find the sum of the digits of the prime factors of n
        int pDigitSum = 0;
        for (int i = 0; (int)primes[i] <= n / 2; i++) {
            while (n % (int)primes[i] == 0) {
                int p = (int)primes[i];
                n = n / p;
                while (p > 0) {
                    pDigitSum += (p % 10);
                    p = p / 10;
                }
            }
        }

        // If n != 1, one prime factor still remains to be summed up
        if (n != 1 && n != original_no) {
            while (n > 0) {
                pDigitSum = pDigitSum + n % 10;
                n = n / 10;
            }
        }

        // Now sum the digits of the original number
        int sumDigits = 0;
        while (original_no > 0) {
            sumDigits = sumDigits + original_no % 10;
            original_no = original_no / 10;
        }

        return (pDigitSum == sumDigits);
    }

    // Driver method
    public static void Main()
    {
        // Find all prime numbers below the limit; these
        // are used to find prime factors.
        sieveSundaram();

        Console.WriteLine("Printing first few Smith Numbers"
                          + " using isSmith()");
        for (int i = 1; i < 500; i++)
            if (isSmith(i))
                Console.Write(i + " ");
    }
}
// This code is contributed by mits

PHP

<?php
// PHP program to check whether a
// number is a Smith Number or not.
$MAX = 10000;

// stores all primes less than or equal to $MAX
$primes = array();

// utility function for the Sieve of Sundaram
function sieveSundaram()
{
    global $MAX, $primes;

    // This array is used to separate numbers of the form
    // i + j + 2ij from the others, where 1 <= i <= j.
    $marked = array_fill(0, ($MAX / 2 + 100), false);

    // Mark all numbers that do not generate
    // a prime number via 2*i + 1.
    for ($i = 1; $i <= (sqrt($MAX) - 1) / 2; $i++)
        for ($j = ($i * ($i + 1)) << 1; $j <= $MAX / 2;
             $j = $j + 2 * $i + 1)
            $marked[$j] = true;

    // Since 2 is a prime number
    array_push($primes, 2);

    // Remaining primes are of the form 2*i + 1
    // such that marked[i] is false.
    for ($i = 1; $i <= $MAX / 2; $i++)
        if ($marked[$i] == false)
            array_push($primes, 2 * $i + 1);
}

// Returns true if n is a Smith number, else false.
function isSmith($n)
{
    global $MAX, $primes;
    $original_no = $n;

    // Find the sum of the digits of the prime factors of n
    $pDigitSum = 0;
    for ($i = 0; $primes[$i] <= $n / 2; $i++) {
        while ($n % $primes[$i] == 0) {
            $p = $primes[$i];
            $n = intval($n / $p);
            while ($p > 0) {
                $pDigitSum += ($p % 10);
                $p = intval($p / 10); // integer division
            }
        }
    }

    // If n != 1, one prime factor still remains to be summed up
    if ($n != 1 && $n != $original_no) {
        while ($n > 0) {
            $pDigitSum = $pDigitSum + $n % 10;
            $n = intval($n / 10);
        }
    }

    // Now sum the digits of the original number
    $sumDigits = 0;
    while ($original_no > 0) {
        $sumDigits = $sumDigits + $original_no % 10;
        $original_no = intval($original_no / 10);
    }

    return ($pDigitSum == $sumDigits);
}

// Driver code
// Find all prime numbers below the limit; these
// are used to find prime factors.
sieveSundaram();

echo "Printing first few Smith Numbers" .
     " using isSmith()\n";
for ($i = 1; $i < 500; $i++)
    if (isSmith($i))
        echo $i . " ";

// This code is contributed by mits
?>

JavaScript

// JavaScript program to check whether a number is
// a Smith Number or not.
let MAX = 10000;

// stores all primes less than or equal to MAX
let primes = [];

// utility function for the Sieve of Sundaram
function sieveSundaram()
{
    // This array is used to separate numbers of the form
    // i + j + 2ij from the others, where 1 <= i <= j.
    let LEN = Math.floor(MAX / 2) + 100;
    let marked = [];
    for (let i = 0; i < LEN; i++)
        marked.push(0);

    // Mark all numbers that do not generate
    // a prime number via 2*i + 1.
    for (let i = 1; i <= Math.floor((Math.sqrt(MAX) - 1) / 2); i++)
        for (let j = (i * (i + 1)) << 1; j <= Math.floor(MAX / 2);
             j = j + 2 * i + 1)
            marked[j] = 1;

    // Since 2 is a prime number
    primes.push(2);

    // Remaining primes are of the form 2*i + 1
    // such that marked[i] is false.
    for (let i = 1; i <= Math.floor(MAX / 2); i++)
        if (marked[i] == 0)
            primes.push(2 * i + 1);
}

// Returns true if n is a Smith number, else false.
function isSmith(n)
{
    let original_no = n;

    // Find the sum of the digits of the prime factors of n
    let pDigitSum = 0;
    for (let i = 0; primes[i] <= Math.floor(n / 2); i++) {
        while (n % primes[i] == 0) {
            let p = primes[i];
            n = Math.floor(n / p);
            while (p > 0) {
                pDigitSum += (p % 10);
                p = Math.floor(p / 10);
            }
        }
    }

    // If n != 1, one prime factor still remains to be summed up
    if (n != 1 && n != original_no) {
        while (n > 0) {
            pDigitSum = pDigitSum + n % 10;
            n = Math.floor(n / 10);
        }
    }

    // Now sum the digits of the original number
    let sumDigits = 0;
    while (original_no > 0) {
        sumDigits = sumDigits + original_no % 10;
        original_no = Math.floor(original_no / 10);
    }

    return (pDigitSum === sumDigits);
}

// Driver code
// Find all prime numbers below the limit; these
// are used to find prime factors.
sieveSundaram();
console.log("Printing first few Smith Numbers using isSmith()");
for (let i = 1; i < 500; i++)
    if (isSmith(i))
        console.log(i);

// The code is contributed by Gautam goel (gautamgoel962)

Output:

Printing first few Smith Numbers using isSmith()
4 22 27 58 85 94 121 166 202 265 274 319 346 355 378 382 391 438 454 483

Time Complexity: O(n log n)
Auxiliary Space: O(n)

This article is contributed by Aarti_Rathi and Sahil Chhabra (KILLER). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
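The Python listing above targets Python 2 (print statement, integer `/`). As a hedged, self-contained Python 3 sketch of the same check — using plain trial division instead of a precomputed sieve, so it trades the multi-query speedup for simplicity — the function and helper names here (`is_smith`, `digit_sum`) are illustrative, not from the article:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    s = 0
    while n > 0:
        s += n % 10
        n //= 10
    return s

def is_smith(n):
    """True if n is a Smith number (composite, digit sum equals
    digit sum of its prime factorization with multiplicity)."""
    if n < 4:
        return False  # 0, 1, 2, 3 cannot be composite
    original, factor_digit_sum, factors = n, 0, 0
    d = 2
    while d * d <= n:
        while n % d == 0:      # d is a prime factor here
            factor_digit_sum += digit_sum(d)
            factors += 1
            n //= d
        d += 1
    if n > 1:                  # leftover prime factor
        factor_digit_sum += digit_sum(n)
        factors += 1
    if factors <= 1:           # prime => not composite => not Smith
        return False
    return factor_digit_sum == digit_sum(original)

print([i for i in range(1, 100) if is_smith(i)])  # [4, 22, 27, 58, 85, 94]
```

This agrees with the output above for the first few Smith numbers.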
[ { "code": null, "e": 52, "s": 24, "text": "\n23 Jun, 2022" }, { "code": null, "e": 256, "s": 52, "text": "Given a number n, the task is to find out whether this number is smith or not. A Smith Number is a composite number whose sum of digits is equal to the sum of digits in its prime factorization. Examples:" }, { "code": null, "e": 739, "s": 256, "text": "Input : n = 4\nOutput : Yes\nPrime factorization = 2, 2 and 2 + 2 = 4\nTherefore, 4 is a smith number\n\nInput : n = 6\nOutput : No\nPrime factorization = 2, 3 and 2 + 3 is\nnot 6. Therefore, 6 is not a smith number\n\nInput : n = 666\nOutput : Yes\nPrime factorization = 2, 3, 3, 37 and\n2 + 3 + 3 + (3 + 7) = 6 + 6 + 6 = 18\nTherefore, 666 is a smith number\n\nInput : n = 13\nOutput : No\nPrime factorization = 13 and 13 = 13,\nBut 13 is not a smith number as it is not\na composite number" }, { "code": null, "e": 1133, "s": 739, "text": "The idea is first find all prime numbers below a limit using Sieve of Sundaram (This is especially useful when we want to check multiple numbers for Smith). Now for every input to be checked for Smith, we go through all prime factors of it and find sum of digits of every prime factor. We also find sum of digits in given number. Finally we compare two sums. If they are same, we return true. 
" }, { "code": null, "e": 1137, "s": 1133, "text": "C++" }, { "code": null, "e": 1142, "s": 1137, "text": "Java" }, { "code": null, "e": 1149, "s": 1142, "text": "Python" }, { "code": null, "e": 1152, "s": 1149, "text": "C#" }, { "code": null, "e": 1156, "s": 1152, "text": "PHP" }, { "code": null, "e": 1167, "s": 1156, "text": "Javascript" }, { "code": "// C++ program to check whether a number is// Smith Number or not.#include<bits/stdc++.h>using namespace std;const int MAX = 10000; // array to store all prime less than and equal to 10^6vector <int> primes; // utility function for sieve of sundaramvoid sieveSundaram(){ // In general Sieve of Sundaram, produces primes smaller // than (2*x + 2) for a number given number x. Since // we want primes smaller than MAX, we reduce MAX to half // This array is used to separate numbers of the form // i+j+2ij from others where 1 <= i <= j bool marked[MAX/2 + 100] = {0}; // Main logic of Sundaram. Mark all numbers which // do not generate prime number by doing 2*i+1 for (int i=1; i<=(sqrt(MAX)-1)/2; i++) for (int j=(i*(i+1))<<1; j<=MAX/2; j=j+2*i+1) marked[j] = true; // Since 2 is a prime number primes.push_back(2); // Print other primes. Remaining primes are of the // form 2*i + 1 such that marked[i] is false. for (int i=1; i<=MAX/2; i++) if (marked[i] == false) primes.push_back(2*i + 1);} // Returns true if n is a Smith number, else false.bool isSmith(int n){ int original_no = n; // Find sum the digits of prime factors of n int pDigitSum = 0; for (int i = 0; primes[i] <= n/2; i++) { while (n % primes[i] == 0) { // If primes[i] is a prime factor, // add its digits to pDigitSum. 
int p = primes[i]; n = n/p; while (p > 0) { pDigitSum += (p % 10); p = p/10; } } } // If n!=1 then one prime factor still to be // summed up; if (n != 1 && n != original_no) { while (n > 0) { pDigitSum = pDigitSum + n%10; n = n/10; } } // All prime factors digits summed up // Now sum the original number digits int sumDigits = 0; while (original_no > 0) { sumDigits = sumDigits + original_no % 10; original_no = original_no/10; } // If sum of digits in prime factors and sum // of digits in original number are same, then // return true. Else return false. return (pDigitSum == sumDigits);} // Driver codeint main(){ // Finding all prime numbers before limit. These // numbers are used to find prime factors. sieveSundaram(); cout << \"Printing first few Smith Numbers\" \" using isSmith()n\"; for (int i=1; i<500; i++) if (isSmith(i)) cout << i << \" \"; return 0;}", "e": 3689, "s": 1167, "text": null }, { "code": "// Java program to check whether a number is// Smith Number or not. import java.util.Vector; class Test{ static int MAX = 10000; // array to store all prime less than and equal to 10^6 static Vector <Integer> primes = new Vector<>(); // utility function for sieve of sundaram static void sieveSundaram() { // In general Sieve of Sundaram, produces primes smaller // than (2*x + 2) for a number given number x. Since // we want primes smaller than MAX, we reduce MAX to half // This array is used to separate numbers of the form // i+j+2ij from others where 1 <= i <= j boolean marked[] = new boolean[MAX/2 + 100]; // Main logic of Sundaram. Mark all numbers which // do not generate prime number by doing 2*i+1 for (int i=1; i<=(Math.sqrt(MAX)-1)/2; i++) for (int j=(i*(i+1))<<1; j<=MAX/2; j=j+2*i+1) marked[j] = true; // Since 2 is a prime number primes.addElement(2); // Print other primes. Remaining primes are of the // form 2*i + 1 such that marked[i] is false. 
for (int i=1; i<=MAX/2; i++) if (marked[i] == false) primes.addElement(2*i + 1); } // Returns true if n is a Smith number, else false. static boolean isSmith(int n) { int original_no = n; // Find sum the digits of prime factors of n int pDigitSum = 0; for (int i = 0; primes.get(i) <= n/2; i++) { while (n % primes.get(i) == 0) { // If primes[i] is a prime factor, // add its digits to pDigitSum. int p = primes.get(i); n = n/p; while (p > 0) { pDigitSum += (p % 10); p = p/10; } } } // If n!=1 then one prime factor still to be // summed up; if (n != 1 && n != original_no) { while (n > 0) { pDigitSum = pDigitSum + n%10; n = n/10; } } // All prime factors digits summed up // Now sum the original number digits int sumDigits = 0; while (original_no > 0) { sumDigits = sumDigits + original_no % 10; original_no = original_no/10; } // If sum of digits in prime factors and sum // of digits in original number are same, then // return true. Else return false. return (pDigitSum == sumDigits); } // Driver method public static void main(String[] args) { // Finding all prime numbers before limit. These // numbers are used to find prime factors. sieveSundaram(); System.out.println(\"Printing first few Smith Numbers\" + \" using isSmith()\"); for (int i=1; i<500; i++) if (isSmith(i)) System.out.print(i + \" \"); }}", "e": 6743, "s": 3689, "text": null }, { "code": "# Python program to check whether a number is# Smith Number or not. import math MAX = 10000 # array to store all prime less than and equal to 10^6primes = [] # utility function for sieve of sundaramdef sieveSundaram (): #In general Sieve of Sundaram, produces primes smaller # than (2*x + 2) for a number given number x. Since # we want primes smaller than MAX, we reduce MAX to half # This array is used to separate numbers of the form # i+j+2ij from others where 1 <= i <= j marked = [0] * ((MAX/2)+100) # Main logic of Sundaram. 
Mark all numbers which # do not generate prime number by doing 2*i+1 i = 1 while i <= ((math.sqrt (MAX)-1)/2) : j = (i* (i+1)) << 1 while j <= MAX/2 : marked[j] = 1 j = j+ 2 * i + 1 i = i + 1 # Since 2 is a prime number primes.append (2) # Print other primes. Remaining primes are of the # form 2*i + 1 such that marked[i] is false. i=1 while i <= MAX /2 : if marked[i] == 0 : primes.append( 2* i + 1) i=i+1 #Returns true if n is a Smith number, else false.def isSmith( n) : original_no = n #Find sum the digits of prime factors of n pDigitSum = 0; i=0 while (primes[i] <= n/2 ) : while n % primes[i] == 0 : #If primes[i] is a prime factor , # add its digits to pDigitSum. p = primes[i] n = n/p while p > 0 : pDigitSum += (p % 10) p = p/10 i=i+1 # If n!=1 then one prime factor still to be # summed up if not n == 1 and not n == original_no : while n > 0 : pDigitSum = pDigitSum + n%10 n=n/10 # All prime factors digits summed up # Now sum the original number digits sumDigits = 0 while original_no > 0 : sumDigits = sumDigits + original_no % 10 original_no = original_no/10 #If sum of digits in prime factors and sum # of digits in original number are same, then # return true. Else return false. return pDigitSum == sumDigits#-----end of function isSmith------ #Driver method# Finding all prime numbers before limit. These# numbers are used to find prime factors.sieveSundaram();print \"Printing first few Smith Numbers using isSmith()\"i = 1while i<500 : if isSmith(i) : print i, i=i+1 #This code is contributed by Nikita Tiwari", "e": 9165, "s": 6743, "text": null }, { "code": "// C# program to check whether a number is// Smith Number or not.using System;using System.Collections; class Test{ static int MAX = 10000; // array to store all prime less than and equal to 10^6 static ArrayList primes = new ArrayList(10); // utility function for sieve of sundaram static void sieveSundaram() { // In general Sieve of Sundaram, produces primes smaller // than (2*x + 2) for a number given number x. 
Since // we want primes smaller than MAX, we reduce MAX to half // This array is used to separate numbers of the form // i+j+2ij from others where 1 <= i <= j bool[] marked = new bool[MAX/2 + 100]; // Main logic of Sundaram. Mark all numbers which // do not generate prime number by doing 2*i+1 for (int i=1; i<=(Math.Sqrt(MAX)-1)/2; i++) for (int j=(i*(i+1))<<1; j<=MAX/2; j=j+2*i+1) marked[j] = true; // Since 2 is a prime number primes.Add(2); // Print other primes. Remaining primes are of the // form 2*i + 1 such that marked[i] is false. for (int i=1; i<=MAX/2; i++) if (marked[i] == false) primes.Add(2*i + 1); } // Returns true if n is a Smith number, else false. static bool isSmith(int n) { int original_no = n; // Find sum the digits of prime factors of n int pDigitSum = 0; for (int i = 0; (int)primes[i] <= n/2; i++) { while (n % (int)primes[i] == 0) { // If primes[i] is a prime factor, // add its digits to pDigitSum. int p = (int)primes[i]; n = n/p; while (p > 0) { pDigitSum += (p % 10); p = p/10; } } } // If n!=1 then one prime factor still to be // summed up; if (n != 1 && n != original_no) { while (n > 0) { pDigitSum = pDigitSum + n%10; n = n/10; } } // All prime factors digits summed up // Now sum the original number digits int sumDigits = 0; while (original_no > 0) { sumDigits = sumDigits + original_no % 10; original_no = original_no/10; } // If sum of digits in prime factors and sum // of digits in original number are same, then // return true. Else return false. return (pDigitSum == sumDigits); } // Driver method public static void Main() { // Finding all prime numbers before limit. These // numbers are used to find prime factors. 
sieveSundaram(); Console.WriteLine(\"Printing first few Smith Numbers\" + \" using isSmith()\"); for (int i=1; i<500; i++) if (isSmith(i)) Console.Write(i + \" \"); }}// This Code is contributed by mits", "e": 12204, "s": 9165, "text": null }, { "code": "<?php// PHP program to check whether a// number is Smith Number or not.$MAX = 10000; // array to store all prime less// than and equal to 10^6$primes = array(); // utility function for sieve of sundaramfunction sieveSundaram(){ global $MAX, $primes; // In general Sieve of Sundaram, produces // primes smaller than (2*x + 2) for a // number given number x. Since we want // primes smaller than MAX, we reduce MAX // to half. This array is used to separate // numbers of the form i+j+2ij from others // where 1 <= i <= j $marked = array_fill(0, ($MAX / 2 + 100), false); // Main logic of Sundaram. Mark all numbers which // do not generate prime number by doing 2*i+1 for ($i = 1; $i <= (sqrt($MAX) - 1) / 2; $i++) for ($j = ($i * ($i + 1)) << 1; $j <= $MAX / 2; $j = $j + 2 * $i + 1) $marked[$j] = true; // Since 2 is a prime number array_push($primes, 2); // Print other primes. Remaining primes are of the // form 2*i + 1 such that marked[i] is false. for ($i = 1; $i <= $MAX / 2; $i++) if ($marked[$i] == false) array_push($primes, 2 * $i + 1);} // Returns true if n is a Smith number, else false.function isSmith($n){ global $MAX, $primes; $original_no = $n; // Find sum the digits of prime // factors of n $pDigitSum = 0; for ($i = 0; $primes[$i] <= $n / 2; $i++) { while ($n % $primes[$i] == 0) { // If primes[i] is a prime factor, // add its digits to pDigitSum. 
$p = $primes[$i]; $n = $n / $p; while ($p > 0) { $pDigitSum += ($p % 10); $p = $p / 10; } } } // If n!=1 then one prime factor still // to be summed up; if ($n != 1 && $n != $original_no) { while ($n > 0) { $pDigitSum = $pDigitSum + $n % 10; $n = $n / 10; } } // All prime factors digits summed up // Now sum the original number digits $sumDigits = 0; while ($original_no > 0) { $sumDigits = $sumDigits + $original_no % 10; $original_no = $original_no / 10; } // If sum of digits in prime factors and sum // of digits in original number are same, then // return true. Else return false. return ($pDigitSum == $sumDigits);} // Driver code // Finding all prime numbers before limit. These// numbers are used to find prime factors.sieveSundaram(); echo \"Printing first few Smith Numbers\" . \" using isSmith()\\n\";for ($i = 1; $i < 500; $i++) if (isSmith($i)) echo $i . \" \"; // This code is contributed by mits?>", "e": 14875, "s": 12204, "text": null }, { "code": "// JavaScript program to check whether a number is// Smith Number or not.let MAX = 10000; // array to store all prime less than and equal to 10^6let primes = []; // utility function for sieve of sundaramfunction sieveSundaram(){ // In general Sieve of Sundaram, produces primes smaller // than (2*x + 2) for a number given number x. Since // we want primes smaller than MAX, we reduce MAX to half // This array is used to separate numbers of the form // i+j+2ij from others where 1 <= i <= j let LEN = Math.floor(MAX/2) + 100; let marked = []; for (let i=0; i < LEN; i++) { marked.push(0); } // Main logic of Sundaram. Mark all numbers which // do not generate prime number by doing 2*i+1 for (let i=1; i<=Math.floor((Math.sqrt(MAX)-1)/2); i++){ for (let j=(i*(i+1))<<1; j<=Math.floor(MAX/2); j=j+2*i+1){ marked[j] = 1; } } // Since 2 is a prime number primes.push(2); // Print other primes. Remaining primes are of the // form 2*i + 1 such that marked[i] is false. 
for (let i=1; i<=Math.floor(MAX/2); i++){ if (marked[i] == 0){ primes.push(2*i + 1); } } } // Returns true if n is a Smith number, else false.function isSmith(n){ let original_no = n; // Find sum the digits of prime factors of n let pDigitSum = 0; for (let i = 0; primes[i] <= Math.floor(n/2); i++){ while (n % primes[i] == 0){ // If primes[i] is a prime factor, // add its digits to pDigitSum. let p = primes[i]; n = Math.floor(n/p); while (p > 0){ pDigitSum += (p % 10); p = Math.floor(p/10); } } } // If n!=1 then one prime factor still to be // summed up; if (n != 1 && n != original_no){ while (n > 0){ pDigitSum = pDigitSum + n%10; n = Math.floor(n/10); } } // All prime factors digits summed up // Now sum the original number digits let sumDigits = 0; while (original_no > 0){ sumDigits = sumDigits + original_no % 10; original_no = Math.floor(original_no/10); } // If sum of digits in prime factors and sum // of digits in original number are same, then // return true. Else return false. return (pDigitSum === sumDigits);} // Driver code // Finding all prime numbers before limit. 
These// numbers are used to find prime factors.{ sieveSundaram(); console.log(\"Printing first few Smith Numbers using isSmith() \"); for (let i=1; i<500; i++){ // console.log(i); if (isSmith(i)){ console.log(i); } }} // The code is contributed by Gautam goel (gautamgoel962)", "e": 17612, "s": 14875, "text": null }, { "code": null, "e": 17620, "s": 17612, "text": "Output:" }, { "code": null, "e": 17743, "s": 17620, "text": "Printing first few Smith Numbers using isSmith()\n4 22 27 58 85 94 121 166 202 265 274 319 346 355 378 382 391 438 454 483 " }, { "code": null, "e": 17793, "s": 17743, "text": "Time Complexity: O(n log n) Auxiliary Space: O(n)" }, { "code": null, "e": 18635, "s": 17793, "text": "Smith Number | GeeksforGeeks - YouTubeGeeksforGeeks529K subscribersSmith Number | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.More videosMore videosYou're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 7:06•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=nJVF9YkXxNA\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>" }, { "code": null, "e": 19081, "s": 18635, "text": "This article is contributed by Aarti_Rathi and Sahil Chhabra(KILLER). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. 
OpenMP | Introduction with Installation Guide
23 Apr, 2019

After a long thirst for parallelizing highly regular loops in matrix-oriented numerical programming, OpenMP was introduced by the OpenMP Architecture Review Board (ARB) in 1997. In the subsequent releases, the enthusiastic OpenMP team added many features to it, including task parallelizing, support for accelerators, user-defined reductions, and a lot more. The latest OpenMP 5.0 release was made in November 2018.

Open Multi-processing (OpenMP) is a technique of parallelizing a section (or sections) of C/C++/Fortran code. OpenMP is also seen as an extension to the C/C++/Fortran languages, adding parallelizing features to them. In general, OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications, for platforms that range from the normal desktop computer to high-end supercomputers.

THREAD Vs PROCESS

A process is created by the OS to execute a program with given resources (memory, registers); generally, different processes do not share their memory with one another. A thread is a subset of a process, and it shares the resources of its parent process but has its own stack to keep track of function calls. Multiple threads of a process will have access to the same memory.

Parallel Memory Architectures

Before getting deep into OpenMP, let's review the basic parallel memory architectures. These are divided into three categories:

Shared memory: OpenMP comes under the shared memory concept. Here, different CPUs (processors) have access to the same memory location, and since all CPUs connect to the same memory, memory access must be handled carefully.

Distributed memory: Here, each CPU (processor) has its own memory location to access and use. To make them communicate, all the independent systems are connected together using a network. MPI is based on the distributed architecture.

Hybrid: Hybrid is a combination of both shared and distributed architectures.
A simple scenario to showcase the power of OpenMP would be comparing the execution time of a normal C/C++ program and the OpenMP program.

Steps for Installation of OpenMP

STEP 1: Check the GCC version of the compiler

gcc --version

GCC provides support for OpenMP starting from its version 4.2.0. So if the system has a GCC compiler with a version higher than 4.2.0, it must have the OpenMP features configured with it. If the system doesn't have the GCC compiler, we can install it with the following command:

sudo apt install gcc

STEP 2: Configuring OpenMP

We can check whether the OpenMP features are configured into our compiler or not using the command:

echo |cpp -fopenmp -dM |grep -i open

If OpenMP is not featured in the compiler, we can configure it using the command:

sudo apt install libomp-dev

STEP 3: Setting the number of threads

In OpenMP, before running the code, we can initialise the number of threads to be executed using the following command.
Here, we set the number of threads to be executed to 8 threads:

export OMP_NUM_THREADS=8

Running First Code in OpenMP

// OpenMP header
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    int nthreads, tid;

    // Begin of parallel region
    #pragma omp parallel private(nthreads, tid)
    {
        // Getting thread number
        tid = omp_get_thread_num();
        printf("Welcome to GFG from thread = %d\n", tid);

        if (tid == 0) {
            // Only master thread does this
            nthreads = omp_get_num_threads();
            printf("Number of threads = %d\n", nthreads);
        }
    }
}

Output: This program will print a greeting message from each of the threads that execute it.

Compile:
gcc -o gfg -fopenmp geeksforgeeks.c

Execute:
./gfg
Graph Matrices in Software Testing
11 Mar, 2022

A graph matrix is a data structure that can assist in developing a tool for automation of path testing. Properties of graph matrices are fundamental for developing a test tool, and hence graph matrices are very useful in understanding software testing concepts and theory.

What is a Graph Matrix?

A graph matrix is a square matrix whose size represents the number of nodes in the control flow graph. Each row and column in the matrix identifies a node, and the entries in the matrix represent the edges or links between these nodes. Conventionally, nodes are denoted by digits and edges are denoted by letters.

Let's take an example and convert this control flow graph into a graph matrix. Since the graph has 4 nodes, the graph matrix would have a dimension of 4 x 4. Matrix entries will be filled as follows:

(1, 1) will be filled with 'a' as an edge exists from node 1 to node 1
(1, 2) will be filled with 'b' as an edge exists from node 1 to node 2. It is important to note that (2, 1) will not be filled, as the edge is unidirectional, not bidirectional
(1, 3) will be filled with 'c' as edge c exists from node 1 to node 3
(2, 4) will be filled with 'd' as an edge exists from node 2 to node 4
(3, 4) will be filled with 'e' as an edge exists from node 3 to node 4

The graph matrix formed is shown below:

Connection Matrix:

A connection matrix is a matrix defined with edge weights. In its simple form, when a connection exists between two nodes of the control flow graph, the edge weight is 1; otherwise, it is 0. However, 0 is usually not entered in the matrix cells, to reduce the complexity.

For example, if we represent the above control flow graph as a connection matrix, the result would be:

As we can see, the weights of the edges are simply replaced by 1, and the cells which were empty before are left as they are, i.e., representing 0.
A connection matrix can be used to find the cyclomatic complexity of the control flow graph. Although there are three other methods to find the cyclomatic complexity, this method works well too.

Following are the steps to compute the cyclomatic complexity:

Count the number of 1s in each row and write it at the end of the row
Subtract 1 from this count for each row (ignore the row if its count is 0)
Add the counts of each row calculated previously
Add 1 to this total count
The final sum in Step 4 is the cyclomatic complexity of the control flow graph

Let's apply these steps to the graph above to compute the cyclomatic complexity.

We can verify this value for cyclomatic complexity using other methods:

Method-1:

Cyclomatic complexity
= e - n + 2 * P

Since here,

e = 5
n = 4
and, P = 1

Therefore, cyclomatic complexity,

= 5 - 4 + 2 * 1
= 3

Method-2:

Cyclomatic complexity
= d + P

Here,

d = 2
and, P = 1

Therefore, cyclomatic complexity,

= 2 + 1
= 3

Method-3:

Cyclomatic complexity
= number of regions in the graph

Region 1: bounded by edges b, c, d, and e
Region 2: bounded by edge a (in loop)
Region 3: outside the graph

Therefore, cyclomatic complexity,

= 1 + 1 + 1
= 3

It can be seen that all the other methods give the same result. Methods 1, 2, and 3 have been discussed in detail separately.
p5.js | textFont() Function
16 Apr, 2020

The textFont() function in p5.js is used to specify the font that will be used to draw text using the text() function. In the WEBGL mode, only the fonts loaded by the loadFont() method are supported.

Syntax:

textFont( font, size )

Parameters: This function accepts two parameters as mentioned above and described below:

font: It is a string that specifies the name of the web safe font, or a font object loaded by the loadFont() function.
size: It is a number that specifies the size of the font to use. It is an optional parameter.

Return Value: It is an object that contains the current font.

The below examples illustrate the textFont() function in p5.js:

Example 1: This example shows the use of web safe fonts that are generally available on all systems.

function setup() {
    createCanvas(600, 300);
    textSize(30);

    textFont('Helvetica');
    text('This is the Helvetica font', 20, 80);

    textFont('Georgia');
    text('This is the Georgia font', 20, 120);

    textFont('Times New Roman');
    text('This is the Times New Roman font', 20, 160);

    textFont('Courier New');
    text('This is the Courier New font', 20, 200);
}

Output:

Example 2: This example shows the use of a font loaded using the loadFont() function.

let newFont;

function preload() {
    newFont = loadFont('fonts/Montserrat.otf');
}

function setup() {
    createCanvas(400, 200);
    textSize(20);

    fill("red");
    text('Click once to print using ' +
         'a new loaded font', 20, 20);

    fill("black");
    text('Using the default font', 20, 60);
    text('This is text written using ' +
         'the new font', 20, 80);
}

function mouseClicked() {
    textFont(newFont);
    textSize(20);
    text('Using the Montserrat font', 20, 140);
    text('This is text written using ' +
         'the new loaded font', 20, 160);
}

Output:

Online editor: https://editor.p5js.org/
Environment Setup: https://www.geeksforgeeks.org/p5-js-soundfile-object-installation-and-methods/
Reference: https://p5js.org/reference/#/p5/textFont
How to use Radio Component in ReactJS?
05 May, 2021

Radio buttons allow the user to select one option from a set. Material UI for React has this component available for us, and it is very easy to integrate. We can use the Radio Component in ReactJS using the following approach:

Creating React Application And Installing Module:

Step 1: Create a React application using the following command:

npx create-react-app foldername

Step 2: After creating your project folder i.e. foldername, move to it using the following command:

cd foldername

Step 3: After creating the ReactJS application, install the material-ui modules using the following command:

npm install @material-ui/core

Project Structure: It will look like the following.

App.js: Now write down the following code in the App.js file. Here, App is our default component, where we have written our code.

Javascript

import React from 'react';
import Radio from '@material-ui/core/Radio';
import FormControl from '@material-ui/core/FormControl';
import FormLabel from '@material-ui/core/FormLabel';
import FormControlLabel from '@material-ui/core/FormControlLabel';
import RadioGroup from '@material-ui/core/RadioGroup';

const App = () => {
    return (
        <div style={{
            margin: 'auto',
            display: 'block',
            width: 'fit-content'
        }}>
            <h3>How to use Radio Component in ReactJS?</h3>
            <FormControl component="fieldset">
                <FormLabel component="legend">Select Your Gender</FormLabel>
                <RadioGroup aria-label="gender" name="gender1">
                    <FormControlLabel value="male"
                        control={<Radio />} label="Male" />
                    <FormControlLabel value="female"
                        control={<Radio />} label="Female" />
                    <FormControlLabel value="other"
                        control={<Radio />} label="Other" />
                </RadioGroup>
            </FormControl>
        </div>
    );
}

export default App;

Step to Run Application: Run the application using the following command from the root directory of the project:

npm start

Output: Now open your browser and go to http://localhost:3000/, and you will see the following output:
find_element_by_class_name() driver method – Selenium Python
15 Apr, 2020 Selenium’s Python Module is built to perform automated testing with Python. Selenium Python bindings provide a simple API to write functional/acceptance tests using Selenium WebDriver. After you have installed selenium and checked out – Navigating links using get method, you might want to play more with Selenium Python. After one has opened a page using selenium such as geeksforgeeks, one might want to click some buttons automatically or fill a form automatically or any such automated task. This article revolves around how to grab or locate elements in a webpage using locating strategies of Selenium Web Driver. More specifically, find_element_by_class_name() is discussed in this article. driver.find_element_by_class_name("class_of_element") Example – For instance, consider this page source: <html> <body> <p class="content">Site content goes here.</p></body></html> Now after you have created a driver, you can grab an element using – content = driver.find_element_by_class_name('content') Let’s try to practically implement this method and get an element instance for “https://www.geeksforgeeks.org/”. Let’s try to grab the search form input using its class name “gsc-input”. Create a file called run.py to demonstrate the find_element_by_class_name method – # Python program to demonstrate# selenium # import webdriverfrom selenium import webdriver # create webdriver objectdriver = webdriver.Firefox() # enter keyword to searchkeyword = "geeksforgeeks" # get geeksforgeeks.orgdriver.get("https://www.geeksforgeeks.org/") # get element element = driver.find_element_by_class_name("gsc-input") # print complete elementprint(element) Now run using – Python run.py First, it will open a firefox window with geeksforgeeks, and then select the element and print it on the terminal as shown below. Browser Output – Terminal Output –
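Conceptually, locating by class name means scanning the page for elements whose class attribute contains the given token. The framework-free sketch below (standard-library html.parser only, not Selenium's actual implementation) illustrates that idea:

```python
# Rough sketch of the "find by class name" locating strategy using only
# the standard library -- this mimics the idea behind
# find_element_by_class_name(); it is NOT Selenium's implementation.
from html.parser import HTMLParser


class ClassFinder(HTMLParser):
    def __init__(self, class_name):
        super().__init__()
        self.class_name = class_name
        self.matches = []  # tag names whose class attribute matched

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # a class attribute may hold several space-separated tokens
            if name == "class" and self.class_name in (value or "").split():
                self.matches.append(tag)


page = '<html><body><p class="content">Site content goes here.</p></body></html>'
finder = ClassFinder("content")
finder.feed(page)
print(finder.matches)  # ['p'] -- the first match is the element we want
```

Selenium does this matching inside the browser for the live DOM; the sketch only shows why a single class token like "gsc-input" is enough to identify the element.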
[ { "code": null, "e": 28, "s": 0, "text": "\n15 Apr, 2020" }, { "code": null, "e": 525, "s": 28, "text": "Selenium’s Python Module is built to perform automated testing with Python. Selenium Python bindings provides a simple API to write functional/acceptance tests using Selenium WebDriver. After you have installed selenium and checked out – Navigating links using get method, you might want to play more with Selenium Python. After one has opened a page using selenium such as geeksforgeeks, one might want to click some buttons automatically or fill a form automatically or any such automated task." }, { "code": null, "e": 726, "s": 525, "text": "This article revolves around how to grab or locate elements in a webpage using locating strategies of Selenium Web Driver. More specifically, find_element_by_class_name() is discussed in this article." }, { "code": null, "e": 781, "s": 726, "text": "driver.find_element_by_class_name(\"class_of_element\")\n" }, { "code": null, "e": 831, "s": 781, "text": "Example –For instance, consider this page source:" }, { "code": "<html> <body> <p class=\"content\">Site content goes here.</p></body><html>", "e": 906, "s": 831, "text": null }, { "code": null, "e": 975, "s": 906, "text": "Now after you have created a driver, you can grab an element using –" }, { "code": null, "e": 1031, "s": 975, "text": "content = driver.find_element_by_class_name('content')\n" }, { "code": null, "e": 1291, "s": 1031, "text": "Let’s try to practically implement this method and get a element instance for “https://www.geeksforgeeks.org/”. 
Let’s try to grab search form input using its class name “gsc-input”.Create a file called run.py to demonstrate find_element_by_class_name method –" }, { "code": "# Python program to demonstrate# selenium # import webdriverfrom selenium import webdriver # create webdriver objectdriver = webdriver.Firefox() # enter keyword to searchkeyword = \"geeksforgeeks\" # get geeksforgeeks.orgdriver.get(\"https://www.geeksforgeeks.org/\") # get element element = driver.find_element_by_class_name(\"gsc-input\") # print complete elementprint(element)", "e": 1671, "s": 1291, "text": null }, { "code": null, "e": 1687, "s": 1671, "text": "Now run using –" }, { "code": null, "e": 1701, "s": 1687, "text": "Python run.py" }, { "code": null, "e": 1857, "s": 1701, "text": "First, it will open firefox window with geeksforgeeks, and then select the element and print it on terminal as show below.Browser Output –Terminal Output –" }, { "code": null, "e": 1869, "s": 1857, "text": "NaveenArora" }, { "code": null, "e": 1885, "s": 1869, "text": "Python-selenium" }, { "code": null, "e": 1894, "s": 1885, "text": "selenium" }, { "code": null, "e": 1901, "s": 1894, "text": "Python" }, { "code": null, "e": 1999, "s": 1901, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2017, "s": 1999, "text": "Python Dictionary" }, { "code": null, "e": 2059, "s": 2017, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 2081, "s": 2059, "text": "Enumerate() in Python" }, { "code": null, "e": 2116, "s": 2081, "text": "Read a file line by line in Python" }, { "code": null, "e": 2142, "s": 2116, "text": "Python String | replace()" }, { "code": null, "e": 2174, "s": 2142, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 2203, "s": 2174, "text": "*args and **kwargs in Python" }, { "code": null, "e": 2230, "s": 2203, "text": "Python Classes and Objects" }, { "code": null, "e": 2251, "s": 2230, "text": "Python OOPs Concepts" } ]
How to Use Proguard in Android Application Properly?
19 Sep, 2021 While writing the code for your Android app, there may be some lines of code that are unnecessary and may increase the size of your app’s APK. Aside from useless code, there are numerous libraries that you may have incorporated in your program but did not use all of the functionalities that each library provides. Also, you may have developed some code that will be obsolete in the future and failed to delete it. These factors are to blame for the increased size of your application’s APK. Android has the Proguard capability to minimize the size of the APK. Using Proguard aids in fulfilling the three functions listed below: Minify the code Obfuscate the code Optimize the code As a result, Proguard will assist you in shrinking the size of your APK, removing unnecessary classes and methods, and making your program tough to reverse engineer. So, while using Proguard, you need to keep a few things in mind. In this article, we will look at some of the aspects that must be considered while utilizing Proguard in our Android application. So let us begin with the first point. Almost every Android application contains a Data or Model class that is used to retrieve data from a remote database. So, if you use Proguard, Proguard will modify the class, variable, and method names. For example, if you have a class named User and it has variables such as firstName and lastName, Proguard will rename your User class to A or something like that. Simultaneously, your variable names may be altered to some x, y, or z. As a result, if you try to put the data of the same variable in the database, you will obtain a run time error since no such field exists in the database. To circumvent this, instruct Proguard to preserve the variables and methods of the User class and not obfuscate them.
To accomplish this, add the following line to your proguard-rules.pro file: -keep class full.class.name.of.your.project.here.** { *; } By adding the preceding line, Proguard will do no obfuscation or optimization of the code in the chosen class. The majority of Proguard’s issues are caused by its obfuscation characteristic. The same goes for Fragments. We define some String TAG as FragmentName.class.getSimpleName() whenever we create a Fragment and utilize it in our Fragment. However, if you are using Proguard, this may cause a problem in your application. Assume we have a fragment named Fragment1 in the firstfragment package and Fragment2 in the secondfragment package. As a result, Proguard will change the firstfragment package to “a” and the secondfragment package to “b,” but Proguard can also change the name of Fragment1 to “a” (because the packages are different, class names can be the same) and Fragment2 to “a” as well, because Fragment2 is in a different package. However, if you are currently using FragmentName.class.getSimpleName(), this will cause confusion since Fragment1.class.getSimpleName() has been changed to a.class.getSimpleName() and Fragment2.class.getSimpleName() has been changed to a.class.getSimpleName(). Both TAG values will get perplexed as to which fragment is being called. Using FragmentName.class.getSimpleName() should thus be avoided. When your classes and methods are accessed dynamically, such as through reflection, you must ensure that your class or function is not obfuscated. For example, if you reference your code from an XML file (which normally uses reflection) and use Proguard at the same time, Proguard will change your class name to “a” or something similar, but in the XML code no changes will be made, and this will result in a class not found or method not found exception. As a result, always keep the class utilized via reflection in your Proguard rules.
When you construct your own view in an Android application, make sure the view class you’re creating is included in the Proguard rules, or you’ll get a class not found error. For example, if you create a simple view named com.geeksforgeeks.android and use it in your XML file, Proguard will alter your view class name to “a” or something similar, and your code in the XML is still using com.geeksforgeeks.android, but there is no such class since it has been changed to “a.” We use a lot of open-source libraries in our Android application. However, if you are using Proguard, you must also provide the Proguard rules of the libraries you are using, if any. You may locate the Proguard rules of a library by reading the README.md file of the library you used and adding the provided Proguard rules to your application. An example of the Retrofit library’s Proguard rules may be found here. You must be mindful of the effects of Proguard’s obfuscation when utilizing native code in your Android application, i.e. when using C++ code in your Android application. Proguard can only inspect the Java class in general. So, if you call a Java method from C++ code and that method has been obfuscated by Proguard, the JNI will throw a method not found error. Geek Tip: So, if you call any methods from native code, you must include such methods in the Proguard rules by including the following line in the proguard-rules.pro file: -keepclasseswithmembernames class your.class.name.here { native <methods>; } In Java, we can load resources from a JAR file, and certain libraries can also load resources from an APK file. This is not an issue, but if we use Proguard, Proguard will alter the names of these resource files, and these files will not be discovered in the required package. As a result, the names of classes that load resources from the APK should be kept in your Proguard rules.
When you’re finished optimizing your code, place all of the files in the root package to eliminate the need for fully qualified names like com.geeksforgeeks.android.example. To enable this feature, include the following line in your proguard-rules.pro file: -repackageclasses In this article, we learned about the precautions that should be followed when utilizing Proguard in our Android app. We discovered how to keep specific classes from being obfuscated by Proguard. We also learned how to add Proguard rules from third-party libraries.
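Putting the rules discussed in this article together, a proguard-rules.pro might be sketched as follows. All package and class names below are placeholders for illustration, not names from a real project:

```
# Keep a data/model class whole so its field names survive for the database
-keep class com.example.app.model.User { *; }

# Keep a class that is reached via reflection (e.g. referenced from XML)
-keep class com.example.app.view.CustomView { *; }

# Keep the names of native (JNI) methods so the C++ side can still find them
-keepclasseswithmembernames class com.example.app.NativeBridge {
    native <methods>;
}

# Repackage the remaining obfuscated classes into the root package
-repackageclasses
```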
[ { "code": null, "e": 28, "s": 0, "text": "\n19 Sep, 2021" }, { "code": null, "e": 653, "s": 28, "text": "While writing the code for your Android app, there may be some lines of code that are unnecessary and may increase the size of your app’s APK. Aside from useless code, there are numerous libraries that you may have incorporated in your program but did not use all of the functionalities that each library provides. Also, you may have developed some code that will be obsolete in the future and failed to delete it. These factors are to blame for the increased size of your application’s APK. Android has Proguard capability to minimize the size of the APK. Using Proguard aids in fulfilling the three functions listed below:" }, { "code": null, "e": 704, "s": 653, "text": "Minify the codeObfuscate the codeOptimize the code" }, { "code": null, "e": 720, "s": 704, "text": "Minify the code" }, { "code": null, "e": 739, "s": 720, "text": "Obfuscate the code" }, { "code": null, "e": 757, "s": 739, "text": "Optimize the code" }, { "code": null, "e": 1156, "s": 757, "text": "As a result, Proguard will assist you in shrinking the size of your APK, removing unnecessary classes and methods, and making your program tough to reverse engineer. So, while using Proguard, you need to keep a few things in mind. In this article, we will look at some of the aspects that must be considered while utilizing Proguard in our Android application. So let us begin with the first point." }, { "code": null, "e": 1364, "s": 1156, "text": "Almost every Android application contains a Data or Model class that is used to retrieve data from a remote database. So, if you use Proguard, the Progurde will modify the class, variable, and method names. " }, { "code": null, "e": 1757, "s": 1364, "text": "For example, if you have a class named User and it has variables such as firstName and lastName, the Proguard will rename your User class to A or something like that. 
Simultaneously, your variable names may be altered to some x, y, or z. As a result, if you try to put the data of the same variable in the database, you will obtain a run time error since no such field exists in the database." }, { "code": null, "e": 1951, "s": 1757, "text": "To circumvent this, instruct Proguard to preserve the variables and methods of the User class and not obfuscate them. To accomplish this, add the following line to your proguard-rules.pro file:" }, { "code": null, "e": 2010, "s": 1951, "text": "-keep class full.class.name.of.your.project.here.** { *; }" }, { "code": null, "e": 2125, "s": 2010, "text": "By adding the preceding line, the Proguard will do no obfuscation or optimization of the code in the chosen class." }, { "code": null, "e": 2233, "s": 2125, "text": "The majority of Proguard’s issues are caused by its obfuscation characteristic. The same as with Fragments." }, { "code": null, "e": 2902, "s": 2233, "text": "We define some String TAG as FragmentName.class whenever we create a Fragment. use getSimpleName() to get the TAG and utilize it in our Fragment. However, if you are using Proguard, this may cause a problem in your application. Assume we have two fragments in the firstfragment package named Fragement1 and Fragment2 in the secondfragment package. As a result, the Proguard will change the firstfragment package to “a” and the secondfragment package to “b,” but the Proguard can also change the names of Fragment1 to “a” (because the package and Fragments are different, so names can be the same) and Fragment2 to “a” as well, because Fragment2 is a different package." }, { "code": null, "e": 3304, "s": 2902, "text": "However, if you are currently using FragmentName.class.getSimpleName(), this will cause confusion since Fragment1.class.getSimpleName() has been changed to a.class.getSimpleName() and Fragment2.class.getSimpleName() has been changed to a.class.getSimpleName() (). 
Both TAG values will get perplexed as to which fragment is being called. Using FragmentName.class.getSimpleName() should thus be avoided." }, { "code": null, "e": 3754, "s": 3304, "text": "When your classes and methods are accessible dynamically, such as through reflection, you must ensure that your class or function is obfuscated. For example, if you reference your code from an XML file (which normally uses reflection) and use Proguard at the same time, Proguard will change your class name to “a” or something similar in the XML code, no changes will be made, and this will result in a class not found or method not found exception." }, { "code": null, "e": 3837, "s": 3754, "text": "As a result, always include the class utilizing reflection in your Proguard rules." }, { "code": null, "e": 4013, "s": 3837, "text": "When you construct your own view in an Android application, ensure sure the view class you’re creating is included in the Proguard rule, or you’ll get a class not found error." }, { "code": null, "e": 4311, "s": 4013, "text": "For example, if you create a simple view named com.geeksforgeeks.android and use it in your XML file, the Proguard will alter your view class name to “a” or something similar, and your code in the XML is using com.geeksforgeeks.android, but there is no such class since it has been changed to “a.”" }, { "code": null, "e": 4495, "s": 4311, "text": "We use a lot of open-source libraries in our Android application. However, if you are using Progurad, you must also provide the Proguard rules of the libraries you are using, if any. " }, { "code": null, "e": 4727, "s": 4495, "text": "You may locate the Proguard rules of a library by reading the README.md file of the library you used and adding the provided Proguard rules to your application. An example of the Retrofit library’s Proguard rules may be found here." 
}, { "code": null, "e": 5096, "s": 4727, "text": "You must be mindful of the effects of Proguard’s obfuscation when utilizing native code in your Android application, i.e. when using c++ code in your Android application. The Proguard can only inspect the Java class in general. So, if you call a Java method from C++ code and that function has been obfuscated by Proguard, the JNI will throw a method not found error.’" }, { "code": null, "e": 5169, "s": 5096, "text": "-keepclasseswithmembernames your.class.name.here * { native <methods>; }" }, { "code": null, "e": 5341, "s": 5169, "text": "Geek Tip: So, if you call any methods from native code, you must include such methods in the Proguard rules by including the following line in the proguard-rules.pro file:" }, { "code": null, "e": 5729, "s": 5341, "text": "In Java, we can load resources from a JAR file, and certain libraries can also load resources from an APK file. This is not an issue, but if we use Proguard, the Proguard will alter the names of these resources files, and these files will not be discovered in the required package. As a result, the names of classes that load resources from the APK should be kept in your Proguard rules." }, { "code": null, "e": 5987, "s": 5729, "text": "When you’re finished optimizing your code, place all of the files in the root package to eliminate the need for fully qualified names like com.geeksforgeeks.android.example. To enable this feature, include the following line in your proguard-rules.pro file:" }, { "code": null, "e": 6005, "s": 5987, "text": "-repackageclasses" }, { "code": null, "e": 6257, "s": 6005, "text": "In this article, we learned about the precautions that should be followed when utilizing Proguard in our Android app. We discovered how to maintain specific classes in the Proguard. We also learned how to add Proguard rules from third-party libraries." 
}, { "code": null, "e": 6264, "s": 6257, "text": "Picked" }, { "code": null, "e": 6272, "s": 6264, "text": "Android" }, { "code": null, "e": 6280, "s": 6272, "text": "Android" }, { "code": null, "e": 6378, "s": 6280, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 6444, "s": 6378, "text": "Difference Between Implicit Intent and Explicit Intent in Android" }, { "code": null, "e": 6486, "s": 6444, "text": "Retrofit with Kotlin Coroutine in Android" }, { "code": null, "e": 6513, "s": 6486, "text": "Flutter - BoxShadow Widget" }, { "code": null, "e": 6544, "s": 6513, "text": "Bundle in Android with Example" }, { "code": null, "e": 6578, "s": 6544, "text": "Animation in Android with Example" }, { "code": null, "e": 6612, "s": 6578, "text": "ExoPlayer in Android with Example" }, { "code": null, "e": 6658, "s": 6612, "text": "Bottom Navigation Bar in Android Using Kotlin" }, { "code": null, "e": 6709, "s": 6658, "text": "How to Post Data to API using Retrofit in Android?" }, { "code": null, "e": 6772, "s": 6709, "text": "Android | How to Create/Start a New Project in Android Studio?" } ]
Class Based vs Function Based Views – Which One is Better to Use in Django?
15 May, 2021 Django...We all know the popularity of this Python framework all over the world. This framework has made life easier for developers; it has become easier for them to build a full-fledged web application in Django. If you’re an experienced Django developer, then surely you are aware of the flow of the project: how things run in the boilerplate of Django and how data gets rendered to the user. Django works on the MVT concept, and we mainly work with two types of views in it... class-based views and function-based views. If you’re new to the Django framework, then surely you might have been using FBVs (Function Based Views). Initially, Django started with Function Based Views, but later Django added the concept of class-based views to avoid the redundancy of code in the boilerplate. It is a debate among developers which one is better to use in Django... class-based views or function-based views? Today in this blog we are going to discuss this topic in depth to get to know the pros and cons of both of the views. You can accomplish your task using both of them. Some tasks are best implemented using CBVs and some are best implemented in FBVs. Django views have mainly three requirements... They are callable. You can write the views either as function-based or class-based. While using CBVs, you inherit the method as_view() that uses the dispatch() method to call the method that is appropriate depending on the HTTP verb (get, post), etc. As a first positional argument, Django views should accept HttpRequest. They should return an HttpResponse object, or they should raise an exception. Now let’s compare both of the views and see the pros and cons of each. Function-based views are good for beginners. They are very easy to understand in comparison to class-based views. Initially, when you want to focus on core fundamentals, function-based views give you the advantage of understanding them easily. Let’s discuss some pros and cons.
Pros: Easy to read, understand and implement. Explicit code flow. Straightforward usage of decorators. Good for specialized functionality. Cons: Code redundancy and hard to extend. Conditional branching will be used to handle HTTP methods. As we have discussed, function-based views are easy to understand, but due to the code redundancy in a large Django project, you will find similar kinds of functions in the views. You will find a similar kind of code repeated unnecessarily. Here is an example of a function-based view... Python3 def example_create_view(request, pk): template_name = 'form.html' form_class = FormExample form = form_class() if request.method == 'POST': form = form_class(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect(reverse('list-view')) return render(request, template_name, {'form': form}) All the above cons of FBVs you won’t find in class-based views. You won’t have to write the same code over and over in your boilerplate. Class-based views are an alternative to function-based views. They are implemented in projects as Python objects instead of functions. Class-based views don’t replace function-based views, but they do have certain advantages over them. Class-based views take care of basic functionalities such as deleting or adding an item. Using class-based views is not easy if you’re a beginner. You will have to go through the documentation and study it properly. Once you understand function-based views in Django and your concepts are clear, you can move to class-based views. Let’s discuss class-based views in detail. Pros: The most significant advantage of class-based views is inheritance. In a class-based view, you can inherit another class, and it can be modified for different use cases. It helps you in following the DRY principle. You won’t have to write the same code over and over in your boilerplate.
Code reusability is possible in class-based views. You can extend class-based views, and you can add more functionalities using Mixins. Another advantage of using a class-based view is code structuring. In class-based views, you can use different class instance methods (instead of conditional branching statements inside function-based views) to handle different HTTP requests. Built-in generic class-based views. Cons: Complex to implement and harder to read. Implicit code flow. Extra import or method override required in view decorators. Below is an example of a class-based view... Python3 class MyCreateView(View): template_name = 'form.html' form_class = MyForm def get(self, request, *args, **kwargs): form = self.form_class() return render(request, self.template_name, {'form': form}) def post(self, request, *args, **kwargs): form = self.form_class(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect(reverse('list-view')) else: return render(request, self.template_name, {'form': form}) We have a little abstraction here: the method as_view() calls dispatch() to determine which class method needs to be executed, depending on the HTTP request. as_view() lets you override the class attributes in your URL confs. You can do something like the below... Python3 urlpatterns = [ url(r'^new/$', MyCreateView.as_view(), name='original-create-view'), url(r'^new_two/$', MyCreateView.as_view(template_name='other_form.html', form_class=MyOtherForm), name='modified-create-view') ] Once you start using the Django generic class-based views, you will be able to override helper methods like get_form_class and get_template_names. You can insert additional logic at these points instead of just overriding a class attribute. One of the good examples of this is ModelFormMixin, where the form_valid method is overridden and the saved instance is stored in self.object.
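The as_view()/dispatch() mechanism described above can be illustrated with a small framework-free sketch. This is plain Python that only mimics the idea; Django's real implementation (in django.views.generic.base) does considerably more:

```python
# Framework-free mimic of Django's class-based-view dispatch.
# Requests are plain dicts here purely for illustration.

class View:
    @classmethod
    def as_view(cls, **initkwargs):
        # Returns a plain function, so URLconfs treat CBVs and FBVs alike.
        def view(request):
            self = cls(**initkwargs)
            return self.dispatch(request)
        return view

    def dispatch(self, request):
        # Pick the handler matching the HTTP verb instead of branching
        # on request.method inside one big function.
        handler = getattr(self, request["method"].lower(), None)
        if handler is None:
            return "405 Method Not Allowed"
        return handler(request)


class GreetingView(View):
    greeting = "Hello"  # class attribute, overridable via as_view(...)

    def __init__(self, greeting=None):
        if greeting is not None:
            self.greeting = greeting

    def get(self, request):
        return f"{self.greeting}, {request['user']}"


view = GreetingView.as_view(greeting="Hi")
print(view({"method": "GET", "user": "reader"}))  # Hi, reader
```

The sketch shows the two points made in the article: each HTTP verb gets its own method instead of an if/else chain, and as_view() accepts keyword arguments so URL confs can override class attributes.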
Creating a new object, form handling, list views, pagination, archive views: all of these are common use cases in a web application. They come in Django core, and you can import them from the module django.views.generic. Generic class-based views are a great choice for all of these tasks, and they speed up the development process.

Django provides a set of views, mixins, and generic class-based views. Taking advantage of them, you can solve the most common tasks in web development. The main goal is to reduce the boilerplate: it saves you from writing the same code again and again. Modify MyCreateView to inherit from django.views.generic.CreateView.

Python3

from django.views.generic import CreateView

class MyCreateView(CreateView):
    model = MyModel
    form_class = MyForm

You might be wondering where all the code disappeared to. The answer is that it's all in django.views.generic.CreateView. You get a lot of functionality and shortcuts when you inherit from CreateView. You also buy into a sort of 'convention over configuration' style arrangement. Let's discuss a few more details...

By default, the template should reside at <modelname>/<modelname>_form.html. You can change this by setting the class attributes template_name and template_name_suffix.
We also need to declare the model and form_class attributes; the methods you inherit from CreateView rely on them.
You will have to declare success_url as a class attribute on the view, or specify get_absolute_url() in the model. This is important, because otherwise the view won't know where to redirect to after a successful form submission.
Define the fields in your form, or specify the fields class attribute on the view. In this example, we do the former.

Look at the example given below to see how it will look.

Python3

from django import forms
from .models import MyModel

class MyModelForm(forms.ModelForm):
    class Meta:
        model = MyModel
        fields = ['name', 'description']

It is still a debate among developers which one is better to use: class-based views or function-based views? We have discussed the pros and cons of both, but it totally depends on the context and the needs. As we have mentioned, class-based views don't replace function-based views. In some cases function-based views are better, and in some cases class-based views are better. For a list view, you can get things working by subclassing ListView and overriding a few attributes. In a scenario where you need to perform more complex operations, such as handling multiple forms at once, a function-based view will be a better choice.
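The 'convention over configuration' template lookup discussed above can be mimicked in plain Python. This is a simplified sketch for illustration only; Django's real get_template_names() also prefixes the app label and handles more cases:

```python
class CreateViewSketch:
    # Simplified stand-in for a generic view's template resolution.
    template_name = None
    template_name_suffix = '_form'

    def __init__(self, model_name):
        self.model_name = model_name

    def get_template_names(self):
        # An explicit template_name wins; otherwise fall back to the
        # "<modelname><suffix>.html" convention.
        if self.template_name:
            return [self.template_name]
        return ['%s%s.html' % (self.model_name.lower(),
                               self.template_name_suffix)]


view = CreateViewSketch('MyModel')
print(view.get_template_names())   # ['mymodel_form.html']

view.template_name = 'other_form.html'
print(view.get_template_names())   # ['other_form.html']
```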
How to display images in Angular2 ?
27 Jun, 2020

We can serve an image in Angular 2 by first placing the image in the assets directory of your project, where you can create a specific directory for the images or just keep them in the assets directory as is. Once you have put all the required images in the assets directory, open the specific component TypeScript (.ts) file where you want to serve the image. Now you can assign the URL of the image to a variable within the constructor so that it is initialized upon component creation.

Example: demo.component.ts

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-demo',
  templateUrl: './demo.component.html',
  styleUrls: ['./demo.component.css']
})
export class DemoComponent implements OnInit {
  ImagePath: string;

  constructor() {
    // image location
    this.ImagePath = '/assets/images/sample.jpg';
  }

  ngOnInit() {
  }
}

Now modify the component's template file to fetch the image.

demo.component.html

<!-- ...header and body content -->
<mat-card class="example-card">
  <mat-card-header class="example-header" style="padding-left: 40%;">
    <h2><span></span>GeeksforGeeks</h2>
  </mat-card-header>
  <mat-card-content>
    <img [src]="ImagePath" style="width: 600px; height: 400px;">
  </mat-card-content>
</mat-card>
<!-- ...body and footer content -->

Output:

You can also fetch a web image directly from a website or database (for example, Firebase) by providing the full valid URL of the image.

Note: In Angular 2, images and other media are by default fetched from the assets directory within your project folder (all other directories of the project are not public to the components by default). This can be changed by modifying angular-cli.json: within this JSON file, add your media directory to the assets array property.

// add a new directory or image to start fetching from that location
"assets": [
  "assets",
  "img",
  "favicon.ico",
  ".htaccess"
]
How to create Pandas DataFrame from nested XML?
28 Apr, 2021

In this article, we will learn how to create a Pandas DataFrame from nested XML. We will use the xml.etree.ElementTree module, which is a built-in module in Python for parsing or reading information from an XML file. The ElementTree represents the XML document as a tree, and an Element represents a single node of the tree.

Here, we will use some functions to process our code, which are stated below:

ElementTree.parse(xml_file): To read data from an XML file
root.iter('tag_name'): To iterate through the branches of the root node
ElementTree.fromstring(xml_string): To read data when the XML is passed as a string inside triple quotes in the Python code
prstree.findall('store'): To find all matching elements of the parsed XML ElementTree
node.attrib.get(attribute_name): To get an attribute's value
node.find(tag_name): To retrieve the text content of the mentioned child tag
pandas.DataFrame(): To convert the XML data to a DataFrame
list.append(): To append items to a list

Parse or read the XML file using the ElementTree.parse() function and get the root element.
Iterate through the root node to get each child node's 'slNo' attribute and extract the text values of each field (here foodItem, price, quantity, and discount).
Get the respective food items with specifications as a unit appended to a list (here the all_items list).
Convert the list into a DataFrame using the pandas.DataFrame() function and mention the column names within quotes separated by commas.
Print the DataFrame and it's done.
XML

<?xml version="1.0" encoding="UTF-8"?>
<Food>
  <Info>
    <Msg>Food Store items.</Msg>
  </Info>
  <store slNo="1">
    <foodItem>meat</foodItem>
    <price>200</price>
    <quantity>1kg</quantity>
    <discount>7%</discount>
  </store>
  <store slNo="2">
    <foodItem>fish</foodItem>
    <price>150</price>
    <quantity>1kg</quantity>
    <discount>5%</discount>
  </store>
  <store slNo="3">
    <foodItem>egg</foodItem>
    <price>100</price>
    <quantity>50 pieces</quantity>
    <discount>5%</discount>
  </store>
  <store slNo="4">
    <foodItem>milk</foodItem>
    <price>50</price>
    <quantity>1 litre</quantity>
    <discount>3%</discount>
  </store>
</Food>

Example 1:

In the code below we have parsed the XML file. Give the complete path where you have saved the XML file within the quotes. Here we need to use the ElementTree.parse() function to read the data from the XML file and then the getroot() function to get the root. Then follow the steps given above.

Python3

import xml.etree.ElementTree as ETree
import pandas as pd

# give the path where you saved the xml file
# inside the quotes
xmldata = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Anaconda3(64-bit)\\xmltopandas.xml"
prstree = ETree.parse(xmldata)
root = prstree.getroot()

# print(root)
store_items = []
all_items = []

for storeno in root.iter('store'):
    store_Nr = storeno.attrib.get('slNo')
    itemsF = storeno.find('foodItem').text
    price = storeno.find('price').text
    quan = storeno.find('quantity').text
    dis = storeno.find('discount').text

    store_items = [store_Nr, itemsF, price, quan, dis]
    all_items.append(store_items)

xmlToDf = pd.DataFrame(all_items, columns=[
    'SL No', 'ITEM_NUMBER', 'PRICE', 'QUANTITY', 'DISCOUNT'])

print(xmlToDf.to_string(index=False))

Output:

Note: The XML file should be saved in the same directory or folder where your Python code is saved.

Example 2:

We can also pass the XML content as a string inside triple quotes. In that case, we need to use the fromstring() function to read the string.
Get the root using the tag attribute and follow the same steps to convert it to a DataFrame as mentioned above.

Python3

import xml.etree.ElementTree as ETree
import pandas as pd

xmldata = '''<?xml version="1.0" encoding="UTF-8"?>
<Food>
  <Info>
    <Msg>Food Store items.</Msg>
  </Info>
  <store slNo="1">
    <foodItem>meat</foodItem>
    <price>200</price>
    <quantity>1kg</quantity>
    <discount>7%</discount>
  </store>
  <store slNo="2">
    <foodItem>fish</foodItem>
    <price>150</price>
    <quantity>1kg</quantity>
    <discount>5%</discount>
  </store>
  <store slNo="3">
    <foodItem>egg</foodItem>
    <price>100</price>
    <quantity>50 pieces</quantity>
    <discount>5%</discount>
  </store>
  <store slNo="4">
    <foodItem>milk</foodItem>
    <price>50</price>
    <quantity>1 litre</quantity>
    <discount>3%</discount>
  </store>
</Food>'''

prstree = ETree.fromstring(xmldata)
root = prstree.tag

# print(root)
store_items = []
all_items = []

for storeno in prstree.findall('store'):
    store_Nr = storeno.attrib.get('slNo')
    itemsF = storeno.find('foodItem').text
    price = storeno.find('price').text
    quan = storeno.find('quantity').text
    dis = storeno.find('discount').text

    store_items = [store_Nr, itemsF, price, quan, dis]
    all_items.append(store_items)

xmlToDf = pd.DataFrame(all_items, columns=[
    'SL No', 'ITEM_NUMBER', 'PRICE', 'QUANTITY', 'DISCOUNT'])

print(xmlToDf.to_string(index=False))

Output:
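The parsing core shared by both examples can be exercised without pandas. Below is a small stdlib-only sketch (the two-store XML is a trimmed-down version of the article's data); passing rows to pandas.DataFrame(rows, columns=[...]) then reproduces the DataFrame above:

```python
import xml.etree.ElementTree as ETree

# A trimmed-down version of the article's XML, for illustration.
xmldata = '''<Food>
  <store slNo="1">
    <foodItem>meat</foodItem>
    <price>200</price>
  </store>
  <store slNo="2">
    <foodItem>fish</foodItem>
    <price>150</price>
  </store>
</Food>'''

root = ETree.fromstring(xmldata)

# Same pattern as the article: attrib.get() for attributes,
# find(...).text for child-element text. Note that everything
# comes back as a string.
rows = [
    [store.attrib.get('slNo'),
     store.find('foodItem').text,
     store.find('price').text]
    for store in root.findall('store')
]

print(rows)   # [['1', 'meat', '200'], ['2', 'fish', '150']]
```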
[ { "code": null, "e": 54, "s": 26, "text": "\n28 Apr, 2021" }, { "code": null, "e": 383, "s": 54, "text": "In this article, we will learn how to create Pandas DataFrame from nested XML. We will use the xml.etree.ElementTree module, which is a built-in module in Python for parsing or reading information from the XML file. The ElementTree represents the XML document as a tree and the Element represents only a single node of the tree." }, { "code": null, "e": 459, "s": 383, "text": "Here, we will use some functions to process out code which is stated below:" }, { "code": null, "e": 520, "s": 459, "text": "ElementTree.parse( XML_file) : To read data from an XML file" }, { "code": null, "e": 593, "s": 520, "text": "root.iter(‘root_name’): To iterate through the branches of the root node" }, { "code": null, "e": 723, "s": 593, "text": "ElementTree.fromstring(XML_file) : To read data when XML code which is passed as a string inside triple quotes in the python code" }, { "code": null, "e": 804, "s": 723, "text": "prstree.findall(‘store’): To find all the elements of the parsed XML ElementTree" }, { "code": null, "e": 861, "s": 804, "text": "node.attribute.get(attribte_name ): To get the attribute" }, { "code": null, "e": 948, "s": 861, "text": "node.find(attribte_name): To retrieve the text content of the mentioned attribute_name" }, { "code": null, "e": 1008, "s": 948, "text": "pandas.DataFrame() : To convert the XML data to a DataFrame" }, { "code": null, "e": 1053, "s": 1008, "text": "list.append(): To append the items to a list" }, { "code": null, "e": 1142, "s": 1053, "text": "Parse or read the XML file using ElementTree.parse( ) function and get the root element." }, { "code": null, "e": 1315, "s": 1142, "text": "Iterate through the root node to get the child nodes attributes ‘SL NO’ (here) and extract the text values of each attribute (here foodItem, price, quantity, and discount)." 
}, { "code": null, "e": 1418, "s": 1315, "text": "Get the respective food items with specifications as a unit appended to a list(here all_items() list)." }, { "code": null, "e": 1550, "s": 1418, "text": "Convert the list into a DataFrame using pandas.DataFrame() function and mention the column names within quotes separated by commas." }, { "code": null, "e": 1585, "s": 1550, "text": "Print the DataFrame and it’s done." }, { "code": null, "e": 1589, "s": 1585, "text": "XML" }, { "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <Food> <Info> <Msg>Food Store items.</Msg> </Info> <store slNo=\"1\"> <foodItem>meat</foodItem> <price>200</price> <quantity>1kg</quantity> <discount>7%</discount> </store> <store slNo=\"2\"> <foodItem>fish</foodItem> <price>150</price> <quantity>1kg</quantity> <discount>5%</discount> </store> <store slNo=\"3\"> <foodItem>egg</foodItem> <price>100</price> <quantity>50 pieces</quantity> <discount>5%</discount> </store> <store slNo=\"4\"> <foodItem>milk</foodItem> <price>50</price> <quantity>1 litre</quantity> <discount>3%</discount> </store> </Food>", "e": 2535, "s": 1589, "text": null }, { "code": null, "e": 2546, "s": 2535, "text": "Example 1:" }, { "code": null, "e": 2831, "s": 2546, "text": "In this code below we have parsed the XML file. Give the complete path where you have saved the XML file within quotes. So here we need to use ElementTree.parse() function to read the data from the XML file and then the getroot() function to get the root. Then follow the steps given." 
}, { "code": null, "e": 2839, "s": 2831, "text": "Python3" }, { "code": "import xml.etree.ElementTree as ETreeimport pandas as pd # give the path where you saved the xml file# inside the quotesxmldata = \"C: \\\\ProgramData\\\\Microsoft\\\\ Windows\\\\Start Menu\\\\Programs\\\\ Anaconda3(64-bit)\\\\xmltopandas.xml\"prstree = ETree.parse(xmldata)root = prstree.getroot() # print(root)store_items = []all_items = [] for storeno in root.iter('store'): store_Nr = storeno.attrib.get('slNo') itemsF = storeno.find('foodItem').text price = storeno.find('price').text quan = storeno.find('quantity').text dis = storeno.find('discount').text store_items = [store_Nr, itemsF, price, quan, dis] all_items.append(store_items) xmlToDf = pd.DataFrame(all_items, columns=[ 'SL No', 'ITEM_NUMBER', 'PRICE', 'QUANTITY', 'DISCOUNT']) print(xmlToDf.to_string(index=False))", "e": 3646, "s": 2839, "text": null }, { "code": null, "e": 3654, "s": 3646, "text": "Output:" }, { "code": null, "e": 3751, "s": 3654, "text": "Note: The XML file should be saved in the same directory or folder where your Python code saved." }, { "code": null, "e": 3762, "s": 3751, "text": "Example 2:" }, { "code": null, "e": 4015, "s": 3762, "text": "We can also pass the XML content as a string inside triple quotes. In that case, we need to use the fromstring() function to read the string. Get the root using the ‘tag’ object and follow the same steps to convert it to a DataFrame as mentioned above." 
}, { "code": null, "e": 4023, "s": 4015, "text": "Python3" }, { "code": "import xml.etree.ElementTree as ETreeimport pandas as pd xmldata = '''<?xml version=\"1.0\" encoding=\"UTF-8\"?> <Food> <Info> <Msg>Food Store items.</Msg> </Info> <store slNo=\"1\"> <foodItem>meat</foodItem> <price>200</price> <quantity>1kg</quantity> <discount>7%</discount> </store> <store slNo=\"2\"> <foodItem>fish</foodItem> <price>150</price> <quantity>1kg</quantity> <discount>5%</discount> </store> <store slNo=\"3\"> <foodItem>egg</foodItem> <price>100</price> <quantity>50 pieces</quantity> <discount>5%</discount> </store> <store slNo=\"4\"> <foodItem>milk</foodItem> <price>50</price> <quantity>1 litre</quantity> <discount>3%</discount> </store> </Food>''' prstree = ETree.fromstring(xmldata)root = prstree.tag #print(root)store_items = []all_items = [] for storeno in prstree.findall('store'): store_Nr = storeno.attrib.get('slNo') itemsF= storeno.find('foodItem').text price= storeno.find('price').text quan= storeno.find('quantity').text dis= storeno.find('discount').text store_items = [store_Nr,itemsF,price,quan,dis] all_items.append(store_items) xmlToDf = pd.DataFrame(all_items,columns=[ 'SL No','ITEM_NUMBER','PRICE','QUANTITY','DISCOUNT']) print(xmlToDf.to_string(index=False))", "e": 5636, "s": 4023, "text": null }, { "code": null, "e": 5644, "s": 5636, "text": "Output:" }, { "code": null, "e": 5651, "s": 5644, "text": "Picked" }, { "code": null, "e": 5675, "s": 5651, "text": "Python pandas-dataFrame" }, { "code": null, "e": 5689, "s": 5675, "text": "Python-pandas" }, { "code": null, "e": 5696, "s": 5689, "text": "Python" }, { "code": null, "e": 5794, "s": 5696, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 5826, "s": 5794, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 5853, "s": 5826, "text": "Python Classes and Objects" }, { "code": null, "e": 5874, "s": 5853, "text": "Python OOPs Concepts" }, { "code": null, "e": 5897, "s": 5874, "text": "Introduction To PYTHON" }, { "code": null, "e": 5953, "s": 5897, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 5984, "s": 5953, "text": "Python | os.path.join() method" }, { "code": null, "e": 6026, "s": 5984, "text": "Check if element exists in list in Python" }, { "code": null, "e": 6068, "s": 6026, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 6107, "s": 6068, "text": "Python | Get unique values from a list" } ]
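The fromstring() workflow described in the XML example above can be sketched with just the standard library. This is a minimal illustrative snippet, not the article's own listing: the store/foodItem tag names mirror the article's sample data, and the pandas DataFrame step is omitted so it runs without pandas installed.

```python
import xml.etree.ElementTree as ET

# Inline XML mirroring the article's food-store sample
xmldata = """<Food>
  <store slNo="1"><foodItem>meat</foodItem><price>200</price></store>
  <store slNo="2"><foodItem>fish</foodItem><price>150</price></store>
</Food>"""

# fromstring() parses the string and returns the root element directly
root = ET.fromstring(xmldata)

# Walk every <store> element and collect its attribute plus child text
rows = [
    {"slNo": s.attrib.get("slNo"),
     "item": s.find("foodItem").text,
     "price": int(s.find("price").text)}
    for s in root.iter("store")
]

print(rows[0])
```

Each dict in `rows` corresponds to one table row; passing `rows` to `pd.DataFrame(rows)` would reproduce the tabular result shown above.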
Choose k array elements such that difference of maximum and minimum is minimized
06 Jul, 2022 Given an array of n integers and a positive number k. We are allowed to take any k integers from the given array. The task is to find the minimum possible value of the difference between maximum and minimum of K numbers. Examples: Input : arr[] = {10, 100, 300, 200, 1000, 20, 30} k = 3 Output : 20 20 is the minimum possible difference between any maximum and minimum of any k numbers. Given k = 3, we get the result 20 by selecting integers {10, 20, 30}. max(10, 20, 30) - min(10, 20, 30) = 30 - 10 = 20. Input : arr[] = {1, 2, 3, 4, 10, 20, 30, 40, 100, 200}. k = 4 Output : 3 The idea is to sort the array and choose k continuous integers. Why continuous? Let the chosen k integers be arr[0], arr[1], ...arr[r], arr[r+x]..., arr[k-1], all in increasing order but not continuous in the sorted array. This means there exists an integer p which lies between arr[r] and arr[r+x],. So if p is included and arr[0] is removed, then the new difference will be arr[r] – arr[1] whereas old difference was arr[r] – arr[0]. And we know arr[0] ≤ arr[1] ≤ ... ≤ arr[k-1] so minimum difference reduces or remains the same. If we perform the same procedure for other p like numbers, we get the minimum difference. Algorithm to solve the problem: Sort the Array.Calculate the maximum(k numbers) – minimum(k numbers) for each group of k consecutive integers.Return minimum of all values obtained in step 2. Sort the Array. Calculate the maximum(k numbers) – minimum(k numbers) for each group of k consecutive integers. Return minimum of all values obtained in step 2. Below is the implementation of above idea : C++ Java Python3 C# PHP Javascript // C++ program to find minimum difference of maximum// and minimum of K number.#include<bits/stdc++.h>using namespace std; // Return minimum difference of maximum and minimum// of k elements of arr[0..n-1].int minDiff(int arr[], int n, int k){ int result = INT_MAX; // Sorting the array. 
sort(arr, arr + n); // Find minimum value among all K size subarray. for (int i=0; i<=n-k; i++) result = min(result, arr[i+k-1] - arr[i]); return result;} // Driven Programint main(){ int arr[] = {10, 100, 300, 200, 1000, 20, 30}; int n = sizeof(arr)/sizeof(arr[0]); int k = 3; cout << minDiff(arr, n, k) << endl; return 0;} // Java program to find minimum difference// of maximum and minimum of K number.import java.util.*; class GFG { // Return minimum difference of// maximum and minimum of k// elements of arr[0..n-1].static int minDiff(int arr[], int n, int k) { int result = Integer.MAX_VALUE; // Sorting the array. Arrays.sort(arr); // Find minimum value among // all K size subarray. for (int i = 0; i <= n - k; i++) result = Math.min(result, arr[i + k - 1] - arr[i]); return result;} // Driver codepublic static void main(String[] args) { int arr[] = {10, 100, 300, 200, 1000, 20, 30}; int n = arr.length; int k = 3; System.out.println(minDiff(arr, n, k));}} // This code is contributed by Anant Agarwal. # Python program to find minimum# difference of maximum# and minimum of K number. # Return minimum difference# of maximum and minimum# of k elements of arr[0..n-1].def minDiff(arr,n,k): result = +2147483647 # Sorting the array. arr.sort() # Find minimum value among # all K size subarray. for i in range(n-k+1): result = int(min(result, arr[i+k-1] - arr[i])) return result # Driver code arr= [10, 100, 300, 200, 1000, 20, 30]n =len(arr)k = 3 print(minDiff(arr, n, k)) # This code is contributed# by Anant Agarwal. // C# program to find minimum// difference of maximum and// minimum of K number.using System; class GFG { // Return minimum difference of// maximum and minimum of k// elements of arr[0..n - 1].static int minDiff(int []arr, int n, int k){ int result = int.MaxValue; // Sorting the array. Array.Sort(arr); // Find minimum value among // all K size subarray. 
for (int i = 0; i <= n - k; i++) result = Math.Min(result, arr[i + k - 1] - arr[i]); return result;} // Driver codepublic static void Main() { int []arr = {10, 100, 300, 200, 1000, 20, 30}; int n = arr.Length; int k = 3; Console.WriteLine(minDiff(arr, n, k));}} // This code is contributed by vt_m. <?php// PHP program to find minimum// difference of maximum and// minimum of K number. // Return minimum difference// of maximum and minimum// of k elements of arr[0..n-1].function minDiff($arr, $n, $k){ $INT_MAX = 2147483647; $result = $INT_MAX ; // Sorting the array. sort($arr , $n); sort($arr); // Find minimum value among // all K size subarray. for ($i = 0; $i <= $n - $k; $i++) $result = min($result, $arr[$i + $k - 1] - $arr[$i]); return $result;} // Driver Code $arr = array(10, 100, 300, 200, 1000, 20, 30); $n = sizeof($arr); $k = 3; echo minDiff($arr, $n, $k); // This code is contributed by nitin mittal.?> <script>// javascript program to find minimum difference// of maximum and minimum of K number. // Return minimum difference of // maximum and minimum of k // elements of arr[0..n-1]. function minDiff(arr , n , k) { var result = Number.MAX_VALUE; // Sorting the array. arr.sort((a,b)=>a-b); // Find minimum value among // all K size subarray. for (i = 0; i <= n - k; i++) result = Math.min(result, arr[i + k - 1] - arr[i]); return result; } // Driver code var arr = [ 10, 100, 300, 200, 1000, 20, 30 ]; var n = arr.length; var k = 3; document.write(minDiff(arr, n, k)); // This code contributed by gauravrajput1</script> 20 Time Complexity: O(nlogn).Auxiliary Space: O(1) This article is contributed by Anuj Chauhan. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. 
vt_m nitin mittal GauravRajput1 sionc140720 shivamanandrj9 hardikkoriintern Amazon Arrays Sorting Amazon Arrays Sorting Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
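The sort-then-slide idea above can be condensed into a few lines of Python. This is an illustrative re-implementation (not one of the article's listings), checked against the sample inputs from the examples.

```python
def min_diff(arr, k):
    # Sort so that the best k picks are always k consecutive elements
    arr = sorted(arr)
    # Slide a window of size k and track max - min of each window
    return min(arr[i + k - 1] - arr[i] for i in range(len(arr) - k + 1))

# First example: picking {10, 20, 30} gives 30 - 10 = 20
print(min_diff([10, 100, 300, 200, 1000, 20, 30], 3))        # → 20

# Second example: picking {1, 2, 3, 4} gives 4 - 1 = 3
print(min_diff([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4))   # → 3
```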
[ { "code": null, "e": 52, "s": 24, "text": "\n06 Jul, 2022" }, { "code": null, "e": 273, "s": 52, "text": "Given an array of n integers and a positive number k. We are allowed to take any k integers from the given array. The task is to find the minimum possible value of the difference between maximum and minimum of K numbers." }, { "code": null, "e": 284, "s": 273, "text": "Examples: " }, { "code": null, "e": 693, "s": 284, "text": "Input : arr[] = {10, 100, 300, 200, 1000, 20, 30}\n k = 3\nOutput : 20\n20 is the minimum possible difference between any\nmaximum and minimum of any k numbers.\nGiven k = 3, we get the result 20 by selecting \nintegers {10, 20, 30}.\nmax(10, 20, 30) - min(10, 20, 30) = 30 - 10 = 20.\n\nInput : arr[] = {1, 2, 3, 4, 10, 20, 30, 40, \n 100, 200}.\n k = 4 \nOutput : 3" }, { "code": null, "e": 1315, "s": 693, "text": "The idea is to sort the array and choose k continuous integers. Why continuous? Let the chosen k integers be arr[0], arr[1], ...arr[r], arr[r+x]..., arr[k-1], all in increasing order but not continuous in the sorted array. This means there exists an integer p which lies between arr[r] and arr[r+x],. So if p is included and arr[0] is removed, then the new difference will be arr[r] – arr[1] whereas old difference was arr[r] – arr[0]. And we know arr[0] ≤ arr[1] ≤ ... ≤ arr[k-1] so minimum difference reduces or remains the same. If we perform the same procedure for other p like numbers, we get the minimum difference." }, { "code": null, "e": 1348, "s": 1315, "text": "Algorithm to solve the problem: " }, { "code": null, "e": 1507, "s": 1348, "text": "Sort the Array.Calculate the maximum(k numbers) – minimum(k numbers) for each group of k consecutive integers.Return minimum of all values obtained in step 2." }, { "code": null, "e": 1523, "s": 1507, "text": "Sort the Array." }, { "code": null, "e": 1619, "s": 1523, "text": "Calculate the maximum(k numbers) – minimum(k numbers) for each group of k consecutive integers." 
}, { "code": null, "e": 1668, "s": 1619, "text": "Return minimum of all values obtained in step 2." }, { "code": null, "e": 1713, "s": 1668, "text": "Below is the implementation of above idea : " }, { "code": null, "e": 1717, "s": 1713, "text": "C++" }, { "code": null, "e": 1722, "s": 1717, "text": "Java" }, { "code": null, "e": 1730, "s": 1722, "text": "Python3" }, { "code": null, "e": 1733, "s": 1730, "text": "C#" }, { "code": null, "e": 1737, "s": 1733, "text": "PHP" }, { "code": null, "e": 1748, "s": 1737, "text": "Javascript" }, { "code": "// C++ program to find minimum difference of maximum// and minimum of K number.#include<bits/stdc++.h>using namespace std; // Return minimum difference of maximum and minimum// of k elements of arr[0..n-1].int minDiff(int arr[], int n, int k){ int result = INT_MAX; // Sorting the array. sort(arr, arr + n); // Find minimum value among all K size subarray. for (int i=0; i<=n-k; i++) result = min(result, arr[i+k-1] - arr[i]); return result;} // Driven Programint main(){ int arr[] = {10, 100, 300, 200, 1000, 20, 30}; int n = sizeof(arr)/sizeof(arr[0]); int k = 3; cout << minDiff(arr, n, k) << endl; return 0;}", "e": 2405, "s": 1748, "text": null }, { "code": "// Java program to find minimum difference// of maximum and minimum of K number.import java.util.*; class GFG { // Return minimum difference of// maximum and minimum of k// elements of arr[0..n-1].static int minDiff(int arr[], int n, int k) { int result = Integer.MAX_VALUE; // Sorting the array. Arrays.sort(arr); // Find minimum value among // all K size subarray. 
for (int i = 0; i <= n - k; i++) result = Math.min(result, arr[i + k - 1] - arr[i]); return result;} // Driver codepublic static void main(String[] args) { int arr[] = {10, 100, 300, 200, 1000, 20, 30}; int n = arr.length; int k = 3; System.out.println(minDiff(arr, n, k));}} // This code is contributed by Anant Agarwal.", "e": 3138, "s": 2405, "text": null }, { "code": "# Python program to find minimum# difference of maximum# and minimum of K number. # Return minimum difference# of maximum and minimum# of k elements of arr[0..n-1].def minDiff(arr,n,k): result = +2147483647 # Sorting the array. arr.sort() # Find minimum value among # all K size subarray. for i in range(n-k+1): result = int(min(result, arr[i+k-1] - arr[i])) return result # Driver code arr= [10, 100, 300, 200, 1000, 20, 30]n =len(arr)k = 3 print(minDiff(arr, n, k)) # This code is contributed# by Anant Agarwal.", "e": 3687, "s": 3138, "text": null }, { "code": "// C# program to find minimum// difference of maximum and// minimum of K number.using System; class GFG { // Return minimum difference of// maximum and minimum of k// elements of arr[0..n - 1].static int minDiff(int []arr, int n, int k){ int result = int.MaxValue; // Sorting the array. Array.Sort(arr); // Find minimum value among // all K size subarray. for (int i = 0; i <= n - k; i++) result = Math.Min(result, arr[i + k - 1] - arr[i]); return result;} // Driver codepublic static void Main() { int []arr = {10, 100, 300, 200, 1000, 20, 30}; int n = arr.Length; int k = 3; Console.WriteLine(minDiff(arr, n, k));}} // This code is contributed by vt_m.", "e": 4404, "s": 3687, "text": null }, { "code": "<?php// PHP program to find minimum// difference of maximum and// minimum of K number. // Return minimum difference// of maximum and minimum// of k elements of arr[0..n-1].function minDiff($arr, $n, $k){ $INT_MAX = 2147483647; $result = $INT_MAX ; // Sorting the array. sort($arr , $n); sort($arr); // Find minimum value among // all K size subarray. 
for ($i = 0; $i <= $n - $k; $i++) $result = min($result, $arr[$i + $k - 1] - $arr[$i]); return $result;} // Driver Code $arr = array(10, 100, 300, 200, 1000, 20, 30); $n = sizeof($arr); $k = 3; echo minDiff($arr, $n, $k); // This code is contributed by nitin mittal.?>", "e": 5118, "s": 4404, "text": null }, { "code": "<script>// javascript program to find minimum difference// of maximum and minimum of K number. // Return minimum difference of // maximum and minimum of k // elements of arr[0..n-1]. function minDiff(arr , n , k) { var result = Number.MAX_VALUE; // Sorting the array. arr.sort((a,b)=>a-b); // Find minimum value among // all K size subarray. for (i = 0; i <= n - k; i++) result = Math.min(result, arr[i + k - 1] - arr[i]); return result; } // Driver code var arr = [ 10, 100, 300, 200, 1000, 20, 30 ]; var n = arr.length; var k = 3; document.write(minDiff(arr, n, k)); // This code contributed by gauravrajput1</script>", "e": 5855, "s": 5118, "text": null }, { "code": null, "e": 5859, "s": 5855, "text": "20\n" }, { "code": null, "e": 5907, "s": 5859, "text": "Time Complexity: O(nlogn).Auxiliary Space: O(1)" }, { "code": null, "e": 6204, "s": 5907, "text": "This article is contributed by Anuj Chauhan. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. 
" }, { "code": null, "e": 6209, "s": 6204, "text": "vt_m" }, { "code": null, "e": 6222, "s": 6209, "text": "nitin mittal" }, { "code": null, "e": 6236, "s": 6222, "text": "GauravRajput1" }, { "code": null, "e": 6248, "s": 6236, "text": "sionc140720" }, { "code": null, "e": 6263, "s": 6248, "text": "shivamanandrj9" }, { "code": null, "e": 6280, "s": 6263, "text": "hardikkoriintern" }, { "code": null, "e": 6287, "s": 6280, "text": "Amazon" }, { "code": null, "e": 6294, "s": 6287, "text": "Arrays" }, { "code": null, "e": 6302, "s": 6294, "text": "Sorting" }, { "code": null, "e": 6309, "s": 6302, "text": "Amazon" }, { "code": null, "e": 6316, "s": 6309, "text": "Arrays" }, { "code": null, "e": 6324, "s": 6316, "text": "Sorting" } ]
Python | Deleting all occurrences of character
25 Jun, 2022 These days string manipulation is very popular in Python, and due to its immutable character, sometimes, it becomes more important to know its working and hacks. This particular article solves the problem of deleting all occurrences of a character from a string. Let’s discuss ways in which this can be achieved. Method #1: Using translate() Usually this function is used to convert a particular character to some other character. By translating the resultant removal character to “None”, this function can perform this task. This function works only for Python2 Python # Python code to demonstrate working of# Deleting all occurrences of character# Using translate() # initializing stringtest_str = "GeeksforGeeks" # initializing removal characterrem_char = "e" # printing original stringprint("The original string is : " + str(test_str)) # Using translate()# Deleting all occurrences of characterres = test_str.translate(None, rem_char) # printing resultprint("The string after character deletion : " + str(res)) The original string is : GeeksforGeeks The string after character deletion : GksforGks Method #2: Using replace() This function works functionally quite similarly to the above function, but it is recommended due to several reasons. It can be used in newer versions of Python and is more efficient than the above function. We replace the empty string instead of None as above for using this function to perform this task. 
Python3 # Python3 code to demonstrate working of# Deleting all occurrences of character# Using replace() # initializing stringtest_str = "GeeksforGeeks" # initializing removal characterrem_char = "e" # printing original stringprint("The original string is : " + str(test_str)) # Using replace()# Deleting all occurrences of characterres = test_str.replace(rem_char, "") # printing resultprint("The string after character deletion : " + str(res)) The original string is : GeeksforGeeks The string after character deletion : GksforGks Method #3: Without using any built-in methods Python3 # Python3 code to demonstrate working of# Deleting all occurrences of character # initializing stringtest_str = "GeeksforGeeks" # initializing removal characterrem_char = "e" # printing original stringprint("The original string is : " + str(test_str)) new_str = ""# Deleting all occurrences of characterfor i in test_str: if(i != rem_char): new_str += ires = new_str# printing resultprint("The string after character deletion : " + str(res)) The original string is : GeeksforGeeks The string after character deletion : GksforGks kogantibhavya Python string-programs Python Python Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to Install PIP on Windows ? Python Classes and Objects Python OOPs Concepts Introduction To PYTHON How to drop one or multiple columns in Pandas Dataframe Defaultdict in Python Python | Get dictionary keys as a list Python | Convert a list to dictionary Python Program for Fibonacci numbers Python | Convert string dictionary to dictionary
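For completeness, the translate() idea from Method #1 can also be written for Python 3, where deletions are expressed through str.maketrans rather than a None argument. A small sketch:

```python
# Python 3 version of the translate()-based deletion:
# str.maketrans('', '', chars) builds a table that maps every
# character in `chars` to None, i.e. deletes it on translate().
test_str = "GeeksforGeeks"
rem_char = "e"

res = test_str.translate(str.maketrans('', '', rem_char))
print(res)  # → GksforGks
```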
[ { "code": null, "e": 28, "s": 0, "text": "\n25 Jun, 2022" }, { "code": null, "e": 342, "s": 28, "text": "These days string manipulation is very popular in Python, and due to its immutable character, sometimes, it becomes more important to know its working and hacks. This particular article solves the problem of deleting all occurrences of a character from a string. Let’s discuss ways in which this can be achieved. " }, { "code": null, "e": 593, "s": 342, "text": "Method #1: Using translate() Usually this function is used to convert a particular character to some other character. By translating the resultant removal character to “None”, this function can perform this task. This function works only for Python2 " }, { "code": null, "e": 600, "s": 593, "text": "Python" }, { "code": "# Python code to demonstrate working of# Deleting all occurrences of character# Using translate() # initializing stringtest_str = \"GeeksforGeeks & quot # initializing removal characterrem_char = \"e & quot # printing original stringprint(& quot The original string is : & quot + str(test_str)) # Using translate()# Deleting all occurrences of characterres = test_str.translate(None, rem_char) # printing resultprint(& quot The string after character deletion : & quot + str(res))", "e": 1103, "s": 600, "text": null }, { "code": null, "e": 1190, "s": 1103, "text": "The original string is : GeeksforGeeks\nThe string after character deletion : GksforGks" }, { "code": null, "e": 1525, "s": 1190, "text": "Method #2: Using replace() This function works functionally quite similarly to the above function, but it is recommended due to several reasons. It can be used in newer versions of Python and is more efficient than the above function. We replace the empty string instead of None as above for using this function to perform this task. 
" }, { "code": null, "e": 1533, "s": 1525, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Deleting all occurrences of character# Using replace() # initializing stringtest_str = \"GeeksforGeeks\" # initializing removal characterrem_char = \"e\" # printing original stringprint(\"The original string is : \" + str(test_str)) # Using replace()# Deleting all occurrences of characterres = test_str.replace(rem_char, \"\") # printing resultprint(\"The string after character deletion : \" + str(res))", "e": 1971, "s": 1533, "text": null }, { "code": null, "e": 2058, "s": 1971, "text": "The original string is : GeeksforGeeks\nThe string after character deletion : GksforGks" }, { "code": null, "e": 2104, "s": 2058, "text": "Method #3: Without using any built-in methods" }, { "code": null, "e": 2112, "s": 2104, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Deleting all occurrences of character # initializing stringtest_str = \"GeeksforGeeks\" # initializing removal characterrem_char = \"e\" # printing original stringprint(\"The original string is : \" + str(test_str)) new_str = \"\"# Deleting all occurrences of characterfor i in test_str: if(i != rem_char): new_str += ires = new_str# printing resultprint(\"The string after character deletion : \" + str(res))", "e": 2565, "s": 2112, "text": null }, { "code": null, "e": 2652, "s": 2565, "text": "The original string is : GeeksforGeeks\nThe string after character deletion : GksforGks" }, { "code": null, "e": 2666, "s": 2652, "text": "kogantibhavya" }, { "code": null, "e": 2689, "s": 2666, "text": "Python string-programs" }, { "code": null, "e": 2696, "s": 2689, "text": "Python" }, { "code": null, "e": 2712, "s": 2696, "text": "Python Programs" }, { "code": null, "e": 2810, "s": 2712, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2842, "s": 2810, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 2869, "s": 2842, "text": "Python Classes and Objects" }, { "code": null, "e": 2890, "s": 2869, "text": "Python OOPs Concepts" }, { "code": null, "e": 2913, "s": 2890, "text": "Introduction To PYTHON" }, { "code": null, "e": 2969, "s": 2913, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 2991, "s": 2969, "text": "Defaultdict in Python" }, { "code": null, "e": 3030, "s": 2991, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 3068, "s": 3030, "text": "Python | Convert a list to dictionary" }, { "code": null, "e": 3105, "s": 3068, "text": "Python Program for Fibonacci numbers" } ]
Make subplots span multiple grid rows and columns in Matplotlib
29 Oct, 2021 In this article, we are going to discuss how to make subplots span multiple grid rows and columns using the matplotlib module. For visualization in Python, the matplotlib library has long been the workhorse. It has held its own even after more agile competitors with simpler code interfaces, such as seaborn, plotly and bokeh, have appeared on the scene. Although Matplotlib may lack the interactive capabilities of these newcomers, it does a more than adequate job of visualizing our data exploration tasks in Exploratory Data Analysis (EDA). During EDA, one may come across situations where we need to show a group of related plots as part of a bigger picture to drive home our insight. The subplot capability of matplotlib handles this for us. However, in certain situations, we may want to combine several subplots and have a different aspect ratio for each subplot. Firstly, import the gridspec submodule of the matplotlib module. Python3 # import modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure()gs = GridSpec(4, 4, figure=fig) We first need to make an object of GridSpec, which permits us to specify the total number of rows and columns in the overall figure as arguments, along with a figure object. We store the GridSpec object in a variable called gs and specify that we want 4 rows and 4 columns in the overall figure. 
Below are some programs to make subplots span multiple grid rows and columns: Example 1: Python3 # explicit function to hide index# of subplots in the figuredef formatAxes(fig): for i, ax in enumerate(fig.axes): ax.text(0.5, 0.5, "ax%d" % (i+1), va="center", ha="center") ax.tick_params(labelbottom=False, labelleft=False) # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure(constrained_layout=True)gs = GridSpec(3, 3, figure=fig) # create sub plots as gridax1 = fig.add_subplot(gs[0, :])ax2 = fig.add_subplot(gs[1, :-1])ax3 = fig.add_subplot(gs[1:, -1])ax4 = fig.add_subplot(gs[-1, 0])ax5 = fig.add_subplot(gs[-1, -2]) # depict illustrationfig.suptitle("Grid-Spec")formatAxes(fig)plt.show() Output: Now, we need to specify the details of how each subplot will span the rows and columns in the overall figure. It is useful to make a rough sketch on paper of how you want the subplots to be laid out, so they don’t overlap. When done, we pass on this information through the GridSpec object we created. The row/column span information is passed in the same index notation we use for subsetting arrays/dataframes, with row and column index numbers starting from zero and using the ‘:’ to indicate a range. The GridSpec object with the index is passed to the add_subplot function of the figure object. We add an overall title for the figure and remove the ticks to visualize the layout better, as the goal here is to demonstrate how we can achieve subplots spanning multiple rows/columns. When you implement this, you will of course want to add your axis ticks, labels and so on from your dataframe, and adjust the spacing and figure size to accommodate these plot elements. Example 2: Here, we have created an explicit function to format the axes of the figure, i.e. hide the index values of the subplots. 
Python3 # explicit function to hide index# of sub plots in the figuredef formatAxes(fig): for i, ax in enumerate(fig.axes): ax.text(0.5, 0.5, "ax%d" % (i+1), va="center", ha="center") ax.tick_params(labelbottom=False, labelleft=False) # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure(constrained_layout=True)gs = GridSpec(3, 3, figure=fig) # create sub plots as gridax1 = fig.add_subplot(gs[0, :])ax2 = fig.add_subplot(gs[1, :-1])ax3 = fig.add_subplot(gs[1:, -1])ax4 = fig.add_subplot(gs[-1, 0])ax5 = fig.add_subplot(gs[-1, -2]) # depict illustrationfig.suptitle("GridSpec")formatAxes(fig)plt.show() Output: This may prove to be useful in multi-variable time series plots where we might want to show the time series plot stretching across the columns in the top row and other uni-variate or multi-variate visualizations in the other subplots below. You can customize how your jigsaw looks by specifying your rows/columns in the overall figure and the spans of your individual subplots. Example 3: Combining two subplots using subplots and GridSpec: here we need to combine two subplots in an axes layout created with subplots. We can get the GridSpec from the axes, then remove the covered axes and fill the gap with a new, bigger axes. Here we make a layout with the last two axes in the last column combined. See also Customizing Figure Layouts Using GridSpec and Other Functions. 
Python3 # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig, axes = plt.subplots(ncols=3, nrows=3,)gs = axes[1, 2].get_gridspec() # create sub plots as gridfor ax in axes[1:, -1]: ax.remove()axsbig = fig.add_subplot(gs[1:, -1])axsbig.annotate('Big Axes \nGridSpec[1:, -1]', (0.1, 0.5), xycoords='axes fraction', va='center', color="g") # depict illustrationfig.tight_layout()plt.show() Output: Example 4: In this example, we are going to add a subplot that spans two rows. Python3 # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure()gs = fig.add_gridspec(2, 2) # create sub plots as gridax1 = fig.add_subplot(gs[0, 0])ax2 = fig.add_subplot(gs[1, 0])ax3 = fig.add_subplot(gs[:, -1]) # depict illustrationplt.show() Output: Example 5: Here, we illustrate an application of subplots by depicting various graphs in different grids of the illustration. Python3 # Import librariesimport numpy as npimport matplotlib.pyplot as plt # Create a 2x2 grid of plotsfig, axes = plt.subplots(2, 2, constrained_layout=True)x = np.linspace(0, 1) # Modify top-left plotaxes[0,0].set_title("Top--Left", color="g")axes[0,0].plot(x, x) # Modify top-right plotaxes[0,1].set_title("Top--Right", color="r")axes[0,1].plot(x, x**2) # Modify bottom-left plotaxes[1,0].set_title("Bottom--Left", color="b")axes[1,0].plot(x, np.sin(3*x)) # Modify bottom-right plotaxes[1,1].set_title("Bottom--Right")axes[1,1].plot(x, 1/(1+x)) # Depict illustrationplt.show() Output: sumitgumber28 Picked Python-matplotlib Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
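As a compact recap of the spanning syntax used throughout the examples above, the sketch below builds a 2x2 grid whose left column is one tall axes. The Agg backend and the axes-count check are additions for running it headless; they are not part of the article's own examples.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no window is needed
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure()
gs = GridSpec(2, 2, figure=fig)

ax_left = fig.add_subplot(gs[:, 0])  # spans both rows of column 0
ax_top = fig.add_subplot(gs[0, 1])   # single cell, top right
ax_bot = fig.add_subplot(gs[1, 1])   # single cell, bottom right

# Three axes in total: one spanning, two single-cell
print(len(fig.axes))  # → 3
```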
[ { "code": null, "e": 28, "s": 0, "text": "\n29 Oct, 2021" }, { "code": null, "e": 151, "s": 28, "text": "In this article, we are going to discuss how to make subplots span multiple grid rows and columns using matplotlib module." }, { "code": null, "e": 647, "s": 151, "text": "For Representation in Python, matplotlib library has been the workhorse for a long while now. It has held its own even after more agile opponents with simpler code interface and abilities like seaborn, plotly, bokeh and so on have shown up on the scene. Despite the fact that Matplotlib may come up short on the intuitive capacities of the tenderfoots, it does an above and beyond the employment of imagining our information investigation undertakings in Exploratory Information Analysis(EDA). " }, { "code": null, "e": 1022, "s": 647, "text": "During EDA, one may run over circumstances where we need to show a gathering of related plots as a component of a bigger picture to commute home our knowledge. The subplot capacity of matplotlib takes care of the work for us. Be that as it may, in specific circumstances, we might need to join a few subplots and need to have distinctive angle proportions for each subplot. " }, { "code": null, "e": 1079, "s": 1022, "text": "Firstly, import gridspec submodule of matplotlib module." }, { "code": null, "e": 1087, "s": 1079, "text": "Python3" }, { "code": "# import modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure()gs = GridSpec(4, 4, figure=fig)", "e": 1241, "s": 1087, "text": null }, { "code": null, "e": 1422, "s": 1241, "text": "We first need to make an object of GridSpec which permits us to indicate the absolute number of lines and segments as contentions in the general figure alongside a figure object. " }, { "code": null, "e": 1557, "s": 1422, "text": "We store the GridSpec object in a variable called gs and indicate that we need to have 4 lines and 4 segments in the general figure. 
" }, { "code": null, "e": 1636, "s": 1557, "text": "Below are some programs to make subplots span multiple grid rows and columns: " }, { "code": null, "e": 1647, "s": 1636, "text": "Example 1:" }, { "code": null, "e": 1655, "s": 1647, "text": "Python3" }, { "code": "# explictit function to hide index# of subplots in the figuredef formatAxes(fig): for i, ax in enumerate(fig.axes): ax.text(0.5, 0.5, \"ax%d\" % (i+1), va=\"center\", ha=\"center\") ax.tick_params(labelbottom=False, labelleft=False) # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure(constrained_layout=True)gs = GridSpec(3, 3, figure=fig) # create sub plots as gridax1 = fig.add_subplot(gs[0, :])ax2 = fig.add_subplot(gs[1, :-1])ax3 = fig.add_subplot(gs[1, : -1])ax4 = fig.add_subplot(gs[-1, 0])ax5 = fig.add_subplot(gs[-1, -2]) # depict illustrationfig.suptitle(\"Grid-Spec\")formatAxes(fig)plt.show()", "e": 2348, "s": 1655, "text": null }, { "code": null, "e": 2356, "s": 2348, "text": "Output:" }, { "code": null, "e": 2989, "s": 2356, "text": "Presently, we need to determine the subtleties of how each subplot will traverse the lines and segments in the general figure. It is valuable to make a harsh sketch on paper regarding how you need the subplots to be spread out, so they don’t cover. When done, we pass on this data through the GridSpec object we made. The line/segment length information is passed in similar list documentation we use for subsetting exhibits/dataframes with lines and segment list numbers beginning from zero and utilizing the ‘:’ to indicate the range. The GridSpec object with the file is passed to the add_subplot capacity of the figure object. " }, { "code": null, "e": 3384, "s": 2989, "text": "We add a general title for the figure and eliminate the ticks to envision the format better as the goal here is to exhibit how we can accomplish subplots spreading over different lines/segments. 
At the point when you actualize this, clearly, you will need to add your pivot ticks, names and so forth from your dataframe and change the dividing and figure size to oblige these plot components. " }, { "code": null, "e": 3395, "s": 3384, "text": "Example 2:" }, { "code": null, "e": 3512, "s": 3395, "text": "Here, we have created an explicit function to format axes of the figure i.e. hide the index values of the sub plots." }, { "code": null, "e": 3520, "s": 3512, "text": "Python3" }, { "code": "# explictit function to hide index# of sub plots in the figuredef formatAxes(fig): for i, ax in enumerate(fig.axes): ax.text(0.5, 0.5, \"ax%d\" % (i+1), va=\"center\", ha=\"center\") ax.tick_params(labelbottom=False, labelleft=False) # import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure(constrained_layout=True)gs = GridSpec(3, 3, figure=fig) # create sub plots as gridax1 = fig.add_subplot(gs[0, :])ax2 = fig.add_subplot(gs[1, :-1])ax3 = fig.add_subplot(gs[1:, -1])ax4 = fig.add_subplot(gs[-1, 0])ax5 = fig.add_subplot(gs[-1, -2]) # depict illustrationfig.suptitle(\"GridSpec\")formatAxes(fig)plt.show()", "e": 4211, "s": 3520, "text": null }, { "code": null, "e": 4223, "s": 4215, "text": "Output:" }, { "code": null, "e": 4616, "s": 4227, "text": "This may prove to be useful in multi-variable time arrangement plots where we might need to show the time arrangement plot extending across the sections in the top column and other uni-variate, multi-variate perception in the other subplots underneath. You can tweak how your jigsaw looks like by indicating your line/sections in the general figure and ranges of your individual subplots." }, { "code": null, "e": 4629, "s": 4618, "text": "Example 3:" }, { "code": null, "e": 4989, "s": 4631, "text": "Combining two subplots utilizing subplots and GridSpec, here we need to consolidate two subplots in a tomahawks format made with subplots. 
We can get the GridSpec from the tomahawks and afterward eliminate the covered tomahawks and fill the hole with another greater tomahawks. Here we make a format with the last two tomahawks in the last section joined. " }, { "code": null, "e": 5078, "s": 4991, "text": "See additionally Modifying Figure Formats Utilizing GridSpec and Different Capacities." }, { "code": null, "e": 5088, "s": 5080, "text": "Python3" }, { "code": "# import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig, axes = plt.subplots(ncols=3, nrows=3,)gs = axes[1, 2].get_gridspec() # create sub plots as gridfor ax in axes[1:, -1]: ax.remove()axsbig = fig.add_subplot(gs[1:, -1])axsbig.annotate('Big Axes \\nGridSpec[1:, -1]', (0.1, 0.5), xycoords='axes fraction', va='center', color=\"g\") # depict illustrationfig.tight_layout()plt.show()", "e": 5579, "s": 5088, "text": null }, { "code": null, "e": 5587, "s": 5579, "text": "Output:" }, { "code": null, "e": 5598, "s": 5587, "text": "Example 4:" }, { "code": null, "e": 5666, "s": 5598, "text": "In this example, we are going to add a subplot that spans two rows." }, { "code": null, "e": 5674, "s": 5666, "text": "Python3" }, { "code": "# import required modulesimport matplotlib.pyplot as pltfrom matplotlib.gridspec import GridSpec # create objectsfig = plt.figure()gridspan = fig.add_gridspec(2, 2) # create sub plots as gridax1 = fig.add_subplot(gs[0, 0])ax2 = fig.add_subplot(gs[1, 0])ax3 = fig.add_subplot(gs[:, -1]) # depict illustrationplt.show()", "e": 5992, "s": 5674, "text": null }, { "code": null, "e": 6000, "s": 5992, "text": "Output:" }, { "code": null, "e": 6011, "s": 6000, "text": "Example 5:" }, { "code": null, "e": 6126, "s": 6011, "text": "Here, we illustrate an application of subplots by depicting various graphs in different grids of the illustration." 
}, { "code": null, "e": 6134, "s": 6126, "text": "Python3" }, { "code": "# Import librariesimport numpy as npimport matplotlib.pyplot as plt # Create a 2x2 grid of plotsfig, axes = plt.subplots(2, 2, constrained_layout=True)a = np.linspace(0, 1) # Modify top-left plotaxes[0,0].set_title(\"Top--Left\", color=\"g\")axes[0,0].plot(x, x) # Modify top-right plotaxes[0,1].set_title(\"Top--Right\", color=\"r\")axes[0,1].plot(x, x**2) # Modify bottom-left plotaxes[1,0].set_title(\"Bottom--Left\", color=\"b\")axes[1,0].plot(x, np.sin(3*x)) # Modify bottom-right plotaxes[1,1].set_title(\"Bottom--Right\")axes[1,1].plot(x, 1/(1+x)) # Depict illustrationplt.show()", "e": 6707, "s": 6134, "text": null }, { "code": null, "e": 6719, "s": 6711, "text": "Output:" }, { "code": null, "e": 6737, "s": 6723, "text": "sumitgumber28" }, { "code": null, "e": 6744, "s": 6737, "text": "Picked" }, { "code": null, "e": 6762, "s": 6744, "text": "Python-matplotlib" }, { "code": null, "e": 6769, "s": 6762, "text": "Python" } ]
How to Verify the Distribution of Data using Q-Q Plots? | by Satyam Kumar | Towards Data Science
A Q-Q plot, or Quantile-Quantile plot, is a graphical method to verify the distribution of any random variable, such as normal, exponential, lognormal, etc. It is a statistical approach to observe the nature of any distribution.

For example, if a given distribution needs to be verified as a normal distribution or not, we run a statistical analysis and compare the unknown distribution with a known normal distribution. Then, by observing the results of the Q-Q plot, we can confirm whether the given distribution is normally distributed or not.

Given an unknown random variable.

Find each integral percentile value, i.e. 100 z-values.

Generate a known random distribution and follow steps 1–2 for this distribution too.

Plotting Q-Q plot

Given a random distribution that needs to be verified as a normal/gaussian distribution or not. For understanding, we will name this unknown distribution X, and the known normal distribution Y.

X = np.random.normal(loc=50, scale=25, size=1000)

We are generating a normal distribution having 1000 values with mean=50 and standard deviation=25.

X_100 = []
for i in range(1, 101):
    X_100.append(np.percentile(X, i))

Compute each integral percentile (1%, 2%, 3%, . . . , 99%, 100%) value of the X random distribution and store it in X_100.

Y = np.random.normal(loc=0, scale=1, size=1000)

Generating a normal distribution having 1000 values with mean=0 and standard deviation=1, which needs to be compared with the unknown distribution X to verify whether X is distributed normally or not.

Y_100 = []
for i in range(1, 101):
    Y_100.append(np.percentile(Y, i))

Compute each integral percentile (1%, 2%, 3%, . . . , 99%, 100%) value of the Y random distribution and store it in Y_100.
Plot a scatter plot of the above obtained 100 percentile values of the unknown distribution against those of the normal distribution. Here X is the unknown distribution that is compared to Y, the normal distribution.

For a Q-Q plot, if the scatter points in the plot lie on a straight line, then both random variables have the same distribution; otherwise, they have different distributions.

From the above Q-Q plot, it is observed that X is normally distributed.

If X is not normally distributed and has some other distribution, then if a Q-Q plot is plotted between X and a normal distribution, the scatter points will not lie on a straight line.

Here, X is a log-normal distribution, which is compared to a normal distribution; hence the scatter points in the Q-Q plot do not lie on a straight line.

Here are 4 Q-Q plots for 4 different conditions of the X and Y distributions.

A Q-Q plot can be used to compare any two distributions, and can be used to verify an unknown distribution by comparing it with a known distribution. A major limitation of this method is the requirement of a large set of data points, since drawing conclusions from fewer data points would not be wise. By observing the Q-Q plot, one can predict whether the two distributions are the same or not.

Thank You for Reading
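The percentile-matching procedure above can be sketched without numpy or matplotlib as well. The snippet below is a minimal illustration using only the Python standard library (the helper name `pearson` is our own, not from the article or any plotting API): it draws two samples, matches their quantiles, and uses the Pearson correlation of the quantile pairs as a rough numeric stand-in for "the scatter points lie on a straight line".

```python
# Minimal stdlib sketch of the Q-Q idea: compare quantiles of two samples.
# "pearson" is an illustrative helper, not a library function.
import random
from statistics import quantiles, mean

def pearson(xs, ys):
    # plain Pearson correlation of two equal-length sequences
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(0)
X = [random.gauss(50, 25) for _ in range(1000)]  # unknown sample
Y = [random.gauss(0, 1) for _ in range(1000)]    # reference normal sample

# 99 interior percentile cut points of each sample (like X_100 / Y_100 above)
X_q = quantiles(X, n=100)
Y_q = quantiles(Y, n=100)

r = pearson(X_q, Y_q)
print(round(r, 4))  # close to 1.0: the Q-Q points are nearly collinear
```

Since both samples here are drawn from normal distributions, their quantiles are (up to noise) related by a straight line, so the correlation comes out very close to 1; for the log-normal-versus-normal case discussed above it would be noticeably lower.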
[ { "code": null, "e": 400, "s": 172, "text": "A Q-Q plot, or Quantile-Quantile plot, is a graphical method to verify the distribution of any random variable such as normal, exponential, lognormal, etc. It is a statistical approach to observe the nature of any distribution." }, { "code": null, "e": 715, "s": 400, "text": "For example, if given a distribution need to be verified if it is a normal distribution or not, we run statistical analysis and compare the unknown distribution with a known normal distribution. Then by observing the results of the Q-Q plot, we can confirm if the given distribution is normally distributed or not." }, { "code": null, "e": 902, "s": 715, "text": "Given an unknown random variable.Find each integral percentile value or 100 z-values.Generate a known random distribution and follow steps 1–2 for this distribution too.Plotting Q-Q plot" }, { "code": null, "e": 936, "s": 902, "text": "Given an unknown random variable." }, { "code": null, "e": 989, "s": 936, "text": "Find each integral percentile value or 100 z-values." }, { "code": null, "e": 1074, "s": 989, "text": "Generate a known random distribution and follow steps 1–2 for this distribution too." }, { "code": null, "e": 1092, "s": 1074, "text": "Plotting Q-Q plot" }, { "code": null, "e": 1292, "s": 1092, "text": "Given a random distribution, that needs to be verified if it is a normal/gaussian distribution or not. For understanding, we will name this unknown distribution X, and known normal distribution as Y." }, { "code": null, "e": 1342, "s": 1292, "text": "X = np.random.normal(loc=50, scale=25, size=1000)" }, { "code": null, "e": 1441, "s": 1342, "text": "we are generating a normal distribution having 1000 values with mean=50 and standard deviation=25." }, { "code": null, "e": 1511, "s": 1441, "text": "X_100 = []for i in range(1,101): X_100.append(np.percentile(X, i))" }, { "code": null, "e": 1630, "s": 1511, "text": "Compute each integral percentile (1%, 2%, 3%, . . . 
, 99%, 100%) value of X random distribution and store it in X_100." }, { "code": null, "e": 1678, "s": 1630, "text": "Y = np.random.normal(loc=0, scale=1, size=1000)" }, { "code": null, "e": 1885, "s": 1678, "text": "Generating a normal distribution having 1000 values with mean=0 and standard deviation=1 which need to be compared with the unknown distribution X to verify if X distribution is distributed normally or not." }, { "code": null, "e": 1953, "s": 1885, "text": "Y_100 = []for i in range(101): Y_100.append(np.percentile(Y, i))" }, { "code": null, "e": 2073, "s": 1953, "text": "Compute each integral percentile (1%, 2%, 3%, . . . , 99%, 100%) value of Y random distributions and store it in Y_100." }, { "code": null, "e": 2190, "s": 2073, "text": "Plot a scatter plot for the above obtained 100 percentile values of unknown distribution to the normal distribution." }, { "code": null, "e": 2272, "s": 2190, "text": "Here X — is the unknown distribution that is compared to Y — normal distribution." }, { "code": null, "e": 2439, "s": 2272, "text": "For a Q-Q Plot, if the scatter points in the plot lie in a straight line, then both the random variable have same distribution, else they have different distribution." }, { "code": null, "e": 2511, "s": 2439, "text": "From the above Q-Q plot, it is observed that X is normally distributed." }, { "code": null, "e": 2700, "s": 2511, "text": "If X is not normally distributed and it has some other distribution, then if the Q-Q plot is plotted between X and a normal distribution the scatter points will not lie in a straight line." }, { "code": null, "e": 2863, "s": 2700, "text": "Here, X distributed is a log-normal distribution, which is compared to a normal distribution, hence the scatter points in the Q-Q plot are not in a straight line." }, { "code": null, "e": 2936, "s": 2863, "text": "Here are 4 Q-Q plots for 4 different conditions of X and Y distribution." 
}, { "code": null, "e": 3324, "s": 2936, "text": "Q-Q plot can be used to compare any two distribution and can be used to verify an unknown distribution by comparing it with a known distribution. There is a major limitation for this method that is the requirement of a large set of data points, as concluding fewer data would not be a wise decision. By observing the Q-Q plot one can predict if the two distributions are the same or not." } ]
1's and 2's complement of a Binary Number - GeeksforGeeks
02 Nov, 2021

Given a binary number as a string, print its 1’s and 2’s complements.

1’s complement of a binary number is another binary number obtained by toggling all bits in it, i.e., transforming the 0 bit to 1 and the 1 bit to 0.

Examples:

1's complement of "0111" is "1000"
1's complement of "1100" is "0011"

2’s complement of a binary number is obtained by adding 1 to the 1’s complement of the binary number.

Examples:

2's complement of "0111" is "1001"
2's complement of "1100" is "0100"

Step 1: Start from the least significant bit and traverse left until you find a 1. Until you find 1, the bits stay the same.

Step 2: Once you have found 1, let that 1 stay as it is.

Step 3: Flip all the bits to the left of that 1.

Suppose we need to find the 2's complement of 100100:

Step 1: Traverse from the right and let the bits stay the same until you find 1. Here x marks a bit that is not known yet. Answer = xxxx00

Step 2: You found 1. Let it stay the same. Answer = xxx100

Step 3: Flip all the bits to the left of the 1. Answer = 011100.

Hence, the 2's complement of 100100 is 011100.

For one’s complement, we simply need to flip all bits. For 2’s complement, we first find the one’s complement. We traverse the one’s complement starting from the LSB (least significant bit) and look for 0. We flip all 1’s (change them to 0) until we find a 0. Finally, we flip the found 0. For example, the 2’s complement of “01000” is “11000” (note that we first find the one’s complement of 01000 as 10111). If there are all 1’s (in the one’s complement), we add an extra 1 to the string. For example, the 2’s complement of “000” is “1000” (the 1’s complement of “000” is “111”).

Below is the implementation.

C++

// C++ program to print 1's and 2's complement of
// a binary number
#include <bits/stdc++.h>
using namespace std;

// Returns '0' for '1' and '1' for '0'
char flip(char c) { return (c == '0') ? '1' : '0'; }

// Print 1's and 2's complement of binary number
// represented by "bin"
void printOneAndTwosComplement(string bin)
{
    int n = bin.length();
    int i;

    string ones, twos;
    ones = twos = "";

    // for ones complement flip every bit
    for (i = 0; i < n; i++)
        ones += flip(bin[i]);

    // for two's complement go from right to left in
    // ones complement and if we get 1 make, we make
    // them 0 and keep going left when we get first
    // 0, make that 1 and go out of loop
    twos = ones;
    for (i = n - 1; i >= 0; i--) {
        if (ones[i] == '1')
            twos[i] = '0';
        else {
            twos[i] = '1';
            break;
        }
    }

    // If No break : all are 1 as in 111 or 11111;
    // in such case, add extra 1 at beginning
    if (i == -1)
        twos = '1' + twos;

    cout << "1's complement: " << ones << endl;
    cout << "2's complement: " << twos << endl;
}

// Driver program
int main()
{
    string bin = "1100";
    printOneAndTwosComplement(bin);
    return 0;
}

Java

// Java program to print 1's and 2's complement of
// a binary number

class GFG {

    // Returns '0' for '1' and '1' for '0'
    static char flip(char c) { return (c == '0') ? '1' : '0'; }

    // Print 1's and 2's complement of binary number
    // represented by "bin"
    static void printOneAndTwosComplement(String bin)
    {
        int n = bin.length();
        int i;

        String ones = "", twos = "";

        // for ones complement flip every bit
        for (i = 0; i < n; i++) {
            ones += flip(bin.charAt(i));
        }

        // for two's complement go from right to left in
        // ones complement and if we get 1 make, we make
        // them 0 and keep going left when we get first
        // 0, make that 1 and go out of loop
        twos = ones;
        for (i = n - 1; i >= 0; i--) {
            if (ones.charAt(i) == '1') {
                twos = twos.substring(0, i) + '0'
                       + twos.substring(i + 1);
            }
            else {
                twos = twos.substring(0, i) + '1'
                       + twos.substring(i + 1);
                break;
            }
        }

        // If No break : all are 1 as in 111 or 11111;
        // in such case, add extra 1 at beginning
        if (i == -1) {
            twos = '1' + twos;
        }

        System.out.println("1's complement: " + ones);
        System.out.println("2's complement: " + twos);
    }

    // Driver code
    public static void main(String[] args)
    {
        String bin = "1100";
        printOneAndTwosComplement(bin);
    }
}

// This code contributed by Rajput-Ji

Python3

# Python3 program to print 1's and 2's
# complement of a binary number

# Returns '0' for '1' and '1' for '0'
def flip(c):
    return '1' if (c == '0') else '0'

# Print 1's and 2's complement of
# binary number represented by "bin"
def printOneAndTwosComplement(bin):

    n = len(bin)
    ones = ""

    # for ones complement flip every bit
    for i in range(n):
        ones += flip(bin[i])

    # for two's complement go from right
    # to left in ones complement and if
    # we get 1, we make it 0 and keep
    # going left; when we get the first
    # 0, make that 1 and go out of loop
    twos = list(ones)
    i = n - 1
    while i >= 0:
        if ones[i] == '1':
            twos[i] = '0'
        else:
            twos[i] = '1'
            break
        i -= 1

    # If no break: all are 1 as in 111 or 11111;
    # in such case, add extra 1 at beginning
    if i == -1:
        twos.insert(0, '1')

    print("1's complement: ", *ones, sep="")
    print("2's complement: ", *twos, sep="")

# Driver Code
if __name__ == '__main__':
    bin = "1100"
    printOneAndTwosComplement(bin.strip(""))

# This code is contributed
# by SHUBHAMSINGH10

C#

// C# program to print 1's and 2's complement of
// a binary number
using System;

class GFG {

    // Returns '0' for '1' and '1' for '0'
    static char flip(char c) { return (c == '0') ? '1' : '0'; }

    // Print 1's and 2's complement of binary number
    // represented by "bin"
    static void printOneAndTwosComplement(String bin)
    {
        int n = bin.Length;
        int i;

        String ones = "", twos = "";

        // for ones complement flip every bit
        for (i = 0; i < n; i++) {
            ones += flip(bin[i]);
        }

        // for two's complement go from right to left in
        // ones complement and if we get 1 make, we make
        // them 0 and keep going left when we get first
        // 0, make that 1 and go out of loop
        twos = ones;
        for (i = n - 1; i >= 0; i--) {
            if (ones[i] == '1') {
                twos = twos.Substring(0, i) + '0'
                       + twos.Substring(i + 1, twos.Length - (i + 1));
            }
            else {
                twos = twos.Substring(0, i) + '1'
                       + twos.Substring(i + 1, twos.Length - (i + 1));
                break;
            }
        }

        // If No break : all are 1 as in 111 or 11111;
        // in such case, add extra 1 at beginning
        if (i == -1) {
            twos = '1' + twos;
        }

        Console.WriteLine("1's complement: " + ones);
        Console.WriteLine("2's complement: " + twos);
    }

    // Driver code
    public static void Main(String[] args)
    {
        String bin = "1100";
        printOneAndTwosComplement(bin);
    }
}

// This code has been contributed by 29AjayKumar

Javascript

<script>

// Javascript program to print 1's and 2's complement of
// a binary number

// Returns '0' for '1' and '1' for '0'
function flip(c) { return (c == '0') ? '1' : '0'; }

// Print 1's and 2's complement of binary number
// represented by "bin"
function printOneAndTwosComplement(bin)
{
    var n = bin.length;
    var i;

    var ones, twos;
    ones = twos = "";

    // for ones complement flip every bit
    for (i = 0; i < n; i++)
        ones += flip(bin[i]);

    // for two's complement go from right to left in
    // ones complement and if we get 1 make, we make
    // them 0 and keep going left when we get first
    // 0, make that 1 and go out of loop
    twos = ones.split('');
    for (i = n - 1; i >= 0; i--) {
        if (ones[i] == '1')
            twos[i] = '0';
        else {
            twos[i] = '1';
            break;
        }
    }
    twos = twos.join('');

    // If No break : all are 1 as in 111 or 11111;
    // in such case, add extra 1 at beginning
    if (i == -1)
        twos = '1' + twos;

    document.write("1's complement: " + ones + "<br>");
    document.write("2's complement: " + twos + "<br>");
}

// Driver program
var bin = "1100";
printOneAndTwosComplement(bin);

</script>

Output:

1's complement: 0011
2's complement: 0100

Time Complexity: O(n)

Auxiliary Space: O(1)

Thanks to Utkarsh Trivedi for the above solution.

As a side note, signed numbers generally use 2’s complement representation. Positive values are stored as is, and negative values are stored in their 2’s complement form. One extra bit is required to indicate whether the number is positive or negative. For example, char is 8 bits in C. If 2’s complement representation is used for char, then 127 is stored as is, i.e., 01111111, where the first 0 indicates positive, but -127 is stored as 10000001.

Related Post: Efficient method for 2’s complement of a binary string

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.

References:
http://qa.geeksforgeeks.org/6439/write-program-calculate-ones-and-twos-complement-of-number
http://geeksquiz.com/whats-difference-between-1s-complement-and-2s-complement/
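For a fixed width of n bits, both complements can also be obtained with integer arithmetic instead of character scanning: the 1's complement of a value v is (2^n - 1) XOR v, and the 2's complement is (2^n - v) mod 2^n. The sketch below uses our own helper names (`ones_comp` / `twos_comp`, not a library API) and cross-checks against the examples above. One caveat: with pure modular arithmetic the all-zeros input wraps back to all zeros, whereas the string convention used in this article prepends an extra 1 in that case.

```python
# Sketch: 1's and 2's complement of an n-bit binary string via arithmetic.
# ones_comp / twos_comp are illustrative helper names, not a library API.
def ones_comp(b):
    n = len(b)
    v = int(b, 2)
    # flipping all n bits is XOR with the all-ones mask (2^n - 1)
    return format(((1 << n) - 1) ^ v, "0%db" % n)

def twos_comp(b):
    n = len(b)
    v = int(b, 2)
    # 2's complement is (2^n - v) reduced modulo 2^n
    return format(((1 << n) - v) % (1 << n), "0%db" % n)

print(ones_comp("1100"), twos_comp("1100"))  # 0011 0100
print(ones_comp("0111"), twos_comp("0111"))  # 1000 1001
```

This mirrors the shortcut method described above: subtracting v from 2^n leaves the trailing zeros and lowest 1 unchanged and flips everything to their left.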
[ { "code": null, "e": 24828, "s": 24800, "text": "\n02 Nov, 2021" }, { "code": null, "e": 24899, "s": 24828, "text": "Given a Binary Number as a string, print its 1’s and 2’s complements. " }, { "code": null, "e": 25049, "s": 24899, "text": "1’s complement of a binary number is another binary number obtained by toggling all bits in it, i.e., transforming the 0 bit to 1 and the 1 bit to 0." }, { "code": null, "e": 25060, "s": 25049, "text": "Examples: " }, { "code": null, "e": 25132, "s": 25060, "text": "1's complement of \"0111\" is \"1000\"\n1's complement of \"1100\" is \"0011\" " }, { "code": null, "e": 25223, "s": 25132, "text": "2’s complement of a binary number is 1, added to the 1’s complement of the binary number. " }, { "code": null, "e": 25234, "s": 25223, "text": "Examples: " }, { "code": null, "e": 25307, "s": 25234, "text": "2's complement of \"0111\" is \"1001\"\n2's complement of \"1100\" is \"0100\" " }, { "code": null, "e": 25433, "s": 25307, "text": "Step 1: Start from the Least Significant Bit and traverse left until you find a 1. Until you find 1, the bits stay the same" }, { "code": null, "e": 25492, "s": 25433, "text": "Step 2: Once you have found 1, let the 1 as it is, and now" }, { "code": null, "e": 25535, "s": 25492, "text": "Step 3: Flip all the bits left into the 1." }, { "code": null, "e": 25583, "s": 25535, "text": "Suppose we need to find 2s Complement of 100100" }, { "code": null, "e": 25691, "s": 25583, "text": "Step 1: Traverse and let the bit stay the same until you find 1. Here x is not known yet. Answer = xxxx00 –" }, { "code": null, "e": 25750, "s": 25691, "text": "Step 2: You found 1. Let it stay the same. Answer = xxx100" }, { "code": null, "e": 25810, "s": 25750, "text": "Step 3: Flip all the bits left into the 1. Answer = 011100." }, { "code": null, "e": 25856, "s": 25810, "text": "Hence, the 2s complement of 100100 is 011100." 
}, { "code": null, "e": 26408, "s": 25856, "text": "For one’s complement, we simply need to flip all bits. For 2’s complement, we first find one’s complement. We traverse the one’s complement starting from LSB (least significant bit), and look for 0. We flip all 1’s (change to 0) until we find a 0. Finally, we flip the found 0. For example, 2’s complement of “01000” is “11000” (Note that we first find one’s complement of 01000 as 10111). If there are all 1’s (in one’s complement), we add an extra 1 in the string. For example, 2’s complement of “000” is “1000” (1’s complement of “000” is “111”)." }, { "code": null, "e": 26438, "s": 26408, "text": "Below is the implementation. " }, { "code": null, "e": 26442, "s": 26438, "text": "C++" }, { "code": null, "e": 26447, "s": 26442, "text": "Java" }, { "code": null, "e": 26455, "s": 26447, "text": "Python3" }, { "code": null, "e": 26458, "s": 26455, "text": "C#" }, { "code": null, "e": 26469, "s": 26458, "text": "Javascript" }, { "code": "// C++ program to print 1's and 2's complement of// a binary number#include <bits/stdc++.h>using namespace std; // Returns '0' for '1' and '1' for '0'char flip(char c) {return (c == '0')? 
'1': '0';} // Print 1's and 2's complement of binary number// represented by \"bin\"void printOneAndTwosComplement(string bin){ int n = bin.length(); int i; string ones, twos; ones = twos = \"\"; // for ones complement flip every bit for (i = 0; i < n; i++) ones += flip(bin[i]); // for two's complement go from right to left in // ones complement and if we get 1 make, we make // them 0 and keep going left when we get first // 0, make that 1 and go out of loop twos = ones; for (i = n - 1; i >= 0; i--) { if (ones[i] == '1') twos[i] = '0'; else { twos[i] = '1'; break; } } // If No break : all are 1 as in 111 or 11111; // in such case, add extra 1 at beginning if (i == -1) twos = '1' + twos; cout << \"1's complement: \" << ones << endl; cout << \"2's complement: \" << twos << endl;} // Driver programint main(){ string bin = \"1100\"; printOneAndTwosComplement(bin); return 0;}", "e": 27686, "s": 26469, "text": null }, { "code": "// Java program to print 1's and 2's complement of// a binary number class GFG{ // Returns '0' for '1' and '1' for '0' static char flip(char c) { return (c == '0') ? 
'1' : '0'; } // Print 1's and 2's complement of binary number // represented by \"bin\" static void printOneAndTwosComplement(String bin) { int n = bin.length(); int i; String ones = \"\", twos = \"\"; ones = twos = \"\"; // for ones complement flip every bit for (i = 0; i < n; i++) { ones += flip(bin.charAt(i)); } // for two's complement go from right to left in // ones complement and if we get 1 make, we make // them 0 and keep going left when we get first // 0, make that 1 and go out of loop twos = ones; for (i = n - 1; i >= 0; i--) { if (ones.charAt(i) == '1') { twos = twos.substring(0, i) + '0' + twos.substring(i + 1); } else { twos = twos.substring(0, i) + '1' + twos.substring(i + 1); break; } } // If No break : all are 1 as in 111 or 11111; // in such case, add extra 1 at beginning if (i == -1) { twos = '1' + twos; } System.out.println(\"1's complement: \" + ones);; System.out.println(\"2's complement: \" + twos); } // Driver code public static void main(String[] args) { String bin = \"1100\"; printOneAndTwosComplement(bin); }} // This code contributed by Rajput-Ji", "e": 29289, "s": 27686, "text": null }, { "code": "# Python3 program to print 1's and 2's# complement of a binary number # Returns '0' for '1' and '1' for '0'def flip(c): return '1' if (c == '0') else '0' # Print 1's and 2's complement of# binary number represented by \"bin\"def printOneAndTwosComplement(bin): n = len(bin) ones = \"\" twos = \"\" # for ones complement flip every bit for i in range(n): ones += flip(bin[i]) # for two's complement go from right # to left in ones complement and if # we get 1 make, we make them 0 and # keep going left when we get first # 0, make that 1 and go out of loop ones = list(ones.strip(\"\")) twos = list(ones) for i in range(n - 1, -1, -1): if (ones[i] == '1'): twos[i] = '0' else: twos[i] = '1' break i -= 1 # If No break : all are 1 as in 111 or 11111 # in such case, add extra 1 at beginning if (i == -1): twos.insert(0, '1') print(\"1's complement: \", *ones, 
sep = \"\") print(\"2's complement: \", *twos, sep = \"\") # Driver Codeif __name__ == '__main__': bin = \"1100\" printOneAndTwosComplement(bin.strip(\"\")) # This code is contributed# by SHUBHAMSINGH10", "e": 30490, "s": 29289, "text": null }, { "code": "// C# program to print 1's and 2's complement of// a binary numberusing System; class GFG{ // Returns '0' for '1' and '1' for '0' static char flip(char c) { return (c == '0') ? '1' : '0'; } // Print 1's and 2's complement of binary number // represented by \"bin\" static void printOneAndTwosComplement(String bin) { int n = bin.Length; int i; String ones = \"\", twos = \"\"; ones = twos = \"\"; // for ones complement flip every bit for (i = 0; i < n; i++) { ones += flip(bin[i]); } // for two's complement go from right to left in // ones complement and if we get 1 make, we make // them 0 and keep going left when we get first // 0, make that 1 and go out of loop twos = ones; for (i = n - 1; i >= 0; i--) { if (ones[i] == '1') { twos = twos.Substring(0, i) + '0' + twos.Substring(i + 1,twos.Length-(i+1)); } else { twos = twos.Substring(0, i) + '1' + twos.Substring(i + 1,twos.Length-(i+1)); break; } } // If No break : all are 1 as in 111 or 11111; // in such case, add extra 1 at beginning if (i == -1) { twos = '1' + twos; } Console.WriteLine(\"1's complement: \" + ones);; Console.WriteLine(\"2's complement: \" + twos); } // Driver code public static void Main(String[] args) { String bin = \"1100\"; printOneAndTwosComplement(bin); }} // This code has been contributed by 29AjayKumar", "e": 32163, "s": 30490, "text": null }, { "code": "<script> // Javascript program to print 1's and 2's complement of// a binary number // Returns '0' for '1' and '1' for '0'function flip (c) {return (c == '0')? 
'1': '0';} // Print 1's and 2's complement of binary number// represented by \"bin\"function printOneAndTwosComplement(bin){ var n = bin.length; var i; var ones, twos; ones = twos = \"\"; // for ones complement flip every bit for (i = 0; i < n; i++) ones += flip(bin[i]); // for two's complement go from right to left in // ones complement and if we get 1 make, we make // them 0 and keep going left when we get first // 0, make that 1 and go out of loop twos = ones; twos = twos.split('') for (i = n - 1; i >= 0; i--) { if (ones[i] == '1') twos[i] = '0'; else { twos[i] = '1'; break; } } twos = twos.join('') // If No break : all are 1 as in 111 or 11111; // in such case, add extra 1 at beginning if (i == -1) twos = '1' + twos; document.write( \"1's complement: \" + ones + \"<br>\"); document.write( \"2's complement: \" + twos + \"<br>\");} // Driver programvar bin = \"1100\";printOneAndTwosComplement(bin); </script>", "e": 33384, "s": 32163, "text": null }, { "code": null, "e": 33393, "s": 33384, "text": "Output: " }, { "code": null, "e": 33435, "s": 33393, "text": "1's complement: 0011\n2's complement: 0100" }, { "code": null, "e": 33457, "s": 33435, "text": "Time Complexity: O(n)" }, { "code": null, "e": 33479, "s": 33457, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 34355, "s": 33479, "text": "Thanks to Utkarsh Trivedi for the above solution.As a side note, signed numbers generally use 2’s complement representation. Positive values are stored as it is and negative values are stored in their 2’s complement form. One extra bit is required to indicate whether the number is positive or negative. For example, char is 8 bits in C. If 2’s complement representation is used for char, then 127 is stored as it is, i.e., 01111111, where first 0 indicates positive. 
But -127 is stored as 10000001.Related Post : Efficient method for 2’s complement of a binary stringPlease write comments if you find anything incorrect, or you want to share more information about the topic discussed above.References: http://qa.geeksforgeeks.org/6439/write-program-calculate-ones-and-twos-complement-of-number http://geeksquiz.com/whats-difference-between-1s-complement-and-2s-complement/ " }, { "code": null, "e": 34370, "s": 34355, "text": "SHUBHAMSINGH10" }, { "code": null, "e": 34380, "s": 34370, "text": "Rajput-Ji" }, { "code": null, "e": 34392, "s": 34380, "text": "29AjayKumar" }, { "code": null, "e": 34405, "s": 34392, "text": "jagnoorsingh" }, { "code": null, "e": 34416, "s": 34405, "text": "humanbeeng" }, { "code": null, "e": 34425, "s": 34416, "text": "famously" }, { "code": null, "e": 34435, "s": 34425, "text": "subham348" }, { "code": null, "e": 34442, "s": 34435, "text": "Amazon" }, { "code": null, "e": 34464, "s": 34442, "text": "binary-representation" }, { "code": null, "e": 34475, "s": 34464, "text": "complement" }, { "code": null, "e": 34482, "s": 34475, "text": "Arrays" }, { "code": null, "e": 34492, "s": 34482, "text": "Bit Magic" }, { "code": null, "e": 34500, "s": 34492, "text": "Strings" }, { "code": null, "e": 34507, "s": 34500, "text": "Amazon" }, { "code": null, "e": 34514, "s": 34507, "text": "Arrays" }, { "code": null, "e": 34522, "s": 34514, "text": "Strings" }, { "code": null, "e": 34532, "s": 34522, "text": "Bit Magic" }, { "code": null, "e": 34630, "s": 34532, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
while Loop in Java
A while loop statement in Java programming language repeatedly executes a target statement as long as a given condition is true.

The syntax of a while loop is −

while(Boolean_expression) {
   // Statements
}

Here, statement(s) may be a single statement or a block of statements. The condition may be any expression, and true is any non-zero value.

When executing, if the boolean_expression result is true, then the actions inside the loop will be executed. This will continue as long as the expression result is true. When the condition becomes false, program control passes to the line immediately following the loop.

Here, the key point of the while loop is that the loop might not ever run. When the expression is tested and the result is false, the loop body will be skipped and the first statement after the while loop will be executed.

public class Test {

   public static void main(String args[]) {
      int x = 10;

      while( x < 20 ) {
         System.out.print("value of x : " + x );
         x++;
         System.out.print("\n");
      }
   }
}

This will produce the following result −

value of x : 10
value of x : 11
value of x : 12
value of x : 13
value of x : 14
value of x : 15
value of x : 16
value of x : 17
value of x : 18
value of x : 19
Leverage on D3.js v4 to build a Network Graph for Tableau | by Charmaine Chui | Towards Data Science
So recently, countries around the world are feverishly contact-tracing to control Covid-19 infection rates. As a fellow data analyst, I have been exposed to my fair share of network diagrams lately. It was intriguing to see that a graph made up primarily of nodes and links could be not only aesthetically appealing but also represent connectivity between different entities effectively.

The majority of network graph visualisations are deployed on web applications. The harsh truth is that web development time far exceeds that of dashboarding. Moreover, teams I have worked with are often only interested in snapshots of the network diagram, so the ability to plot network diagrams on dashboarding tools such as Tableau would offer greater convenience in highlighting significant findings where interactivity with the graph is less crucial. Hence, I proceeded to explore and implement ways to plot the exact same graph on Tableau with minimal effort. This led me to create a pet project comprising 2 parts:

Part I: A web application to output data directly for Tableau (functionalities include the flexibility to manually adjust the graph's layout + export of the final Tableau-ready data output)

Part II: Using the data generator created in Part I, include a network diagram in my next Tableau dashboard accordingly.

With regards to this part of the project, I based my web app's layout and functionalities on Tristan Guillevin's https://observablehq.com/@ladataviz/network-data-generator

Don't get me wrong: while I felt it was wonderful and was thoroughly impressed by the ingenuity of leveraging the d3 graphing library, there were 2 main constraints when I attempted to use the outputs in Tableau:

Constraint 1: The tool does not allow the flexibility of dragging nodes directly to other positions to enable a more custom layout. While strength and collision parameters can be toggled, the node movements are unpredictable and hard to manipulate.
Constraint 2: The output generated ultimately does render the exact layout previewed in the tool. However, when I attempted to filter the displays of some nodes or links, there was no particular field in the data outputs (nodes.csv and links.csv) which allowed this to happen in Tableau easily.

Before resolving the above 2 issues, I went ahead to develop the web interface in a similar format as the one by Tristan Guillevin. Basically, it's a single-page application with all instructions and required information along the way.

Now, here comes arguably one of the most user-friendly features: enabling users to manually alter and drag the nodes to conform the graph to their desired layout. Here's a clear comparison showing that the tool has successfully altered the nodes' coordinates after they were shifted.

Hence, (1/2) of the constraints is resolved; the other outstanding issue is that the original output generated does not allow filtering of specific nodes or links when plotted in Tableau (which shall be addressed and elaborated on in Part II).

Eventually, I decided on identifying Singapore's Dengue Clusters in 2020-Q3. The raw data sources are stated in the final dashboard itself, but after geocoding and some data transformation, the required JSON input to generate the Tableau graphs is churned out: https://github.com/incubated-geek-cc/tableau-data-utility/blob/master/public/data/sg_dengue_clusters.json

After plotting the network graph and adding the maps onto the dashboard, I faced the 2nd constraint of being unable to filter by specific nodes/links:

[Id] (nodes.csv): Identifier of each node
[Id] (links.csv): The [Id] of the node, i.e. the source/target
[Key] (links.csv): a numerical value auto-generated to identify the source node and target node
[Type] (links.csv): source/target

Realistically, only the [Id] field can be used to identify which nodes/links to filter.
However, due to the output format, when a single node is filtered, the links attached to it are not filtered by default; this leaves many dangling links in the diagram.

The solution I came up with was to generate an extra field known as [Link Id], which basically concatenates the [Id] of the source node and the [Id] of the target node. This serves the same purpose as the [Key] field, which is used to distinguish the links, but both nodes and links are now identifiable by the [Link Id]. Thereafter, a field known as [Filter by Link Id] is created with the following formula:

IF [Parameters].[Cluster Id]='All' THEN TRUE
ELSE RIGHT(link_id, LEN([Parameters].[Cluster Id]))=[Parameters].[Cluster Id]
END

Hence, the dashboard can finally be cross-filtered across the network diagram and other visualisations simultaneously. Feel free to use the tool deployed over at https://tableau-data-utility.herokuapp.com/ to generate your Tableau network datasets.
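The [Link Id] concatenation described above can be sketched in plain Python. This is a minimal illustration with hypothetical node ids and column names; the actual links.csv produced by the tool may name its columns differently:

```python
# Hypothetical rows from links.csv: every link is exported as two
# rows sharing one key, one for its source node and one for its target.
links = [
    {"key": 0, "type": "source", "id": "cluster_A"},
    {"key": 0, "type": "target", "id": "case_17"},
    {"key": 1, "type": "source", "id": "cluster_A"},
    {"key": 1, "type": "target", "id": "case_23"},
]

def add_link_id(rows):
    """Stamp every row with '<source Id>-<target Id>' of its link."""
    # Collect the source/target id pair of each link by its shared key.
    by_key = {}
    for row in rows:
        by_key.setdefault(row["key"], {})[row["type"]] = row["id"]
    # Concatenate the pair into a single link id on every row.
    for row in rows:
        pair = by_key[row["key"]]
        row["link_id"] = pair["source"] + "-" + pair["target"]
    return rows

for row in add_link_id(links):
    print(row["key"], row["type"], row["link_id"])
```

With a matching link id also attached to the node records, filtering on a single [Link Id] in Tableau removes a node together with its attached links instead of leaving them dangling.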
LexNLP — Library For Automated Text Extraction & NER | by Rohan Gupta | Towards Data Science
A few weeks ago, I had to extract certain types of data from a set of documents and wondered what was the best way to do it. The documents were all leasing forms with data such as entity names, addresses, dates, amounts, conditions, etc. I started out the old-fashioned way, using regex to identify certain fields with their respective synonyms in each document. I had to manually set up the rules for extracting each type of field.

However, I later discovered LexNLP and its capabilities. All of my required data was extracted with LexNLP without having to write any rules. I was quite amazed by the library, but did not find any tutorials apart from the documentation. So, here I am creating a gateway to LexNLP for other data professionals who could benefit from its features.

Below is an overview of LexNLP, which is made by ContraxSuite. I provide examples for extracting certain kinds of data such as dates, entity names, money, and addresses. The library is currently available for extraction in English, Spanish and German.

LexNLP can extract all the following information from textual data:

acts, e.g., "section 1 of the Advancing Hope Act, 1986"
amounts, e.g., "ten pounds" or "5.8 megawatts"
citations, e.g., "10 U.S. 100" or "1998 S. Ct. 1"
companies, e.g., "Lexpredict LLC"
conditions, e.g., "subject to ..." or "unless and until ..."
constraints, e.g., "no more than"
copyright, e.g., "© Copyright 2000 Acme"
courts, e.g., "Supreme Court of New York"
CUSIP, e.g., "392690QT3"
dates, e.g., "June 1, 2017" or "2018–01–01"
definitions, e.g., "Term shall mean ..."
distances, e.g., "fifteen miles"
durations, e.g., "ten years" or "thirty days"
geographic and geopolitical entities, e.g., "New York" or "Norway"
money and currency usages, e.g., "$5" or "10 Euro"
percents and rates, e.g., "10%" or "50 bps"
PII, e.g., "212–212–2121" or "999–999–9999"
ratios, e.g., "3:1" or "four to three"
regulations, e.g., "32 CFR 170"
trademarks, e.g., "MyApp (TM)"
URLs, e.g., "http://acme.com/"

LexNLP requires Python 3.6! So you are advised to create a completely new virtual environment in Python 3.6 and download all requirements for the LexNLP library from the GitHub link below.

github.com

Once you have everything installed, try importing LexNLP. If it works, then you're all set to continue, but if you run into an error, go through the requirements again and make sure you have the right versions. Comment below if you need any help.

We will start off by importing each of the following libraries. Make sure to install each of these libraries:

from tika import parser
import glob, os
import pandas as pd
from bs4 import BeautifulSoup
import codecs
import re
import numpy as np

Since all my files were in PDF/Doc formats, I would just have to extract the text as a string from each document. I will use the library Tika for this. The function I have written below gives me a data-frame with information on each file, including the text needed for extraction.
def multipdftotxt(path):
    df = pd.DataFrame(columns = ['S.No', 'File_Name', 'Author', 'Creation_Date', 'Title', 'Content'])
    i = 0
    os.chdir(path)
    types = ['*.pdf', '*.doc', '*.docx']
    textfiles = []
    for typ in types:
        textfiles.append(glob.glob(typ))
    # Flatten the per-extension match lists into one list of filenames
    flat_list = []
    for sublist in textfiles:
        for item in sublist:
            flat_list.append(item)
    textfiles = flat_list
    for file in textfiles:
        print(file)
        raw = parser.from_file(file)
        text = raw['content']
        dict2 = raw['metadata']
        Author = dict2.get('Author')
        Creation_Date = dict2.get('Creation-Date')
        title = dict2.get('title')
        i = i + 1
        df1 = {'S.No': i, 'File_Name': file, 'Author': Author, 'Creation_Date': Creation_Date,
               'Title': title, 'Content': text}
        df = df.append(df1, ignore_index=True)
    df = df.replace('\n', ' \n ', regex=True)
    df = df.replace('\t', ' ', regex=True)
    df = df.dropna(subset=['Content'])
    return df

The image above shows us what the data-frame would look like. All I need from this data-frame is the columns 'Text' and 'File names', therefore I will extract these two columns in a dictionary using another function.

def dftod(df):
    l = []
    for i in df['Content']:
        l.append(i)
    emailname = []
    for i in df['File_Name']:
        emailname.append(i)
    d = dict(zip(emailname, l))
    # Strip leading/trailing whitespace and collapse repeated
    # spaces and newlines in every document's text
    k = [v.strip() for k, v in d.items()]
    k = [re.sub(' +', ' ', temp) for temp in k]
    k = [re.sub('\n +', '\n', temp) for temp in k]
    k = [re.sub('\n+', '\n', temp) for temp in k]
    d = dict(zip(emailname, k))
    return d

Now, I have extracted the text in a dictionary and I am ready to use LexNLP's extraction features.

Importing the right functions from LexNLP is the key to using the library properly. Below, I will show you how to extract specific types of data: Entity Names, Addresses, Dates, and Money.
import lexnlp.extract.en.entities.nltk_re

# Remember d is our dictionary containing filenames and text.
# For entity names, use lexnlp.extract.en.entities.nltk_re.get_companies(text)
for filename, text in d.items():
    print(list(lexnlp.extract.en.entities.nltk_re.get_companies(text)))

Output:
['Target Inc', 'Hawthorne LLC', 'Willburne & Co.']

from lexnlp.extract.en.addresses import address_features

# You want to use lexnlp.extract.en.addresses.address_features.get_word_features(text)
for filename, text in d.items():
    print(list(lexnlp.extract.en.addresses.address_features.get_word_features(text)))

# Check for DateTime/Zip-code/Email-Address/URL:
lexnlp.extract.en.addresses.address_features.is_datetime(text)
lexnlp.extract.en.addresses.address_features.is_zip_code(text)
lexnlp.extract.en.addresses.address_features.is_email(text)
lexnlp.extract.en.addresses.address_features.is_url(text)

Dates can be extracted in the following formats:

February 1, 1998
2017–06–01
1st day of June, 2017
31 October 2016
15th of March 2000

import lexnlp.extract.en.dates

for filename, text in d.items():
    print(list(lexnlp.extract.en.dates.get_dates(text)))

Output:
[[datetime.date(1998, 2, 1)],
[datetime.date(2017, 6, 1)],
[datetime.date(2016, 10, 31)],
[datetime.date(2000, 3, 15)]]

Money can be extracted in the following formats:

five dollars
5 dollars
5 USD
$5

As of now, only the following currencies can be detected through LexNLP:

USD/$: US Dollars
EUR/€: Euros
GBP/£: Great British pounds
JPY/¥: Japanese Yen
CNY/RMB/元/¥: Chinese Yuan/Renminbi
INR/Rs/₹: Indian Rupee

import lexnlp.extract.en.money

for filename, text in d.items():
    print(list(lexnlp.extract.en.money.get_money(text)))

Output:
[(5000000.00, 'GBP'),
(100000.00, 'INR')]

For more information and resources, please visit the official documentation: https://lexpredict-lexnlp.readthedocs.io

While LexNLP does the digging, enjoy your coffee and wait for those results to unfold!

Keep following! I appreciate the love.
I hadn’t been writing as much in the last few months, but I’m glad to say that I’m back in the game.
Installation and Getting Started
You can easily install the Jupyter notebook application using the pip package manager. pip3 install jupyter To start the application, use the following command in the command prompt window. c:\python36>jupyter notebook The server application starts running at the default port number 8888 and a browser window opens to show the notebook dashboard. Observe that the dashboard shows a dropdown near the right border of the browser with an arrow beside the New button. It contains the currently available notebook kernels. Now, choose Python 3; a new notebook then opens in a new tab. An input cell similar to that of the IPython console is displayed. You can execute any Python expression in it. The result will be displayed in the Out cell.
[ { "code": null, "e": 2739, "s": 2660, "text": "You can easily install Jupyter notebook application using pip package manager." }, { "code": null, "e": 2761, "s": 2739, "text": "pip3 install jupyter\n" }, { "code": null, "e": 2843, "s": 2761, "text": "To start the application, use the following command in the command prompt window." }, { "code": null, "e": 2873, "s": 2843, "text": "c:\\python36>jupyter notebook\n" }, { "code": null, "e": 2992, "s": 2873, "text": "The server application starts running at default port number 8888 and browser window opens to show notebook dashboard." }, { "code": null, "e": 3291, "s": 2992, "text": "Observe that the dashboard shows a dropdown near the right border of browser with an arrow beside the New button. It contains the currently available notebook kernels. Now, choose Python 3, then a new notebook opens in a new tab. An input cell as similar to that of in IPython console is displayed." }, { "code": null, "e": 3382, "s": 3291, "text": "You can execute any Python expression in it. The result will be displayed in the Out cell." }, { "code": null, "e": 3414, "s": 3382, "text": "\n 22 Lectures \n 49 mins\n" }, { "code": null, "e": 3432, "s": 3414, "text": " Bigdata Engineer" }, { "code": null, "e": 3439, "s": 3432, "text": " Print" }, { "code": null, "e": 3450, "s": 3439, "text": " Add Notes" } ]
Chomsky Normal Form
Theory of Computation In automata theory, every grammar in Chomsky Normal Form is context-free, and every context-free grammar can be converted into Chomsky Normal Form. The name Chomsky Normal Form comes from Noam Chomsky, an American linguist, philosopher and scientist. He modelled knowledge of language using formal grammars, and claimed that with the help of a formal grammar we can produce an infinite number of sentences from a limited set of grammatical rules. A grammar must satisfy some conditions to be in Chomsky Normal Form. A context-free grammar G = (V,Σ,P,S) is said to be in Chomsky Normal Form if all of its production rules are of the form A → BC, A → a, S → ε Here, A, B and C are non-terminal symbols, where A,B,C ∈ N, a is a terminal symbol, where a ∈ Σ, S is the start symbol, and ε is the empty string. We can also say that any language (L) without an ε production is generated by a grammar G in which the productions are in the above form. A context-free grammar in Chomsky Normal Form has no unit productions, and the right-hand side of any production is either a single terminal or exactly two non-terminal symbols. Suppose there is a grammar G with the productions P given by, S -> aAbB A -> aA|a B -> bB|b Here, the following productions are already in proper form - A -> a B -> b There are no ε productions in the set P. Ba -> a Bb -> b // For S -> aAbB, we have S -> BaABbB, // For A -> aA A -> BaA, // For B -> bB B -> BbB In the above, only the following is in non-proper form - S -> BaABbB Let's take two new variables C1 and C2. S -> BaABbB S -> BaC1C2 C1 -> AC2 C2 -> BbB So, the grammar G2 in Chomsky Normal Form has the productions - S -> BaC1C2 C1 -> AC2 C2 -> BbB A -> BaA B -> BbB Ba -> a, Bb -> b, A -> a, B -> b
[ { "code": null, "e": 112, "s": 90, "text": "Theory of Computation" }, { "code": null, "e": 635, "s": 112, "text": "In Automata, every grammar in Chomsky Normal Form is context-free. So, every context free grammar can be converted into Chomsky Normal Form. The name Chomsky Normal Form came from Noam Chomsky, who is an American linguist, philosopher and scientist. He had modelled knowledge of the language using formal grammars. He claimed that with the help of formal grammar, we can produce an infinite number of sentences with a limited set of grammatical rules. There are some conditions of grammar to belong in Chomsky Normal Form." }, { "code": null, "e": 757, "s": 635, "text": "A context free grammar G = (V,Σ,P,S) is said to be in Chomsky Normal Form if all of its production rules are in the form." }, { "code": null, "e": 780, "s": 757, "text": "A → BC, \nA → a, \nS → ε" }, { "code": null, "e": 941, "s": 780, "text": "Here, A, B and C are non-terminal symbols, where A,B,C ∈ N, a is the terminal symbol, where a ∈ Σ, S is the start symbol, where S → ε and ε is the empty string." }, { "code": null, "e": 1080, "s": 941, "text": "We can also say that any language (L) without any production is generated by the grammar G in which the productions are in the above form." }, { "code": null, "e": 1263, "s": 1080, "text": "The context free grammar, under Chomsky Normal Form is formed with no unit productions, if any and the right hand side of any production has a single terminal or two or more symbols." }, { "code": null, "e": 1325, "s": 1263, "text": "Suppose there is a grammar G with the productions P given by," }, { "code": null, "e": 1359, "s": 1325, "text": "S -> aAbB \n\nA -> aA|a \n\nB -> bB|b" }, { "code": null, "e": 1409, "s": 1359, "text": "There, we have given productions in proper form -" }, { "code": null, "e": 1425, "s": 1409, "text": "A -> a \n\nB -> b" }, { "code": null, "e": 1463, "s": 1425, "text": "There is no productions in the set P." 
}, { "code": null, "e": 1577, "s": 1463, "text": "Ba -> a \n\nBb -> b \n\n// For S -> aAbB, we have\nS -> BaABbB, \n\n// For A -> aA \nA -> BaA, \n\n// For B -> bB \nB -> BbB" }, { "code": null, "e": 1638, "s": 1577, "text": "In the above, we have only the following in non proper form-" }, { "code": null, "e": 1650, "s": 1638, "text": "S -> BaABbB" }, { "code": null, "e": 1687, "s": 1650, "text": "Let's taken two variables C1 and C2." }, { "code": null, "e": 1734, "s": 1687, "text": "S -> BaABbB\n\nS -> BaC1C2\n\nC1 -> AC2\n\nC2 -> BbB" }, { "code": null, "e": 1812, "s": 1734, "text": "So, the grammar G2, in Chomsky Normal Form with the productions written as - " } ]
Materialize - Chips
Materialize provides a special component called Chip, which can be used to represent a small set of information. For example, a contact, tags, etc. chip Set the div container as a chip. The following example demonstrates the use of chip class to showcase creating various types of tags. <!DOCTYPE html> <html> <head> <title>The Materialize Chips Example</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/css/materialize.min.css"> <script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/js/materialize.min.js"></script> </head> <body class="container"> <div class="chip"> <img alt="HTML5" src="html5-mini-logo.jpg">HTML 5 </div> <div class="chip"> HTML 5<i class="material-icons">close</i> </div> </body> </html> Verify the output.
[ { "code": null, "e": 2335, "s": 2187, "text": "Materialize provides a special component called Chip, which can be used to represent a small set of information. For example, a contact, tags, etc." }, { "code": null, "e": 2340, "s": 2335, "text": "chip" }, { "code": null, "e": 2373, "s": 2340, "text": "Set the div container as a chip." }, { "code": null, "e": 2474, "s": 2373, "text": "The following example demonstrates the use of chip class to showcase creating various types of tags." }, { "code": null, "e": 3311, "s": 2474, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>The Materialize Chips Example</title>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <link rel=\"stylesheet\" href=\"https://fonts.googleapis.com/icon?family=Material+Icons\">\n <link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/css/materialize.min.css\">\n <script type=\"text/javascript\" src=\"https://code.jquery.com/jquery-2.1.1.min.js\"></script>\n <script src=\"https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/js/materialize.min.js\"></script>\n </head>\n <body class=\"container\">\n <div class=\"chip\">\n <img alt=\"HTML5\" src=\"html5-mini-logo.jpg\">HTML 5\n </div>\n <div class=\"chip\">\n HTML 5<i class=\"material-icons\">close</i>\n </div>\n </body>\n</html>" }, { "code": null, "e": 3330, "s": 3311, "text": "Verify the output." }, { "code": null, "e": 3337, "s": 3330, "text": " Print" }, { "code": null, "e": 3348, "s": 3337, "text": " Add Notes" } ]
Scala Queue sum() method with example - GeeksforGeeks
29 Oct, 2019 The sum() method is utilized to return the sum of all the elements of the queue. Method Definition: def sum: A Return Type: It returns the sum of all the elements of the queue. Example #1: // Scala program of sum() // method // Import Queue import scala.collection.mutable._ // Creating object object GfG { // Main method def main(args:Array[String]) { // Creating a queue val q1 = Queue(1, 2, 3, 4, 5) // Print the queue println(q1) // Applying sum method val result = q1.sum // Display output print("Sum of all elements of the queue: " + result) } } Queue(1, 2, 3, 4, 5) Sum of all elements of the queue: 15 Example #2: // Scala program of sum() // method // Import Queue import scala.collection.mutable._ // Creating object object GfG { // Main method def main(args:Array[String]) { // Creating a queue val q1 = Queue(5, 2, 13, 7, 1) // Print the queue println(q1) // Applying sum method val result = q1.sum // Display output print("Sum of all elements of the queue: " + result) } } Queue(5, 2, 13, 7, 1) Sum of all elements of the queue: 28 Scala scala-collection Scala-Method Scala
[ { "code": null, "e": 23477, "s": 23449, "text": "\n29 Oct, 2019" }, { "code": null, "e": 23558, "s": 23477, "text": "The sum() method is utilized to return the sum of all the elements of the queue." }, { "code": null, "e": 23588, "s": 23558, "text": "Method Definition: def sum: A" }, { "code": null, "e": 23654, "s": 23588, "text": "Return Type: It returns the sum of all the elements of the queue." }, { "code": null, "e": 23666, "s": 23654, "text": "Example #1:" }, { "code": "// Scala program of sum() // method // Import Queue import scala.collection.mutable._ // Creating object object GfG { // Main method def main(args:Array[String]) { // Creating a queue val q1 = Queue(1, 2, 3, 4, 5) // Print the queue println(q1) // Applying sum method val result = q1.sum // Display output print(\"Sum of all elements of the queue: \" + result) } } ", "e": 24157, "s": 23666, "text": null }, { "code": null, "e": 24216, "s": 24157, "text": "Queue(1, 2, 3, 4, 5)\nSum of all elements of the queue: 15\n" }, { "code": null, "e": 24228, "s": 24216, "text": "Example #2:" }, { "code": "// Scala program of sum() // method // Import Queue import scala.collection.mutable._ // Creating object object GfG { // Main method def main(args:Array[String]) { // Creating a queue val q1 = Queue(5, 2, 13, 7, 1) // Print the queue println(q1) // Applying sum method val result = q1.sum // Display output print(\"Sum of all elements of the queue: \" + result) } } ", "e": 24720, "s": 24228, "text": null }, { "code": null, "e": 24780, "s": 24720, "text": "Queue(5, 2, 13, 7, 1)\nSum of all elements of the queue: 28\n" }, { "code": null, "e": 24786, "s": 24780, "text": "Scala" }, { "code": null, "e": 24803, "s": 24786, "text": "scala-collection" }, { "code": null, "e": 24816, "s": 24803, "text": "Scala-Method" }, { "code": null, "e": 24822, "s": 24816, "text": "Scala" }, { "code": null, "e": 24920, "s": 24822, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the 
link here." }, { "code": null, "e": 24929, "s": 24920, "text": "Comments" }, { "code": null, "e": 24942, "s": 24929, "text": "Old Comments" }, { "code": null, "e": 24995, "s": 24942, "text": "Scala Tutorial – Learn Scala with Step By Step Guide" }, { "code": null, "e": 25017, "s": 24995, "text": "Type Casting in Scala" }, { "code": null, "e": 25029, "s": 25017, "text": "Scala Lists" }, { "code": null, "e": 25055, "s": 25029, "text": "Class and Object in Scala" }, { "code": null, "e": 25100, "s": 25055, "text": "Scala String substring() method with example" }, { "code": null, "e": 25125, "s": 25100, "text": "Break statement in Scala" }, { "code": null, "e": 25152, "s": 25125, "text": "Lambda Expression in Scala" }, { "code": null, "e": 25195, "s": 25152, "text": "Scala String replace() method with example" } ]
Do I need to import the Java.lang package anytime during running a program?
No, the java.lang package is a default package in Java; therefore, there is no need to import it explicitly, i.e. without importing it you can access the classes of this package. In the following example we haven't imported the lang package explicitly, but we are still able to calculate the square root of a number using the sqrt() method of the java.lang.Math class. public class LangTest { public static void main(String args[]){ int num = 100; double result = Math.sqrt(num); System.out.println("Square root of the number: "+result); } } Square root of the number: 10.0
[ { "code": null, "e": 1166, "s": 1062, "text": "No, java.lang package is a default package in Java therefore, there is no need to import it explicitly." }, { "code": null, "e": 1233, "s": 1166, "text": "i.e. without importing you can access the classes of this package." }, { "code": null, "e": 1440, "s": 1233, "text": "If you observe the following example here we haven’t imported the lang package explicitly but, still we are able to calculate the square root of a number using the sqrt() method of the java.lang.Math class." }, { "code": null, "e": 1451, "s": 1440, "text": " Live Demo" }, { "code": null, "e": 1648, "s": 1451, "text": "public class LangTest {\n public static void main(String args[]){\n int num = 100;\n double result = Math.sqrt(num);\n System.out.println(\"Square root of the number: \"+result);\n }\n}" }, { "code": null, "e": 1681, "s": 1648, "text": "Square root of the number: 10.0\n" } ]
Predicting Hourly Energy consumption of San Diego (short & long term forecasts)— II | Towards Data Science
Part 1 of this post covered the basics of energy (electricity) consumption, how to import, resample, and merge datasets gathered from different sources, and EDA. Some inferences were also extracted from San Diego's energy consumption data merged with temperature and solar panel installation data. towardsdatascience.com In this post, I will explore some time series models like persistence forecast (as a baseline), ARIMA, and FB Prophet; and then also extend my approach to include linear regression, random forests, XGBoost, and an ensemble model, to see whether or not these linear and non-linear approaches can model our time series accurately. Links to Jupyter notebooks on nbviewer: 1. Data import and EDA (covered in Part 1) 2. Building ML models and Predictions (covered in this post) If you want a very detailed explanation of time series data processing, modeling, and effective visualization of time series data, then please see the 2nd notebook listed above. Having said that, I will try my best to cover the important topics in this post. GitHub link for the entire project: Hourly_Energy_Consumption_Prediction.
Below is the data with some added features from the last post: # Import the 5 years of hourly energy consumption data previously # cleaned, explored, and stored as 'hourly1418_energy_temp_PV.csv' in # Part 1 sdge = pd.read_csv('hourly1418_energy_temp_PV.csv', index_col = 'Dates', parse_dates=['Dates', 'Date']) SDGE: Our target variable, San Diego's hourly energy consumption in MWh (also the hourly load or demand) non_working: 0 if the day is either a weekend or a holiday (binary) HourlyDryBulbTemperature: The dry bulb temperature measured at San Diego airport in deg F cum_AC_kW: The cumulative installed capacity of solar panels at customer sites (until the date given in the index column) in kW Some observations from the previous analysis done in Part 1: 43824 hours (rows) of data (5 years 2014–2018) SDGE: Min: 1437.08, Mean: 2364.92 and Median: 2298.0, Max: 4867.0 The time series has multiple seasonal patterns — daily, weekly, and yearly. It has a slightly decreasing trend, which is inversely proportional to the solar installation capacity. Energy consumption is highly correlated with temperature. I have used the time series from 1st Jan 2014 till 31st March 2018 as my training data, and have predicted the hourly energy consumption values for 1st April to 31st Dec 2018 (~15% of the entire data). The final goal for this project is: "The developed forecasting model/s can be utilized by the electrical utilities to effectively plan their energy generation operations and balance the demand with appropriate supply. An efficient forecast can prove very useful for the utilities in planning their day to day operations, meeting their customers' energy demand, and avoiding any excess generation of energy." Basically, we want to fit an ML model onto our hourly energy consumption time series and use that to predict the future energy consumption of San Diego. Prediction Window: In any time series problem, it is very important to define the window of prediction beforehand.
Here I have tested the models on a future window of 1 hour, 1 week, and ~8 months. Error Metrics Used: I have calculated the R2 score, MAE, RMSE, and MAPE for each model on the training and test sets. Out of the 4 metrics, MAPE will be used to select the best model. MAPE helps us understand the % error on the absolute value and is indifferent to the absolute magnitude of the time series. Assumptions: All ML models have some assumptions, and here is one that was used for this project: The future solar panel installations and temperature data are readily available to us and we don't need to run separate forecasts for them (I wish!). In the real world, we need to use the forecasted values for these independent variables (or at least use the average expected values for the temperature and estimated future solar installations based on past data). But for simplicity, I have used the actual observed values for these variables in this project. Without much ado, let's focus our energy on the time series prediction problem. A time series is a sequence of observations taken sequentially in time. Any time-series data has the following components: ref link Level: The baseline value for the series if it were a straight line. Trend: The optional and often linearly increasing or decreasing behavior of the series over time. Seasonality: The optional repeating patterns or cycles of behavior over time. Noise: The optional variability in the observations that cannot be explained by the model. Another important feature of most time series is that observations close together in time tend to be correlated (serially dependent). This feature is also called auto-correlation and, as we will see later, forms an important aspect of conventional time series modeling techniques like ARIMA.
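This serial dependence is easy to check directly with pandas. Below is a small illustrative sketch on a synthetic hourly series (the synthetic `load` series here is a stand-in of my own, not the actual SDGE data):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for an hourly load series with a daily cycle
rng = pd.date_range('2014-01-01', periods=240, freq='H')
load = pd.Series(2300 + 300 * np.sin(2 * np.pi * rng.hour / 24), index=rng)

# Hourly load correlates strongly with the previous hour (lag 1)
# and with the same hour on the previous day (lag 24)
print(round(load.autocorr(lag=1), 2))
print(round(load.autocorr(lag=24), 2))
```

Both values come out close to 1 for a series with a clean daily cycle, which is exactly the kind of dependence the lag features introduced later try to exploit.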
Stationarity: A time series is stationary when the mean, variance, and autocorrelation are constant over time, and there are no seasonal or cyclic patterns. It is very important to make a time series stationary before using any linear regression-based models like ARIMA (regression based on lagged values of Y). A time series is usually made stationary by differencing the series with itself. Two good articles on stationarity — 1, 2. Due to all the above features, modeling a time series involves a slightly different approach than regular ML problems. Here is a good link explaining the main differences. Here is another good resource for diving deeper into time series forecasting modeling techniques. Time series cross-validation: (Ref) Cross-validation for time series is a bit different since one cannot randomly mix values in a fold while still preserving the temporal structure. With randomization, all time dependencies between observations will be lost. This is why we have to use a trickier approach to optimizing the model parameters, such as "cross-validation on a rolling basis". The idea is rather simple — we train our model on a small segment of the time series from the beginning until some time t, make predictions for the next n steps, and calculate an error. Then, we expand our training sample to t+n values, make predictions from t+n until t+2n, and continue moving our test segment of the time series until we hit the last available observation. This can be established using sklearn.model_selection's TimeSeriesSplit module. We will use this technique to calculate the validation set errors.
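The rolling scheme described above maps directly onto `TimeSeriesSplit`. Here is a minimal sketch on a dummy feature matrix (the notebook applies the same idea to the hourly features, not to this toy array):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)  # dummy stand-in for the hourly feature matrix

# Each successive fold trains on a longer prefix of the series and
# validates on the block of observations that immediately follows it
for train_idx, val_idx in TimeSeriesSplit(n_splits=3).split(X):
    print(len(train_idx), len(val_idx), train_idx.max() < val_idx.min())
```

Note that the training window always ends before the validation window begins, so no future information leaks into the fit.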
Short term hour ahead forecasts """ Error metrics for hour ahead forecasts when simply repeating last hour's values """ # error_metrics(predicted_values, true_values) is a function that I # built to calculate the errors for a given model's predictions_ = error_metrics(sdge_lin.loc[X_test.index.shift(-1, freq='H'), 'SDGE'], y_test)>> RMSE or Root mean squared error: 122.27>> Variance score: 0.94>> Mean Absolute Error: 99.23>> Mean Absolute Percentage Error: 4.21 % Long term ~8 months ahead forecasts """ Error metrics on months ahead forecast when simply repeating last year's values """# the data is hourly and one year = 8760 hours_ = error_metrics(sdge_lin.loc[X_test.index.shift(-8760, freq='H'), 'SDGE'], y_test)>> RMSE or Root mean squared error: 330.74>> Variance score: 0.55>> Mean Absolute Error: 224.89>> Mean Absolute Percentage Error: 9.23 % In the energy forecasting domain, it is even more important to get the daily max demand right, because that can determine if the plant operator needs to switch ON a peaker (mostly gas) plant or not. So, calculating the error on that too. """Resampling both the y_test and predictions at a 24 hours period and using the max as the aggregate function"""_ = error_metrics(sdge_lin.loc[X_test.index.shift(-8760, freq='H'), 'SDGE'].resample('24h').max(), y_test.resample('24h').max())>> RMSE or Root mean squared error: 389.37>> Variance score: 0.22>> Mean Absolute Error: 264.44>> Mean Absolute Percentage Error: 8.60 % “We are ready to build the future-predicting models now.” Note: Some of the classical time series models that are generally used for time series forecasting are Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors (SARIMAX), Vector Autoregression Moving-Average with Exogenous Regressors (VARMAX), Simple Exponential Smoothing (SES), Holt Winter’s Exponential Smoothing (HWES), and more. You can find more details about each model in this article. 
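The `error_metrics` helper used in the snippets above is defined in the linked notebook and not shown in this excerpt. A minimal reimplementation of what it appears to compute (RMSE, R2, MAE, and MAPE, via scikit-learn and NumPy) might look like this:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def error_metrics(pred, true):
    """Print and return the four error metrics used throughout this post."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    rmse = np.sqrt(mean_squared_error(true, pred))
    r2 = r2_score(true, pred)
    mae = mean_absolute_error(true, pred)
    # MAPE: mean of absolute percentage errors (true values must be nonzero)
    mape = np.mean(np.abs((true - pred) / true)) * 100
    print(f'RMSE or Root mean squared error: {rmse:.2f}')
    print(f'Variance score: {r2:.2f}')
    print(f'Mean Absolute Error: {mae:.2f}')
    print(f'Mean Absolute Percentage Error: {mape:.2f} %')
    return rmse, r2, mae, mape

# tiny usage example with made-up load values
metrics = error_metrics([2300, 2400], [2350, 2380])
```

The exact printing format of the notebook's version may differ; this sketch just reproduces the four numbers reported in the output blocks above.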
We will try our hands only on SARIMAX in this post and then try the traditional linear and non-linear ML models. So, why should we even consider using traditional ML regression models for time series forecasting? Because, in some cases, like sales forecasting or energy prediction, the y variable strongly depends on external factors, like the temperature in the case of energy consumption. So, it becomes more of a regression problem than time series forecasting, and supervised ML models can help us find better patterns in such data that rely heavily on exogenous variables. Traditional time series models like SARIMAX cannot handle multiple seasonalities. They also require a lot of historical data to be able to forecast accurately and are more time consuming when it comes to hyperparameter tuning. Please refer to [3] for use cases of traditional ML regression models for sales forecasting. Also, this [4] paper proves that using simple regression-based ML models to predict electricity demand is possible and that the errors are comparable to those of more complex models. The energy consumption values can also be expected to depend on their previous lagged values, because the energy consumption of a region shouldn't be expected to change much in the next few hours except for any unexpected or unfortunate events. So we will add the lagged values of energy consumption as the X parameters and check if we can predict better using the past values (in addition to the variables that we had already added). """ Adding max 24 lags; lag1 is the value of the energy consumption in the previous hour, lag2 is the value of energy consumption 2 hours before the current value, and so on. """ # Creating the lag variables for i in range(24): sdge1_lin['lag'+str(i+1)] = sdge1_lin['SDGE'].shift(i+1) """ Since the first 24 values won't have any 24th lag, they will be NaN.
So dropping the NaNs """ lag_sdge = sdge1_lin.dropna() plot_predvstrue_reg(elastic_net_lag.predict(X_test_lag), y_test_lag) Error metrics for model Elastic net with all lags >> RMSE or Root mean squared error: 50.75 >> Variance score: 0.99 >> Mean Absolute Error: 37.36 >> Mean Absolute Percentage Error: 1.58 % We can see from the graphs and the errors that, compared to the baseline, the elastic net model performs much better on all error metrics. The RMSE is only 50.75 MW compared to 122 MW with the baseline model. The MAPE also dropped from 4.21% to 1.58%. So, this model performs very well, but it comes with a limitation — we can use it to predict only the next hour's value. That is, the maximum time window it can predict accurately is 1 hour. So, if that is the application, then the elastic net model with previous lag values should be used. As mentioned earlier, traditional time series models like SARIMAX cannot handle multiple seasonalities. There are two interesting time series forecasting methods called BATS and TBATS that are capable of modeling time series with multiple seasonalities, check out link. But they are computationally slow, and SARIMAX models with Fourier series handling the multiple seasonalities can perform as well as the TBATS model. The main driving point behind this feature transformation is that we cannot just feed in the hours as 0,1,2,....22,23 to the model, because we need to make the model understand that the 0th and 23rd hours are in reality as close as the 0th hour is to the 1st hour. Same for the weekdays and years.
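The wrap-around point can be illustrated with a quick sketch (a toy example of my own, not from the notebook): under a sin/cos encoding, hour 23 sits exactly as close to hour 0 as hour 1 does, unlike with the raw integer encoding.

```python
import numpy as np

def cyc(hour):
    """Map an hour 0..23 onto the unit circle as a (sin, cos) pair."""
    angle = 2 * np.pi * hour / 24
    return np.array([np.sin(angle), np.cos(angle)])

print(abs(23 - 0))  # raw integers put hours 23 and 0 far apart: 23
# on the circle, hours 23 and 0 are exactly as close as hours 0 and 1
print(round(float(np.linalg.norm(cyc(23) - cyc(0))), 3))
print(round(float(np.linalg.norm(cyc(1) - cyc(0))), 3))
```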
Adding Fourier cyclical series for hour, year, and week periods """ as said above the k terms for each yearly, weekly and daily seasonalities could be chosen by optimizing on the AIC values..but after some research on energy consumption time series k= 5 was chosen for each seasonality """add_fourier_terms(lag_sdge, year_k= 5, week_k=5 , day_k=5)# Visualizing the new variables on week seasonality_ = (1-lag_sdge.loc['01-01-2014':'01-09-2014', [col for col in lag_sdge if col.startswith('week')]]).sum(axis = 1).plot() From the above plot, we can see how the discrete weekday values have been transformed into a more continuous variable pattern. The same can be visualized for hourly and yearly variables too. sdgecyc.head(2) The correlation of the time series observations calculated with values of the same series at previous times is called a serial correlation, or an autocorrelation (ACF). It is used to determine the moving average (MA or q) term of the ARIMA(p,d,q) models. A partial autocorrelation (PACF) is a summary of the relationship between an observation in a time series with observations at prior time steps with the relationships of intervening observations removed. It is used to determine the autoregression (AR or p) term of the ARIMA(p,d,q) models. And d in ARIMA(p,d,q) is the number of differencing required to make the time series stationary. SARIMAX is an extension of ARIMA to handle seasonality terms (S) and exogenous variables (X). The basic architecture of a SARIMAX model is given by SARIMAX(p, d, q)x(P, D, Q, s) where p, d, q are as defined above and the (P, D, Q, s) is the seasonal component of the model for the AR parameters, differences, MA parameters, and periodicity respectively. s is an integer giving the periodicity (number of periods in season), often it is 4 for quarterly data or 12 for monthly data. The default is no seasonal effect. The SARIMAX model takes many inputs which require some tuning to be able to pick the best model parameters. 
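`add_fourier_terms` is a helper defined in the notebook and not shown in this excerpt. A minimal version of the idea — k sin/cos pairs per seasonal period, with column names chosen here purely for illustration — could look like:

```python
import numpy as np
import pandas as pd

def add_fourier_terms(df, year_k, week_k, day_k):
    """Add sin/cos Fourier pairs for yearly, weekly and daily seasonality.

    Assumes df has a DatetimeIndex; names and periods are illustrative,
    not necessarily identical to the notebook's implementation.
    """
    for name, period, values, k_max in [
            ('year', 365.25, df.index.dayofyear, year_k),
            ('week', 7, df.index.dayofweek, week_k),
            ('hour', 24, df.index.hour, day_k)]:
        for k in range(1, k_max + 1):
            df[f'{name}_sin{k}'] = np.sin(2 * np.pi * k * values / period)
            df[f'{name}_cos{k}'] = np.cos(2 * np.pi * k * values / period)

df = pd.DataFrame(index=pd.date_range('2014-01-01', periods=48, freq='H'))
add_fourier_terms(df, year_k=5, week_k=5, day_k=5)
print(df.shape)  # 2 * (5 + 5 + 5) = 30 new columns
```

With k = 5 per seasonality this yields 30 smooth, periodic features that any regression model can consume directly.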
pmdarimapackage’s auto_arimamodule automates this task and gives us the best model by taking in some input ranges, similar to gridsearchcv. Error metrics for model SARIMAX(2,1,1)x(1,0,1,24) with Fourier terms 1 week ahead forecast on test setRMSE or Root mean squared error: 150.46Variance score: 0.68Mean Absolute Error: 103.46Mean Absolute Percentage Error: 5.33 % We see that the first-week forecast is pretty well but even at the end of the first week the forecasting performance decreases and the confidence interval values grow larger beyond the scale of the range of the energy consumption values. Thus, SARIMAX model was not able to capture long term trends but it did well on 1 week ahead forecast. Errors for 1 hour ahead forecasts weren’t calculated above for SARIMAX model, by using (dynamic=True), because we get excellent results using elastic net regression for 1 hour ahead forecasts using the lag variables and it is much faster to fit than SARIMAX. Let’s try using FBProphet for our problem. FBProphet provides a decomposition regression model that is extendable and configurable with interpretable parameters. Prophet frames the forecasting problem as a curve-fitting exercise rather than looking explicitly at the time-based dependence of each observation within a time series. Similar to SARIMAX, we can add extra regressor terms like temperature data to the model as well. (Reference link1, link2, link3) At its core, Prophet is an additive model with the following components: y(t)=g(t)+s(t)+h(t)+εty(t)=g(t)+s(t)+h(t)+εt g(t) models trends(t) models seasonality with Fourier seriesh(t) models the effects of holidays or large eventsεt represents an irreducible error term Using only the ‘SDGE’, ‘HourlyDryBulbTemperature’, ‘cum_AC_kW’, ‘non_working_working’ columns while using Prophet because Prophet, unlike SARIMAX, handles multiple seasonalities well. So, we don’t need to pass in the Fourier terms separately. 
FB Prophet can be passed with a holiday feature, but since we have already captured the holidays and weekends in the ‘non_working_working’ column we won’t pass a separate holiday list to Prophet. Error metrics for FB Prophet w/ auto seasonality 1 week aheadRMSE or Root mean squared error: 203.21Variance score: 0.76Mean Absolute Error: 164.76Mean Absolute Percentage Error: 7.66 %Error metrics for FB Prophet w/ auto seasonality 8 months aheadRMSE or Root mean squared error: 262.74Variance score: 0.72Mean Absolute Error: 201.06Mean Absolute Percentage Error: 8.55 % So, though the errors of this model are higher than the other models, both on 1-week ahead and months ahead forecasts, it looks like FB Prophet has captured the multiple seasonalities and trend of our data very well. I will just mention the errors for Random Forests here. Elastic net regression didn’t do very well for long term forecasts. Error metrics for Tuned Random forest with fourier termsRMSE or Root mean squared error: 196.99Variance score: 0.84Mean Absolute Error: 137.41Mean Absolute Percentage Error: 5.73 % Random forest performs well with the Fourier terms added. It captures the multiple seasonalities well and also gives a MAPE of only 5.76% compared to the baseline MAPE of 9.23%. Elastic net regression doesn’t capture the higher energy consumption values as good as the random forest does. Random forest’s better performance paved the way for trying the tree-based XGBoost model on the data. XGBoost (Extreme Gradient Boosting) belongs to a family of boosting algorithms and uses the gradient boosting (GBM) framework at its core. It is an optimized distributed gradient boosting library. XGBoost is well known to provide better solutions than other machine learning algorithms. 
It is not often used for time series, especially with trees as the base learner, because it is difficult to catch a trend with trees. But since our data doesn’t have a very significant trend, has multiple seasonalities (modeled using the Fourier series), and depends significantly on exogenous variables, we can try XGBoost and see how it performs on the energy consumption time series.

Error metrics for tuned XGBoost with Fourier terms:
RMSE or Root mean squared error: 172.21
Variance score: 0.88
Mean Absolute Error: 121.19
Mean Absolute Percentage Error: 5.08 %

The entire model was tuned and fit in around 2 minutes, and it seems to have learned the data patterns along with the multiple seasonalities very well. The error on the ~8 months-ahead forecast is just 5.08% compared to the baseline of 9.23%. Also, the R2 fit is 88% compared to the baseline of 55%.

WHY should we consider this? As discussed earlier, time-series data needs to be stationary before modeling. So, the ideal case would be to detrend the data, feed it into the ML models, and then add the trend back to the forecasted results. Nonetheless, good results were obtained above without detrending, because the energy consumption data from 2014 to 2018 has a very weak trend and the multiple seasonalities were handled well by the Fourier terms.

Alternatively, the overall data trend, and also the effect of cum_AC_kW (the cumulative PV installation to date), can be modeled using FB Prophet and then merged with XGBoost’s forecast. A tree-based regression model like XGBoost cannot easily handle X variables like cum_AC_kW, because it is an ever-increasing variable, so the test data will always have higher magnitude values not seen by the model in the training set. I have extracted the trend and cum_AC_kW impact on energy from the FB Prophet model and subtracted these two components from our main dataframe with all the Fourier terms.
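The extract-the-trend, model-the-residual, add-it-back idea just described can be illustrated on synthetic data. In this sketch a numpy polynomial fit stands in for Prophet’s trend component and an hour-of-day mean stands in for XGBoost; both stand-ins are my simplifications of the idea, not the author’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 60)                              # hourly index, 60 days
season = 300 * np.sin(2 * np.pi * (t % 24) / 24)    # daily seasonal cycle
trend = 2400 - 0.5 * t / 24                         # slowly decreasing trend
y = trend + season + rng.normal(0, 20, t.size)      # synthetic hourly load

train, test = t < 24 * 45, t >= 24 * 45             # last 15 days held out

# Step 1: fit the trend on the training window (stand-in for Prophet)
coef = np.polyfit(t[train], y[train], deg=1)
trend_hat = np.polyval(coef, t)

# Step 2: model the detrended series (stand-in for XGBoost):
# predict each hour-of-day by its detrended training mean
detrended = y - trend_hat
hour_means = np.array([detrended[train][(t[train] % 24) == h].mean()
                       for h in range(24)])
season_hat = hour_means[t % 24]

# Step 3: add the trend back to obtain the final forecast
y_hat = trend_hat + season_hat
mape = np.mean(np.abs((y_hat[test] - y[test]) / y[test])) * 100
```

The point of the decomposition is that the residual model only ever sees detrended values, so an ever-growing regressor like cum_AC_kW no longer pushes the test set outside the range seen in training.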
Then this detrended energy consumption data was passed to the XGBoost model, and the XGBoost forecast results were added back to the total trend to get the final predictions. Below is the trend and effect of cum_AC_kW from our previously trained FBProphet model. We will employ the following architecture for creating our new model:

Error metrics for XGBoost with Prophet detrending and Fourier terms:
RMSE or Root mean squared error: 212.97
Variance score: 0.81
Mean Absolute Error: 170.77
Mean Absolute Percentage Error: 7.30 %

The combined model performs worse than XGBoost alone because FB Prophet overestimates the trend and cum_AC_kW’s impact on energy consumption, especially at the end of the time scale, as can be seen from the plot above for forecast.trend + forecast.cum_AC_kW, which shows a sudden dip at the end. This seems to result in an underprediction of energy consumption and hurts the overall results. Still, the model performs very well compared to the baseline and, for the reasons discussed above, is a more stable model for time series than XGBoost alone.

Different models were tried to forecast the hourly energy demand, measured in MW, of the San Diego Gas and Electric (SDGE) utility region in CA. The energy consumption is highly dependent on the outside temperature and has strong multiple seasonalities — daily, weekly, and yearly. The increasing PV (photovoltaic) installations in the region (cum_AC_kW) seem to have brought about the decreasing trend in energy consumption, since more renewable energy at the customer’s facility means less load on the utility. Note that there can be other factors causing this decreasing trend, such as energy storage installations at customer facilities, increases in the electric efficiency of household and commercial equipment, people becoming more conscious of their usage (morally or through utility incentives), etc.
The best way to capture the trend, which is a combination of all the above factors and maybe more, is to make the model learn the trend over a long period of time. Seasonality is an important part of predicting the energy consumption of a region, so getting that part right was also crucial for improving the model’s performance (which was achieved by using the Fourier series).

Error terms on hour ahead forecast on the test set

Error terms on months ahead forecast on the test set

The XGBoost model with the Fourier terms has performed very well, predicting for a forecasting window of 8 months ahead. For hourly data with multiple seasonalities, that is a pretty impressive result. For long term forecasts, most of the models performed better than the baseline persistence model, and the best model (XGBoost) gives a MAPE of 5.08% compared to the baseline error of 9.23% on the test set. The RMSE, R2, and MAE values are also considerably lower than those of the baseline model. For example, the difference in RMSE from the baseline model is almost 160 MW, which is pretty significant. To put this in perspective, the average size of a gas-fired combined-cycle power plant is 500 MW.

FB Prophet does a very good job of identifying the trend and the multiple seasonalities of the data. It can be paired with XGBoost to obtain a more robust long term forecast.

Thanks! Please feel free to reach out to me if you have any questions or want to suggest any improvements or corrections. Save energy, save the planet.
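As a closing note for reproducibility: the error blocks quoted throughout (RMSE, variance score, MAE, MAPE) can be generated with a small helper along these lines. This is a hypothetical sketch with my own naming, not the author’s code, and the variance score is computed here as R2, which can differ slightly from sklearn’s explained_variance_score when the residuals are biased:

```python
import numpy as np

def error_metrics(pred, true):
    """Print and return (RMSE, variance score, MAE, MAPE %) for a forecast."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    # Variance score computed as R2 = 1 - SS_res / SS_tot
    r2 = 1 - np.sum((true - pred) ** 2) / np.sum((true - true.mean()) ** 2)
    mae = np.mean(np.abs(pred - true))
    # MAPE assumes the true values are never zero (safe for MW-scale load)
    mape = np.mean(np.abs((pred - true) / true)) * 100
    print(f'RMSE or Root mean squared error: {rmse:.2f}')
    print(f'Variance score: {r2:.2f}')
    print(f'Mean Absolute Error: {mae:.2f}')
    print(f'Mean Absolute Percentage Error: {mape:.2f} %')
    return rmse, r2, mae, mape
```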
[ { "code": null, "e": 348, "s": 46, "text": "Part 1 of this post had covered the basics of energy (electricity) consumption, how to import, resample, and merge datasets gathered from different sources and EDA. Some inferences were also extracted from San Deigo’s energy consumption data merged with temperature and solar panels installation data." }, { "code": null, "e": 371, "s": 348, "text": "towardsdatascience.com" }, { "code": null, "e": 695, "s": 371, "text": "In this post, I will explore some time series models like persistent forecast (as baseline), ARIMA and FB prophet; and then also extend my approach to include linear regression, random forests, XGBoost and an ensemble model, to see whether or not these linear and non-linear approaches can model our time series accurately." }, { "code": null, "e": 837, "s": 695, "text": "Links to Jupyter notebooks on nbviewer:1. Data import and EDA (covered in Part 1)2. Building ML models and Predictions (covered in this post)" }, { "code": null, "e": 1091, "s": 837, "text": "If you want a very detailed explanation of time series data processing, modeling and effectively visualizing time series data then please see the 2nd notebook listed above. Having said that, I will try my best to cover the important topics in this post." }, { "code": null, "e": 1165, "s": 1091, "text": "Github link for the entire project: Hourly_Energy_Consumption_Prediction." 
}, { "code": null, "e": 1228, "s": 1165, "text": "Below is the data with some added features from the last post:" }, { "code": null, "e": 1474, "s": 1228, "text": "#Import the 5 years of hourly energy consumption data previously #cleaned, explored, and stored as 'hourly1418_energy_temp_PV.csv' in # Part 1sdge = pd.read_csv('hourly1418_energy_temp_PV.csv', index_col = 'Dates', parse_dates=['Dates', 'Date'])" }, { "code": null, "e": 1581, "s": 1474, "text": "SDGE: Our target variable, San Diego’s hourly energy consumption in — MWh (also the hourly load or demand)" }, { "code": null, "e": 1649, "s": 1581, "text": "non_working: 0 if the day is either a weekend or a holiday (binary)" }, { "code": null, "e": 1739, "s": 1649, "text": "HourlyDryBulbTemperature: The dry bulb temperature measured at San Diego airport in deg F" }, { "code": null, "e": 1867, "s": 1739, "text": "cum_AC_kW: The cumulative installed capacity of solar panels at customer sites (until the date given in the index column) in kW" }, { "code": null, "e": 1928, "s": 1867, "text": "Some observations from the previous analysis done in Part 1:" }, { "code": null, "e": 1975, "s": 1928, "text": "43824 hours (rows) of data (5 years 2014–2018)" }, { "code": null, "e": 2041, "s": 1975, "text": "SDGE: Min: 1437.08, Mean: 2364.92 and Median: 2298.0, Max: 4867.0" }, { "code": null, "e": 2117, "s": 2041, "text": "The time series has multiple seasonal patterns — daily, weekly, and yearly." }, { "code": null, "e": 2219, "s": 2117, "text": "It has a slightly decreasing trend which is inversely proportional to the solar installation capacity" }, { "code": null, "e": 2277, "s": 2219, "text": "Energy consumption is highly correlated with temperature." }, { "code": null, "e": 2477, "s": 2277, "text": "I have used the time series from 1st Jan 2014 till 31st March 2018 as my training data. And have predicted the hourly energy consumption values for 1st April to 31 Dec 2018 (~15% of the entire data)." 
}, { "code": null, "e": 2885, "s": 2477, "text": "The final goal for this project is: “The developed forecasting model/s can be utilized by the electrical utilities to effectively plan their energy generation operations and balance the demand with appropriate supply. An efficient forecast can prove very useful for the utilities in planning their day to day operations, meeting their customers’ energy demand, and avoiding any excess generation of energy.”" }, { "code": null, "e": 3038, "s": 2885, "text": "Basically, we want to fit an ML model onto our hourly energy consumption time series and use that to predict the future energy consumption of San Diego." }, { "code": null, "e": 3234, "s": 3038, "text": "Prediction WindowIn any time series problem, it is very important to define the window of prediction beforehand. Here I have tested the models on a future window of 1-hour, 1-week, and ~8 months." }, { "code": null, "e": 3542, "s": 3234, "text": "Error Metrics UsedI have calculated R2 score, MAE, RMSE, and MAPE for each model for the training and test sets. Out of the 4 metrics, MAPE will be chosen to select the best model. MAPE helps us to understand the % error on the absolute value and is indifferent to the absolute magnitude of the time series." }, { "code": null, "e": 3645, "s": 3542, "text": "AssumptionsAll ML models have some assumptions and here is one of them that was used for this project:" }, { "code": null, "e": 4110, "s": 3645, "text": "The future solar panel installations and temperature data are readily available to us and we don’t need to run separate forecasts for them (I wish!). In the real world, we need to use the forecasted values for these independent variables (or at least use the average expected values for the temperature and estimated future solar installations based on the past data). But for simplicity, I have used the actual observed values for these variables in this project." 
}, { "code": null, "e": 4190, "s": 4110, "text": "Without much ado, let’s focus our energy on the time series prediction problem." }, { "code": null, "e": 4320, "s": 4190, "text": "Time series is a sequence of observations taken sequentially in time. Any time-series data has the following components: ref link" }, { "code": null, "e": 4389, "s": 4320, "text": "Level: The baseline value for the series if it were a straight line." }, { "code": null, "e": 4487, "s": 4389, "text": "Trend: The optional and often linearly increasing or decreasing behavior of the series over time." }, { "code": null, "e": 4565, "s": 4487, "text": "Seasonality: The optional repeating patterns or cycles of behavior over time." }, { "code": null, "e": 4656, "s": 4565, "text": "Noise: The optional variability in the observations that cannot be explained by the model." }, { "code": null, "e": 4998, "s": 4656, "text": "Another important feature of most time series is that observations close together in time tend to be correlated (serially dependent). This feature is also called as auto-correlation and, as we will see later, forms an important aspect of the conventional time series modeling techniques like AutoRegressive Integrated Moving Average (ARIMA)." }, { "code": null, "e": 5433, "s": 4998, "text": "Stationarity: A time series is stationary when the mean, variance, and autocorrelation are constant over time, and there are no seasonal or cyclic patterns. It is very important to make a time series stationary before using any linear regression-based models like ARIMA (regression-based on lagged values of Y). A time series is usually made stationary by differencing the series with itself. Two good articles on stationarity — 1, 2." }, { "code": null, "e": 5605, "s": 5433, "text": "Due to all the above features, modeling a time series involves a slightly different approach than regular ML problems. Here is a good link explaining the main differences." 
}, { "code": null, "e": 5703, "s": 5605, "text": "Here is another good resource for diving deeper into time series forecasting modeling techniques." }, { "code": null, "e": 6623, "s": 5703, "text": "Time series cross-validation: (Ref)Cross-validation for time series is a bit different since one cannot randomly mix values in a fold while still preserving the temporal structure. With randomization, all-time dependencies between observations will be lost. This is why we will have to use a more tricky approach in optimizing the model parameters such as “cross-validation on a rolling basis”. The idea is rather simple — we train our model on a small segment of the time series from the beginning until some t, make predictions for the next t+n steps, and calculate an error. Then, we expand our training sample to t+n values, make predictions from t+n until t+2∗n, and continue moving our test segment of the time series until we hit the last available observation. This can be established using the sklearn.model_selection's TimeSeriesSplit module. We will use this technique to calculate the validation set errors." 
}, { "code": null, "e": 6655, "s": 6623, "text": "Short term hour ahead forecasts" }, { "code": null, "e": 7091, "s": 6655, "text": "\"\"\" Error metrics for hour ahead forecasts when simply repeating last hour's values \"\"\" # error_metrics(predicted_values, true_values) is a function that I # built to calculate the errors for a given model's predictions_ = error_metrics(sdge_lin.loc[X_test.index.shift(-1, freq='H'), 'SDGE'], y_test)>> RMSE or Root mean squared error: 122.27>> Variance score: 0.94>> Mean Absolute Error: 99.23>> Mean Absolute Percentage Error: 4.21 %" }, { "code": null, "e": 7127, "s": 7091, "text": "Long term ~8 months ahead forecasts" }, { "code": null, "e": 7481, "s": 7127, "text": "\"\"\" Error metrics on months ahead forecast when simply repeating last year's values \"\"\"# the data is hourly and one year = 8760 hours_ = error_metrics(sdge_lin.loc[X_test.index.shift(-8760, freq='H'), 'SDGE'], y_test)>> RMSE or Root mean squared error: 330.74>> Variance score: 0.55>> Mean Absolute Error: 224.89>> Mean Absolute Percentage Error: 9.23 %" }, { "code": null, "e": 7719, "s": 7481, "text": "In the energy forecasting domain, it is even more important to get the daily max demand right, because that can determine if the plant operator needs to switch ON a peaker (mostly gas) plant or not. So, calculating the error on that too." 
}, { "code": null, "e": 8097, "s": 7719, "text": "\"\"\"Resampling both the y_test and predictions at a 24 hours period and using the max as the aggregate function\"\"\"_ = error_metrics(sdge_lin.loc[X_test.index.shift(-8760, freq='H'), 'SDGE'].resample('24h').max(), y_test.resample('24h').max())>> RMSE or Root mean squared error: 389.37>> Variance score: 0.22>> Mean Absolute Error: 264.44>> Mean Absolute Percentage Error: 8.60 %" }, { "code": null, "e": 8155, "s": 8097, "text": "“We are ready to build the future-predicting models now.”" }, { "code": null, "e": 8731, "s": 8155, "text": "Note: Some of the classical time series models that are generally used for time series forecasting are Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors (SARIMAX), Vector Autoregression Moving-Average with Exogenous Regressors (VARMAX), Simple Exponential Smoothing (SES), Holt Winter’s Exponential Smoothing (HWES), and more. You can find more details about each model in this article. We will try our hands only on SARIMAX in this post and then try the traditional linear and non-linear ML models." }, { "code": null, "e": 8831, "s": 8731, "text": "So, why should we even consider using traditional ML regression models for time series forecasting?" }, { "code": null, "e": 9197, "s": 8831, "text": "Because, in some cases, like sales forecasting or energy prediction, the y variable strongly depends on external factors, like the temperature in case of energy consumption. So, it becomes more of a regression problem than time series forecasting, and also supervised ML models can help us find better patterns in such data that rely heavily on exogenous variables." }, { "code": null, "e": 9424, "s": 9197, "text": "Traditional time series models like SARIMAX cannot handle multiple seasonalities. 
They also require a lot of historical data to be able to forecast accurately and are more time consuming when it comes to hyperparameter tuning." }, { "code": null, "e": 9517, "s": 9424, "text": "Please refer to [3] for use cases of traditional ML regression models for sales forecasting." }, { "code": null, "e": 9690, "s": 9517, "text": "Also, this [4] paper proves that using simple regression-based ML models to predict electricity demand is possible and the errors are comparable to the more complex models." }, { "code": null, "e": 10123, "s": 9690, "text": "The energy consumption values can also be expected to depend on it’s previous lagged values because the energy consumption of a region shouldn’t be expected to change much in the next few hours except for any unexpected or unfortunate events. So we will add the lagged values of energy consumption as the X parameters and check if we can predict better using the past values (in addition to the variables that we had already added)." }, { "code": null, "e": 10532, "s": 10123, "text": "\"\"\" Adding max 24 lags; lag1 is the value of the energy consumption in the previous hour, lag2 is the value of energy consumption2 hours before the current value and so on.\"\"\"# Creating the lag variablesfor i in range(24): sdge1_lin['lag'+str(i+1)] = sdge1_lin['SDGE'].shift(i+1)\"\"\" Since the first 24 values won't have any 24th lag, they will be NaN. 
So dropping the NaNs \"\"\"lag_sdge = sdge1_lin.dropna()" }, { "code": null, "e": 10601, "s": 10532, "text": "plot_predvstrue_reg(elastic_net_lag.predict(X_test_lag), y_test_lag)" }, { "code": null, "e": 10785, "s": 10601, "text": "Error metrics for model Elastic net with all lags>> RMSE or Root mean squared error: 50.75>> Variance score: 0.99>> Mean Absolute Error: 37.36>> Mean Absolute Percentage Error: 1.58 %" }, { "code": null, "e": 10924, "s": 10785, "text": "We can see from the graphs and the errors, when compared to the baseline, the elastic net model performs much better on all error metrics." }, { "code": null, "e": 10994, "s": 10924, "text": "The RMSE is only 50.75 MW compared to 122 MW with the baseline model." }, { "code": null, "e": 11037, "s": 10994, "text": "The MAPE also dropped from 4.21% to 1.58%." }, { "code": null, "e": 11329, "s": 11037, "text": "So, this model performs very well but it comes with a limitation — we can use it to predict only the next hour value. That is, the maximum time window it can predict accurately is 1 hour. So, if that is the application case then the elastic net model with previous lag values should be used." }, { "code": null, "e": 11433, "s": 11329, "text": "As mentioned earlier, traditional time series models like SARIMAX cannot handle multiple seasonalities." }, { "code": null, "e": 11749, "s": 11433, "text": "There are two interesting time series forecasting methods called BATS and TBATS that are capable of modeling time series with multiple seasonalities, check out link. But, they are computationally slow and SARIMAX models with Fourier series handling the multiple seasonalities can perform as good as the TBATS model." 
}, { "code": null, "e": 12044, "s": 11749, "text": "The main diriving point behind this feature transformation is that we cannot just feed in the hours as 0,1,2,....22,23 to the model, because we need to make the model understand that 0th and 23rd hours are in reality as close as the 0th hour is to the 1st hour. Same for the weekdays and years." }, { "code": null, "e": 12108, "s": 12044, "text": "Adding Fourier cyclical series for hour, year, and week periods" }, { "code": null, "e": 12564, "s": 12108, "text": "\"\"\" as said above the k terms for each yearly, weekly and daily seasonalities could be chosen by optimizing on the AIC values..but after some research on energy consumption time series k= 5 was chosen for each seasonality \"\"\"add_fourier_terms(lag_sdge, year_k= 5, week_k=5 , day_k=5)# Visualizing the new variables on week seasonality_ = (1-lag_sdge.loc['01-01-2014':'01-09-2014', [col for col in lag_sdge if col.startswith('week')]]).sum(axis = 1).plot()" }, { "code": null, "e": 12755, "s": 12564, "text": "From the above plot, we can see how the discrete weekday values have been transformed into a more continuous variable pattern. The same can be visualized for hourly and yearly variables too." }, { "code": null, "e": 12771, "s": 12755, "text": "sdgecyc.head(2)" }, { "code": null, "e": 13026, "s": 12771, "text": "The correlation of the time series observations calculated with values of the same series at previous times is called a serial correlation, or an autocorrelation (ACF). It is used to determine the moving average (MA or q) term of the ARIMA(p,d,q) models." }, { "code": null, "e": 13316, "s": 13026, "text": "A partial autocorrelation (PACF) is a summary of the relationship between an observation in a time series with observations at prior time steps with the relationships of intervening observations removed. It is used to determine the autoregression (AR or p) term of the ARIMA(p,d,q) models." 
}, { "code": null, "e": 13413, "s": 13316, "text": "And d in ARIMA(p,d,q) is the number of differencing required to make the time series stationary." }, { "code": null, "e": 13929, "s": 13413, "text": "SARIMAX is an extension of ARIMA to handle seasonality terms (S) and exogenous variables (X). The basic architecture of a SARIMAX model is given by SARIMAX(p, d, q)x(P, D, Q, s) where p, d, q are as defined above and the (P, D, Q, s) is the seasonal component of the model for the AR parameters, differences, MA parameters, and periodicity respectively. s is an integer giving the periodicity (number of periods in season), often it is 4 for quarterly data or 12 for monthly data. The default is no seasonal effect." }, { "code": null, "e": 14177, "s": 13929, "text": "The SARIMAX model takes many inputs which require some tuning to be able to pick the best model parameters. pmdarimapackage’s auto_arimamodule automates this task and gives us the best model by taking in some input ranges, similar to gridsearchcv." }, { "code": null, "e": 14404, "s": 14177, "text": "Error metrics for model SARIMAX(2,1,1)x(1,0,1,24) with Fourier terms 1 week ahead forecast on test setRMSE or Root mean squared error: 150.46Variance score: 0.68Mean Absolute Error: 103.46Mean Absolute Percentage Error: 5.33 %" }, { "code": null, "e": 14745, "s": 14404, "text": "We see that the first-week forecast is pretty well but even at the end of the first week the forecasting performance decreases and the confidence interval values grow larger beyond the scale of the range of the energy consumption values. Thus, SARIMAX model was not able to capture long term trends but it did well on 1 week ahead forecast." 
}, { "code": null, "e": 15004, "s": 14745, "text": "Errors for 1 hour ahead forecasts weren’t calculated above for SARIMAX model, by using (dynamic=True), because we get excellent results using elastic net regression for 1 hour ahead forecasts using the lag variables and it is much faster to fit than SARIMAX." }, { "code": null, "e": 15464, "s": 15004, "text": "Let’s try using FBProphet for our problem. FBProphet provides a decomposition regression model that is extendable and configurable with interpretable parameters. Prophet frames the forecasting problem as a curve-fitting exercise rather than looking explicitly at the time-based dependence of each observation within a time series. Similar to SARIMAX, we can add extra regressor terms like temperature data to the model as well. (Reference link1, link2, link3)" }, { "code": null, "e": 15537, "s": 15464, "text": "At its core, Prophet is an additive model with the following components:" }, { "code": null, "e": 15582, "s": 15537, "text": "y(t)=g(t)+s(t)+h(t)+εty(t)=g(t)+s(t)+h(t)+εt" }, { "code": null, "e": 15733, "s": 15582, "text": "g(t) models trends(t) models seasonality with Fourier seriesh(t) models the effects of holidays or large eventsεt represents an irreducible error term" }, { "code": null, "e": 15976, "s": 15733, "text": "Using only the ‘SDGE’, ‘HourlyDryBulbTemperature’, ‘cum_AC_kW’, ‘non_working_working’ columns while using Prophet because Prophet, unlike SARIMAX, handles multiple seasonalities well. So, we don’t need to pass in the Fourier terms separately." }, { "code": null, "e": 16172, "s": 15976, "text": "FB Prophet can be passed with a holiday feature, but since we have already captured the holidays and weekends in the ‘non_working_working’ column we won’t pass a separate holiday list to Prophet." 
}, { "code": null, "e": 16545, "s": 16172, "text": "Error metrics for FB Prophet w/ auto seasonality 1 week aheadRMSE or Root mean squared error: 203.21Variance score: 0.76Mean Absolute Error: 164.76Mean Absolute Percentage Error: 7.66 %Error metrics for FB Prophet w/ auto seasonality 8 months aheadRMSE or Root mean squared error: 262.74Variance score: 0.72Mean Absolute Error: 201.06Mean Absolute Percentage Error: 8.55 %" }, { "code": null, "e": 16762, "s": 16545, "text": "So, though the errors of this model are higher than the other models, both on 1-week ahead and months ahead forecasts, it looks like FB Prophet has captured the multiple seasonalities and trend of our data very well." }, { "code": null, "e": 16886, "s": 16762, "text": "I will just mention the errors for Random Forests here. Elastic net regression didn’t do very well for long term forecasts." }, { "code": null, "e": 17067, "s": 16886, "text": "Error metrics for Tuned Random forest with fourier termsRMSE or Root mean squared error: 196.99Variance score: 0.84Mean Absolute Error: 137.41Mean Absolute Percentage Error: 5.73 %" }, { "code": null, "e": 17245, "s": 17067, "text": "Random forest performs well with the Fourier terms added. It captures the multiple seasonalities well and also gives a MAPE of only 5.76% compared to the baseline MAPE of 9.23%." }, { "code": null, "e": 17356, "s": 17245, "text": "Elastic net regression doesn’t capture the higher energy consumption values as good as the random forest does." }, { "code": null, "e": 17458, "s": 17356, "text": "Random forest’s better performance paved the way for trying the tree-based XGBoost model on the data." }, { "code": null, "e": 17655, "s": 17458, "text": "XGBoost (Extreme Gradient Boosting) belongs to a family of boosting algorithms and uses the gradient boosting (GBM) framework at its core. It is an optimized distributed gradient boosting library." 
}, { "code": null, "e": 18147, "s": 17655, "text": "XGBoost is well known to provide better solutions than other machine learning algorithms. It is not often used for time series, especially if the base used is trees because it is difficult to catch the trend with trees, but since our data doesn’t have a very significant trend and also since it has multiple seasonalities (modeled using Fourier series) and depends significantly on exogenous variables, we can try XGboost to see how it performs on the time series data of energy consumption." }, { "code": null, "e": 18322, "s": 18147, "text": "Error metrics for Tuned XGBoost with Fourier termsRMSE or Root mean squared error: 172.21Variance score: 0.88Mean Absolute Error: 121.19Mean Absolute Percentage Error: 5.08 %" }, { "code": null, "e": 18378, "s": 18322, "text": "The entire model was tuned and fit in around 2 minutes." }, { "code": null, "e": 18615, "s": 18378, "text": "And it seems to have learned the data patterns along with multiple seasonalities very well. The error on ~8 months-ahead forecast is just 5.08% compared to the baseline of 9.23 %. Also, the R2 fit is 88% compared to the baseline of 55%." }, { "code": null, "e": 18644, "s": 18615, "text": "WHY should we consider this?" }, { "code": null, "e": 19067, "s": 18644, "text": "As discussed earlier, to model a time-series data it needs to be stationary. So, the ideal case would be to detrend the data and then feed it into the ML models and then add the trend to the forecasted results. Nonetheless, good results were obtained above without detrending because the energy consumption data from 2014 to 2018 has a very weak trend and the multiple seasonalities were handled well by the Fourier terms." }, { "code": null, "e": 19506, "s": 19067, "text": "Alternatively, the overall data trend and also the effect of cum_AC_kW, which is the cumulative PV installation to date, can be modeled using FB Prophet and then merged with the XGBoost’s forecast. 
Any tree-based regression model like XGBoost cannot easily handle X variables like cum_AC_kW, because it is an ever-increasing variable and the test data will always contain higher-magnitude values never seen by the model in the training set.

I extracted the trend and the cum_AC_kW impact on energy from the FB Prophet model and subtracted these two components from our main dataframe with all the Fourier terms. This detrended energy consumption data was then passed to the XGBoost model, and the XGBoost forecast results were added back to the total trend to get the final predictions.

Below is the trend and the effect of cum_AC_kW from our previously trained FB Prophet model.

We will employ the following architecture for creating our new model:

Error metrics for XGBoost with detrended Prophet, Fourier terms
RMSE or Root mean squared error: 212.97
Variance score: 0.81
Mean Absolute Error: 170.77
Mean Absolute Percentage Error: 7.30 %

The combined model performs worse than XGBoost alone because FB Prophet overestimates the trend and cum_AC_kW's impact on energy consumption, especially at the end of the time scale, as can be seen from the plot above for forecast.trend + forecast.cum_AC_kW, which shows a sudden dip at the end. This seems to result in an underprediction of energy consumption and impacts the overall results. Still, the model performs very well compared to the baseline and, for the reasons discussed above, is a more stable time series model than XGBoost alone.

Different models were tried to forecast the energy demand per hour, measured in MW, of the San Diego Gas and Electric (SDGE) utility region in CA.

The energy consumption is highly dependent on the outside temperature and has strong multiple seasonalities: daily, weekly, and yearly. The increasing PV (photovoltaic) installations in the region (cum_AC_kW) seem to have driven the decreasing trend in energy consumption, since more renewable energy at the customer's facility means less load on the utility. Note that there can be other factors causing this decreasing trend, such as energy storage installations at customer facilities, increases in the electric efficiency of household and commercial equipment, people becoming more conscious of their usage (morally or through utility incentives), etc.

The best way to capture the trend, which is a combination of all the above factors and maybe more, is to make the model learn the trend over a long period of time. The seasonality is an important part of predicting the energy consumption of a region, so getting that part right was also crucial for improving the model's performance (which was achieved by using Fourier series).

Error terms on the hour-ahead forecast on the test set

Error terms on the months-ahead forecast on the test set

The XGBoost model with the Fourier terms has performed very well, predicting for a forecasting window of 8 months ahead. For hourly data with multiple seasonalities, that is a pretty impressive result.

For long-term forecasts, most of the models performed better than the baseline persistence model, and the best model (XGBoost) gives a MAPE of 5.08% compared to the baseline error of 9.23% on the test set. The RMSE, R2, and MAE values are also considerably lower than the baseline model's. For example, the difference in RMSE from the baseline model is almost 160 MW, which is quite significant. To put this in perspective, the average size of a gas-fired combined-cycle power plant is 500 MW.

FB Prophet does a very good job of identifying the trend and multiple seasonalities of the data. It can be paired with XGBoost to obtain a more robust long-term forecast.

Thanks! Please feel free to reach out to me if you have any questions or want to suggest any improvements or corrections.
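To make the reported numbers concrete, the error metrics used throughout (RMSE, MAE, MAPE) can be computed with a few lines of plain Python. The sample arrays below are made up for illustration; only the formulas correspond to the metrics quoted in the article.

```python
# Error metrics used to compare the forecasting models.
# The sample arrays are illustrative, not the actual SDGE data.

def rmse(actual, predicted):
    # Root mean squared error
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

def mae(actual, predicted):
    # Mean absolute error
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Mean absolute percentage error, in percent
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [2400.0, 2500.0, 2600.0, 2550.0]      # observed load, MW
predicted = [2380.0, 2550.0, 2580.0, 2500.0]   # model forecast, MW

print(f"RMSE: {rmse(actual, predicted):.2f} MW")
print(f"MAE:  {mae(actual, predicted):.2f} MW")
print(f"MAPE: {mape(actual, predicted):.2f} %")
```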
COUNTIF in 5 languages (VBA/SQL/PYTHON/M query/DAX powerBI) | by Aymen | Towards Data Science
Excel is a powerful spreadsheet used by most people working in data analysis. The increasing volume of data and the development of user-friendly tools are an opportunity to improve Excel reports by mixing them with another tool or language.

Working for a financial reporting department, I faced the need to boost our reporting tools. A simple way to start working with a new language is to translate what we used to do in Excel into that language: "How can I pivot this?", "How can I vlookup that?".

In this article I will share with you how you can do a COUNTIF in 5 different languages: VBA, Python, SQL, DAX (Power BI) and M (Power Query). These will be simple tips, but if you want more detailed articles, don't forget to follow me!

COUNTIF is one of the statistical functions of Excel, used to count the number of cells that meet a criterion.

The simplest way of using it is as follows:

=COUNTIF(Where do you want to look?, What do you want to look for?)

In the "What do you want to look for?" part you can put a conditional criterion (it can use logical operators (<, >, <>, =) and wildcards (*, ?) for partial matches).

In this article I will use a similar example as in another article I wrote presenting SUMIF in 5 languages:

towardsdatascience.com

It will be a tab with a list of items. In cell C2 we can put a COUNTIF formula with the arguments Range=A2:A8 and Criteria=cell(A2). So basically we count the number of "Item1" in our list.

=COUNTIF(A2:A8,A2)

Visual Basic for Applications (VBA) is an implementation of Microsoft Visual Basic integrated into Microsoft Office applications.

In your code, you can store the result of the COUNTIF using the statement "WorksheetFunction.CountIf". It works exactly like the worksheet function itself. Then you can store the result directly in cell C2.
Range("C2") = WorksheetFunction.CountIf(Range("A2:A8"), Range("A2"))

Let's do it properly by declaring variables now:

Sub CountIfDemo()
    Dim wb As Workbook
    Dim wsD As Worksheet
    Dim Arg1 As Range
    Dim Arg2 As Range
    Set wb = ThisWorkbook
    Set wsD = wb.Worksheets("Sheet1")
    Set Arg1 = wsD.Range("A2:A8") 'where to find
    Set Arg2 = wsD.Range("A2")    'what to find
    Range("C2") = Application.WorksheetFunction.CountIf(Arg1, Arg2)
End Sub

SQL (Structured Query Language), or sequel, is a standard language for storing, manipulating and retrieving data in databases. It is one of the common upgrades made by companies that face limits with Excel. Usually, the first reaction is to negotiate some budget in order to store the data in a database and use SQL to "speak" with this database to organise and manipulate the data. The language is also highly readable: close to natural language, you hardly feel you are coding when typing a simple SQL request.

Let's start with creating a table with items:

CREATE TABLE table1 (
    Item varchar(255)
);
INSERT INTO table1 VALUES ('Item1');
INSERT INTO table1 VALUES ('Item2');
INSERT INTO table1 VALUES ('Item1');
INSERT INTO table1 VALUES ('Item3');
INSERT INTO table1 VALUES ('Item1');
INSERT INTO table1 VALUES ('Item4');
INSERT INTO table1 VALUES ('Item4');
SELECT * FROM table1;

Here is the result:

Then keep in mind that in SQL, when you want to get information, you will often use the expression SELECT. Basically, in SQL you play with data, tables and relationships, so you have to understand well "how can I get this information" in order to SELECT it:

SELECT "How I will get what I want" FROM "Where I have to apply the How".

So in our case we will select the count of the cases where the items of the column "Item" equal "Item1". Let's write the formula and then analyze it:

SELECT COUNT(CASE WHEN Item = 'Item1' THEN 1 END) FROM table1;

So we SELECT the COUNT of the CASE WHEN (CASE in SQL is a kind of "IF" function) Item = 'Item1' THEN (if it is the case) we count 1.
And this is applied to table1.

Result:

Python is an interpreted, high-level, general-purpose language. It is used in a large range of applications, including data analysis. We can present Python by saying "for every application, there is a library". And for data, without surprise, we will use the famous Pandas.

There are several ways to do a COUNTIF in Python. We could play with a dataframe, columns, group by, etc. But here I will simply create a list with all the items, then loop over this list and add 1 every time the loop meets an "Item1", all of this stored in a variable res:

items = ['Item1', 'Item2', 'Item1', 'Item3', 'Item1', 'Item4', 'Item4']
res = sum(1 for i in items if i == 'Item1')

Here is the result:

M is the powerful language behind the tool Power Query. Even if you are working in the query editor, every single step is written in M. M stands for Data Mashup or Data Modeling. I highly recommend having a look at this language; we can do so much more than just using the graphical interface.

Regarding the COUNTIF, the graphical interface is so smooth that we can use it directly. You just have to import your table into the graphical interface and then use the filter on the column Item:

Then you select the column "Items", and in the Transform tab do the COUNT:

And you get the count. Another way is to use the Group By button on the left, but at the scale of our example it would add more steps that we can skip by directly filtering and counting. I will present it with another feature where it is more useful (the pivot table).

DAX stands for Data Analysis Expressions. More than a language, it is a library of functions and operators that can be used to build formulas in Power BI and Power Pivot.

The COUNTIF is a good opportunity to introduce the function CALCULATE. It is one of the most famous functions in Power BI; it evaluates an expression after applying filtering rules. It is closer to COUNTIFS, as you can apply several filter arguments.
CALCULATE(<expression>[, <filter1> [, <filter2> [, ...]]])

Expression: the expression to be evaluated.
Filter1, 2, ...: can be a Boolean expression, a table filter or even another function.

In our case, the expression to be evaluated is the count of Table1 items, applying one filter on the column "Items" = "Item1":

countif = CALCULATE(COUNTA(Table1[Items]), Table1[Items] = "Item1")

This article is part of a set of articles where I share my way of doing an Excel feature in other languages. For sure, when you decide to move from Excel to programming, your approach to data analysis will totally change, and you will learn that there are tons of ways to get to your solution. If you have another way to do this, or even know how to do it in other languages, feel free to share it in the comments or by contacting me on my social networks! I will be pleased to read your thoughts!
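To extend the plain-loop approach from the Python section above, here is a small standard-library-only sketch that also covers Excel-style wildcard criteria. The item list is the same as in the examples; `fnmatch` is used here as a stand-in for COUNTIF's `*`/`?` wildcards, which the loop version did not handle.

```python
from collections import Counter
from fnmatch import fnmatch

items = ['Item1', 'Item2', 'Item1', 'Item3', 'Item1', 'Item4', 'Item4']

# Exact criterion, equivalent to =COUNTIF(A2:A8, "Item1")
exact = items.count('Item1')

# Counter tallies every distinct value in a single pass
per_item = Counter(items)

# Wildcard criterion, equivalent to =COUNTIF(A2:A8, "Item*")
wildcard = sum(1 for i in items if fnmatch(i, 'Item*'))

print(exact)              # 3
print(per_item['Item4'])  # 2
print(wildcard)           # 7
```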
ProgressDialog in Android using Kotlin?
This example demonstrates how to show a progress dialog in Android using Kotlin.

Step 1 − Create a new project in Android Studio, go to File ⇉ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity">
   <TextView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_centerHorizontal="true"
      android:layout_marginTop="50dp"
      android:text="Tutorials Point"
      android:textAlignment="center"
      android:textColor="@android:color/holo_green_dark"
      android:textSize="32sp"
      android:textStyle="bold" />
   <TextView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_centerInParent="true"
      android:layout_centerHorizontal="true"
      android:layout_marginTop="50dp"
      android:text="Progress bar using Kotlin"
      android:textAlignment="center"
      android:textColor="@android:color/holo_green_dark"
      android:textSize="32sp"
      android:textStyle="bold" />
</RelativeLayout>

Step 3 − Add the following code to src/MainActivity.kt

import android.app.ProgressDialog
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
@Suppress("DEPRECATION")
class MainActivity : AppCompatActivity() {
   override fun onCreate(savedInstanceState: Bundle?) {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_main)
      title = "KotlinApp"
      val progressDialog = ProgressDialog(this@MainActivity)
      progressDialog.setTitle("Kotlin Progress Bar")
      progressDialog.setMessage("Application is loading, please wait")
      progressDialog.show()
   }
}

Step 4 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="app.com.kotlipapp">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −

Click here to download the project code.
Practice questions on Arrays in C++
An array is a data structure that stores data in contiguous memory locations.

Declaring arrays

Declaring arrays is done with the following syntax:

int arr[n];    // for a 1-D array of n elements
int arr[m][n]; // for a 2-D array of m rows and n columns

If you initialize an array with fewer elements than its size, the rest are initialized to 0.

Memory address of elements of the array (size is the number of bytes per element; n is the number of columns):

1-D array : address[i] = baseAddress + i*size
2-D array (row major) : address[i][j] = baseAddress + (i*n + j)*size

Now, let's see some practice problems.

Predict the output of the following code snippet

int arr[5] = {6, 9};
for(int i = 0; i<5; i++)
    cout<<arr[i]<<" ";

Output: 6 9 0 0 0

The array is initialized with two values and the rest of the values are initialized to 0, which is reflected in the output.

int arr[][3] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
cout<<arr[1][2];

Output: 6

Find the address of the given element of the integer array, if the base address is 1420 (taking sizeof(int) = 2 bytes here).

1D array : arr[43]
address = 1420 + 43*2 = 1506

2D array of size arr[10][10] : arr[5][4], stored as row major
address = 1420 + (5*10 + 4)*2 = 1420 + (54)*2 = 1528
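The two address calculations above can be checked mechanically. The short script below (Python used purely as a calculator) evaluates the same formulas, under the 2-byte int size the exercise assumes:

```python
# Address formulas from the notes above, assuming sizeof(int) == 2 bytes
base = 1420
size = 2

# 1-D array: address[i] = base + i * size
i = 43
addr_1d = base + i * size
print(addr_1d)  # 1506

# 2-D row-major array arr[10][10]: address[i][j] = base + (i*n + j) * size
n = 10          # number of columns
i, j = 5, 4
addr_2d = base + (i * n + j) * size
print(addr_2d)  # 1528
```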
Scala - Lists
Scala Lists are quite similar to arrays, which means all the elements of a list have the same type, but there are two important differences. First, lists are immutable, which means elements of a list cannot be changed by assignment. Second, lists represent a linked list whereas arrays are flat.

The type of a list that has elements of type T is written as List[T].

Try the following example; here are a few lists defined for various data types.

// List of Strings
val fruit: List[String] = List("apples", "oranges", "pears")

// List of Integers
val nums: List[Int] = List(1, 2, 3, 4)

// Empty List.
val empty: List[Nothing] = List()

// Two dimensional list
val dim: List[List[Int]] =
   List(
      List(1, 0, 0),
      List(0, 1, 0),
      List(0, 0, 1)
   )

All lists can be defined using two fundamental building blocks: Nil, which represents the empty list, and ::, which is pronounced cons. All the above lists can be defined as follows.

// List of Strings
val fruit = "apples" :: ("oranges" :: ("pears" :: Nil))

// List of Integers
val nums = 1 :: (2 :: (3 :: (4 :: Nil)))

// Empty List.
val empty = Nil

// Two dimensional list
val dim = (1 :: (0 :: (0 :: Nil))) ::
          (0 :: (1 :: (0 :: Nil))) ::
          (0 :: (0 :: (1 :: Nil))) :: Nil

All operations on lists can be expressed in terms of the following three methods.

head − This method returns the first element of a list.
tail − This method returns a list consisting of all elements except the first.
isEmpty − This method returns true if the list is empty, otherwise false.

The following example shows how to use the above methods.

object Demo {
   def main(args: Array[String]) {
      val fruit = "apples" :: ("oranges" :: ("pears" :: Nil))
      val nums = Nil

      println( "Head of fruit : " + fruit.head )
      println( "Tail of fruit : " + fruit.tail )
      println( "Check if fruit is empty : " + fruit.isEmpty )
      println( "Check if nums is empty : " + nums.isEmpty )
   }
}

Save the above program in Demo.scala. The following commands are used to compile and execute this program.
\>scalac Demo.scala
\>scala Demo

Head of fruit : apples
Tail of fruit : List(oranges, pears)
Check if fruit is empty : false
Check if nums is empty : true

You can use the ::: operator, the List.:::() method or the List.concat() method to add two or more lists, as in the following example −

object Demo {
   def main(args: Array[String]) {
      val fruit1 = "apples" :: ("oranges" :: ("pears" :: Nil))
      val fruit2 = "mangoes" :: ("banana" :: Nil)

      // use two or more lists with ::: operator
      var fruit = fruit1 ::: fruit2
      println( "fruit1 ::: fruit2 : " + fruit )

      // use two lists with Set.:::() method
      fruit = fruit1.:::(fruit2)
      println( "fruit1.:::(fruit2) : " + fruit )

      // pass two or more lists as arguments
      fruit = List.concat(fruit1, fruit2)
      println( "List.concat(fruit1, fruit2) : " + fruit )
   }
}

Save the above program in Demo.scala. The following commands are used to compile and execute this program.

\>scalac Demo.scala
\>scala Demo

fruit1 ::: fruit2 : List(apples, oranges, pears, mangoes, banana)
fruit1.:::(fruit2) : List(mangoes, banana, apples, oranges, pears)
List.concat(fruit1, fruit2) : List(apples, oranges, pears, mangoes, banana)

You can use the List.fill() method to create a list consisting of zero or more copies of the same element. Try the following example program.

object Demo {
   def main(args: Array[String]) {
      val fruit = List.fill(3)("apples") // Repeats apples three times.
      println( "fruit : " + fruit )

      val num = List.fill(10)(2) // Repeats 2, 10 times.
      println( "num : " + num )
   }
}

Save the above program in Demo.scala. The following commands are used to compile and execute this program.

\>scalac Demo.scala
\>scala Demo

fruit : List(apples, apples, apples)
num : List(2, 2, 2, 2, 2, 2, 2, 2, 2, 2)

You can use a function along with the List.tabulate() method to compute the elements of the list before tabulating it.
Its arguments are just like those of List.fill: the first argument list gives the dimensions of the list to create, and the second describes the elements of the list. The only difference is that instead of the elements being fixed, they are computed from a function.

Try the following example program.

object Demo {
   def main(args: Array[String]) {
      // Creates 6 elements using the given function.
      val squares = List.tabulate(6)(n => n * n)
      println( "squares : " + squares )

      val mul = List.tabulate( 4,5 )( _ * _ )
      println( "mul : " + mul )
   }
}

Save the above program in Demo.scala. The following commands are used to compile and execute this program.
def addString(b: StringBuilder, sep: String): StringBuilder Appends all elements of the list to a string builder using a separator string. def apply(n: Int): A Selects an element by its index in the list. def contains(elem: Any): Boolean Tests whether the list contains a given value as an element. def copyToArray(xs: Array[A], start: Int, len: Int): Unit Copies elements of the list to an array. Fills the given array xs with at most length (len) elements of this list, beginning at position start. def distinct: List[A] Builds a new list from the list without any duplicate elements. def drop(n: Int): List[A] Returns all elements except first n ones. def dropRight(n: Int): List[A] Returns all elements except last n ones. def dropWhile(p: (A) => Boolean): List[A] Drops longest prefix of elements that satisfy a predicate. def endsWith[B](that: Seq[B]): Boolean Tests whether the list ends with the given sequence. def equals(that: Any): Boolean The equals method for arbitrary sequences. Compares this sequence to some other object. def exists(p: (A) => Boolean): Boolean Tests whether a predicate holds for some of the elements of the list. def filter(p: (A) => Boolean): List[A] Returns all elements of the list which satisfy a predicate. def forall(p: (A) => Boolean): Boolean Tests whether a predicate holds for all elements of the list. def foreach(f: (A) => Unit): Unit Applies a function f to all elements of the list. def head: A Selects the first element of the list. def indexOf(elem: A, from: Int): Int Finds index of first occurrence value in the list, after the index position. def init: List[A] Returns all elements except the last. def intersect(that: Seq[A]): List[A] Computes the multiset intersection between the list and another sequence. def isEmpty: Boolean Tests whether the list is empty. def iterator: Iterator[A] Creates a new iterator over all elements contained in the iterable object. def last: A Returns the last element. 
def lastIndexOf(elem: A, end: Int): Int Finds index of last occurrence of some value in the list; before or at a given end index. def length: Int Returns the length of the list. def map[B](f: (A) => B): List[B] Builds a new collection by applying a function to all elements of this list. def max: A Finds the largest element. def min: A Finds the smallest element. def mkString: String Displays all elements of the list in a string. def mkString(sep: String): String Displays all elements of the list in a string using a separator string. def reverse: List[A] Returns new list with elements in reversed order. def sorted[B >: A]: List[A] Sorts the list according to an Ordering. def startsWith[B](that: Seq[B], offset: Int): Boolean Tests whether the list contains the given sequence at a given index. def sum: A Sums up the elements of this collection. def tail: List[A] Returns all elements except the first. def take(n: Int): List[A] Returns first "n" elements. def takeRight(n: Int): List[A] Returns last "n" elements. def toArray: Array[A] Converts the list to an array. def toBuffer[B >: A]: Buffer[B] Converts the list to a mutable buffer. def toMap[T, U]: Map[T, U] Converts this list to a map. def toSeq: Seq[A] Converts the list to a sequence. def toSet[B >: A]: Set[B] Converts the list to a set. def toString(): String Converts the list to a string.
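Several of the methods listed above can be combined in one short program. The following is an illustrative sketch in the same style as the earlier Demo examples (the object name and sample values are our own, not from the official Scala documentation):

```scala
object Demo {
  def main(args: Array[String]) {
    val nums = List(1, 2, 3, 4, 5)

    // filter keeps only the elements satisfying the predicate
    println( "evens : " + nums.filter(_ % 2 == 0) )

    // map transforms every element into a new list
    println( "doubled : " + nums.map(_ * 2) )

    // take selects a prefix of the list by position
    println( "first three : " + nums.take(3) )

    // sum and mkString aggregate the list
    println( "sum : " + nums.sum )
    println( "mkString : " + nums.mkString(", ") )
  }
}
```

Save the above program in Demo.scala and compile and run it with scalac/scala as before; it prints evens : List(2, 4), doubled : List(2, 4, 6, 8, 10), first three : List(1, 2, 3), sum : 15 and mkString : 1, 2, 3, 4, 5.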
[ { "code": null, "e": 2293, "s": 1998, "text": "Scala Lists are quite similar to arrays which means, all the elements of a list have the same type but there are two important differences. First, lists are immutable, which means elements of a list cannot be changed by assignment. Second, lists represent a linked list whereas arrays are flat." }, { "code": null, "e": 2363, "s": 2293, "text": "The type of a list that has elements of type T is written as List[T]." }, { "code": null, "e": 2441, "s": 2363, "text": "Try the following example, here are few lists defined for various data types." }, { "code": null, "e": 2759, "s": 2441, "text": "// List of Strings\nval fruit: List[String] = List(\"apples\", \"oranges\", \"pears\")\n\n// List of Integers\nval nums: List[Int] = List(1, 2, 3, 4)\n\n// Empty List.\nval empty: List[Nothing] = List()\n\n// Two dimensional list\nval dim: List[List[Int]] =\n List(\n List(1, 0, 0),\n List(0, 1, 0),\n List(0, 0, 1)\n )" }, { "code": null, "e": 2951, "s": 2759, "text": "All lists can be defined using two fundamental building blocks, a tail Nil and ::, which is pronounced cons. Nil also represents the empty list. All the above lists can be defined as follows." }, { "code": null, "e": 3263, "s": 2951, "text": "// List of Strings\nval fruit = \"apples\" :: (\"oranges\" :: (\"pears\" :: Nil))\n\n// List of Integers\nval nums = 1 :: (2 :: (3 :: (4 :: Nil)))\n\n// Empty List.\nval empty = Nil\n\n// Two dimensional list\nval dim = (1 :: (0 :: (0 :: Nil))) ::\n (0 :: (1 :: (0 :: Nil))) ::\n (0 :: (0 :: (1 :: Nil))) :: Nil" }, { "code": null, "e": 3345, "s": 3263, "text": "All operations on lists can be expressed in terms of the following three methods." }, { "code": null, "e": 3350, "s": 3345, "text": "head" }, { "code": null, "e": 3399, "s": 3350, "text": "This method returns the first element of a list." 
}, { "code": null, "e": 3404, "s": 3399, "text": "tail" }, { "code": null, "e": 3476, "s": 3404, "text": "This method returns a list consisting of all elements except the first." }, { "code": null, "e": 3484, "s": 3476, "text": "isEmpty" }, { "code": null, "e": 3547, "s": 3484, "text": "This method returns true if the list is empty otherwise false." }, { "code": null, "e": 3605, "s": 3547, "text": "The following example shows how to use the above methods." }, { "code": null, "e": 3965, "s": 3605, "text": "object Demo {\n def main(args: Array[String]) {\n val fruit = \"apples\" :: (\"oranges\" :: (\"pears\" :: Nil))\n val nums = Nil\n\n println( \"Head of fruit : \" + fruit.head )\n println( \"Tail of fruit : \" + fruit.tail )\n println( \"Check if fruit is empty : \" + fruit.isEmpty )\n println( \"Check if nums is empty : \" + nums.isEmpty )\n }\n}" }, { "code": null, "e": 4072, "s": 3965, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." }, { "code": null, "e": 4106, "s": 4072, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 4229, "s": 4106, "text": "Head of fruit : apples\nTail of fruit : List(oranges, pears)\nCheck if fruit is empty : false\nCheck if nums is empty : true\n" }, { "code": null, "e": 4381, "s": 4229, "text": "You can use either ::: operator or List.:::() method or List.concat() method to add two or more lists. 
Please find the following example given below −" }, { "code": null, "e": 4965, "s": 4381, "text": "object Demo {\n def main(args: Array[String]) {\n val fruit1 = \"apples\" :: (\"oranges\" :: (\"pears\" :: Nil))\n val fruit2 = \"mangoes\" :: (\"banana\" :: Nil)\n\n // use two or more lists with ::: operator\n var fruit = fruit1 ::: fruit2\n println( \"fruit1 ::: fruit2 : \" + fruit )\n \n // use two lists with Set.:::() method\n fruit = fruit1.:::(fruit2)\n println( \"fruit1.:::(fruit2) : \" + fruit )\n\n // pass two or more lists as arguments\n fruit = List.concat(fruit1, fruit2)\n println( \"List.concat(fruit1, fruit2) : \" + fruit )\n }\n}" }, { "code": null, "e": 5072, "s": 4965, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." }, { "code": null, "e": 5106, "s": 5072, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 5316, "s": 5106, "text": "fruit1 ::: fruit2 : List(apples, oranges, pears, mangoes, banana)\nfruit1.:::(fruit2) : List(mangoes, banana, apples, oranges, pears)\nList.concat(fruit1, fruit2) : List(apples, oranges, pears, mangoes, banana)\n" }, { "code": null, "e": 5452, "s": 5316, "text": "You can use List.fill() method creates a list consisting of zero or more copies of the same element. Try the following example program." }, { "code": null, "e": 5716, "s": 5452, "text": "object Demo {\n def main(args: Array[String]) {\n val fruit = List.fill(3)(\"apples\") // Repeats apples three times.\n println( \"fruit : \" + fruit )\n\n val num = List.fill(10)(2) // Repeats 2, 10 times.\n println( \"num : \" + num )\n }\n}" }, { "code": null, "e": 5823, "s": 5716, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." 
}, { "code": null, "e": 5857, "s": 5823, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 5936, "s": 5857, "text": "fruit : List(apples, apples, apples)\nnum : List(2, 2, 2, 2, 2, 2, 2, 2, 2, 2)\n" }, { "code": null, "e": 6329, "s": 5936, "text": "You can use a function along with List.tabulate() method to apply on all the elements of the list before tabulating the list. Its arguments are just like those of List.fill: the first argument list gives the dimensions of the list to create, and the second describes the elements of the list. The only difference is that instead of the elements being fixed, they are computed from a function." }, { "code": null, "e": 6364, "s": 6329, "text": "Try the following example program." }, { "code": null, "e": 6650, "s": 6364, "text": "object Demo {\n def main(args: Array[String]) {\n // Creates 5 elements using the given function.\n val squares = List.tabulate(6)(n => n * n)\n println( \"squares : \" + squares )\n\n val mul = List.tabulate( 4,5 )( _ * _ ) \n println( \"mul : \" + mul )\n }\n}" }, { "code": null, "e": 6757, "s": 6650, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." }, { "code": null, "e": 6791, "s": 6757, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 6927, "s": 6791, "text": "squares : List(0, 1, 4, 9, 16, 25)\nmul : List(List(0, 0, 0, 0, 0), List(0, 1, 2, 3, 4), \n List(0, 2, 4, 6, 8), List(0, 3, 6, 9, 12))\n" }, { "code": null, "e": 7035, "s": 6927, "text": "You can use List.reverse method to reverse all elements of the list. The Following example shows the usage." 
}, { "code": null, "e": 7269, "s": 7035, "text": "object Demo {\n def main(args: Array[String]) {\n val fruit = \"apples\" :: (\"oranges\" :: (\"pears\" :: Nil))\n \n println( \"Before reverse fruit : \" + fruit )\n println( \"After reverse fruit : \" + fruit.reverse )\n }\n}" }, { "code": null, "e": 7376, "s": 7269, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." }, { "code": null, "e": 7410, "s": 7376, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 7514, "s": 7410, "text": "Before reverse fruit : List(apples, oranges, pears)\nAfter reverse fruit : List(pears, oranges, apples)\n" }, { "code": null, "e": 7687, "s": 7514, "text": "Following are the important methods, which you can use while playing with Lists. For a complete list of methods available, please check the official documentation of Scala." }, { "code": null, "e": 7711, "s": 7687, "text": "def +(elem: A): List[A]" }, { "code": null, "e": 7744, "s": 7711, "text": "Prepends an element to this list" }, { "code": null, "e": 7766, "s": 7744, "text": "def ::(x: A): List[A]" }, { "code": null, "e": 7813, "s": 7766, "text": "Adds an element at the beginning of this list." }, { "code": null, "e": 7847, "s": 7813, "text": "def :::(prefix: List[A]): List[A]" }, { "code": null, "e": 7904, "s": 7847, "text": "Adds the elements of a given list in front of this list." }, { "code": null, "e": 7926, "s": 7904, "text": "def ::(x: A): List[A]" }, { "code": null, "e": 7973, "s": 7926, "text": "Adds an element x at the beginning of the list" }, { "code": null, "e": 8020, "s": 7973, "text": "def addString(b: StringBuilder): StringBuilder" }, { "code": null, "e": 8074, "s": 8020, "text": "Appends all elements of the list to a string builder." 
}, { "code": null, "e": 8134, "s": 8074, "text": "def addString(b: StringBuilder, sep: String): StringBuilder" }, { "code": null, "e": 8213, "s": 8134, "text": "Appends all elements of the list to a string builder using a separator string." }, { "code": null, "e": 8234, "s": 8213, "text": "def apply(n: Int): A" }, { "code": null, "e": 8279, "s": 8234, "text": "Selects an element by its index in the list." }, { "code": null, "e": 8312, "s": 8279, "text": "def contains(elem: Any): Boolean" }, { "code": null, "e": 8373, "s": 8312, "text": "Tests whether the list contains a given value as an element." }, { "code": null, "e": 8431, "s": 8373, "text": "def copyToArray(xs: Array[A], start: Int, len: Int): Unit" }, { "code": null, "e": 8575, "s": 8431, "text": "Copies elements of the list to an array. Fills the given array xs with at most length (len) elements of this list, beginning at position start." }, { "code": null, "e": 8597, "s": 8575, "text": "def distinct: List[A]" }, { "code": null, "e": 8661, "s": 8597, "text": "Builds a new list from the list without any duplicate elements." }, { "code": null, "e": 8687, "s": 8661, "text": "def drop(n: Int): List[A]" }, { "code": null, "e": 8729, "s": 8687, "text": "Returns all elements except first n ones." }, { "code": null, "e": 8760, "s": 8729, "text": "def dropRight(n: Int): List[A]" }, { "code": null, "e": 8801, "s": 8760, "text": "Returns all elements except last n ones." }, { "code": null, "e": 8843, "s": 8801, "text": "def dropWhile(p: (A) => Boolean): List[A]" }, { "code": null, "e": 8902, "s": 8843, "text": "Drops longest prefix of elements that satisfy a predicate." }, { "code": null, "e": 8941, "s": 8902, "text": "def endsWith[B](that: Seq[B]): Boolean" }, { "code": null, "e": 8994, "s": 8941, "text": "Tests whether the list ends with the given sequence." 
}, { "code": null, "e": 9025, "s": 8994, "text": "def equals(that: Any): Boolean" }, { "code": null, "e": 9113, "s": 9025, "text": "The equals method for arbitrary sequences. Compares this sequence to some other object." }, { "code": null, "e": 9152, "s": 9113, "text": "def exists(p: (A) => Boolean): Boolean" }, { "code": null, "e": 9222, "s": 9152, "text": "Tests whether a predicate holds for some of the elements of the list." }, { "code": null, "e": 9261, "s": 9222, "text": "def filter(p: (A) => Boolean): List[A]" }, { "code": null, "e": 9321, "s": 9261, "text": "Returns all elements of the list which satisfy a predicate." }, { "code": null, "e": 9360, "s": 9321, "text": "def forall(p: (A) => Boolean): Boolean" }, { "code": null, "e": 9422, "s": 9360, "text": "Tests whether a predicate holds for all elements of the list." }, { "code": null, "e": 9456, "s": 9422, "text": "def foreach(f: (A) => Unit): Unit" }, { "code": null, "e": 9506, "s": 9456, "text": "Applies a function f to all elements of the list." }, { "code": null, "e": 9518, "s": 9506, "text": "def head: A" }, { "code": null, "e": 9557, "s": 9518, "text": "Selects the first element of the list." }, { "code": null, "e": 9594, "s": 9557, "text": "def indexOf(elem: A, from: Int): Int" }, { "code": null, "e": 9671, "s": 9594, "text": "Finds index of first occurrence value in the list, after the index position." }, { "code": null, "e": 9689, "s": 9671, "text": "def init: List[A]" }, { "code": null, "e": 9727, "s": 9689, "text": "Returns all elements except the last." }, { "code": null, "e": 9764, "s": 9727, "text": "def intersect(that: Seq[A]): List[A]" }, { "code": null, "e": 9838, "s": 9764, "text": "Computes the multiset intersection between the list and another sequence." }, { "code": null, "e": 9859, "s": 9838, "text": "def isEmpty: Boolean" }, { "code": null, "e": 9892, "s": 9859, "text": "Tests whether the list is empty." 
}, { "code": null, "e": 9918, "s": 9892, "text": "def iterator: Iterator[A]" }, { "code": null, "e": 9993, "s": 9918, "text": "Creates a new iterator over all elements contained in the iterable object." }, { "code": null, "e": 10005, "s": 9993, "text": "def last: A" }, { "code": null, "e": 10031, "s": 10005, "text": "Returns the last element." }, { "code": null, "e": 10071, "s": 10031, "text": "def lastIndexOf(elem: A, end: Int): Int" }, { "code": null, "e": 10161, "s": 10071, "text": "Finds index of last occurrence of some value in the list; before or at a given end index." }, { "code": null, "e": 10177, "s": 10161, "text": "def length: Int" }, { "code": null, "e": 10209, "s": 10177, "text": "Returns the length of the list." }, { "code": null, "e": 10242, "s": 10209, "text": "def map[B](f: (A) => B): List[B]" }, { "code": null, "e": 10319, "s": 10242, "text": "Builds a new collection by applying a function to all elements of this list." }, { "code": null, "e": 10330, "s": 10319, "text": "def max: A" }, { "code": null, "e": 10357, "s": 10330, "text": "Finds the largest element." }, { "code": null, "e": 10368, "s": 10357, "text": "def min: A" }, { "code": null, "e": 10396, "s": 10368, "text": "Finds the smallest element." }, { "code": null, "e": 10417, "s": 10396, "text": "def mkString: String" }, { "code": null, "e": 10464, "s": 10417, "text": "Displays all elements of the list in a string." }, { "code": null, "e": 10498, "s": 10464, "text": "def mkString(sep: String): String" }, { "code": null, "e": 10570, "s": 10498, "text": "Displays all elements of the list in a string using a separator string." }, { "code": null, "e": 10591, "s": 10570, "text": "def reverse: List[A]" }, { "code": null, "e": 10641, "s": 10591, "text": "Returns new list with elements in reversed order." }, { "code": null, "e": 10669, "s": 10641, "text": "def sorted[B >: A]: List[A]" }, { "code": null, "e": 10710, "s": 10669, "text": "Sorts the list according to an Ordering." 
}, { "code": null, "e": 10764, "s": 10710, "text": "def startsWith[B](that: Seq[B], offset: Int): Boolean" }, { "code": null, "e": 10833, "s": 10764, "text": "Tests whether the list contains the given sequence at a given index." }, { "code": null, "e": 10844, "s": 10833, "text": "def sum: A" }, { "code": null, "e": 10885, "s": 10844, "text": "Sums up the elements of this collection." }, { "code": null, "e": 10903, "s": 10885, "text": "def tail: List[A]" }, { "code": null, "e": 10942, "s": 10903, "text": "Returns all elements except the first." }, { "code": null, "e": 10968, "s": 10942, "text": "def take(n: Int): List[A]" }, { "code": null, "e": 10996, "s": 10968, "text": "Returns first \"n\" elements." }, { "code": null, "e": 11027, "s": 10996, "text": "def takeRight(n: Int): List[A]" }, { "code": null, "e": 11054, "s": 11027, "text": "Returns last \"n\" elements." }, { "code": null, "e": 11076, "s": 11054, "text": "def toArray: Array[A]" }, { "code": null, "e": 11107, "s": 11076, "text": "Converts the list to an array." }, { "code": null, "e": 11139, "s": 11107, "text": "def toBuffer[B >: A]: Buffer[B]" }, { "code": null, "e": 11178, "s": 11139, "text": "Converts the list to a mutable buffer." }, { "code": null, "e": 11205, "s": 11178, "text": "def toMap[T, U]: Map[T, U]" }, { "code": null, "e": 11234, "s": 11205, "text": "Converts this list to a map." }, { "code": null, "e": 11252, "s": 11234, "text": "def toSeq: Seq[A]" }, { "code": null, "e": 11285, "s": 11252, "text": "Converts the list to a sequence." }, { "code": null, "e": 11311, "s": 11285, "text": "def toSet[B >: A]: Set[B]" }, { "code": null, "e": 11339, "s": 11311, "text": "Converts the list to a set." }, { "code": null, "e": 11362, "s": 11339, "text": "def toString(): String" }, { "code": null, "e": 11393, "s": 11362, "text": "Converts the list to a string." 
} ]
Maximum Product Subarray | Practice | GeeksforGeeks
Given an array Arr[] that contains N integers (may be positive, negative or zero). Find the maximum product of a contiguous subarray. Example 1: Input: N = 5 Arr[] = {6, -3, -10, 0, 2} Output: 180 Explanation: Subarray with maximum product is [6, -3, -10] which gives product as 180. Example 2: Input: N = 6 Arr[] = {2, 3, 4, 5, -1, 0} Output: 120 Explanation: Subarray with maximum product is [2, 3, 4, 5] which gives product as 120. Your Task: You don't need to read input or print anything. Your task is to complete the function maxProduct() which takes the array of integers arr and n as parameters and returns an integer denoting the answer. Note: Use 64-bit integer data type to avoid overflow. Expected Time Complexity: O(N) Expected Auxiliary Space: O(1) Constraints: 1 ≤ N ≤ 500 -10^2 ≤ Arr[i] ≤ 10^2 0 niravramotiya0033 days ago import sys class Solution: def maxProduct(self,arr, n): front=-sys.maxsize - 1 back=-sys.maxsize - 1 fp=0 sf=1 sb=1 bp=n-1 while(fp<n): sf=sf*arr[fp] sb=sb*arr[bp] if arr[fp]==0: sf=1 if arr[bp]==0: sb=1 if front<sf: front=sf if back<sb: back=sb fp+=1 bp-=1 if front>back: return front else: return back return 0 0 jeffprivate03136 days ago Kinda hardcoded the solution #include <bits/stdc++.h> using namespace std; // } Driver Code Ends//User function template for C++class Solution{public: // Function to find maximum product subarraylong long maxProduct(vector<int> arr, int n) { long long tm=1; long long gm=numeric_limits<long long>::min(); int flag=-1;//positive int l=0; int r=-1; for(int i=0;i<n;i++){//for recording positive or negative if(arr[i]==0){ if(flag==-1){ gm=max(gm,(long long)0); }else{ if(l==r){ gm=max(gm,(long long)arr[l]); l=i+1; tm=1; flag=-1; continue; } long long ttm=tm; int tl=l; int tr=r; while(ttm<0 and tl<tr){ ttm/=arr[tl]; tl++; gm=max(gm,ttm); } gm=max(gm,ttm); ttm=tm; tl=l; while(ttm<0 and tr>tl){ ttm/=arr[tr]; tr--; gm=max(gm,ttm); } gm=max(gm,ttm); } tm=1; flag=-1; l=i+1; }else{ tm*=arr[i]; r=max(i,r); if(flag==-1){ 
flag=1;//changed, not consecutive zeroes } } //cout<<tm<<" "<<l<<" "<<r<<endl; } if(flag!=-1){ if(l==r){ gm=max(gm,(long long)arr[r]); }else{ //cout<<tm<<" "<<l<<" "<<r<<endl; long long ttm=tm; int tl=l; int tr=r; while(ttm<0 and tl<tr){ ttm/=arr[tl]; tl++; gm=max(ttm,gm); } gm=max(gm,ttm); ttm=tm; tl=l; while(ttm<0 and tr>tl){ ttm/=arr[tr]; tr--; } gm=max(gm,ttm); } } return gm; // code here}}; // { Driver Code Starts. int main() { int t; cin >> t; while (t--) { int n, i; cin >> n; vector<int> arr(n); for (i = 0; i < n; i++) { cin >> arr[i]; } Solution ob; auto ans = ob.maxProduct(arr, n); cout << ans << "\n"; } return 0;} // } Driver Code Ends 0 kumarbarnwalashutosh1 week ago In the Example1 the maximum product as I observed is 360 ( product of [6,-3,-10,2]), but it says 180. Where am I wrong can somebody tell me. 0 rohanmeher1642 weeks ago // java solution class Solution { // Function to find maximum product subarray long maxProduct(int[] arr, int n) { long product=1; long maxproduct=arr[0]; for(int i=0;i<n; i++) { product*=arr[i]; maxproduct=Math.max(maxproduct, product); if(product==0) { product=1; } } product=1; for(int i= n-1; i>=0; i--) { product*=arr[i]; maxproduct=Math.max(maxproduct, product); if(product==0) { product=1; } } return maxproduct; }} +3 tarrasridhar11542 weeks ago Time : 0.68/1.86 long long maxProduct(vector<int> arr, int n) { long long int maxproduct=INT_MIN; long long int product=1; for(int i=0;i<n; i++) { product*=arr[i]; maxproduct=max(maxproduct, product); if(product==0) { product=1; } } product=1; for(int i= n-1; i>=0; i--) { product*=arr[i]; maxproduct=max(maxproduct, product); if(product==0) { product=1; } } return maxproduct; } 0 raonirajkumar262 weeks ago Python Solution def maxProduct(self,arr, n): minVal = arr[0] maxVal = arr[0] maxProduct = arr[0] for i in range(1, n): if (arr[i] < 0): maxVal,minVal = minVal,maxVal # print("1",arr[i],minVal,maxVal,maxProduct) maxVal = max(arr[i], maxVal * arr[i]) minVal = min(arr[i], minVal * 
arr[i]) maxProduct = max(maxProduct, maxVal) # print("2",arr[i],minVal,maxVal,maxProduct) return maxProduct +1 joyrockok3 weeks ago class Solution { // Function to find maximum product subarray long maxProduct(int[] arr, int n) { // code here long MaxSumOfSubArr=0; long Max=0, Min=0; MaxSumOfSubArr = Max = Min = arr[0]; for(int i=1; i<n; i++) { long temp = Math.max(Max*arr[i], Min*arr[i]); Min = Math.min(Max*arr[i], Min*arr[i]); Max = temp; Max = Math.max(arr[i], Max); Min = Math.min(arr[i], Min); MaxSumOfSubArr = Math.max(MaxSumOfSubArr, Max); } return MaxSumOfSubArr; }} +3 harshscode3 weeks ago best approach.. long long maxProduct(vector<int> a, int n) { long long prdt=1,maxprdt=INT_MIN; // if -ve element or zero is at right side then we found maximun from left for(int i=0;i<n;i++) { prdt*=a[i]; maxprdt=max(maxprdt,prdt); if(prdt==0) prdt=1; } // if -ve element is from left side then we got maximum product from right prdt=1; for(int i=n-1;i>=0;i--) { prdt*=a[i]; maxprdt=max(maxprdt,prdt); if(prdt==0) prdt=1; } return maxprdt; +2 adityagagtiwari3 weeks ago Pepcoding Intuitive solution.!! class Solution { // Function to find maximum product subarray long maxProduct(int[] arr, int n) { // code here //an important claim is that the MPS will be starting from the beginning or from the end //it CANNOT be inbetween the array . This can be proved using contradiction long currprod = 1; long maxprod = Integer.MIN_VALUE; for(int i=0;i<n;i++) { currprod*=arr[i]; maxprod = Math.max(maxprod,currprod); //here zero act as a clearscreen which would give you one maxprod value //after each zero fresh current product should be calculated as using zero as a //part of subarray will make your product value into a BIGASS ZERO. 
if(currprod==0) currprod = 1; } currprod = 1; for(int i=n-1;i>=0;i--) { currprod*=arr[i]; maxprod = Math.max(maxprod,currprod); if(currprod==0) currprod = 1; } return maxprod; } } -1 roopbhawana3153 weeks ago python solution def maxProduct(self,arr, n): ans = mx = mn = arr[0] for i in range(1,n): if i == 0: mx=1 mn=1 continue else: temp1=arr[i]*mx temp2=arr[i]*mn mx=max(temp1,temp2) mx=max(mx,arr[i]) mn=min(temp1,temp2) mn=min(mn,arr[i]) ans=max(ans,mx) return ans
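Several of the answers above use the same two-pass idea: scan prefix products left to right and suffix products right to left, resetting the running product on zeros, and track the maximum seen. A minimal standalone sketch of that approach (the function name and test values are our own, not part of the judge's Solution template):

```python
def max_product_subarray(arr):
    """Maximum product over all contiguous subarrays of a non-empty list.

    A maximum-product subarray always ends at one end of a zero-free run
    once stray negatives are trimmed, so scanning prefix products from the
    left and suffix products from the right (restarting after each zero)
    covers every candidate subarray.
    """
    best = max(arr)          # handles all-negative and single-element cases
    prod = 1
    for x in arr:            # left-to-right prefix products
        prod *= x
        best = max(best, prod)
        if prod == 0:
            prod = 1         # restart after a zero
    prod = 1
    for x in reversed(arr):  # right-to-left suffix products
        prod *= x
        best = max(best, prod)
        if prod == 0:
            prod = 1
    return best


print(max_product_subarray([6, -3, -10, 0, 2]))   # 180
print(max_product_subarray([2, 3, 4, 5, -1, 0]))  # 120
```

Each element is visited twice, giving O(N) time and O(1) auxiliary space, matching the expected complexity above.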
[ { "code": null, "e": 371, "s": 238, "text": "Given an array Arr[] that contains N integers (may be positive, negative or zero). Find the product of the maximum product subarray." }, { "code": null, "e": 382, "s": 371, "text": "Example 1:" }, { "code": null, "e": 522, "s": 382, "text": "Input:\nN = 5\nArr[] = {6, -3, -10, 0, 2}\nOutput: 180\nExplanation: Subarray with maximum product\nis [6, -3, -10] which gives product as 180.\n" }, { "code": null, "e": 533, "s": 522, "text": "Example 2:" }, { "code": null, "e": 674, "s": 533, "text": "Input:\nN = 6\nArr[] = {2, 3, 4, 5, -1, 0}\nOutput: 120\nExplanation: Subarray with maximum product\nis [2, 3, 4, 5] which gives product as 120.\n" }, { "code": null, "e": 940, "s": 674, "text": "Your Task:\nYou don't need to read input or print anything. Your task is to complete the function maxProduct() which takes the array of integers arr and n as parameters and returns an integer denoting the answer.\nNote: Use 64-bit integer data type to avoid overflow." 
}, { "code": null, "e": 1002, "s": 940, "text": "Expected Time Complexity: O(N)\nExpected Auxiliary Space: O(1)" }, { "code": null, "e": 1045, "s": 1002, "text": "Constraints:\n1 ≤ N ≤ 500\n-102 ≤ Arri ≤ 102" }, { "code": null, "e": 1047, "s": 1045, "text": "0" }, { "code": null, "e": 1074, "s": 1047, "text": "niravramotiya0033 days ago" }, { "code": null, "e": 1498, "s": 1074, "text": "import sys\nclass Solution:\ndef maxProduct(self,arr, n):\n front=-sys.maxsize - 1\n back=-sys.maxsize - 1\n fp=0\n sf=1\n sb=1\n bp=n-1\n while(fp<n):\n sf=sf*arr[fp]\n sb=sb*arr[bp]\n if arr[fp]==0:\n sf=1\n if arr[bp]==0:\n sb=1\n if front<sf:\n front=sf\n if back<sb:\n back=sb\n fp+=1\n bp-=1\n if front>back:\n return front\n else:\n return back\n return 0" }, { "code": null, "e": 1500, "s": 1498, "text": "0" }, { "code": null, "e": 1526, "s": 1500, "text": "jeffprivate03136 days ago" }, { "code": null, "e": 1555, "s": 1526, "text": "Kinda hardcoded the solution" }, { "code": null, "e": 1580, "s": 1555, "text": "#include <bits/stdc++.h>" }, { "code": null, "e": 1601, "s": 1580, "text": "using namespace std;" }, { "code": null, "e": 1677, "s": 1601, "text": "// } Driver Code Ends//User function template for C++class Solution{public:" }, { "code": null, "e": 3532, "s": 1677, "text": "// Function to find maximum product subarraylong long maxProduct(vector<int> arr, int n) { long long tm=1; long long gm=numeric_limits<long long>::min(); int flag=-1;//positive int l=0; int r=-1; for(int i=0;i<n;i++){//for recording positive or negative if(arr[i]==0){ if(flag==-1){ gm=max(gm,(long long)0); }else{ if(l==r){ gm=max(gm,(long long)arr[l]); l=i+1; tm=1; flag=-1; continue; } long long ttm=tm; int tl=l; int tr=r; while(ttm<0 and tl<tr){ ttm/=arr[tl]; tl++; gm=max(gm,ttm); } gm=max(gm,ttm); ttm=tm; tl=l; while(ttm<0 and tr>tl){ ttm/=arr[tr]; tr--; gm=max(gm,ttm); } gm=max(gm,ttm); } tm=1; flag=-1; l=i+1; }else{ tm*=arr[i]; r=max(i,r); if(flag==-1){ flag=1;//changed, not consecutive zeroes } } 
//cout<<tm<<\" \"<<l<<\" \"<<r<<endl; } if(flag!=-1){ if(l==r){ gm=max(gm,(long long)arr[r]); }else{ //cout<<tm<<\" \"<<l<<\" \"<<r<<endl; long long ttm=tm; int tl=l; int tr=r; while(ttm<0 and tl<tr){ ttm/=arr[tl]; tl++; gm=max(ttm,gm); } gm=max(gm,ttm); ttm=tm; tl=l; while(ttm<0 and tr>tl){ ttm/=arr[tr]; tr--; } gm=max(gm,ttm); } } return gm; // code here}};" }, { "code": null, "e": 3557, "s": 3532, "text": "// { Driver Code Starts." }, { "code": null, "e": 3856, "s": 3557, "text": "int main() { int t; cin >> t; while (t--) { int n, i; cin >> n; vector<int> arr(n); for (i = 0; i < n; i++) { cin >> arr[i]; } Solution ob; auto ans = ob.maxProduct(arr, n); cout << ans << \"\\n\"; } return 0;} // } Driver Code Ends" }, { "code": null, "e": 3858, "s": 3856, "text": "0" }, { "code": null, "e": 3889, "s": 3858, "text": "kumarbarnwalashutosh1 week ago" }, { "code": null, "e": 4030, "s": 3889, "text": "In the Example1 the maximum product as I observed is 360 ( product of [6,-3,-10,2]), but it says 180. Where am I wrong can somebody tell me." 
}, { "code": null, "e": 4034, "s": 4032, "text": "0" }, { "code": null, "e": 4059, "s": 4034, "text": "rohanmeher1642 weeks ago" }, { "code": null, "e": 4076, "s": 4059, "text": "// java solution" }, { "code": null, "e": 4616, "s": 4078, "text": "class Solution { // Function to find maximum product subarray long maxProduct(int[] arr, int n) { long product=1; long maxproduct=arr[0]; for(int i=0;i<n; i++) { product*=arr[i]; maxproduct=Math.max(maxproduct, product); if(product==0) { product=1; } } product=1; for(int i= n-1; i>=0; i--) { product*=arr[i]; maxproduct=Math.max(maxproduct, product); if(product==0) { product=1; } } return maxproduct; }}" }, { "code": null, "e": 4619, "s": 4616, "text": "+3" }, { "code": null, "e": 4647, "s": 4619, "text": "tarrasridhar11542 weeks ago" }, { "code": null, "e": 5199, "s": 4647, "text": "Time : 0.68/1.86\nlong long maxProduct(vector<int> arr, int n) {\n\t \n\t long long int maxproduct=INT_MIN;\n\t long long int product=1;\n\t for(int i=0;i<n; i++)\n\t {\n\t product*=arr[i];\n\t maxproduct=max(maxproduct, product);\n\t if(product==0)\n\t {\n\t product=1;\n\t }\n\t }\n\t product=1;\n\t for(int i= n-1; i>=0; i--)\n\t {\n\t product*=arr[i];\n\t maxproduct=max(maxproduct, product);\n\t if(product==0)\n\t {\n\t product=1;\n\t }\n\t }\n\t return maxproduct;\n\t\n\t}" }, { "code": null, "e": 5205, "s": 5203, "text": "0" }, { "code": null, "e": 5232, "s": 5205, "text": "raonirajkumar262 weeks ago" }, { "code": null, "e": 5248, "s": 5232, "text": "Python Solution" }, { "code": null, "e": 5725, "s": 5248, "text": " def maxProduct(self,arr, n): minVal = arr[0] maxVal = arr[0] maxProduct = arr[0] for i in range(1, n): if (arr[i] < 0): maxVal,minVal = minVal,maxVal # print(\"1\",arr[i],minVal,maxVal,maxProduct) maxVal = max(arr[i], maxVal * arr[i]) minVal = min(arr[i], minVal * arr[i]) maxProduct = max(maxProduct, maxVal) # print(\"2\",arr[i],minVal,maxVal,maxProduct) return maxProduct" }, { "code": null, "e": 5728, "s": 5725, "text": "+1" 
}, { "code": null, "e": 5749, "s": 5728, "text": "joyrockok3 weeks ago" }, { "code": null, "e": 6281, "s": 5749, "text": "class Solution { // Function to find maximum product subarray long maxProduct(int[] arr, int n) { // code here long MaxSumOfSubArr=0; long Max=0, Min=0; MaxSumOfSubArr = Max = Min = arr[0]; for(int i=1; i<n; i++) { long temp = Math.max(Max*arr[i], Min*arr[i]); Min = Math.min(Max*arr[i], Min*arr[i]); Max = temp; Max = Math.max(arr[i], Max); Min = Math.min(arr[i], Min); MaxSumOfSubArr = Math.max(MaxSumOfSubArr, Max); } return MaxSumOfSubArr; }}" }, { "code": null, "e": 6284, "s": 6281, "text": "+3" }, { "code": null, "e": 6306, "s": 6284, "text": "harshscode3 weeks ago" }, { "code": null, "e": 6322, "s": 6306, "text": "best approach.." }, { "code": null, "e": 6812, "s": 6322, "text": " long long maxProduct(vector<int> a, int n) { long long prdt=1,maxprdt=INT_MIN; // if -ve element or zero is at right side then we found maximun from left for(int i=0;i<n;i++) { prdt*=a[i]; maxprdt=max(maxprdt,prdt); if(prdt==0) prdt=1; } // if -ve element is from left side then we got maximum product from right prdt=1; for(int i=n-1;i>=0;i--) { prdt*=a[i]; maxprdt=max(maxprdt,prdt); if(prdt==0) prdt=1; } return maxprdt;" }, { "code": null, "e": 6817, "s": 6814, "text": "+2" }, { "code": null, "e": 6844, "s": 6817, "text": "adityagagtiwari3 weeks ago" }, { "code": null, "e": 6876, "s": 6844, "text": "Pepcoding Intuitive solution.!!" }, { "code": null, "e": 7934, "s": 6878, "text": "class Solution {\n // Function to find maximum product subarray\n long maxProduct(int[] arr, int n) {\n // code here\n //an important claim is that the MPS will be starting from the beginning or from the end \n //it CANNOT be inbetween the array . 
This can be proved using contradiction\n        long currprod = 1;\n        long maxprod = Integer.MIN_VALUE;\n        for(int i=0;i<n;i++)\n        {\n            currprod*=arr[i];\n            maxprod = Math.max(maxprod,currprod);\n            //here zero act as a clearscreen which would give you one maxprod value\n            //after each zero fresh current product should be calculated as using zero as a\n            //part of subarray will make your product value into a BIGASS ZERO.\n            if(currprod==0)\n            currprod = 1;\n            \n        }\n        currprod = 1;\n        for(int i=n-1;i>=0;i--)\n        {\n            currprod*=arr[i];\n            maxprod = Math.max(maxprod,currprod);\n            if(currprod==0)\n            currprod = 1;\n            \n        }\n        return maxprod; \n    }\n}" }, { "code": null, "e": 7937, "s": 7934, "text": "-1" }, { "code": null, "e": 7963, "s": 7937, "text": "roopbhawana3153 weeks ago" }, { "code": null, "e": 7979, "s": 7963, "text": "pyrhon solution" }, { "code": null, "e": 8316, "s": 7981, "text": "def maxProduct(self,arr, n): ans = mx = mn = arr[0] for i in range(1,n): if i == 0: mx=1 mn=1 continue else: temp1=arr[i]*mx temp2=arr[i]*mn mx=max(temp1,temp2) mx=max(mx,arr[i]) mn=min(temp1,temp2) mn=min(mn,arr[i]) ans=max(ans,mx) return ans" } ]
Check if two trees have same structure - GeeksforGeeks
08 Apr, 2022
Given two binary trees, the task is to write a program to check if the two trees are identical in structure.
In the above figure, both of the trees, Tree1 and Tree2, are identical in structure; that is, they have the same structure.
Note: This problem is different from Check if two trees are identical, as here we need to compare only the structures of the two trees and not the values at their nodes.
The idea is to traverse both trees simultaneously, following the same paths, and keep checking whether a node exists in both trees or not.
Algorithm:
1. If both trees are empty, then return 1.
2. Else, if both trees are non-empty:
   - Check the left subtrees recursively, i.e., call isSameStructure(tree1->left_subtree, tree2->left_subtree).
   - Check the right subtrees recursively, i.e., call isSameStructure(tree1->right_subtree, tree2->right_subtree).
   - If the values returned in the above two steps are true, then return 1.
3. Else, return 0 (one tree is empty and the other is not).
Below is the implementation of the above algorithm:
C++ C Java Python3 C# Javascript

// C++ program to check if two trees have
// same structure
#include <cstdio>
#include <iostream>
using namespace std;

// A binary tree node has data, pointer to left child
// and a pointer to right child
struct Node
{
    int data;
    struct Node* left;
    struct Node* right;
};

// Helper function that allocates a new node with the
// given data and NULL left and right pointers.
Node* newNode(int data)
{
    Node* node = new Node;
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

// Function to check if two trees have same
// structure
int isSameStructure(Node* a, Node* b)
{
    // 1. both empty
    if (a == NULL && b == NULL)
        return 1;

    // 2. both non-empty -> compare them
    if (a != NULL && b != NULL)
    {
        return (isSameStructure(a->left, b->left)
                && isSameStructure(a->right, b->right));
    }

    // 3. one empty, one not -> false
    return 0;
}

// Driver code
int main()
{
    Node* root1 = newNode(10);
    Node* root2 = newNode(100);

    root1->left = newNode(7);
    root1->right = newNode(15);
    root1->left->left = newNode(4);
    root1->left->right = newNode(9);
    root1->right->right = newNode(20);

    root2->left = newNode(70);
    root2->right = newNode(150);
    root2->left->left = newNode(40);
    root2->left->right = newNode(90);
    root2->right->right = newNode(200);

    if (isSameStructure(root1, root2))
        printf("Both trees have same structure");
    else
        printf("Trees do not have same structure");

    return 0;
}
// This code is contributed by aditya kumar (adityakumar129)

// C program to check if two trees have
// same structure
#include <stdio.h>
#include <stdlib.h>

// A binary tree node has data, pointer to left child
// and a pointer to right child
typedef struct Node {
    int data;
    struct Node* left;
    struct Node* right;
} Node;

// Helper function that allocates a new node with the
// given data and NULL left and right pointers.
Node* newNode(int data)
{
    Node* node = (Node*)malloc(sizeof(Node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

// Function to check if two trees have same
// structure
int isSameStructure(Node* a, Node* b)
{
    // 1. both empty
    if (a == NULL && b == NULL)
        return 1;

    // 2. both non-empty -> compare them
    if (a != NULL && b != NULL)
    {
        return (isSameStructure(a->left, b->left)
                && isSameStructure(a->right, b->right));
    }

    // 3. one empty, one not -> false
    return 0;
}

// Driver code
int main()
{
    Node* root1 = newNode(10);
    Node* root2 = newNode(100);

    root1->left = newNode(7);
    root1->right = newNode(15);
    root1->left->left = newNode(4);
    root1->left->right = newNode(9);
    root1->right->right = newNode(20);

    root2->left = newNode(70);
    root2->right = newNode(150);
    root2->left->left = newNode(40);
    root2->left->right = newNode(90);
    root2->right->right = newNode(200);

    if (isSameStructure(root1, root2))
        printf("Both trees have same structure");
    else
        printf("Trees do not have same structure");

    return 0;
}
// This code is contributed by aditya kumar (adityakumar129)

// Java program to check if two trees have
// same structure
class GFG
{

// A binary tree node has data,
// pointer to left child and
// a pointer to right child
static class Node
{
    int data;
    Node left;
    Node right;
};

// Helper function that allocates a new node
// with the given data and null left
// and right pointers.
static Node newNode(int data)
{
    Node node = new Node();
    node.data = data;
    node.left = null;
    node.right = null;
    return (node);
}

// Function to check if two trees
// have same structure
static boolean isSameStructure(Node a, Node b)
{
    // 1. both empty
    if (a == null && b == null)
        return true;

    // 2. both non-empty -> compare them
    if (a != null && b != null)
    {
        return (isSameStructure(a.left, b.left)
                && isSameStructure(a.right, b.right));
    }

    // 3. one empty, one not -> false
    return false;
}

// Driver code
public static void main(String args[])
{
    Node root1 = newNode(10);
    Node root2 = newNode(100);

    root1.left = newNode(7);
    root1.right = newNode(15);
    root1.left.left = newNode(4);
    root1.left.right = newNode(9);
    root1.right.right = newNode(20);

    root2.left = newNode(70);
    root2.right = newNode(150);
    root2.left.left = newNode(40);
    root2.left.right = newNode(90);
    root2.right.right = newNode(200);

    if (isSameStructure(root1, root2))
        System.out.printf("Both trees have same structure");
    else
        System.out.printf("Trees do not have same structure");
}
}
// This code is contributed by Arnab Kundu

# Python3 program to check if two trees have
# same structure

# A binary tree node has data, pointer to left child
# and a pointer to right child
class Node:
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

# Helper function that allocates a new node with the
# given data and None left and right pointers.
def newNode(data):
    node = Node(data)
    return node

# Function to check if two trees have same
# structure
def isSameStructure(a, b):

    # 1. both empty
    if (a == None and b == None):
        return 1

    # 2. both non-empty -> compare them
    if (a != None and b != None):
        return (isSameStructure(a.left, b.left) and
                isSameStructure(a.right, b.right))

    # 3. one empty, one not -> false
    return 0

# Driver code
if __name__=='__main__':
    root1 = newNode(10)
    root2 = newNode(100)

    root1.left = newNode(7)
    root1.right = newNode(15)
    root1.left.left = newNode(4)
    root1.left.right = newNode(9)
    root1.right.right = newNode(20)

    root2.left = newNode(70)
    root2.right = newNode(150)
    root2.left.left = newNode(40)
    root2.left.right = newNode(90)
    root2.right.right = newNode(200)

    if (isSameStructure(root1, root2)):
        print("Both trees have same structure")
    else:
        print("Trees do not have same structure")

# This code is contributed by rutvik_56

// C# program to check if two trees
// have same structure
using System;

class GFG
{

// A binary tree node has data,
// pointer to left child and
// a pointer to right child
public class Node
{
    public int data;
    public Node left;
    public Node right;
};

// Helper function that allocates a new node
// with the given data and null left
// and right pointers.
static Node newNode(int data)
{
    Node node = new Node();
    node.data = data;
    node.left = null;
    node.right = null;
    return (node);
}

// Function to check if two trees
// have same structure
static Boolean isSameStructure(Node a, Node b)
{
    // 1. both empty
    if (a == null && b == null)
        return true;

    // 2. both non-empty -> compare them
    if (a != null && b != null)
    {
        return (isSameStructure(a.left, b.left)
                && isSameStructure(a.right, b.right));
    }

    // 3. one empty, one not -> false
    return false;
}

// Driver code
public static void Main(String []args)
{
    Node root1 = newNode(10);
    Node root2 = newNode(100);

    root1.left = newNode(7);
    root1.right = newNode(15);
    root1.left.left = newNode(4);
    root1.left.right = newNode(9);
    root1.right.right = newNode(20);

    root2.left = newNode(70);
    root2.right = newNode(150);
    root2.left.left = newNode(40);
    root2.left.right = newNode(90);
    root2.right.right = newNode(200);

    if (isSameStructure(root1, root2))
        Console.Write("Both trees have same structure");
    else
        Console.Write("Trees do not have same structure");
}
}
// This code is contributed by Rajput-Ji

<script>
// JavaScript program to check if two trees
// have same structure

// A binary tree node has data,
// pointer to left child and
// a pointer to right child
class Node
{
    constructor()
    {
        this.data = 0;
        this.left = null;
        this.right = null;
    }
};

// Helper function that allocates a new node
// with the given data and null left
// and right pointers.
function newNode(data)
{
    var node = new Node();
    node.data = data;
    node.left = null;
    node.right = null;
    return node;
}

// Function to check if two trees
// have same structure
function isSameStructure(a, b)
{
    // 1. both empty
    if (a == null && b == null)
        return true;

    // 2. both non-empty -> compare them
    if (a != null && b != null)
    {
        return isSameStructure(a.left, b.left)
               && isSameStructure(a.right, b.right);
    }

    // 3. one empty, one not -> false
    return false;
}

// Driver code
var root1 = newNode(10);
var root2 = newNode(100);

root1.left = newNode(7);
root1.right = newNode(15);
root1.left.left = newNode(4);
root1.left.right = newNode(9);
root1.right.right = newNode(20);

root2.left = newNode(70);
root2.right = newNode(150);
root2.left.left = newNode(40);
root2.left.right = newNode(90);
root2.right.right = newNode(200);

if (isSameStructure(root1, root2))
    document.write("Both trees have same structure");
else
    document.write("Trees do not have same structure");
</script>

Output:
Both trees have same structure
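As a complement to the recursive versions above, the same structural comparison can also be done iteratively; the following is a minimal Python sketch (not part of the original article) that uses an explicit stack of node pairs, which avoids hitting recursion depth limits on deeply skewed trees:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def is_same_structure(a, b):
    # Pair up corresponding nodes with an explicit stack
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        if x is None and y is None:
            continue              # both empty: structures agree here
        if x is None or y is None:
            return False          # one empty, one not
        stack.append((x.left, y.left))
        stack.append((x.right, y.right))
    return True

# Trees from the article: same shape, different values
root1 = Node(10); root1.left = Node(7); root1.right = Node(15)
root1.left.left = Node(4); root1.left.right = Node(9)
root1.right.right = Node(20)

root2 = Node(100); root2.left = Node(70); root2.right = Node(150)
root2.left.left = Node(40); root2.left.right = Node(90)
root2.right.right = Node(200)

print(is_same_structure(root1, root2))  # True
```

The design choice here is simply to trade the implicit call stack for an explicit one; the visiting order of the pairs does not matter for correctness.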
[ { "code": null, "e": 25034, "s": 25006, "text": "\n08 Apr, 2022" }, { "code": null, "e": 25144, "s": 25034, "text": "Given two binary trees. The task is to write a program to check if the two trees are identical in structure. " }, { "code": null, "e": 25435, "s": 25144, "text": "In the above figure both of the trees, Tree1 and Tree2 are identical in structure. That is, they have the same structure.Note: This problem is different from Check if two trees are identical as here we need to compare only the structures of the two trees and not the values at their nodes. " }, { "code": null, "e": 25584, "s": 25435, "text": "The idea is to traverse both trees simultaneously following the same paths and keep checking if a node exists for both the trees or not.Algorithm: " }, { "code": null, "e": 25970, "s": 25584, "text": "If both trees are empty then return 1.Else If both trees are non-empty: Check left subtrees recursively i.e., call isSameStructure(tree1->left_subtree, tree2->left_subtree)Check right subtrees recursively i.e., call isSameStructure(tree1->right_subtree, tree2->right_subtree)If the value returned in above two steps are true then return 1.Else return 0 (one is empty and other is not)." }, { "code": null, "e": 26009, "s": 25970, "text": "If both trees are empty then return 1." }, { "code": null, "e": 26311, "s": 26009, "text": "Else If both trees are non-empty: Check left subtrees recursively i.e., call isSameStructure(tree1->left_subtree, tree2->left_subtree)Check right subtrees recursively i.e., call isSameStructure(tree1->right_subtree, tree2->right_subtree)If the value returned in above two steps are true then return 1." 
}, { "code": null, "e": 26412, "s": 26311, "text": "Check left subtrees recursively i.e., call isSameStructure(tree1->left_subtree, tree2->left_subtree)" }, { "code": null, "e": 26516, "s": 26412, "text": "Check right subtrees recursively i.e., call isSameStructure(tree1->right_subtree, tree2->right_subtree)" }, { "code": null, "e": 26581, "s": 26516, "text": "If the value returned in above two steps are true then return 1." }, { "code": null, "e": 26628, "s": 26581, "text": "Else return 0 (one is empty and other is not)." }, { "code": null, "e": 26678, "s": 26628, "text": "Below is the implementation of above algorithm: " }, { "code": null, "e": 26682, "s": 26678, "text": "C++" }, { "code": null, "e": 26684, "s": 26682, "text": "C" }, { "code": null, "e": 26689, "s": 26684, "text": "Java" }, { "code": null, "e": 26697, "s": 26689, "text": "Python3" }, { "code": null, "e": 26700, "s": 26697, "text": "C#" }, { "code": null, "e": 26711, "s": 26700, "text": "Javascript" }, { "code": "// C++ program to check if two trees have// same structure#include <iostream>using namespace std; // A binary tree node has data, pointer to left child// and a pointer to right childstruct Node{ int data; struct Node* left; struct Node* right;}; // Helper function that allocates a new node with the// given data and NULL left and right pointers.Node* newNode(int data){ Node* node = new Node; node->data = data; node->left = NULL; node->right = NULL; return(node);} // Function to check if two trees have same// structureint isSameStructure(Node* a, Node* b){ // 1. both empty if (a==NULL && b==NULL) return 1; // 2. both non-empty -> compare them if (a!=NULL && b!=NULL) { return ( isSameStructure(a->left, b->left) && isSameStructure(a->right, b->right) ); } // 3. 
one empty, one not -> false return 0;} // Driver codeint main(){ Node *root1 = newNode(10); Node *root2 = newNode(100); root1->left = newNode(7); root1->right = newNode(15); root1->left->left = newNode(4); root1->left->right = newNode(9); root1->right->right = newNode(20); root2->left = newNode(70); root2->right = newNode(150); root2->left->left = newNode(40); root2->left->right = newNode(90); root2->right->right = newNode(200); if (isSameStructure(root1, root2)) printf(\"Both trees have same structure\"); else printf(\"Trees do not have same structure\"); return 0;} // This code is contributed by aditya kumar (adityakumar129)", "e": 28269, "s": 26711, "text": null }, { "code": "// C++ program to check if two trees have// same structure#include <stdio.h>#include <stdlib.h> // A binary tree node has data, pointer to left child// and a pointer to right childtypedef struct Node { int data; struct Node* left; struct Node* right;} Node; // Helper function that allocates a new node with the// given data and NULL left and right pointers.Node* newNode(int data){ Node* node = (Node*)malloc(sizeof(Node)); node->data = data; node->left = NULL; node->right = NULL; return (node);} // Function to check if two trees have same// structureint isSameStructure(Node* a, Node* b){ // 1. both empty if (a == NULL && b == NULL) return 1; // 2. both non-empty -> compare them if (a != NULL && b != NULL) { return (isSameStructure(a->left, b->left) && isSameStructure(a->right, b->right)); } // 3. 
one empty, one not -> false return 0;} // Driver codeint main(){ Node* root1 = newNode(10); Node* root2 = newNode(100); root1->left = newNode(7); root1->right = newNode(15); root1->left->left = newNode(4); root1->left->right = newNode(9); root1->right->right = newNode(20); root2->left = newNode(70); root2->right = newNode(150); root2->left->left = newNode(40); root2->left->right = newNode(90); root2->right->right = newNode(200); if (isSameStructure(root1, root2)) printf(\"Both trees have same structure\"); else printf(\"Trees do not have same structure\"); return 0;} // This code is contributed by aditya kumar (adityakumar129)", "e": 29841, "s": 28269, "text": null }, { "code": "// Java program to check if two trees have// same structureclass GFG{ // A binary tree node has data,// pointer to left child and// a pointer to right childstatic class Node{ int data; Node left; Node right;}; // Helper function that allocates a new node// with the given data and null left// and right pointers.static Node newNode(int data){ Node node = new Node(); node.data = data; node.left = null; node.right = null; return(node);} // Function to check if two trees// have same structurestatic boolean isSameStructure(Node a, Node b){ // 1. both empty if (a == null && b == null) return true; // 2. both non-empty . compare them if (a != null && b != null) { return ( isSameStructure(a.left, b.left) && isSameStructure(a.right, b.right) ); } // 3. one empty, one not . 
false return false;} // Driver codepublic static void main(String args[]){ Node root1 = newNode(10); Node root2 = newNode(100); root1.left = newNode(7); root1.right = newNode(15); root1.left.left = newNode(4); root1.left.right = newNode(9); root1.right.right = newNode(20); root2.left = newNode(70); root2.right = newNode(150); root2.left.left = newNode(40); root2.left.right = newNode(90); root2.right.right = newNode(200); if (isSameStructure(root1, root2)) System.out.printf(\"Both trees have same structure\"); else System.out.printf(\"Trees do not have same structure\");}} // This code is contributed by Arnab Kundu", "e": 31400, "s": 29841, "text": null }, { "code": "# Python3 program to check if two trees have# same structure # A binary tree node has data, pointer to left child# and a pointer to right childclass Node: def __init__(self, data): self.left = None self.right = None self.data = data # Helper function that allocates a new node with the# given data and None left and right pointers.def newNode(data): node = Node(data) return node # Function to check if two trees have same# structuredef isSameStructure(a, b): # 1. both empty if (a == None and b == None): return 1; # 2. both non-empty . compare them if (a != None and b != None): return ( isSameStructure(a.left, b.left) and isSameStructure(a.right, b.right)) # 3. one empty, one not . 
false return 0; # Driver codeif __name__=='__main__': root1 = newNode(10); root2 = newNode(100); root1.left = newNode(7); root1.right = newNode(15); root1.left.left = newNode(4); root1.left.right = newNode(9); root1.right.right = newNode(20); root2.left = newNode(70); root2.right = newNode(150); root2.left.left = newNode(40); root2.left.right = newNode(90); root2.right.right = newNode(200); if (isSameStructure(root1, root2)): print(\"Both trees have same structure\"); else: print(\"Trees do not have same structure\"); # This code is contributed by rutvik_56", "e": 32839, "s": 31400, "text": null }, { "code": "// C# program to check if two trees// have same structureusing System; class GFG{ // A binary tree node has data,// pointer to left child and// a pointer to right childpublic class Node{ public int data; public Node left; public Node right;}; // Helper function that allocates a new node// with the given data and null left// and right pointers.static Node newNode(int data){ Node node = new Node(); node.data = data; node.left = null; node.right = null; return(node);} // Function to check if two trees// have same structurestatic Boolean isSameStructure(Node a, Node b){ // 1. both empty if (a == null && b == null) return true; // 2. both non-empty . compare them if (a != null && b != null) { return ( isSameStructure(a.left, b.left) && isSameStructure(a.right, b.right) ); } // 3. one empty, one not . 
false return false;} // Driver codepublic static void Main(String []args){ Node root1 = newNode(10); Node root2 = newNode(100); root1.left = newNode(7); root1.right = newNode(15); root1.left.left = newNode(4); root1.left.right = newNode(9); root1.right.right = newNode(20); root2.left = newNode(70); root2.right = newNode(150); root2.left.left = newNode(40); root2.left.right = newNode(90); root2.right.right = newNode(200); if (isSameStructure(root1, root2)) Console.Write(\"Both trees have \" + \"same structure\"); else Console.Write(\"Trees do not have\" + \" same structure\");}} // This code is contributed by Rajput-Ji", "e": 34504, "s": 32839, "text": null }, { "code": "<script> // JavaScript program to check if two trees// have same structure // A binary tree node has data,// pointer to left child and// a pointer to right childclass Node{ constructor() { this.data = 0; this.left = null; this.right = null; }}; // Helper function that allocates a new node// with the given data and null left// and right pointers.function newNode(data){ var node = new Node(); node.data = data; node.left = null; node.right = null; return node;} // Function to check if two trees// have same structurefunction isSameStructure(a, b){ // 1. both empty if (a == null && b == null) return true; // 2. both non-empty . compare them if (a != null && b != null) { return isSameStructure(a.left, b.left) && isSameStructure(a.right, b.right) ; } // 3. one empty, one not . 
false return false;} // Driver codevar root1 = newNode(10);var root2 = newNode(100);root1.left = newNode(7);root1.right = newNode(15);root1.left.left = newNode(4);root1.left.right = newNode(9);root1.right.right = newNode(20);root2.left = newNode(70);root2.right = newNode(150);root2.left.left = newNode(40);root2.left.right = newNode(90);root2.right.right = newNode(200);if (isSameStructure(root1, root2)) document.write(\"Both trees have \" + \"same structure\");else document.write(\"Trees do not have\" + \" same structure\"); </script>", "e": 35939, "s": 34504, "text": null }, { "code": null, "e": 35970, "s": 35939, "text": "Both trees have same structure" }, { "code": null, "e": 35983, "s": 35972, "text": "andrew1234" }, { "code": null, "e": 35993, "s": 35983, "text": "Rajput-Ji" }, { "code": null, "e": 36003, "s": 35993, "text": "rutvik_56" }, { "code": null, "e": 36012, "s": 36003, "text": "noob2000" }, { "code": null, "e": 36027, "s": 36012, "text": "adityakumar129" }, { "code": null, "e": 36048, "s": 36027, "text": "Algorithms-Recursion" }, { "code": null, "e": 36060, "s": 36048, "text": "Binary Tree" }, { "code": null, "e": 36066, "s": 36060, "text": "Trees" }, { "code": null, "e": 36082, "s": 36066, "text": "Data Structures" }, { "code": null, "e": 36087, "s": 36082, "text": "Tree" }, { "code": null, "e": 36103, "s": 36087, "text": "Data Structures" }, { "code": null, "e": 36108, "s": 36103, "text": "Tree" }, { "code": null, "e": 36206, "s": 36108, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
} ]
To sum rows with same ID, use the GROUP BY HAVING clause. Let us create a table − mysql> create table demo84 -> ( -> id int, -> price int -> ) -> ; Query OK, 0 rows affected (0.60 Insert some records into the table with the help of insert command − mysql> insert into demo84 values(1,2000); Query OK, 1 row affected (0.08 mysql> insert into demo84 values(1,2000); Query OK, 1 row affected (0.14 mysql> insert into demo84 values(2,1800); Query OK, 1 row affected (0.14 mysql> insert into demo84 values(2,2200); Query OK, 1 row affected (0.14 mysql> insert into demo84 values(3,1700); Query OK, 1 row affected (0.12 Display records from the table using select statement − mysql> select *from demo84; This will produce the following output − +------+-------+| id | price | +------+-------+| 1 | 2000 | | 1 | 2000 || 2 | 1800 || 2 | 2200 | | 3 | 1700 |+------+-------+ 5 rows in set (0.00 sec) +------+-------+ +------+-------+ | 1 | 2000 | | 2 | 1800 | | 3 | 1700 | 5 rows in set (0.00 sec) Following is the query to sum rows with same id − mysql> select id,sum(price) as Total from demo84 -> group by id -> having sum(demo84.price) >=2000; This will produce the following output − +------+-------+| id | Total | +------+-------+| 1 | 4000 | | 2 | 4000 |+------+-------+ 2 rows in set (0.00 sec) +------+-------+ +------+-------+ | 2 | 4000 | 2 rows in set (0.00 sec)
[ { "code": null, "e": 1120, "s": 1062, "text": "To sum rows with same ID, use the GROUP BY HAVING clause." }, { "code": null, "e": 1144, "s": 1120, "text": "Let us create a table −" }, { "code": null, "e": 1257, "s": 1144, "text": "mysql> create table demo84\n -> (\n -> id int,\n -> price int\n -> )\n -> ;\nQuery OK, 0 rows affected (0.60" }, { "code": null, "e": 1326, "s": 1257, "text": "Insert some records into the table with the help of insert command −" }, { "code": null, "e": 1695, "s": 1326, "text": "mysql> insert into demo84 values(1,2000);\nQuery OK, 1 row affected (0.08\n\nmysql> insert into demo84 values(1,2000);\nQuery OK, 1 row affected (0.14\n\nmysql> insert into demo84 values(2,1800);\nQuery OK, 1 row affected (0.14\n\nmysql> insert into demo84 values(2,2200);\nQuery OK, 1 row affected (0.14\n\nmysql> insert into demo84 values(3,1700);\nQuery OK, 1 row affected (0.12" }, { "code": null, "e": 1751, "s": 1695, "text": "Display records from the table using select statement −" }, { "code": null, "e": 1779, "s": 1751, "text": "mysql> select *from demo84;" }, { "code": null, "e": 1820, "s": 1779, "text": "This will produce the following output −" }, { "code": null, "e": 1993, "s": 1820, "text": "+------+-------+| id | price |\n+------+-------+| 1 | 2000 |\n| 1 | 2000 || 2 | 1800 || 2 | 2200 |\n| 3 | 1700 |+------+-------+\n5 rows in set (0.00 sec)" }, { "code": null, "e": 2010, "s": 1993, "text": "+------+-------+" }, { "code": null, "e": 2027, "s": 2010, "text": "+------+-------+" }, { "code": null, "e": 2044, "s": 2027, "text": "| 1 | 2000 |" }, { "code": null, "e": 2061, "s": 2044, "text": "| 2 | 1800 |" }, { "code": null, "e": 2078, "s": 2061, "text": "| 3 | 1700 |" }, { "code": null, "e": 2103, "s": 2078, "text": "5 rows in set (0.00 sec)" }, { "code": null, "e": 2153, "s": 2103, "text": "Following is the query to sum rows with same id −" }, { "code": null, "e": 2259, "s": 2153, "text": "mysql> select id,sum(price) as Total from demo84\n -> group by 
id\n -> having sum(demo84.price) >=2000;" }, { "code": null, "e": 2300, "s": 2259, "text": "This will produce the following output −" }, { "code": null, "e": 2424, "s": 2300, "text": "+------+-------+| id | Total |\n+------+-------+| 1 | 4000 |\n| 2 | 4000 |+------+-------+\n2 rows in set (0.00 sec)" }, { "code": null, "e": 2441, "s": 2424, "text": "+------+-------+" }, { "code": null, "e": 2458, "s": 2441, "text": "+------+-------+" }, { "code": null, "e": 2475, "s": 2458, "text": "| 2 | 4000 |" }, { "code": null, "e": 2500, "s": 2475, "text": "2 rows in set (0.00 sec)" } ]
A Real-World Application of Vector Autoregressive (VAR) model | by Mohammad Masum, PhD | Towards Data Science
Multivariate Time Series Analysis
A univariate time series contains only one single time-dependent variable, while a multivariate time series consists of multiple time-dependent variables. We generally use multivariate time series analysis to model and explain the interesting interdependencies and co-movements among the variables. In multivariate analysis, the assumption is that the time-dependent variables not only depend on their past values but also show dependency between them. Multivariate time series models leverage these dependencies to provide more reliable and accurate forecasts for a given dataset, though univariate analysis outperforms multivariate in general[1]. In this article, we apply a multivariate time series method, called Vector Auto Regression (VAR), on a real-world dataset.
Vector Auto Regression (VAR)
A VAR model is a stochastic process that represents a group of time-dependent variables as a linear function of their own past values and the past values of all the other variables in the group. For instance, we can consider a bivariate time series analysis that describes a relationship between hourly temperature and wind speed as a function of past values [2]:
temp(t) = a1 + w11* temp(t-1) + w12* wind(t-1) + e1(t)
wind(t) = a2 + w21* temp(t-1) + w22* wind(t-1) + e2(t)
where a1 and a2 are constants; w11, w12, w21, and w22 are the coefficients; e1 and e2 are the error terms.
Dataset
Statsmodels is a Python API that allows users to explore data, estimate statistical models, and perform statistical tests [3]. It contains time series data as well. We download a dataset from the API.
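Before turning to the real dataset, the bivariate VAR(1) equations above can be simulated directly; a minimal NumPy sketch (the coefficient values are made up purely for illustration) that generates such a series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VAR(1) coefficients: y(t) = a + W @ y(t-1) + e(t)
a = np.array([1.0, 0.5])              # constants a1, a2
W = np.array([[0.6, 0.2],             # w11, w12
              [0.1, 0.7]])            # w21, w22

T = 200
y = np.zeros((T, 2))                  # columns: temp, wind
for t in range(1, T):
    e = rng.normal(scale=0.1, size=2) # error terms e1(t), e2(t)
    y[t] = a + W @ y[t - 1] + e

print(y.shape)  # (200, 2)
```

The coefficient matrix W was chosen with eigenvalues inside the unit circle, so the simulated process is stable (stationary) rather than explosive.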
To download the data, we have to install some libraries and then load the data:

import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

data = sm.datasets.macrodata.load_pandas().data
data.head(2)

The output shows the first two observations of the total dataset:
The data contains a number of time series; we take only two time-dependent variables, “realgdp” and “realdpi”, for experiment purposes and use the “year” column as the index of the data.

data1 = data[["realgdp", 'realdpi']]
data1.index = data["year"]

output:
Let's visualize the data:

data1.plot(figsize = (8,5))

Both of the series show an increasing trend over time with slight ups and downs.
Stationarity
Before applying VAR, both of the time series variables should be stationary. Both series are not stationary, since neither shows a constant mean and variance over time. We can also perform a statistical test like the Augmented Dickey-Fuller (ADF) test to check the stationarity of the series, using the AIC criterion.

from statsmodels.tsa.stattools import adfuller

adfuller_test = adfuller(data1['realgdp'], autolag= "AIC")
print("ADF test statistic: {}".format(adfuller_test[0]))
print("p-value: {}".format(adfuller_test[1]))

output:
In both cases, the p-value is not significant enough, meaning that we cannot reject the null hypothesis, and we conclude that the series are non-stationary.
Differencing
As both series are not stationary, we perform differencing and later check the stationarity.

data_d = data1.diff().dropna()

The “realgdp” series becomes stationary after first differencing of the original series, as the p-value of the test is statistically significant. The “realdpi” series likewise becomes stationary after first differencing of the original series, as the p-value of the test is statistically significant.
Model
In this section, we apply the VAR model on the first differenced series. We carry out the train-test split of the data and keep the last 10 observations as test data.
train = data_d.iloc[:-10,:]
test = data_d.iloc[-10:,:]

Searching for the optimal order of the VAR model

In the process of VAR modeling, we employ the Akaike Information Criterion (AIC) as the model selection criterion: in simple terms, we select the order (p) of the VAR based on the best AIC score. The AIC, in general, penalizes models for being too complex, though complex models may perform slightly better on some other model selection criterion. Hence, we expect an inflection point when searching over the order (p): the AIC score should decrease as the order (p) gets larger, up to a certain order, after which the score starts increasing. For this, we perform a grid search to investigate the optimal order (p).

forecasting_model = VAR(train)
results_aic = []
for p in range(1,10):
    results = forecasting_model.fit(p)
    results_aic.append(results.aic)

In the first line of the code, we train the VAR model with the training data. The rest of the code performs a for loop to find the AIC scores for fitting orders ranging from 1 to 9. We can visualize the results (AIC scores against orders) to better understand the inflection point:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.set()
plt.plot(list(np.arange(1,10,1)), results_aic)
plt.xlabel("Order")
plt.ylabel("AIC")
plt.show()

From the plot, the lowest AIC score is achieved at order 2, and then the AIC scores show an increasing trend as the order p gets larger. Hence, we select 2 as the optimal order of the VAR model. Consequently, we fit order 2 to the forecasting model.

Let's check the summary of the model:

results = forecasting_model.fit(2)
results.summary()

The summary output contains much information:

Forecasting

We use 2 as the optimal order in fitting the VAR model. Thus, we take the final 2 steps in the training data for forecasting the immediate next step (i.e., the first observation of the test data).
Now, after fitting the model, we forecast for the test data, with the last 2 observations of the training data set as lagged values and steps set to 10, as we want to forecast the next 10 steps.

lagged_values = train.values[-2:]
forecast = pd.DataFrame(results.forecast(y=lagged_values, steps=10),
                        index=test.index,
                        columns=['realgdp_1d', 'realdpi_1d'])
forecast

The output:

We have to note that the aforementioned forecasts are for the once-differenced model. Hence, we must reverse the first-differenced forecasts into the original forecast values.

forecast["realgdp_forecasted"] = data1["realgdp"].iloc[-10-1] + forecast['realgdp_1d'].cumsum()
forecast["realdpi_forecasted"] = data1["realdpi"].iloc[-10-1] + forecast['realdpi_1d'].cumsum()

output:

The first two columns are the forecasted values for the once-differenced series, and the last two columns show the forecasted values for the original series.

Now, we visualize the original test values and the values forecasted by VAR.

The original realdpi and the forecasted realdpi show a similar pattern throughout the forecasted days. For realgdp, the first half of the forecasted values shows a similar pattern to the original values; on the other hand, the last half of the forecasted values does not follow a similar pattern.

To sum up, in this article we discussed multivariate time series analysis and applied the VAR model on a real-world multivariate time series dataset. You can also read the article "A real-world time series data analysis and forecasting", where I applied ARIMA (a univariate time series analysis model) to forecast univariate time series data.
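The differencing and un-differencing steps used above (pandas .diff() and .cumsum() plus the last observed level) boil down to a simple round trip. Here is a sketch in plain Python on a toy list of levels, independent of pandas:

```python
from itertools import accumulate

# Toy series of levels; in the article, forecasts are produced on
# first differences and then mapped back to levels like this.
levels = [100.0, 102.0, 101.0, 104.0]

# First differencing (what data1.diff().dropna() does)
diffs = [b - a for a, b in zip(levels, levels[1:])]

# Invert: last known level + cumulative sum of the differences
last_level = levels[0]
recovered = [last_level + c for c in accumulate(diffs)]

print(diffs)      # [2.0, -1.0, 3.0]
print(recovered)  # [102.0, 101.0, 104.0] — matches levels[1:]
```

The same logic applies to forecasted differences: anchoring the cumulative sum at the last observed level of the original series recovers level forecasts.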
CoffeeScript - Conditionals
While programming, we encounter scenarios where we have to choose a path from a given set of paths. In such situations, we need conditional statements. Conditional statements help us make decisions and perform the right actions.

Following is the general form of a typical decision-making structure found in most programming languages.

JavaScript supports the if statement (including its variants) and the switch statement. In addition to the conditionals available in JavaScript, CoffeeScript includes the unless statement, the negation of if, and even more.

Following are the conditional statements provided by CoffeeScript.

An if statement consists of a Boolean expression followed by one or more statements. These statements execute when the given Boolean expression is true.

An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.

An unless statement is similar to if, with a Boolean expression followed by one or more statements, except that these statements execute when the given Boolean expression is false.

An unless statement can be followed by an optional else statement, which executes when the Boolean expression is true.

A switch statement allows a variable to be tested for equality against a list of values.

The if and unless statements are block statements that are written in multiple lines. CoffeeScript provides the then keyword, using which we can write the if and unless statements in a single line. Following are the statements in CoffeeScript that are written using the then keyword.

Using the if-then statement, we can write the if statement of CoffeeScript in a single line. It consists of a Boolean expression followed by the then keyword, which is followed by one or more statements. These statements execute when the given Boolean expression is true.

The if-then statement can be followed by an optional else statement, which executes when the Boolean expression is false.
Using the if-then...else statement, we can write the if...else statement in a single line.

Using the unless-then statement, we can write the unless statement of CoffeeScript in a single line. It consists of a Boolean expression followed by the then keyword, which is followed by one or more statements. These statements execute when the given Boolean expression is false.

The unless-then statement can be followed by an optional else statement, which executes when the Boolean expression is true. Using the unless-then...else statement, we can write the unless...else statement in a single line.

In CoffeeScript, you can also write the if and unless statements with the code block first, followed by the if or unless keyword, as shown below. This is the postfix form of those statements. It comes in handy while writing programs in CoffeeScript.

#Postfix if
Statements to be executed if expression

#Postfix unless
Statements to be executed unless expression
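The forms described above can be illustrated with a few short CoffeeScript snippets; the variable marks and the threshold 40 are made-up values for the example:

```coffeescript
marks = 75

# Block form of if...else
if marks >= 40
  console.log "Pass"
else
  console.log "Fail"

# Single-line form with then
if marks >= 40 then console.log "Pass" else console.log "Fail"

# unless executes its statements when the expression is false
unless marks >= 40 then console.log "Fail" else console.log "Pass"

# Postfix form: statement first, condition after
console.log "Pass" if marks >= 40
console.log "Fail" unless marks >= 40
```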
Text Analysis & Feature Engineering with NLP | by Mauro Di Pietro | Towards Data Science
In this article, using NLP and Python, I will explain how to analyze text data and extract features for your machine learning model.

NLP (Natural Language Processing) is a field of artificial intelligence that studies the interactions between computers and human languages, in particular how to program computers to process and analyze large amounts of natural language data. NLP is often applied for classifying text data. Text classification is the problem of assigning categories to text data according to its content. The most important part of text classification is feature engineering: the process of creating features for a machine learning model from raw text data.

In this article, I will explain different methods to analyze text and extract features that can be used to build a classification model. I will present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example (link to the full code below).

github.com

I will use the "News category dataset" (link below), in which you are provided with news headlines from the years 2012 to 2018 obtained from HuffPost, and you are asked to classify them with the right category.

www.kaggle.com

In particular, I will go through:

Environment setup: import packages and read data.
Language detection: understand which natural language the data is in.
Text preprocessing: text cleaning and transformation.
Length analysis: measured with different metrics.
Sentiment analysis: determine whether a text is positive or negative.
Named-Entity recognition: tag text with pre-defined categories such as person names, organizations, locations.
Word frequency: find the most important n-grams.
Word vectors: transform a word into numbers.
Topic modeling: extract the main topics from the corpus.

First of all, I need to import the following libraries.
## for data
import pandas as pd
import collections
import json
## for plotting
import matplotlib.pyplot as plt
import seaborn as sns
import wordcloud
## for text processing
import re
import nltk
## for language detection
import langdetect
## for sentiment
from textblob import TextBlob
## for ner
import spacy
## for vectorizer
from sklearn import feature_extraction, manifold
## for word embedding
import gensim.downloader as gensim_api
## for topic modeling
import gensim

The dataset is contained in a json file, so I will first read it into a list of dictionaries with the json package and then transform it into a pandas DataFrame.

lst_dics = []
with open('data.json', mode='r', errors='ignore') as json_file:
    for dic in json_file:
        lst_dics.append( json.loads(dic) )
## print the first one
lst_dics[0]

The original dataset contains over 30 categories, but for the purposes of this tutorial, I will work with a subset of 3: Entertainment, Politics, and Tech.

## create dtf
dtf = pd.DataFrame(lst_dics)
## filter categories
dtf = dtf[ dtf["category"].isin(['ENTERTAINMENT','POLITICS','TECH']) ][["category","headline"]]
## rename columns
dtf = dtf.rename(columns={"category":"y", "headline":"text"})
## print 5 random rows
dtf.sample(5)

In order to understand the composition of the dataset, I am going to look into univariate distributions (the probability distribution of just one variable) by showing label frequencies with a bar plot.

x = "y"
fig, ax = plt.subplots()
fig.suptitle(x, fontsize=12)
dtf[x].reset_index().groupby(x).count().sort_values(by="index").plot(kind="barh", legend=False, ax=ax).grid(axis='x')
plt.show()

The dataset is imbalanced: the proportion of Tech news is really small compared to the others. This can be an issue during modeling, and a resampling of the dataset may be useful.

Now that it's all set, I will start by cleaning data, then I will extract different insights from raw text and add them as new columns of the dataframe.
This new information can be used as potential features for a classification model. Let's get started, shall we?

First of all, I want to make sure that I'm dealing with the same language, and with the langdetect package this is really easy. To give an illustration, I will use it on the first news headline of the dataset:

txt = dtf["text"].iloc[0]
print(txt, " --> ", langdetect.detect(txt))

Let's do it for the whole dataset by adding a column with the language information:

dtf['lang'] = dtf["text"].apply(lambda x: langdetect.detect(x) if x.strip() != "" else "")
dtf.head()

The dataframe now has a new column. Using the same code as before, I can see how many different languages there are:

Even if there are different languages, English is the main one. Therefore I am going to keep only the news in English.

dtf = dtf[dtf["lang"]=="en"]

Data preprocessing is the phase of preparing raw data to make it suitable for a machine learning model. For NLP, that includes text cleaning, stopword removal, stemming and lemmatization.

Text cleaning steps vary according to the type of data and the required task. Generally, the string is converted to lowercase and punctuation is removed before the text gets tokenized. Tokenization is the process of splitting a string into a list of strings (or "tokens"). Let's use the first news headline again as an example:

print("--- original ---")
print(txt)

print("--- cleaning ---")
txt = re.sub(r'[^\w\s]', '', str(txt).lower().strip())
print(txt)

print("--- tokenization ---")
txt = txt.split()
print(txt)

Do we want to keep all the tokens in the list? We don't. In fact, we want to remove all the words that don't provide additional information. In the example, the most important word is "song" because it can point any classification model in the right direction. By contrast, words like "and", "for", "the" aren't useful, as they probably appear in almost every observation in the dataset. Those are examples of stop words.
This expression usually refers to the most common words in a language, but there is no single universal list of stop words. We can create a list of generic stop words for the English vocabulary with NLTK (the Natural Language Toolkit), which is a suite of libraries and programs for symbolic and statistical natural language processing.

lst_stopwords = nltk.corpus.stopwords.words("english")
lst_stopwords

Let's remove those stop words from the first news headline:

print("--- remove stopwords ---")
txt = [word for word in txt if word not in lst_stopwords]
print(txt)

We need to be very careful with stop words because if you remove the wrong token you may lose important information. For example, the word "will" was removed and we lost the information that the person is Will Smith. With this in mind, it can be useful to do some manual modification to the raw text before removing stop words (for example, replacing "Will Smith" with "Will_Smith").

Now that we have all the useful tokens, we can apply word transformations. Stemming and lemmatization both generate the root form of words. The difference is that a stem might not be an actual word, whereas a lemma is an actual language word (also, stemming is usually faster). Both algorithms are provided by NLTK. Continuing the example:

print("--- stemming ---")
ps = nltk.stem.porter.PorterStemmer()
print([ps.stem(word) for word in txt])

print("--- lemmatisation ---")
lem = nltk.stem.wordnet.WordNetLemmatizer()
print([lem.lemmatize(word) for word in txt])

As you can see, some words have changed: "joins" turned into its root form "join", just like "cups". On the other hand, "official" only changed with stemming into the stem "offici", which isn't a word, created by removing the suffix "-al".

I will put all those preprocessing steps into a single function and apply it to the whole dataset.
'''
Preprocess a string.
:parameter
    :param text: string - name of column containing text
    :param lst_stopwords: list - list of stopwords to remove
    :param flg_stemm: bool - whether stemming is to be applied
    :param flg_lemm: bool - whether lemmitisation is to be applied
:return
    cleaned text
'''
def utils_preprocess_text(text, flg_stemm=False, flg_lemm=True, lst_stopwords=None):
    ## clean (convert to lowercase, remove punctuation and special characters, then strip)
    text = re.sub(r'[^\w\s]', '', str(text).lower().strip())

    ## Tokenize (convert from string to list)
    lst_text = text.split()

    ## remove Stopwords
    if lst_stopwords is not None:
        lst_text = [word for word in lst_text if word not in lst_stopwords]

    ## Stemming (remove -ing, -ly, ...)
    if flg_stemm == True:
        ps = nltk.stem.porter.PorterStemmer()
        lst_text = [ps.stem(word) for word in lst_text]

    ## Lemmatisation (convert the word into its root word)
    if flg_lemm == True:
        lem = nltk.stem.wordnet.WordNetLemmatizer()
        lst_text = [lem.lemmatize(word) for word in lst_text]

    ## back to string from list
    text = " ".join(lst_text)
    return text

Please note that you shouldn't apply both stemming and lemmatization. Here I am going to use the latter.

dtf["text_clean"] = dtf["text"].apply(lambda x: utils_preprocess_text(x, flg_stemm=False, flg_lemm=True, lst_stopwords=lst_stopwords))

Just like before, I created a new column:

dtf.head()

print(dtf["text"].iloc[0], " --> ", dtf["text_clean"].iloc[0])

It's important to have a look at the length of the text because it's an easy calculation that can give a lot of insights. Maybe, for instance, we are lucky enough to discover that one category is systematically longer than another, and the length would simply be the only feature needed to build the model. Unfortunately, this won't be the case, as news headlines have similar lengths, but it's worth a try.

There are several length measures for text data.
I will give some examples:

word count: counts the number of tokens in the text (separated by a space)
character count: sums the number of characters of each token
sentence count: counts the number of sentences (separated by a period)
average word length: sum of word lengths divided by the number of words (character count / word count)
average sentence length: sum of sentence lengths divided by the number of sentences (word count / sentence count)

dtf['word_count'] = dtf["text"].apply(lambda x: len(str(x).split(" ")))
dtf['char_count'] = dtf["text"].apply(lambda x: sum(len(word) for word in str(x).split(" ")))
dtf['sentence_count'] = dtf["text"].apply(lambda x: len(str(x).split(".")))
dtf['avg_word_length'] = dtf['char_count'] / dtf['word_count']
dtf['avg_sentence_length'] = dtf['word_count'] / dtf['sentence_count']
dtf.head()

Let's see our usual example:

What's the distribution of those new variables with respect to the target? To answer that, I'll look at the bivariate distributions (how two variables move together). First, I shall split the whole set of observations into 3 samples (Politics, Entertainment, Tech), then compare the histograms and densities of the samples. If the distributions are different, then the variable is predictive, because the 3 groups have different patterns. For instance, let's see if the character count is correlated with the target variable:

x, y = "char_count", "y"

fig, ax = plt.subplots(nrows=1, ncols=2)
fig.suptitle(x, fontsize=12)
for i in dtf[y].unique():
    sns.distplot(dtf[dtf[y]==i][x], hist=True, kde=False, bins=10,
                 hist_kws={"alpha":0.8}, axlabel="histogram", ax=ax[0])
    sns.distplot(dtf[dtf[y]==i][x], hist=False, kde=True,
                 kde_kws={"shade":True}, axlabel="density", ax=ax[1])
ax[0].grid(True)
ax[0].legend(dtf[y].unique())
ax[1].grid(True)
plt.show()

The 3 categories have a similar length distribution. Here, the density plot is very useful because the samples have different sizes.
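For a single string, the five metrics above boil down to a few plain-Python expressions (mirroring the pandas code, including its naive period-based sentence split). The example sentence is made up for illustration:

```python
# A made-up example string, not taken from the dataset
text = "Natural language processing turns raw text into features. Models learn from them."

words = text.split(" ")
word_count = len(words)
char_count = sum(len(w) for w in words)   # characters, excluding spaces
sentence_count = len(text.split("."))     # naive: a trailing period adds an empty piece
avg_word_length = char_count / word_count
avg_sentence_length = word_count / sentence_count

print(word_count, char_count, sentence_count)
```

Note that splitting on "." counts the empty piece after a trailing period, so a more careful implementation might filter out empty strings before counting sentences.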
Sentiment analysis is the representation of subjective emotions in text data through numbers or classes. Calculating sentiment is one of the toughest tasks of NLP, as natural language is full of ambiguity. For example, the phrase "This is so bad that it's good" has more than one interpretation. A model could assign a positive signal to the word "good" and a negative one to the word "bad", resulting in a neutral sentiment. That happens because the context is unknown.

The best approach would be training your own sentiment model that fits your data properly. When there is not enough time or data for that, one can use pre-trained models, like TextBlob and Vader. TextBlob, built on top of NLTK, is one of the most popular; it can assign polarity to words and estimate the sentiment of the whole text as an average. On the other hand, Vader (Valence Aware Dictionary and sEntiment Reasoner) is a rule-based model that works particularly well on social media data.

I am going to add a sentiment feature with TextBlob:

dtf["sentiment"] = dtf["text"].apply(lambda x: TextBlob(x).sentiment.polarity)
dtf.head()

print(dtf["text"].iloc[0], " --> ", dtf["sentiment"].iloc[0])

Is there a pattern between categories and sentiment?

Most of the headlines have a neutral sentiment, except for Politics news, which is skewed towards the negative tail, and Tech news, which has a spike on the positive tail.

NER (Named-Entity Recognition) is the process of tagging named entities mentioned in unstructured text with pre-defined categories such as person names, organizations, locations, time expressions, quantities, etc.

Training a NER model is really time-consuming because it requires a pretty rich dataset. Luckily there is someone who already did this job for us. One of the best open-source NER tools is SpaCy. It provides different NLP models that are able to recognize several categories of entities.
I will give an example using the SpaCy model en_core_web_lg (the large model for English trained on web data) on our usual headline (raw text, not preprocessed):

## load model
ner = spacy.load("en_core_web_lg")
## tag text
txt = dtf["text"].iloc[0]
doc = ner(txt)
## display result
spacy.displacy.render(doc, style="ent")

That's pretty cool, but how can we turn this into a useful feature? This is what I'm going to do:

- Run the NER model on every text observation in the dataset, like I did in the previous example.
- For each news headline, put all the recognized entities into a new column (named "tags"), along with the number of times each entity appears in the text. In the example, that would be:

{ ('Will Smith', 'PERSON'):1, ('Diplo', 'PERSON'):1, ('Nicky Jam', 'PERSON'):1, ("The 2018 World Cup's", 'EVENT'):1 }

- Then create a new column for each tag category (Person, Org, Event, ...) and count the number of found entities of each one. In the example above, the features would be:

tags_PERSON = 3
tags_EVENT = 1

## tag text and extract tags into a list
dtf["tags"] = dtf["text"].apply(lambda x: [(tag.text, tag.label_) for tag in ner(x).ents])

## utils function to count the elements of a list
def utils_lst_count(lst):
    dic_counter = collections.Counter()
    for x in lst:
        dic_counter[x] += 1
    dic_counter = collections.OrderedDict(
        sorted(dic_counter.items(), key=lambda x: x[1], reverse=True))
    lst_count = [{key: value} for key, value in dic_counter.items()]
    return lst_count

## count tags
dtf["tags"] = dtf["tags"].apply(lambda x: utils_lst_count(x))

## utils function to create a new column for each tag category
def utils_ner_features(lst_dics_tuples, tag):
    if len(lst_dics_tuples) > 0:
        tag_type = []
        for dic_tuples in lst_dics_tuples:
            for tpl in dic_tuples:
                ent_type, n = tpl[1], dic_tuples[tpl]
                tag_type = tag_type + [ent_type]*n
        dic_counter = collections.Counter()
        for x in tag_type:
            dic_counter[x] += 1
        return dic_counter[tag]
    else:
        return 0

## extract features
tags_set = []
for lst in dtf["tags"].tolist():
    for dic in lst:
        for k in dic.keys():
            tags_set.append(k[1])
tags_set = list(set(tags_set))
for feature in tags_set:
    dtf["tags_"+feature] = dtf["tags"].apply(lambda x: utils_ner_features(x, feature))

## print result
dtf.head()

Now we can have a macro view of the tag types distribution. Let's take the ORG tags (companies and organizations) for example:

In order to go deeper into the analysis, we need to unpack the column "tags" we created in the previous code. Let's plot the most frequent tags for one of the headline categories:

y = "ENTERTAINMENT"
top = 10   # number of tags to plot (value not specified in the original snippet)
tags_list = dtf[dtf["y"]==y]["tags"].sum()
map_lst = list(map(lambda x: list(x.keys())[0], tags_list))
dtf_tags = pd.DataFrame(map_lst, columns=['tag','type'])
dtf_tags["count"] = 1
dtf_tags = dtf_tags.groupby(['type','tag']).count().reset_index().sort_values(
    "count", ascending=False)
fig, ax = plt.subplots()
fig.suptitle("Top frequent tags", fontsize=12)
sns.barplot(x="count", y="tag", hue="type", data=dtf_tags.iloc[:top,:],
            dodge=False, ax=ax)
ax.grid(axis="x")
plt.show()

Moving forward with another useful application of NER: do you remember when we removed stop words and lost the word "Will" from the name "Will Smith"? An interesting solution to that problem is replacing "Will Smith" with "Will_Smith", so that it won't be affected by stop word removal. Since going through all the texts in the dataset to change names by hand would be impossible, let's use SpaCy for that. As we know, SpaCy can recognize a person's name, therefore we can use it for name detection and then modify the string.

## predict with NER
txt = dtf["text"].iloc[0]
entities = ner(txt).ents
## tag text
tagged_txt = txt
for tag in entities:
    tagged_txt = re.sub(tag.text, "_".join(tag.text.split()), tagged_txt)
## show result
print(tagged_txt)

So far we've seen how to do feature engineering by analyzing and processing the whole text. Now we are going to look at the importance of single words by computing the n-gram frequency.
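Before diving into n-grams, one aside on the entity tags: the core of the tag-counting logic above boils down to collections.Counter over entity labels. A minimal standalone sketch on a hand-made list shaped like SpaCy's (ent.text, ent.label_) pairs (the entities here are invented for illustration):

```python
import collections

# Invented example entities, shaped like SpaCy's (ent.text, ent.label_) pairs
entities = [("Will Smith", "PERSON"), ("Diplo", "PERSON"),
            ("Nicky Jam", "PERSON"), ("The 2018 World Cup", "EVENT")]

# Count how many entities of each type were found
type_counts = collections.Counter(label for _, label in entities)

print(type_counts["PERSON"])  # -> 3
print(type_counts["EVENT"])   # -> 1
print(type_counts["ORG"])     # -> 0, Counter returns 0 for missing keys
```

These per-type counts are exactly the tags_PERSON and tags_EVENT features built above; the longer utils functions mostly deal with the nested list-of-dicts storage format.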
An n-gram is a contiguous sequence of n items from a given sample of text. When the n-gram has a size of 1, it is referred to as a unigram (a size of 2 is a bigram). For example, the phrase "I like this article" can be decomposed into:

- 4 unigrams: "I", "like", "this", "article"
- 3 bigrams: "I like", "like this", "this article"

I will show how to calculate unigram and bigram frequencies taking the sample of Politics news.

y = "POLITICS"
top = 15   # number of n-grams to plot (value not specified in the original snippet)
corpus = dtf[dtf["y"]==y]["text_clean"]
lst_tokens = nltk.tokenize.word_tokenize(corpus.str.cat(sep=" "))
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.suptitle("Most frequent words", fontsize=15)

## unigrams
dic_words_freq = nltk.FreqDist(lst_tokens)
dtf_uni = pd.DataFrame(dic_words_freq.most_common(), columns=["Word","Freq"])
dtf_uni.set_index("Word").iloc[:top,:].sort_values(by="Freq").plot(
    kind="barh", title="Unigrams", ax=ax[0], legend=False).grid(axis='x')
ax[0].set(ylabel=None)

## bigrams
dic_words_freq = nltk.FreqDist(nltk.ngrams(lst_tokens, 2))
dtf_bi = pd.DataFrame(dic_words_freq.most_common(), columns=["Word","Freq"])
dtf_bi["Word"] = dtf_bi["Word"].apply(lambda x: " ".join(string for string in x))
dtf_bi.set_index("Word").iloc[:top,:].sort_values(by="Freq").plot(
    kind="barh", title="Bigrams", ax=ax[1], legend=False).grid(axis='x')
ax[1].set(ylabel=None)
plt.show()

If there are n-grams that appear in only one category (e.g. "Republican" in Politics news), those can become new features. A more laborious approach would be to vectorize the whole corpus and use all the words as features (the Bag-of-Words approach).

Now I'm going to show you how to add word frequency as a feature in your dataframe. We just need the CountVectorizer from Scikit-learn, one of the most popular libraries for machine learning in Python. A vectorizer converts a collection of text documents into a matrix of token counts. I shall give an example using 3 n-grams: "box office" (frequent in Entertainment), "republican" (frequent in Politics), "apple" (frequent in Tech).
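To make concrete what the vectorizer will do, here is a toy pure-Python version restricted to a fixed vocabulary. This is my own sketch, not the Scikit-learn implementation: it handles single-word terms only (no n-grams, no sparse matrices), and the function name is invented:

```python
def count_vectorize(docs, vocabulary):
    """Return a document-term count matrix restricted to a fixed vocabulary (unigrams only)."""
    matrix = []
    for doc in docs:
        words = doc.lower().split()
        # one row per document, one column per vocabulary term
        matrix.append([words.count(term) for term in vocabulary])
    return matrix

docs = ["apple unveils new apple watch", "republican leaders meet"]
vocab = ["republican", "apple"]
print(count_vectorize(docs, vocab))  # -> [[0, 2], [1, 0]]
```

The real CountVectorizer below additionally handles multi-word n-grams like "box office" (via ngram_range) and returns a memory-efficient sparse matrix.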
lst_words = ["box office", "republican", "apple"]

## count
lst_grams = [len(word.split(" ")) for word in lst_words]
vectorizer = feature_extraction.text.CountVectorizer(
    vocabulary=lst_words,
    ngram_range=(min(lst_grams), max(lst_grams)))
dtf_X = pd.DataFrame(vectorizer.fit_transform(dtf["text_clean"]).todense(),
                     columns=lst_words)

## add the new features as columns
dtf = pd.concat([dtf, dtf_X.set_index(dtf.index)], axis=1)
dtf.head()

A nice way to visualize the same information is with a word cloud, where the frequency of each tag is shown through font size and color.

wc = wordcloud.WordCloud(background_color='black', max_words=100, max_font_size=35)
wc = wc.generate(str(corpus))
fig = plt.figure(num=1)
plt.axis('off')
plt.imshow(wc, cmap=None)
plt.show()

Recently, the NLP field has developed new linguistic models that rely on a neural network architecture instead of the more traditional n-gram models. These new techniques are a set of language modeling and feature learning techniques in which words are transformed into vectors of real numbers, hence the name word embeddings.

Word embedding models map a word to a vector by building a probability distribution of which tokens appear before and after the selected word. These models have quickly become popular because, once you have real numbers instead of strings, you can perform calculations. For example, to find words used in the same context, one can simply calculate the vector distances.

There are several Python libraries that work with this kind of model. SpaCy is one, but since we have already used it, I will talk about another famous package: Gensim, an open-source library for unsupervised topic modeling and natural language processing that uses modern statistical machine learning. Using Gensim, I will load a pre-trained GloVe model. GloVe (Global Vectors) is an unsupervised learning algorithm that produces vector representations of words, here of size 300.
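The "calculate the vector distances" idea deserves a quick sketch before we load the model. Cosine similarity is the standard choice for word vectors; the three-dimensional vectors below are invented toy values (real GloVe vectors have 300 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy vectors: words from similar contexts point the same way
love, like, car = [1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.0, 1.0]
print(round(cosine_similarity(love, like), 3))  # close to 1: similar contexts
print(round(cosine_similarity(love, car), 3))   # close to 0: unrelated
```

With that intuition in place, let's load the actual pre-trained model: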
nlp = gensim_api.load("glove-wiki-gigaword-300")

We can use this object to map words to vectors:

word = "love"
nlp[word]
nlp[word].shape

Now let's see what the closest word vectors are or, to put it another way, the words that mostly appear in similar contexts. In order to plot the vectors in a two-dimensional space, I need to reduce the dimensions from 300 to 2. I am going to do that with t-distributed Stochastic Neighbor Embedding (t-SNE) from Scikit-learn: a tool to visualize high-dimensional data that converts similarities between data points into joint probabilities.

## find closest vectors
labels, X, x, y = [], [], [], []
for t in nlp.most_similar(word, topn=20):
    X.append(nlp[t[0]])
    labels.append(t[0])

## reduce dimensions
tsne = manifold.TSNE(perplexity=40, n_components=2, init='pca')
new_values = tsne.fit_transform(X)
for value in new_values:
    x.append(value[0])
    y.append(value[1])

## plot
fig = plt.figure()
for i in range(len(x)):
    plt.scatter(x[i], y[i], c="black")
    plt.annotate(labels[i], xy=(x[i], y[i]), xytext=(5, 2),
                 textcoords='offset points', ha='right', va='bottom')

## add center
plt.scatter(x=0, y=0, c="red")
plt.annotate(word, xy=(0, 0), xytext=(5, 2),
             textcoords='offset points', ha='right', va='bottom')

The Gensim package is specialized in topic modeling. A topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. I will show how to extract topics using LDA (Latent Dirichlet Allocation): a generative statistical model that allows sets of observations to be explained by unobserved groups, which explain why some parts of the data are similar. Basically, documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. Let's see what topics we can extract from Tech news.
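The phrase "each topic is characterized by a distribution over words" can be made concrete with a toy example. The topics and weights below are invented numbers, not LDA output — they just show how top words are read off a per-topic word distribution, which is what get_topic_terms returns for a trained model:

```python
# Invented illustration of "a topic is a distribution over words"
topics = {
    0: {"apple": 0.30, "iphone": 0.25, "launch": 0.10},
    1: {"facebook": 0.28, "privacy": 0.22, "data": 0.15},
}

# For each topic, list its most heavily weighted words
for topic_id, dist in topics.items():
    top_words = sorted(dist, key=dist.get, reverse=True)[:2]
    print(topic_id, top_words)
```

Now let's train a real LDA model on the Tech news sample: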
I need to specify the number of topics the model has to find; I am going to try with 3:

y = "TECH"
corpus = dtf[dtf["y"]==y]["text_clean"]

## pre-process corpus
lst_corpus = []
for string in corpus:
    lst_words = string.split()
    lst_grams = [" ".join(lst_words[i:i+2]) for i in range(0, len(lst_words), 2)]
    lst_corpus.append(lst_grams)

## map words to an id
id2word = gensim.corpora.Dictionary(lst_corpus)

## create dictionary word:freq
dic_corpus = [id2word.doc2bow(word) for word in lst_corpus]

## train LDA
lda_model = gensim.models.ldamodel.LdaModel(corpus=dic_corpus, id2word=id2word,
    num_topics=3, random_state=123, update_every=1, chunksize=100,
    passes=10, alpha='auto', per_word_topics=True)

## output
lst_dics = []
for i in range(0, 3):
    lst_tuples = lda_model.get_topic_terms(i)
    for tupla in lst_tuples:
        lst_dics.append({"topic": i, "id": tupla[0],
                         "word": id2word[tupla[0]], "weight": tupla[1]})
dtf_topics = pd.DataFrame(lst_dics, columns=['topic','id','word','weight'])

## plot
fig, ax = plt.subplots()
sns.barplot(y="word", x="weight", hue="topic", data=dtf_topics,
            dodge=False, ax=ax).set_title('Main Topics')
ax.set(ylabel="", xlabel="Word Importance")
plt.show()

Trying to capture the content of 6 years in only 3 topics may be a bit hard but, as we can see, everything regarding Apple Inc. ended up in the same topic.

This article has been a tutorial demonstrating how to analyze text data with NLP and extract features for a machine learning model. I showed how to detect the language the data is in, and how to preprocess and clean the text. Then I explained different measures of length, did sentiment analysis with Textblob, and used SpaCy for named-entity recognition. Finally, I explained the differences between traditional word-frequency approaches with Scikit-learn and modern language models using Gensim. Now you know pretty much all the NLP basics needed to start working with text data. I hope you enjoyed it!
Feel free to contact me with questions and feedback, or just to share your interesting projects.

This article is part of the series NLP with Python.
}, { "code": null, "e": 20723, "s": 20260, "text": "lst_words = [\"box office\", \"republican\", \"apple\"]## countlst_grams = [len(word.split(\" \")) for word in lst_words]vectorizer = feature_extraction.text.CountVectorizer( vocabulary=lst_words, ngram_range=(min(lst_grams),max(lst_grams)))dtf_X = pd.DataFrame(vectorizer.fit_transform(dtf[\"text_clean\"]).todense(), columns=lst_words)## add the new features as columnsdtf = pd.concat([dtf, dtf_X.set_index(dtf.index)], axis=1)dtf.head()" }, { "code": null, "e": 20856, "s": 20723, "text": "A nice way to visualize the same information is with a word cloud where the frequency of each tag is shown with font size and color." }, { "code": null, "e": 21067, "s": 20856, "text": "wc = wordcloud.WordCloud(background_color='black', max_words=100, max_font_size=35)wc = wc.generate(str(corpus))fig = plt.figure(num=1)plt.axis('off')plt.imshow(wc, cmap=None)plt.show()" }, { "code": null, "e": 21395, "s": 21067, "text": "Recently, the NLP field has developed new linguistic models that rely on a neural network architecture instead of more traditional n-gram models. These new techniques are a set of language modelling and feature learning techniques where words are transformed into vectors of real numbers, hence they are called word embeddings." }, { "code": null, "e": 21773, "s": 21395, "text": "Word embedding models map a certain word to a vector by building a probability distribution of what tokens would appear before and after the selected word. These models have quickly become popular because, once you have real numbers instead of strings, you can perform calculations. For example, to find words of the same context, one can simply calculate the vectors distance." }, { "code": null, "e": 22250, "s": 21773, "text": "There are several Python libraries that work with this kind of model. SpaCy is one, but since we have already used it, I will talk about another famous package: Gensim. 
An open-source library for unsupervised topic modeling and natural language processing that uses modern statistical machine learning. Using Gensim, I will load a pre-trained GloVe model. GloVe (Global Vectors) is an unsupervised learning algorithm for obtaining vector representations for words of size 300." }, { "code": null, "e": 22299, "s": 22250, "text": "nlp = gensim_api.load(\"glove-wiki-gigaword-300\")" }, { "code": null, "e": 22347, "s": 22299, "text": "We can use this object to map words to vectors:" }, { "code": null, "e": 22370, "s": 22347, "text": "word = \"love\"nlp[word]" }, { "code": null, "e": 22386, "s": 22370, "text": "nlp[word].shape" }, { "code": null, "e": 22827, "s": 22386, "text": "Now let’s see what are the closest word vectors or, to put in another way, the words that mostly appear in similar contexts. In order to plot the vectors in a two-dimensional space, I need to reduce the dimensions from 300 to 2. I am going to do that with t-distributed Stochastic Neighbor Embedding from Scikit-learn. t-SNE is a tool to visualize high-dimensional data that converts similarities between data points to joint probabilities." }, { "code": null, "e": 23515, "s": 22827, "text": "## find closest vectorslabels, X, x, y = [], [], [], []for t in nlp.most_similar(word, topn=20): X.append(nlp[t[0]]) labels.append(t[0])## reduce dimensionspca = manifold.TSNE(perplexity=40, n_components=2, init='pca')new_values = pca.fit_transform(X)for value in new_values: x.append(value[0]) y.append(value[1])## plotfig = plt.figure()for i in range(len(x)): plt.scatter(x[i], y[i], c=\"black\") plt.annotate(labels[i], xy=(x[i],y[i]), xytext=(5,2), textcoords='offset points', ha='right', va='bottom')## add centerplt.scatter(x=0, y=0, c=\"red\")plt.annotate(word, xy=(0,0), xytext=(5,2), textcoords='offset points', ha='right', va='bottom')" }, { "code": null, "e": 23692, "s": 23515, "text": "The Genism package is specialized in topic modeling. 
A topic model is a type of statistical model for discovering the abstract “topics” that occur in a collection of documents." }, { "code": null, "e": 24061, "s": 23692, "text": "I will show how to extract topics using LDA (Latent Dirichlet Allocation): a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. Basically, documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words." }, { "code": null, "e": 24205, "s": 24061, "text": "Let’s see what topics we can extract from Tech news. I need to specify the number of topics the model has to cluster, I am going to try with 3:" }, { "code": null, "e": 25397, "s": 24205, "text": "y = \"TECH\"corpus = dtf[dtf[\"y\"]==y][\"text_clean\"]## pre-process corpuslst_corpus = []for string in corpus: lst_words = string.split() lst_grams = [\" \".join(lst_words[i:i + 2]) for i in range(0, len(lst_words), 2)] lst_corpus.append(lst_grams)## map words to an idid2word = gensim.corpora.Dictionary(lst_corpus)## create dictionary word:freqdic_corpus = [id2word.doc2bow(word) for word in lst_corpus] ## train LDAlda_model = gensim.models.ldamodel.LdaModel(corpus=dic_corpus, id2word=id2word, num_topics=3, random_state=123, update_every=1, chunksize=100, passes=10, alpha='auto', per_word_topics=True) ## outputlst_dics = []for i in range(0,3): lst_tuples = lda_model.get_topic_terms(i) for tupla in lst_tuples: lst_dics.append({\"topic\":i, \"id\":tupla[0], \"word\":id2word[tupla[0]], \"weight\":tupla[1]})dtf_topics = pd.DataFrame(lst_dics, columns=['topic','id','word','weight']) ## plotfig, ax = plt.subplots()sns.barplot(y=\"word\", x=\"weight\", hue=\"topic\", data=dtf_topics, dodge=False, ax=ax).set_title('Main Topics')ax.set(ylabel=\"\", xlabel=\"Word Importance\")plt.show()" }, { "code": null, "e": 25553, "s": 25397, "text": "Trying to capture the content of 6 years in only 3 
topics may be a bit hard, but as we can see, everything regarding Apple Inc. ended up in the same topic." }, { "code": null, "e": 25686, "s": 25553, "text": "This article has been a tutorial to demonstrate how to analyze text data with NLP and extract features for a machine learning model." }, { "code": null, "e": 26128, "s": 25686, "text": "I showed how to detect the language the data is in, and how to preprocess and clean text. Then I explained different measures of length, did sentiment analysis with Textblob, and we used SpaCy for named-entity recognition. Finally, I explained the differences between traditional word frequency approaches with Scikit-learn and modern language models using Gensim. Now you know pretty much all the NLP basics to start working with text data." }, { "code": null, "e": 26246, "s": 26128, "text": "I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects." }, { "code": null, "e": 26264, "s": 26246, "text": "👉 Let’s Connect 👈" } ]
Program to find minimum swaps needed to group all 1s together in Python
Suppose we have a binary string; we have to find the minimum number of swaps needed to group all 1’s together at any place in the string. So if the input is like "10101001101", then the output will be 3, as a possible solution is "00000111111".

To solve this, we will follow these steps −

data := a list of bits from the given string
set one := 0, n := length of data array
make an array summ of size n, and fill this with 0, set summ[0] := data[0]
one := one + data[0]
for i in range 1 to n - 1
   summ[i] := summ[i - 1] + data[i]
   one := one + data[i]
ans := one
left := 0, right := one - 1
while right < n
   if left is 0, then temp := summ[right], otherwise temp := summ[right] - summ[left - 1]
   ans := minimum of ans, one - temp
   increase right and left by 1
return ans

Let us see the following implementation to get a better understanding −

class Solution(object):
   def solve(self, data):
      data = list(map(int, list(data)))
      one = 0
      n = len(data)
      summ = [0 for i in range(n)]
      summ[0] = data[0]
      one += data[0]
      for i in range(1, n):
         summ[i] = summ[i - 1] + data[i]
         one += data[i]
      ans = one
      left = 0
      right = one - 1
      while right < n:
         if left == 0:
            temp = summ[right]
         else:
            temp = summ[right] - summ[left - 1]
         ans = min(ans, one - temp)
         right += 1
         left += 1
      return ans

ob = Solution()
print(ob.solve("10101001101"))

Input: "10101001101"
Output: 3
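As a cross-check on the logic above, the same answer can be computed without the prefix-sum array by sliding a fixed window whose length equals the total number of 1s; the swaps needed are the 1s left outside the best window. A hedged alternative sketch, not part of the original solution:

```python
def min_swaps(bits: str) -> int:
    data = [int(b) for b in bits]
    ones = sum(data)                  # window size = total number of 1s
    if ones <= 1:
        return 0                      # nothing to group
    window = sum(data[:ones])         # 1s inside the first window
    best = window
    for right in range(ones, len(data)):
        # slide the window one position to the right
        window += data[right] - data[right - ones]
        best = max(best, window)
    return ones - best                # 1s still outside the best window

print(min_swaps("10101001101"))  # -> 3, matching the worked example
```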
4 Machine Learning System Architectures | by Kurtis Pykes | Towards Data Science
I’m a big advocate for learning by doing, and it just so turns out that it’s probably the best way to learn machine learning. If you’re a machine learning engineer (and possibly a Data Scientist), you may never quite feel fulfilled when a project ends at the model evaluation phase of the Machine Learning Workflow, as your typical Kaggle competition would — and no, I have nothing against Kaggle, I think it’s a great platform to improve your modeling skills. The next step is to put the model into production, which is generally a topic that is left out of most courses on Machine Learning.

Disclaimer: This article was written using notes from the Deployment of Machine Learning Models course (Udemy)

Once a Machine Learning model has been trained and exported, the next step is coming up with a method to persist the model. For example, we can serialize the model object with pickle — see the code below.

import pickle
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# loading the data set
dataset = load_breast_cancer(as_frame=True)
df = pd.DataFrame(data=dataset.data)
df["target"] = dataset.target

# separating into X and y
X = df.iloc[:, :-1]
y = df.iloc[:, -1]

# splitting into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.75, shuffle=True, random_state=24)

# training a logistic regression model
lr = LogisticRegression(C=0.01)
lr.fit(X_train, y_train)

# serializing the model
lr_dump = pickle.dumps(lr)

# making predictions with the deserialized model
clf = pickle.loads(lr_dump)
y_preds = clf.predict(X_test)

Note: when writing code to take a Machine Learning model into production, an engineer modularizes the above code into training and inference scripts to abide by software engineering best practices.
The end of the training script is defined by the point at which the model is dumped into a pickle file, whereas the inference script begins once the model has been loaded to make predictions on new instances.

Other methods of serving a model include:

MLFlow — MLFlow provides a common serialization format that integrates with various machine learning frameworks in Python

Language-agnostic exchange formats (i.e. ONNX, PFA, and PMML)

It’s always good to be aware of the other options we have since there are some downsides to the popular pickle (or joblib) formats.

Embedded model

Pre-Trained: Yes
On-the-Fly Predictions: Yes

In this scenario, the trained model is embedded in the application as a dependency. For example, we can install the model into the application with a pip installation, or the trained model can be pulled into the application at build time from file storage (i.e. AWS S3).

An example of this is if we had a Flask application that we used to predict the value of a property. Our Flask application would serve an HTML page which we could use as an interface to collect information about a property a user would like to know an estimated value for. The Flask application would take those details as inputs, forward them to the model to make a prediction, then return the result to the client.

In the example above, the predictions will be returned to the user’s browser; however, we can vary this method to embed the model on a mobile device.

This approach is much simpler than other approaches, but there’s a simplicity-flexibility trade-off. For instance, to make a model update, the entire application would have to be redeployed (on a mobile device, a new version would need to be released).

Model deployed as a separate service

Pre-Trained: Yes
On-the-Fly Predictions: Yes

In this architecture, the trained machine learning model becomes a dependency of a separate Machine Learning API service.
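A minimal sketch of such a dedicated prediction service is below, assuming Flask is available. The /predict route, the "features" payload field, and the tiny inline model are illustrative assumptions, not details taken from the article:

```python
# Minimal sketch of a dedicated prediction microservice.
import pickle

from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Stand-in for a model trained elsewhere and shipped as a pickled artifact
_artifact = pickle.dumps(LogisticRegression().fit([[0.0], [1.0]], [0, 1]))

# The service deserializes the artifact once, at start-up
model = pickle.loads(_artifact)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]       # e.g. [0.7]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})
```

The main application (or any other client) would then POST feature values to this service over HTTP, which is what keeps model deployments independent of application deployments.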
Extending on from the Flask application to predict the value of a property example above, when the form is submitted to the Flask application server, that server makes another call — possibly using REST, gRPC, SOAP, or messaging (i.e. RabbitMQ) — to a separate microservice that has been dedicated to machine learning and is exclusively responsible for returning the prediction.

Differing from the embedded model approach, this method compromises simplicity for flexibility. Since we’d have to maintain a separate service, there is increased complexity with this architecture, but there is more flexibility since the model deployments are now independent of the main application deployments. Additionally, the model microservice or main server can be scaled separately to deal with higher volumes of traffic or to potentially serve other applications.

Model published to a stream

Pre-Trained: Yes
On-the-Fly Predictions: Yes

In this architecture, our training process publishes a trained model to a streaming platform (i.e. Apache Kafka) which will be consumed at runtime by the application, instead of at build time — the application is eligible to subscribe to any model updates.

The recurring theme of the simplicity-flexibility trade-off occurs here once again. Maintaining the infrastructure required for this architecture demands much more engineering sophistication; however, ML models can be updated without any applications needing to be redeployed — this is because the model can be ingested at runtime.

To extend on our predicting the value of a property example, the application would be able to consume from a dedicated topic on the designated streaming service (i.e. Apache Kafka).

Offline (batch) predictions

Pre-Trained: Yes
On-the-Fly Predictions: No

This approach is the only asynchronous approach we will be exploring. Predictions are triggered and run asynchronously, either by the application or as a scheduled job. The predictions will be collected and stored — this is what the application uses to serve the predictions via a user interface.
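The asynchronous, stored-predictions flow just described can be sketched in a few lines. The store, the property IDs, and the scoring rule below are made-up stand-ins for illustration only:

```python
# Sketch of the stored-predictions pattern: a scheduled job precomputes
# predictions into a store; the application only ever reads from it.
prediction_store = {}  # stand-in for a real database or cache

def batch_predict(model, properties):
    """Scheduled job: score every known property and persist the results."""
    for prop_id, features in properties.items():
        prediction_store[prop_id] = model(features)

def serve_prediction(prop_id):
    """Application path: no model call at request time, just a lookup."""
    return prediction_store.get(prop_id)  # None if not scored yet

# Made-up model: price proportional to floor area
model = lambda features: 100 * features["sqm"]
batch_predict(model, {"prop-1": {"sqm": 50}, "prop-2": {"sqm": 80}})
print(serve_prediction("prop-1"))  # -> 5000
```

Because the stored results sit between the model and the user, they can be inspected, or even corrected, before anyone sees them.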
Many in industry have moved away from this architecture, but it’s much more forgiving in the sense that predictions can be inspected before being returned to a user. Therefore, we reduce the risk of our ML system making errors, since predictions are not made on the fly.

With regard to the simplicity-flexibility trade-off, this system compromises simplicity for more flexibility.

Sometimes, merely doing some analysis, building multiple models, and evaluating them can get quite boring. If that is the case for you, then learning to put machine learning models into production could be the next step, and it’s a formidable skill to have in your toolbox. To emphasize, there is no such thing as a “best” system architecture for your model deployment. There is only the best set of trade-offs that meets your system’s requirements.

Thank You for Reading!

Connect with me: LinkedIn | Twitter

If you enjoy reading stories like this one and wish to support my writing, consider becoming a Medium member. With a $5 a month commitment, you unlock unlimited access to stories on Medium. If you use my sign-up link, I’ll receive a small commission.
How to make textarea 100% without overflow when padding is present?
14 Jan, 2022

Given an HTML document containing a <textarea> element whose parent element contains some padding, the task is to make the textarea width 100% without overflowing. The idea is to create a div with the class name “wrapper”. Inside that <div> element, we create a textarea with a certain number of columns and rows. In this case, it is 30 and 15 respectively. After that, we set the width property to 100% to make the textarea width 100%.

HTML Code: The HTML code contains a <textarea> element that holds rows and columns values.

html

<!DOCTYPE html>
<html>

<head>
    <title>
        How to make 100% textarea without
        overflowing when padding is present?
    </title>
</head>

<body>
    <div class="wrapper">
        <textarea cols="30" rows="15"></textarea>
    </div>
</body>

</html>

CSS Code: The CSS code contains the wrapper class that holds the padding, margin, and background-color properties. The textarea element contains the width property.

CSS

<style>
    .wrapper {
        padding: 20px;
        margin: 15px 0;
        background-color: #0f9d58;
    }

    textarea {
        font-size: 20px;
        width: 100%;
    }
</style>

Complete Code: In this section, we will combine the above two sections to make the width of the <textarea> element 100% without overflowing when padding is present.

html

<!DOCTYPE html>
<html>

<head>
    <title>
        How to make 100% textarea without
        overflowing when padding is present?
    </title>
    <style>
        .wrapper {
            padding: 20px;
            margin: 15px 0;
            background-color: #0f9d58;
        }

        textarea {
            font-size: 20px;
            width: 100%;
        }
    </style>
</head>

<body>
    <div class="wrapper">
        <textarea cols="30" rows="15"></textarea>
    </div>
</body>

</html>

Output:
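Note that this approach works because the padding sits on the wrapper rather than on the textarea. If the padding had to sit on the textarea itself, width: 100% alone would overflow, because by default padding is added on top of the specified width. A hedged alternative sketch using the standard box-sizing fix:

```css
/* Alternative sketch: pad the textarea itself and let width: 100%
   include the padding and border in the element's total width */
textarea {
    width: 100%;
    padding: 20px;
    box-sizing: border-box;
}
```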
Choose X such that (A xor X) + (B xor X) is minimized
26 Nov, 2021 Given two integers A and B. The task is to choose an integer X such that (A xor X) + (B xor X) is the minimum possible. Examples: Input: A = 2, B = 3 Output: X = 2, Sum = 1 Input: A = 7, B = 8 Output: X = 0, Sum = 15 A simple solution is to generate all possible sum by taking xor of A and B with all possible values of X ≤ min(A, B). To generate all possible sums it would take O(N) time where N = min(A, B). An efficient solution is based on the fact that the number X will contain the set bits only at that index where both A and B contain a set bit such that after xor operation with X that bit will be unset. This would take only O(Log N) time.Other cases: If at a particular index one or both the numbers contain 0 (unset bit) and the number X contains 1 (set bit) then 0 will be set after xor with X in A and B then the sum couldn’t be minimized. Below is the implementation of the above approach: C++ Java Python3 C# PHP Javascript // C++ implementation of the approach#include <iostream>using namespace std; // Function to return the integer X such that// (A xor X) + (B ^ X) is minimizedint findX(int A, int B){ int j = 0, x = 0; // While either A or B is non-zero while (A || B) { // Position at which both A and B // have a set bit if ((A & 1) && (B & 1)) { // Inserting a set bit in x x += (1 << j); } // Right shifting both numbers to // traverse all the bits A >>= 1; B >>= 1; j += 1; } return x;} // Driver codeint main(){ int A = 2, B = 3; int X = findX(A, B); cout << "X = " << X << ", Sum = " << (A ^ X) + (B ^ X); return 0;} // Java implementation of the approachclass GFG { // Function to return the integer X such that // (A xor X) + (B ^ X) is minimized static int findX(int A, int B) { int j = 0, x = 0; // While either A or B is non-zero while (A != 0 || B != 0) { // Position at which both A and B // have a set bit if ((A % 2 == 1) && (B % 2 == 1)) { // Inserting a set bit in x x += (1 << j); } // Right shifting both numbers to // traverse all the bits A 
>>= 1; B >>= 1; j += 1; } return x; } // Driver code public static void main(String[] args) { int A = 2, B = 3; int X = findX(A, B); System.out.println( "X = " + X + ", Sum = " + ((A ^ X) + (B ^ X))); }} // This code has been contributed by 29AjayKumar # Python 3 implementation of the approach # Function to return the integer X such that# (A xor X) + (B ^ X) is minimized def findX(A, B): j = 0 x = 0 # While either A or B is non-zero while (A or B): # Position at which both A and B # have a set bit if ((A & 1) and (B & 1)): # Inserting a set bit in x x += (1 << j) # Right shifting both numbers to # traverse all the bits A >>= 1 B >>= 1 j += 1 return x # Driver codeif __name__ == '__main__': A = 2 B = 3 X = findX(A, B) print("X =", X, ", Sum =", (A ^ X) + (B ^ X)) # This code is contributed by# Surendra_Gangwar // C# implementation of the approachusing System; class GFG { // Function to return the integer X such that // (A xor X) + (B ^ X) is minimized static int findX(int A, int B) { int j = 0, x = 0; // While either A or B is non-zero while (A != 0 || B != 0) { // Position at which both A and B // have a set bit if ((A % 2 == 1) && (B % 2 == 1)) { // Inserting a set bit in x x += (1 << j); } // Right shifting both numbers to // traverse all the bits A >>= 1; B >>= 1; j += 1; } return x; } // Driver code public static void Main(String[] args) { int A = 2, B = 3; int X = findX(A, B); Console.WriteLine( "X = " + X + ", Sum = " + ((A ^ X) + (B ^ X))); }} // This code has been contributed by 29AjayKumar <?php// PHP implementation of the approach // Function to return the integer X such that// (A xor X) + (B ^ X) is minimizedfunction findX($A, $B){ $j = 0; $x = 0; // While either A or B is non-zero while ($A || $B) { // Position at which both A and B // have a set bit if (($A & 1) && ($B & 1)) { // Inserting a set bit in x $x += (1 << $j); } // Right shifting both numbers to // traverse all the bits $A >>= 1; $B >>= 1; $j += 1; } return $x;} // Driver code $A = 2; $B = 
3; $X = findX($A, $B); echo "X = " , $X , ", Sum = ", ($A ^ $X) + ($B ^ $X); // This code is contributed by ajit.?> <script> // Javascript implementation of the approach // Function to return the integer X such that // (A xor X) + (B ^ X) is minimized function findX(A, B) { let j = 0, x = 0; // While either A or B is non-zero while (A != 0 || B != 0) { // Position at which both A and B // have a set bit if ((A % 2 == 1) && (B % 2 == 1)) { // Inserting a set bit in x x += (1 << j); } // Right shifting both numbers to // traverse all the bits A >>= 1; B >>= 1; j += 1; } return x; } let A = 2, B = 3; let X = findX(A, B); document.write("X = " + X + ", Sum = " + ((A ^ X) + (B ^ X))); // This code is contributed by suresh07.</script> X = 2, Sum = 1 Time Complexity: O(log(max(A, B))) Auxiliary Space: O(1) Most Efficient Approach: Using the idea that X will contain only the set bits as A and B, X = A & B. On replacing X, the above equation becomes (A ^ (A & B)) + (B ^ (A & B)) which further equates to A^B. Proof: Given (A ^ X) + (B ^ X) Taking X = (A & B), we have (A ^ (A & B)) + (B ^ (A & B)) (using x ^ y = x'y + y'x ) = (A'(A & B) + A(A & B)') + (B'(A & B) + B(A & B)') (using (x & y)' = x' + y') = (A'(A & B) + A(A' + B')) + (B'(A & B) + B(A' + B')) (A'(A & B) = A'A & A'B = 0, B'(A & B) = B'A & B'B = 0) = (A(A' + B')) + (B(A' + B')) = (AA' + AB') + (BA' + BB') (using xx' = x'x = 0) = (AB') + (BA') = (A ^ B) Click here to know more about Boolean Properties. 
Below is the implementation of the above approach: C++ Java Python3 C# Javascript // c++ implementation of above approach#include <iostream>using namespace std; // finding Xint findX(int A, int B) { return A & B;} // finding Sumint findSum(int A, int B) { return A ^ B;} // Driver codeint main(){ int A = 2, B = 3; cout << "X = " << findX(A, B) << ", Sum = " << findSum(A, B); return 0;} // This code is contributed by yashbeersingh42 // Java implementation of above approachimport java.io.*; class GFG{ // finding X public static int findX(int A, int B) { return A & B; } // finding Sum public static int findSum(int A, int B) { return A ^ B; } // Driver Code public static void main(String[] args) { int A = 2, B = 3; System.out.print("X = " + findX(A, B) + ", Sum = " + findSum(A, B)); }}// This code is contributed by yashbeersingh42 # Python3 implementation of above approach # finding Xdef findX(A, B): return A & B # finding Sumdef findSum(A, B): return A ^ B # Driver codeA, B = 2, 3print("X =", findX(A, B) , ", Sum =" , findSum(A, B)) # This code is contributed by divyeshrabadiya07 // C# implementation of above approachusing System; class GFG{ // Finding Xpublic static int findX(int A, int B){ return A & B;} // Finding Sumpublic static int findSum(int A, int B){ return A ^ B;} // Driver Codepublic static void Main(String[] args){ int A = 2, B = 3; Console.Write("X = " + findX(A, B) + ", Sum = " + findSum(A, B));}} // This code is contributed by Princi Singh <script> // Javascript implementation of the approach // finding X function findX(A, B) { return A & B; } // finding Sum function findSum(A, B) { return A ^ B; } let A = 2, B = 3; document.write("X = " + findX(A, B) + ", Sum = " + findSum(A, B)); </script> X = 2, Sum = 1 Time Complexity: O(1) Auxiliary Space: O(1) 29AjayKumar SURENDRA_GANGWAR jit_t sravanikatasani yashbeersingh42 princi singh divyeshrabadiya07 divyesh072019 suresh07 surindertarika1234 simmytarika5 subhammahato348 Bitwise-XOR Bit Magic Mathematical 
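As an illustrative sanity check (not part of the original article's solutions), a brute-force search over small pairs confirms both claims above: the minimum of (A ^ X) + (B ^ X) is attained at X = A & B, and that minimum equals A ^ B.

```javascript
// Brute-force verification (illustrative only): for every pair (A, B)
// below a small bound, search all candidate X and confirm that the
// minimum of (A ^ X) + (B ^ X) equals A ^ B and is reached at X = A & B.
function minSumBruteForce(A, B) {
    // Any set bit of X outside A | B only increases the sum,
    // so searching X in [0, A | B] is enough.
    let best = Infinity;
    for (let X = 0; X <= (A | B); X++) {
        best = Math.min(best, (A ^ X) + (B ^ X));
    }
    return best;
}

let allMatch = true;
for (let A = 0; A < 32; A++) {
    for (let B = 0; B < 32; B++) {
        const atAndB = (A ^ (A & B)) + (B ^ (A & B)); // sum at X = A & B
        if (minSumBruteForce(A, B) !== (A ^ B) || atAndB !== (A ^ B)) {
            allMatch = false;
        }
    }
}
console.log(allMatch); // prints: true
```

This matches the article's examples: for A = 2, B = 3 the minimum is 1, and for A = 7, B = 8 it is 15.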
StringBuffer deleteCharAt() Method in Java with Examples
06 Dec, 2018

The java.lang.StringBuffer.deleteCharAt() is a built-in Java method which removes the char at the specified position in this sequence, so that the sequence is reduced by one char.

Syntax:

public StringBuffer deleteCharAt(int indexpoint)

Parameters: The method accepts a single parameter indexpoint of integer type which refers to the index of the char to be removed.

Return Value: The function returns this StringBuffer object after removing the character.

Exception: If the indexpoint is negative or greater than or equal to length(), then the method throws StringIndexOutOfBoundsException.

Examples:

Input : StringBuffer = worldofgeeks
        int indexpoint = 4
Output : worlofgeeks

Below programs illustrate the working of the StringBuffer.deleteCharAt() method:

Program 1:

// Java program to demonstrate working
// of StringBuffer.deleteCharAt() method

import java.lang.*;

public class Geeks {

    public static void main(String[] args)
    {
        StringBuffer sbf
            = new StringBuffer("raghav");
        System.out.println("String buffer before deletion = "
                           + sbf);

        // Deleting the character at indexpoint 5
        sbf.deleteCharAt(5);

        System.out.println("After deletion new StringBuffer = "
                           + sbf);
    }
}

Output:

String buffer before deletion = raghav
After deletion new StringBuffer = ragha

Program 2:

// Java program to demonstrate working
// of StringBuffer.deleteCharAt() method

import java.lang.*;

public class Geeks {

    public static void main(String[] args)
    {
        StringBuffer sbf
            = new StringBuffer("GeeksforGeeks");
        System.out.println("String buffer before deletion = "
                           + sbf);

        // Deleting the character at indexpoint 5
        sbf.deleteCharAt(5);

        System.out.println("After deletion new StringBuffer = "
                           + sbf);
    }
}

Output:

String buffer before deletion = GeeksforGeeks
After deletion new StringBuffer = GeeksorGeeks

Program 3:

// Java program to demonstrate working
// of StringBuffer.deleteCharAt() method

import java.lang.*;

public class Geeks {

    public static void main(String[] args)
    {
        StringBuffer sbf = new
StringBuffer("Abhishek");
        System.out.println("String buffer before deletion = "
                           + sbf);

        // Deleting the character at indexpoint -5
        sbf.deleteCharAt(-5);

        System.out.println("After deletion new StringBuffer = "
                           + sbf);
    }
}

Output:

Exception in thread "main" java.lang.StringIndexOutOfBoundsException:
    String index out of range: -5
    at java.lang.AbstractStringBuilder.deleteCharAt(AbstractStringBuilder.java:824)
    at java.lang.StringBuffer.deleteCharAt(StringBuffer.java:441)
    at Geeks.main(Geeks.java:14)
Javascript | Math.sign( ) Function
09 Feb, 2022

The Math.sign() function is a built-in function in JavaScript and is used to know the sign of a number, indicating whether the number specified is negative or positive.

Syntax:

Math.sign(number)

Parameters: This function accepts a single parameter number which represents the number whose sign you want to know.

Return Value: The Math.sign() function returns five different values as described below:

It returns 1 if the argument passed is a positive number.
It returns -1 if the argument passed is a negative number.
It returns 0 if the argument passed is a positive zero.
It returns -0 if the argument passed is a negative zero.
If none of the above cases match, it returns NaN.

Examples:

Input : Math.sign(2)
Output : 1

Input : Math.sign(-2)
Output : -1

Input : Math.sign(0)
Output : 0

Input : Math.sign(-0)
Output : -0

Input : Math.sign("haa")
Output : NaN

Below programs illustrate the Math.sign() function in JavaScript:

Example 1: When a positive number is passed as an argument.

<script type="text/javascript">
    document.write(Math.sign(2));
</script>

Output:

1

Example 2: When a negative number is passed as an argument.

<script type="text/javascript">
    document.write(Math.sign(-2));
</script>

Output:

-1

Example 3: When a positive zero is passed as an argument.

<script type="text/javascript">
    document.write(Math.sign(0));
</script>

Output:

0

Example 4: When a negative zero is passed as an argument.

<script type="text/javascript">
    document.write(Math.sign(-0));
</script>

Output:

-0

Example 5: When a non-numeric value is passed as an argument (note the quotes: an unquoted haa would be an undefined variable and would throw a ReferenceError instead of producing NaN).

<script type="text/javascript">
    document.write(Math.sign("haa"));
</script>

Output:

NaN

Supported Browsers: The browsers supported by the JavaScript Math.sign() function are listed below:

Google Chrome 38 and above
Firefox 25 and above
Opera 25 and above
Safari 9 and above
Edge 12 and above

ysachin2314
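As a short supplementary check (not part of the original article), the five return values can be verified programmatically. Object.is is used because -0 and 0 compare equal under ===, and because -0 stringifies as "0", which is why writing Math.sign(-0) to a page displays 0:

```javascript
// Checking all five Math.sign() return values. Object.is
// distinguishes -0 from 0, which === cannot do.
console.log(Math.sign(2));                  // 1
console.log(Math.sign(-2));                 // -1
console.log(Object.is(Math.sign(0), 0));    // true: positive zero
console.log(Object.is(Math.sign(-0), -0));  // true: negative zero
console.log(Math.sign("haa"));              // NaN: non-numeric input
```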
"s": 2120, "text": null }, { "code": null, "e": 2217, "s": 2209, "text": "Output:" }, { "code": null, "e": 2220, "s": 2217, "text": "-0" }, { "code": null, "e": 2374, "s": 2220, "text": "Example 5: When an invalid number is passed as an argument:<script type=\"text/javascript\"> document.write(Math.sign(haa)); </script>Output:NaN" }, { "code": "<script type=\"text/javascript\"> document.write(Math.sign(haa)); </script>", "e": 2459, "s": 2374, "text": null }, { "code": null, "e": 2467, "s": 2459, "text": "Output:" }, { "code": null, "e": 2471, "s": 2467, "text": "NaN" }, { "code": null, "e": 2568, "s": 2471, "text": "Supported Browsers: The browsers supported by Javascript Math.sign( ) Function are listed below:" }, { "code": null, "e": 2595, "s": 2568, "text": "Google Chrome 38 and above" }, { "code": null, "e": 2616, "s": 2595, "text": "Firefox 25 and above" }, { "code": null, "e": 2635, "s": 2616, "text": "Opera 25 and above" }, { "code": null, "e": 2654, "s": 2635, "text": "Safari 9 and above" }, { "code": null, "e": 2672, "s": 2654, "text": "Edge 12 and above" }, { "code": null, "e": 2684, "s": 2672, "text": "ysachin2314" }, { "code": null, "e": 2700, "s": 2684, "text": "javascript-math" }, { "code": null, "e": 2711, "s": 2700, "text": "JavaScript" } ]
Find the levels of factor of a given vector in R
23 May, 2021

Factors are objects used to categorize data and display it as levels. The underlying values can be integer or character. Factors can be used to find the unique values in a given vector; the resulting set of unique values is known as the levels. Factors are useful in statistical analysis of categorical data. In this article, we are going to find the levels of a factor for a given vector in R.

Example:

Input: data=[female,female,male,male,other]
Output: factor=female,male,other

We can convert vector data into a factor by using the factor() function, and from the result we can get the levels of the given vector with levels().

Syntax: factor(vector_name)

The result is a factor with its levels.

Example 1:

R

# create a vector with subjects data
data=c("java","sql","python","java",
       "security","python")

print(data)

# apply factor
a=factor(data)

print(a)

Output:

Example 2:

R

# create a vector with numeric data
data=c(1,2,3,4,5,3,4,5,6,2,7,8,6)

print(data)

# apply factor
a=factor(data)

print(a)

Output:

Example 3:

R

# create a vector with all types of data
data=c(1,2,3,4,5,3,4,"male","female","male",
       "other",4.5,6.7,4.5)

print(data)

# apply factor
a=factor(data)

print(a)

Output:
[ { "code": null, "e": 28, "s": 0, "text": "\n23 May, 2021" }, { "code": null, "e": 479, "s": 28, "text": "Factors are the objects which are used to categorize the data and display the data as levels. The objects can be integer, character. They can be used to find the unique values in the given vector. The resulting data is known as levels. Factors are useful in statistical analysis in analyzing the categorical data. It is used to find the levels of the given vector. In this article, we are going to find the levels of factor in the given vector in R. " }, { "code": null, "e": 488, "s": 479, "text": "Example:" }, { "code": null, "e": 495, "s": 488, "text": "Input:" }, { "code": null, "e": 532, "s": 495, "text": "data=[female,female,male,male,other]" }, { "code": null, "e": 540, "s": 532, "text": "Output:" }, { "code": null, "e": 566, "s": 540, "text": "factor=female,male,other." }, { "code": null, "e": 687, "s": 566, "text": "We can convert the vector data into factor by using factor() function, with this we can get levels() of the given vector" }, { "code": null, "e": 695, "s": 687, "text": "Syntax:" }, { "code": null, "e": 715, "s": 695, "text": "factor(vector_name)" }, { "code": null, "e": 740, "s": 715, "text": "Result is factor levels." 
}, { "code": null, "e": 749, "s": 740, "text": "Example:" }, { "code": null, "e": 751, "s": 749, "text": "R" }, { "code": "# create a vector with subjects datadata=c(\"java\",\"sql\",\"python\",\"java\", \"security\",\"python\") print(data) # apply factora=factor(data) print(a)", "e": 904, "s": 751, "text": null }, { "code": null, "e": 912, "s": 904, "text": "Output:" }, { "code": null, "e": 923, "s": 912, "text": "Example2: " }, { "code": null, "e": 925, "s": 923, "text": "R" }, { "code": "# create a vector with numeric datadata=c(1,2,3,4,5,3,4,5,6,2,7,8,6) print(data) # apply factora=factor(data) print(a)", "e": 1047, "s": 925, "text": null }, { "code": null, "e": 1055, "s": 1047, "text": "Output:" }, { "code": null, "e": 1066, "s": 1055, "text": "Example 3:" }, { "code": null, "e": 1068, "s": 1066, "text": "R" }, { "code": "# create a vector with all type of datadata=c(1,2,3,4,5,3,4,\"male\",\"female\",\"male\", \"other\",4.5,6.7,4.5) print(data) # apply factora=factor(data) print(a)", "e": 1232, "s": 1068, "text": null }, { "code": null, "e": 1240, "s": 1232, "text": "Output:" }, { "code": null, "e": 1247, "s": 1240, "text": "Picked" }, { "code": null, "e": 1265, "s": 1247, "text": "R Factor-Programs" }, { "code": null, "e": 1275, "s": 1265, "text": "R-Factors" }, { "code": null, "e": 1286, "s": 1275, "text": "R Language" }, { "code": null, "e": 1297, "s": 1286, "text": "R Programs" }, { "code": null, "e": 1395, "s": 1297, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1447, "s": 1395, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 1505, "s": 1447, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 1540, "s": 1505, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 1578, "s": 1540, "text": "How to Change Axis Scales in R Plots?" 
}, { "code": null, "e": 1627, "s": 1578, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 1685, "s": 1627, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 1734, "s": 1685, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 1777, "s": 1734, "text": "Replace Specific Characters in String in R" }, { "code": null, "e": 1815, "s": 1777, "text": "Merge DataFrames by Column Names in R" } ]
Left pad an integer in Java with zeros
Let us see an example first to understand how an integer looks with left padding −

       888     //left padding with spaces
0000000999     //left padding with 7 zeros

Let us see an example to left pad a number with zero −

public class Demo {
   public static void main(String[] args) {
      int val = 9899;
      System.out.println(String.format("%05d", val));
   }
}

Output:

09899

Let us see another example that pads a greater number of zeros −

public class Demo {
   public static void main(String[] args) {
      int val = 9899;
      System.out.println(String.format("%010d", val));
   }
}

Output:

0000009899
[ { "code": null, "e": 1145, "s": 1062, "text": "Let us see an example first to understand how an integer looks with left padding −" }, { "code": null, "e": 1215, "s": 1145, "text": "888 //left padding with spaces\n0000000999 //left padding with 7 zeros" }, { "code": null, "e": 1270, "s": 1215, "text": "Let us see an example to left pad a number with zero −" }, { "code": null, "e": 1281, "s": 1270, "text": " Live Demo" }, { "code": null, "e": 1427, "s": 1281, "text": "public class Demo {\n public static void main(String[] args) {\n int val = 9899;\n System.out.println(String.format(\"%05d\",val));\n }\n}" }, { "code": null, "e": 1433, "s": 1427, "text": "09899" }, { "code": null, "e": 1498, "s": 1433, "text": "Let us see another example that pads a greater number of zeros −" }, { "code": null, "e": 1509, "s": 1498, "text": " Live Demo" }, { "code": null, "e": 1657, "s": 1509, "text": "public class Demo {\n public static void main(String[] args) {\n int val = 9899;\n System.out.println(String.format(\"%010d\", val));\n }\n}" }, { "code": null, "e": 1668, "s": 1657, "text": "0000009899" } ]
How to save a plot in Seaborn with Python (Matplotlib)?
To save a plot in Seaborn, we can use the savefig() method. Follow the steps given below −

Set the figure size and adjust the padding between and around the subplots.

Make a two-dimensional, size-mutable, potentially heterogeneous tabular data.

Plot pairwise relationships in the dataset.

Save the plot into a file using the savefig() method.

import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

df = pd.DataFrame(np.random.random((5, 5)), columns=["a", "b", "c", "d", "e"])
sns_pp = sns.pairplot(df)
sns_pp.savefig("sns-heatmap.png")

When we execute the code, it will create the following plot and save it as "sns-heatmap.png" in the project folder.
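As a side note, savefig() also accepts resolution and cropping options, and works the same way on a plain Matplotlib figure. A minimal sketch — the dpi and bbox_inches values here are arbitrary choices, not taken from the example above:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(7.50, 3.50))
ax.plot([0, 1, 2], [0, 1, 4])

# save at a higher resolution and trim the surrounding whitespace
out = os.path.join(tempfile.gettempdir(), "plot.png")
fig.savefig(out, dpi=150, bbox_inches="tight")
print(os.path.exists(out))  # -> True
```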
[ { "code": null, "e": 1122, "s": 1062, "text": "To save a plot in Seaborn, we can use the savefig() method." }, { "code": null, "e": 1198, "s": 1122, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1276, "s": 1198, "text": "Make a two-dimensional, size-mutable, potentially heterogeneous tabular data." }, { "code": null, "e": 1318, "s": 1276, "text": "Plot pairwise relationships in a dataset." }, { "code": null, "e": 1368, "s": 1318, "text": "Save the plot into a file using savefig() method." }, { "code": null, "e": 1410, "s": 1368, "text": "To display the figure, use show() method." }, { "code": null, "e": 1731, "s": 1410, "text": "import seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\n\ndf = pd.DataFrame(np.random.random((5, 5)), columns=[\"a\", \"b\", \"c\", \"d\", \"e\"])\nsns_pp = sns.pairplot(df)\nsns_pp.savefig(\"sns-heatmap.png\")" }, { "code": null, "e": 1847, "s": 1731, "text": "When we execute the code, it will create the following plot and save it as \"sns-heatmap.png\" in\nthe project folder." } ]
Reloading modules in Python?
The reload() function reloads a previously imported module. This comes in handy in a situation where you repeatedly run a test script during an interactive session: the session always uses the first version of the modules we are developing, even if we have made changes to the code. In that scenario we need to make sure that the modules are reloaded.

The argument passed to reload() must be a module object which has been successfully imported before.

import importlib
importlib.reload(sys)

>>> import sys
>>> import importlib
>>> importlib.reload(sys)
<module 'sys' (built-in)>

However, if you are trying to reload a module which has not been imported before, you will get an error.

>>> import importlib
>>> importlib.reload(sys)
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
importlib.reload(sys)
NameError: name 'sys' is not defined

A few points to understand about what happens when reload() is executed −

The module's code is recompiled and the module-level code re-executed, defining a new set of objects which are bound to names in the module's dictionary, by reusing the loader which originally loaded the module. However, the init function of the module is not run again.

The old objects are only reclaimed after their reference counts come down to zero.

The names in the module namespace are rebound to the new objects, if any.

Other references to the old objects (like names external to the module) are not necessarily rebound to the new objects and must be updated in each namespace where they occur if that is required.
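To see reload() pick up a source change end-to-end, here is a self-contained sketch — the module name greet and its file contents are invented for the demo:

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of bytecode caches

# build a throwaway module on disk; "greet" is a hypothetical module name
moddir = tempfile.mkdtemp()
sys.path.insert(0, moddir)
with open(os.path.join(moddir, "greet.py"), "w") as f:
    f.write("MESSAGE = 'hello'\n")

import greet
print(greet.MESSAGE)                    # -> hello

# change the source on disk, then reload to pick up the new definition
with open(os.path.join(moddir, "greet.py"), "w") as f:
    f.write("MESSAGE = 'hello again'\n")

importlib.reload(greet)
print(greet.MESSAGE)                    # -> hello again
```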
[ { "code": null, "e": 1406, "s": 1062, "text": "The reload() - reloads a previously imported module or loaded module. This comes handy in a situation where you repeatedly run a test script during an interactive session, it always uses the first version of the modules we are developing, even we have mades changes to the code. In that scenario we need to make sure that modules are reloaded." }, { "code": null, "e": 1505, "s": 1406, "text": "The argument passed to the reload() must be a module object which is successfully imported before." }, { "code": null, "e": 1544, "s": 1505, "text": "import importlib\nimportlib.reload(sys)" }, { "code": null, "e": 1632, "s": 1544, "text": ">>> import sys\n>>> import importlib\n>>> importlib.reload(sys)\n<module 'sys' (built-in)>" }, { "code": null, "e": 1732, "s": 1632, "text": "However, if you are trying to reload a module which is not imported before, you might get an Error." }, { "code": null, "e": 1913, "s": 1732, "text": ">>> import importlib\n>>> importlib.reload(sys)\nTraceback (most recent call last):\nFile \"<pyshell#1>\", line 1, in <module>\nimportlib.reload(sys)\nNameError: name 'sys' is not defined" }, { "code": null, "e": 1967, "s": 1913, "text": "Few points to understand, when reload() is executed -" }, { "code": null, "e": 2243, "s": 1967, "text": "Python module’s code is recompiled and the module-level code re-executed, defining a new set of objects which are bound to names in the module’s dictionary by reusing the loader which originally loaded the module. However, the init function of the modules is not loaded again" }, { "code": null, "e": 2519, "s": 2243, "text": "Python module’s code is recompiled and the module-level code re-executed, defining a new set of objects which are bound to names in the module’s dictionary by reusing the loader which originally loaded the module. 
However, the init function of the modules is not loaded again" }, { "code": null, "e": 2603, "s": 2519, "text": "The old objects are only reclaimed after their reference counts comes down to zero." }, { "code": null, "e": 2687, "s": 2603, "text": "The old objects are only reclaimed after their reference counts comes down to zero." }, { "code": null, "e": 2754, "s": 2687, "text": "The names in the module namespace is changed to new object if any." }, { "code": null, "e": 2821, "s": 2754, "text": "The names in the module namespace is changed to new object if any." }, { "code": null, "e": 3015, "s": 2821, "text": "Other references of the old objects (like names external to the module) are not necessarily refers to the new objects and must be updated in each namespace where they occur if that is required." }, { "code": null, "e": 3209, "s": 3015, "text": "Other references of the old objects (like names external to the module) are not necessarily refers to the new objects and must be updated in each namespace where they occur if that is required." } ]
Face Mask Detection using YOLOv5. COVID-19: Guide to build a face mask... | by Àlex Escolà Nixon | Towards Data Science
In this post, we'll be going through a step-by-step guide on how to train a YOLOv5 model to detect whether people are wearing a mask or not on a video stream.

We'll start by going through some basic concepts behind object detection models and motivate the use of YOLOv5 for this problem.

From there, we'll review the dataset we'll be using to train the model, and see how it can be adapted so that it conforms with the darknet format.

I'll then show you how you can train a YOLOv5 model using the downloaded dataset and run inference on images or video files.

Find a notebook with the code used for this section here.

This project was done alongside Fran Pérez, also a contributor of this community. I'd recommend reading some of his excellent posts!

Also, you might find it helpful to start with a more general overview of object detection and its main algorithms. If that is the case, I'd recommend reading this blog post for a good starting point.

"YOLO", referring to "You Only Look Once", is a family of object detection models introduced by Joseph Redmon with a 2016 publication, "You Only Look Once: Unified, Real-Time Object Detection".

Since then, several newer versions have been released, the first three of which were released by Joseph Redmon. On June 29, Glenn Jocher released the latest version, YOLOv5, claiming significant improvements with respect to its predecessor.

The most interesting improvement is its "blazingly fast inference". As posted in this article by Roboflow, running on a Tesla P100, YOLOv5 achieves inference times of up to 0.007 seconds per image, meaning 140 FPS!

Using an object detection model such as YOLOv5 is most likely the simplest and most reasonable approach to this problem. This is because we're limiting the computer vision pipeline to a single step, since object detectors are trained to detect a:

Bounding box and a

Corresponding label

This is precisely what we're trying to achieve for this problem.
In our case, the bounding boxes will be the detected faces, and the corresponding labels will indicate whether the person is wearing a mask or not.

Alternatively, if we wanted to build our own deep learning model, it would be more complex, since it would have to be 2-fold: we'd need a model to detect faces in an image, and a second model to detect the presence or absence of a face mask in the found bounding boxes.

A drawback of doing so, apart from the complexity, is that the inference time would be much slower, especially in images with many faces.

We now know which model we can use for this problem. The next natural, and probably most important, aspect is... what about the data??

Luckily, there is a publicly available dataset in Kaggle named Face Mask Detection, which will make our life way easier.

The dataset contains 853 images and their corresponding annotation files, indicating whether a person is wearing a mask correctly, incorrectly or not wearing it.

Below is a sample from the dataset:

In this case, we'll simplify the above to detect if a person is wearing the mask or not (we'll see how in the Roboflow section).

We now know everything we need to get started, so it's time to get hands-on!

The first thing we need to do is clone the repository from ultralytics/yolov5 and install all required dependencies:

!git clone https://github.com/ultralytics/yolov5  # clone repo
!pip install -U -r yolov5/requirements.txt  # install dependencies

The main files that we'll need from the repository are structured as follows:

yolov5              # project root
├── models          # yolov5 models
├── train.py        # training script
└── detect.py       # inference script

The 📁models folder contains several .yml files with the different proposed models. In it we can find 4 different models, ordered from smaller to larger (in terms of the amount of parameters): yolov5-s, yolov5-m, yolov5-l and yolov5-x. For a detailed comparison, see here.
train.py and detect.py will be the scripts that we'll be calling to train the model and predict on new images/videos respectively.

In order to train the model, a necessary step will be to change the format of the .xml annotation files so that they conform with the darknet format. In the linked github thread, we'll see that each image has to have a .txt file associated with it, with rows in the format:

<object-class> <x> <y> <width> <height>

Each line will represent the annotation for one object in the image, where <x> <y> are the coordinates of the centre of the bounding box, and <width> <height> the respective width and height.

For example, an img1.jpg must have an associated img1.txt containing:

1 0.427234 0.123172 0.191749 0.171239
0 0.183523 0.431238 0.241231 0.174121
1 0.542341 0.321253 0.191289 0.219217

The good news is that this step is made really simple thanks to Roboflow. Roboflow enables us to easily change between annotation formats, as well as to augment our image data and split it into training and validation sets, which will be extremely handy!

This can be done in a few simple steps:

Upload the images and annotations

Choose the train, validation and test proportions you want (train and validation will be enough)

Add an augmentation step choosing from among the existing filters, for instance blur, brightness, rotation etc.

And finally, generate the new images and export to YOLO Darknet format

We should now have a separate folder for each split, train and validation (and test if included), where each of these should contain the .jpg augmented images, the corresponding .txt annotation files and a ._darknet.labels file with the labels in their corresponding order:

mask_weared_incorrect  # label 0
with_mask              # label 1
without_mask           # label 2

This will depend on how you want your model to behave, but as mentioned above, I've decided to simplify the problem by limiting the labels to mask or no mask.
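As an aside, converting a corner-style (Pascal VOC) box in pixels into one of the darknet rows shown above takes just a few lines. A hedged sketch — voc_to_darknet and its argument names are invented here, not part of the original pipeline:

```python
def voc_to_darknet(xmin, ymin, xmax, ymax, img_w, img_h, cls):
    """Convert a pixel corner box into a darknet row:
    <object-class> <x> <y> <width> <height>, all normalized to [0, 1]."""
    x = (xmin + xmax) / 2 / img_w   # normalized box centre x
    y = (ymin + ymax) / 2 / img_h   # normalized box centre y
    w = (xmax - xmin) / img_w       # normalized box width
    h = (ymax - ymin) / img_h       # normalized box height
    return f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# a 100x50 box centred at (200, 100) in a 400x200 image, class 1
print(voc_to_darknet(150, 75, 250, 125, 400, 200, 1))
# -> 1 0.500000 0.500000 0.250000 0.250000
```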
Also, the label mask_weared_incorrect appears very few times in the dataset, so the model would most likely do a poor job at classifying it. This simple script will do:

Now, the numerical labels in the annotation files will map to:

without_mask  # label 0
with_mask     # label 1

The dataset is slightly unbalanced, having more with_mask labels, hence something we can do is augment it with images of people not wearing a mask. This will also help improve the performance of the model significantly, since the dataset we'll be using is quite small.

Find a notebook with the code used for this section here.

One way we could do this is by downloading images from the COCO dataset and then annotating them ourselves. The COCO dataset has an official API, pycocotools, which we can use to download images with person labels:

The above will give us a list of dictionaries with the details and url of the images, which we can then use to download locally with:

The next step will be to obtain the face bounding boxes for these images. A simple way to do this is to use a pre-trained model to detect the faces, and label them in the appropriate format. For that you need to first install pytorch and then facenet-pytorch:

conda install pytorch torchvision torchaudio cpuonly -c pytorch
pip install facenet-pytorch

I also use cv2 to check that the annotations are correct.

To have the annotations in darknet format, we'll need to convert the detections made by facenet-pytorch. We can do this with the function below:

And you can then obtain the annotations for the downloaded images with the following script:

Which also prints the detected annotations, to verify that they are correct. Here are some examples:

Before training the model, we will need to create a data.yaml file, specifying where the training and validation images are and the amount of labels as well as the label names of our training data.
The file should be structured as:

train: train/images
val: valid/images
nc: 2
names: ['bad', 'good']

To train the model we have to run train.py, which takes the following arguments:

img: input image size
batch: batch size
epochs: number of epochs
data: path to the data.yaml file
cfg: model to choose among the preexisting ones in 📁models
weights: initial weights path, defaults to yolov5s.pt
name: renames output folder
device: whether to train on cpu or gpu

In my case, I trained the model with a GPU, but otherwise you'll have to add --device cpu to the arguments to run locally on a CPU. For the rest, you just need to take care of the specified paths:

python ~/github/yolov5/train.py --img 416 --batch 16 --epochs 150 --data data.yaml --cfg yolov5s.yaml --weights '' --name robo4_epoch150_s --adam

Once the training completes, you can see how your model has performed by checking the generated logs, as well as the loss plots saved as a .png file.

Training losses and performance metrics are logged to Tensorboard and also to a runs/exp0/results.txt logfile, which is plotted as results.png after the training completes. Here we see yolov5s.pt trained to 150 epochs (blue), and 300 epochs (orange):

As soon as the first epoch is complete, we will have a mosaic of images showing both the ground truth and prediction results on test images, which will look like:

There's a detailed tutorial in a colab notebook by ultralytics where you'll find the above explained in more detail and available for you to run yourself!

Now that we've trained the model on a dataset of face mask images, it is ready to run inference both on single images and video streams!
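If you'd rather launch runs from Python (for instance, to sweep hyperparameters), the same invocation can be assembled programmatically. A small sketch reusing the argument values from the command above — the actual subprocess call is left out:

```python
# assemble the train.py invocation shown above from a dict of arguments
args = {
    "img": 416, "batch": 16, "epochs": 150,
    "data": "data.yaml", "cfg": "yolov5s.yaml",
    "weights": "''", "name": "robo4_epoch150_s",
}

cmd = ["python", "yolov5/train.py"]
for flag, value in args.items():
    cmd += [f"--{flag}", str(value)]
cmd.append("--adam")  # flag without a value

print(" ".join(cmd))
```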
To run inference, we have to call detect.py, and adjust the following parameters:

weights: weights of the trained model
source: input file/folder to run inference on, 0 for webcam
output: directory to save results
iou-thres: IOU threshold for NMS, defaults to 0.45
conf-thres: object confidence threshold

The last two you'll probably have to tweak a little bit based on the generated results:

iou-thres defines a threshold on the Intersection Over Union metric obtained for a given object. It defaults to 0.45; generally, an IOU higher than 0.5 is considered a good prediction. See this great post by Adrian Rosebrock for more details on this metric.

conf-thres is a measure that determines how confident the model is that a given bounding box contains an object. It equals the IoU times a factor representing the probability of an object. This metric mostly prevents the model from predicting backgrounds. The higher the threshold, the fewer false positives. However, if we set it too high, the model will miss many correct detections on which it has a lower confidence.

After a bit of trial and error, the following settings seemed to yield the best results for this problem:

python ~/github/yolov5/detect.py --weights raw/roboflow4/weights/last_robo4_epoch300_s.pt --source ./office_covid.mp4 --output raw/roboflow4/inference/output --iou-thres 0.3 --conf-thres 0.6

To test the model, I used a video taken with my phone. These are the results:

Here you can see that the model is capable of detecting face masks accurately. Some possible improvements:

More training data diversity. The model struggles to detect face masks under certain conditions, for instance, it tends to confuse a long beard with a mask. This can be mitigated by adding more diversity in the training images and perhaps running additional data augmentation techniques.

Comparing the different YOLOv5 models to check which performs best based on the metrics and also taking the training time into account.
The bigger model YOLOv5x will take significantly longer to train. Thanks a lot for taking the time to read this post and I hope you enjoyed it :)
[ { "code": null, "e": 330, "s": 171, "text": "In this post, we’ll be going through a step-by-step guide on how to train a YOLOv5 model to detect whether people are wearing a mask or not on a video stream." }, { "code": null, "e": 459, "s": 330, "text": "We’ll start by going through some basic concepts behind object detection models and motivate the use of YOLOv5 for this problem." }, { "code": null, "e": 606, "s": 459, "text": "From there, we’ll review the dataset we’ll be using to train the model, and see how it can be adapted so that it conforms with the darknet format." }, { "code": null, "e": 731, "s": 606, "text": "I’ll then show you how you can train a YOLOv5 model using the downloaded dataset and run inference on images or video files." }, { "code": null, "e": 789, "s": 731, "text": "Find a notebook with the code used for this section here." }, { "code": null, "e": 932, "s": 789, "text": "This project was done alongside with Fran Pérez, also a contributor of this community. I’d recommend you to read some of his excellent posts!" }, { "code": null, "e": 1132, "s": 932, "text": "Also, you might find it helpful to start with a more general overview of object detection and its main algorithms. If that is the case, I’d recommend reading this blog post for a good starting point." }, { "code": null, "e": 1324, "s": 1132, "text": "“YOLO”, refering to “You Only Look Once”, is a family of object detection models introduced by Joseph Redmon with a 2016 publication “You Only Look Once: Unified, Real-Time Object Detection”." }, { "code": null, "e": 1565, "s": 1324, "text": "Since then, several newer versions have been released, of which, the first three were released by Joseph Redmon. On June 29, Glenn Jocher released the latest version YOLOv5, claiming significant improvements with respect to its predecessor." }, { "code": null, "e": 1781, "s": 1565, "text": "The most interesting improvement, is its “blazingly fast inference”. 
As posted in this article by Roboflow, running in a Tesla P100, YOLOv5 achieves inference times of up to 0.007 seconds per image, meaning 140 FPS!" }, { "code": null, "e": 2028, "s": 1781, "text": "Using an object detection model such as YOLOv5 is most likely the simplest and most reasonable approach to this problem. This is because we’re limiting the computer vision pipeline to a single step, since object detectors are trained to detect a:" }, { "code": null, "e": 2047, "s": 2028, "text": "Bounding box and a" }, { "code": null, "e": 2067, "s": 2047, "text": "Corresponding label" }, { "code": null, "e": 2280, "s": 2067, "text": "This is precisely what we’re trying to achieve for this problem. In our case, the bounding boxes will be the detected faces, and the corresponding labels will indicate whether the person is wearing a mask or not." }, { "code": null, "e": 2548, "s": 2280, "text": "Alternatively, if we wanted to build our own deep learning model, it would be more complex, since it would have to be 2-fold: we’d need a model to detect faces in an image, and a second model to detect the presence or absence of face mask in the found bounding boxes." }, { "code": null, "e": 2686, "s": 2548, "text": "A drawback of doing so, apart from the complexity, is that the inference time would be much slower, especially in images with many faces." }, { "code": null, "e": 2821, "s": 2686, "text": "We now know which model we can use for this problem. The next natural, and probably most important, aspect is... what about the data??" }, { "code": null, "e": 2942, "s": 2821, "text": "Luckily, there is a publicly available dataset in Kaggle named Face Mask Detection, which will make our life way easier." }, { "code": null, "e": 3104, "s": 2942, "text": "The dataset contains 853 images and their corresponding annotation files, indicating whether a person is wearing a mask correctly, incorrectly or not wearing it." 
}, { "code": null, "e": 3141, "s": 3104, "text": "Bellow is a sample from the dataset:" }, { "code": null, "e": 3270, "s": 3141, "text": "In this case, we’ll simplify the above to detect if a person is wearing the mask or not (we’ll see how in the Roboflow section)." }, { "code": null, "e": 3346, "s": 3270, "text": "We now know everything we need to get started, so its time to get hands-on!" }, { "code": null, "e": 3464, "s": 3346, "text": "The first thing we need to do is clone the repository from ultralytics/yolov5, and install all required dependencies:" }, { "code": null, "e": 3591, "s": 3464, "text": "!git clone https://github.com/ultralytics/yolov5 # clone repo!pip install -U -r yolov5/requirements.txt # install dependencies" }, { "code": null, "e": 3669, "s": 3591, "text": "The main files that we’ll need from the repository are structured as follows:" }, { "code": null, "e": 3802, "s": 3669, "text": "yolov5 # project root├── models # yolov5 models├── train.py # training script└── detect.py # inference script" }, { "code": null, "e": 4074, "s": 3802, "text": "The 📁models folder contains several .yml files with the different proposed models. In it we can find 4 different models, ordered from smaller to larger (in terms of the amount of parameters): yolov5-s, yolov5-m, yolov5-l and yolov5-x. For a detailed comparison, see here." }, { "code": null, "e": 4205, "s": 4074, "text": "train.py and detect.py will be the scripts that we’ll be calling to train the model and predict on new images/videos respectively." }, { "code": null, "e": 4481, "s": 4205, "text": "In order to train the model, a necessary step will be to change the format of the .xml annotation files so that they conform with the darknet format. 
In the linked github thread, we’ll see that each image has to have a .txt file associated with it, with rows with the format:" }, { "code": null, "e": 4521, "s": 4481, "text": "<object-class> <x> <y> <width> <height>" }, { "code": null, "e": 4714, "s": 4521, "text": "Each line will represent the annotation for each object in the image, where <x> <y> are the coordinates of the centre of the bounding box, and <width> <height> the respective width and height." }, { "code": null, "e": 4783, "s": 4714, "text": "For example an img1.jpg must have an associated img1.txt containing:" }, { "code": null, "e": 4895, "s": 4783, "text": "1 0.427234 0.123172 0.191749 0.1712390 0.183523 0.431238 0.241231 0.1741211 0.542341 0.321253 0.191289 0.219217" }, { "code": null, "e": 5151, "s": 4895, "text": "The good news, is that this step is made really simple thanks to Roboflow. Roboflow enables to easily change between annotation formats, as well as as to augment our image data and split it into training and validation sets, which will be extremely handy!" }, { "code": null, "e": 5187, "s": 5151, "text": "This can be done in 5 simple steps:" }, { "code": null, "e": 5221, "s": 5187, "text": "Upload the images and annotations" }, { "code": null, "e": 5318, "s": 5221, "text": "Choose the train, validation and test proportions you want (train and validation will be enough)" }, { "code": null, "e": 5430, "s": 5318, "text": "Add an augmentation step choosing from among the existing filters, for instance blur, brightness, rotation etc." 
}, { "code": null, "e": 5501, "s": 5430, "text": "And finally, generate the new images and export to YOLO Darknet format" }, { "code": null, "e": 5775, "s": 5501, "text": "We should now have a separate folder for each split, train and validation (and test if included), where each of these should contain the .jpg augmented images, the corresponding .txt annotation files and a ._darknet.labels file with the labels in their corresponding order:" }, { "code": null, "e": 5869, "s": 5775, "text": "mask_weared_incorrect # label 0with_mask # label 1without_mask # label 2" }, { "code": null, "e": 6169, "s": 5869, "text": "This will depend on how you want your model to behave, but as mentioned above, I’ve decided to simplify the problem by limiting the labels to mask or no mask. Also, the label mask_weared_incorrect appears very few times in the dataset, so the model would most likely do a poor job at classifying it." }, { "code": null, "e": 6197, "s": 6169, "text": "This simple script will do:" }, { "code": null, "e": 6261, "s": 6197, "text": "Now, the numerical labels in the annotation files, will map to:" }, { "code": null, "e": 6324, "s": 6261, "text": "without_mask # label 0with_mask # label 1" }, { "code": null, "e": 6590, "s": 6324, "text": "The dataset is slightly unbalanced, having more with_mask labels, hence something we can do is augment with images of people not wearing a mask. This will also help improve significantly the performance of the model, since the dataset we’ll be using is quite small." }, { "code": null, "e": 6648, "s": 6590, "text": "Find a notebook with the code used for this section here." }, { "code": null, "e": 6859, "s": 6648, "text": "One way we could do this is by downloading images from COCO dataset and then annotating them ourselves. 
The COCO dataset has an official API, pycocotools, which we can use to download images with person labels:" }, { "code": null, "e": 6993, "s": 6859, "text": "The above will give us a list of dictionaries with the details and url of the images, which we can then use to download locally with:" }, { "code": null, "e": 7182, "s": 6993, "text": "The next step will be to obtain the face bounding boxes for these images. For this a simple way is to use a pre-trained model to detect the faces, and label them in the appropriate format." }, { "code": null, "e": 7252, "s": 7182, "text": "For that you need to first install pytorch and then facenet-pytorch :" }, { "code": null, "e": 7343, "s": 7252, "text": "conda install pytorch torchvision torchaudio cpuonly -c pytorchpip install facenet-pytorch" }, { "code": null, "e": 7547, "s": 7343, "text": "I also use cv2 to check that the annotations are correct. To have the annotations in darknet format, we’ll need to convert the detections made by facenet-pytorch. We can do this with the function bellow:" }, { "code": null, "e": 7640, "s": 7547, "text": "And you can then obtain the annotations for the downloaded images with the following script:" }, { "code": null, "e": 7741, "s": 7640, "text": "Which also prints the detected annotations, to verify that they are correct. Here are some examples:" }, { "code": null, "e": 7967, "s": 7741, "text": "Before training the model, we will need to create a data.yml, specifying where the training and validation images are and the amount of labels as well as the label names of out training data. 
The file should be structured as:" }, { "code": null, "e": 8031, "s": 7967, "text": "train: train/imagesval: valid/imagesnc: 2names: ['bad', 'good']" }, { "code": null, "e": 8112, "s": 8031, "text": "To train the model we have to run train.py, which takes the following arguments:" }, { "code": null, "e": 8134, "s": 8112, "text": "img: input image size" }, { "code": null, "e": 8152, "s": 8134, "text": "batch: batch size" }, { "code": null, "e": 8177, "s": 8152, "text": "epochs: number of epochs" }, { "code": null, "e": 8209, "s": 8177, "text": "data: path to the data.yml file" }, { "code": null, "e": 8263, "s": 8209, "text": "cfg: model to choose among the preexisting in 📁models" }, { "code": null, "e": 8317, "s": 8263, "text": "weights: initial weights path, defaults to yolov5s.pt" }, { "code": null, "e": 8345, "s": 8317, "text": "name: renames output folder" }, { "code": null, "e": 8384, "s": 8345, "text": "device: Wheter to train on cpu or gpu." }, { "code": null, "e": 8582, "s": 8384, "text": "In my case, I trained the model with a GPU, but otherwise you’ll have to add to the arguments --device cpu to run locally on a CPU. For the rest, you just need to take care of the specified routes:" }, { "code": null, "e": 8728, "s": 8582, "text": "python ~/github/yolov5/train.py --img 416 --batch 16 --epochs 150 --data data.yaml --cfg yolov5s.yaml --weights ‘’ --name robo4_epoch150_s --adam" }, { "code": null, "e": 8878, "s": 8728, "text": "Once the training completes, you can see how your model has performed by checking the generated logs, as well as the loss plots saved as a .png file." }, { "code": null, "e": 9051, "s": 8878, "text": "Training losses and performance metrics are logged to Tensorboard and also to a runs/exp0/results.txt logfile, which is plotted as results.png after the training completes." 
}, { "code": null, "e": 9129, "s": 9051, "text": "Here we see yolov5s.pt trained to 150 epochs (blue), and 300 epochs (orange):" }, { "code": null, "e": 9291, "s": 9129, "text": "As soon as the first epoch is complete we will have a mosaic of images showing both the ground truth and prediction results on test images, which will look like:" }, { "code": null, "e": 9446, "s": 9291, "text": "There’s a detailed tutorial on a colab notebook by ultralytics where you’ll find the above explained in more detail and available for you to run yourself!" }, { "code": null, "e": 9583, "s": 9446, "text": "Now that we’ve trained the model on a dataset of face mask images, it is ready to run inference both on single images and video streams!" }, { "code": null, "e": 9665, "s": 9583, "text": "To run inference, we have to call detect.py, and adjust the following parameters:" }, { "code": null, "e": 9703, "s": 9665, "text": "weights: weights of the trained model" }, { "code": null, "e": 9763, "s": 9703, "text": "source: input file/folder to run inference on, 0 for webcam" }, { "code": null, "e": 9797, "s": 9763, "text": "output: directory to save results" }, { "code": null, "e": 9848, "s": 9797, "text": "iou-thres: IOU threshold for NMS, defaults to 0.45" }, { "code": null, "e": 9888, "s": 9848, "text": "conf-thres: object confidence threshold" }, { "code": null, "e": 9976, "s": 9888, "text": "The two last you’ll probably have to tweak a little bit based on the generated results:" }, { "code": null, "e": 10233, "s": 9976, "text": "iou-thres defines a threshold on the Intersection Over Union metric obtained for a given object. It defaults to 0.45 , generally a IOU higher than 0.5 is considered a good prediction. See this great post by Adrian Rosebrock for more details on this metric." }, { "code": null, "e": 10657, "s": 10233, "text": "conf-thres is a measure that determines how confident the model is that a given bounding box contains an object. 
It equals to the IoU times a factor representing the probability of an object. This metric mostly prevents the model from predicting backgrounds. The higher the threshold, the lesser false positives. However, if we set it too high the model will miss many correct detections on which it has a lower confidence." }, { "code": null, "e": 10760, "s": 10657, "text": "After a few trial and errors, the following setting seemed to yield the best results for this problem:" }, { "code": null, "e": 10951, "s": 10760, "text": "python ~/github/yolov5/detect.py --weights raw/roboflow4/weights/last_robo4_epoch300_s.pt --source ./office_covid.mp4 --output raw/roboflow4/inference/output --iou-thres 0.3 --conf-thres 0.6" }, { "code": null, "e": 11029, "s": 10951, "text": "To test the model, I used a video taken with my phone. These are the results:" }, { "code": null, "e": 11108, "s": 11029, "text": "Here you can see that the model is capable of detecting face masks accurately." }, { "code": null, "e": 11396, "s": 11108, "text": "More data training diversity. The model struggles to detect face masks under certain conditions, for instance, it tends to confuse a long beard with a mask. This can be mitigated by adding more diversity in the training images and perhaps running additional data augmentation techniques." }, { "code": null, "e": 11598, "s": 11396, "text": "Comparing the different YOLOv5 models to check which performs best based on the metrics and also taking the training time into account. The bigger model YOLOv5x will take significantly longer to train." } ]
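The record above walks through writing darknet-format annotation rows (`<object-class> <x> <y> <width> <height>`) from detected face boxes. As a rough sketch of that conversion — assuming corner-format `(x1, y1, x2, y2)` pixel boxes as returned by detectors like facenet-pytorch, and assuming the coordinates are normalized by the image size (the sample rows shown are all below 1, which suggests this) — one hypothetical helper could look like:

```python
def to_darknet_row(label, box, img_w, img_h):
    """Render one darknet annotation row: '<class> <cx> <cy> <w> <h>'.

    `box` is a hypothetical (x1, y1, x2, y2) corner box in pixels; the
    centre point and box size are expressed as fractions of the image
    dimensions, matching the sample rows in the article.
    """
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w   # normalized centre x
    cy = (y1 + y2) / 2 / img_h   # normalized centre y
    w = (x2 - x1) / img_w        # normalized width
    h = (y2 - y1) / img_h        # normalized height
    return f"{label} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A 100x50 face with top-left corner at (150, 200) in a 416x416 image
print(to_darknet_row(0, (150, 200, 250, 250), 416, 416))
# → 0 0.480769 0.540865 0.240385 0.120192
```

Each detected face in an image would contribute one such row to the image's `.txt` file.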
java.util.regex.Matcher.quoteReplacement()
The java.util.regex.Matcher.quoteReplacement(String s) method returns a literal replacement String for the specified String.

Following is the declaration for java.util.regex.Matcher.quoteReplacement(String s) method.

public static String quoteReplacement(String s)

s − The string to be literalized.

A literal string replacement.

The following example shows the usage of java.util.regex.Matcher.quoteReplacement(String s) method.

package com.tutorialspoint;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MatcherDemo {
   private static String REGEX = "dog";
   private static String INPUT = "The dog says meow " + "All dogs say meow.";
   private static String REPLACE = "cat$";

   public static void main(String[] args) {
      Pattern pattern = Pattern.compile(REGEX);

      // get a matcher object
      Matcher matcher = pattern.matcher(INPUT);

      try {
         // the line below throws an exception: "$" in the replacement
         // string is parsed as the start of a group reference
         INPUT = matcher.replaceAll(REPLACE);
      } catch(Exception e) {
         System.out.println("Exception: " + e.getMessage());
      }
      // quoteReplacement() escapes "$" and "\" so REPLACE is used literally
      INPUT = matcher.replaceAll(Matcher.quoteReplacement(REPLACE));
      System.out.println(INPUT);
   }
}

Let us compile and run the above program; this will produce the following result −

Exception: Illegal group reference: group index is missing
The cat$ says meow All cat$s say meow.
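The `$`-escaping pitfall this example demonstrates is not unique to Java. As an illustrative aside (not part of the Matcher API), Python's `re.sub` treats backslash sequences in the replacement string the same way, and passing a callable plays roughly the role of `quoteReplacement`:

```python
import re

INPUT = "The dog says meow All dogs say meow."
REPLACE = r"cat\1"  # "\1" is parsed as a group reference in a repl string

# Naive substitution fails, mirroring Java's "Illegal group reference"
try:
    re.sub("dog", REPLACE, INPUT)
except re.error as e:
    print("Exception:", e)

# A callable replacement is used verbatim -- the analogue of quoteReplacement()
print(re.sub("dog", lambda m: REPLACE, INPUT))
```

The second call prints the replacement text literally, just as the Java program does after quoting.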
[ { "code": null, "e": 2243, "s": 2124, "text": "The java.time.Matcher.quoteReplacement(String s) method returns a literal replacement String for the specified String." }, { "code": null, "e": 2329, "s": 2243, "text": "Following is the declaration for java.time.Matcher.quoteReplacement(String s) method." }, { "code": null, "e": 2378, "s": 2329, "text": "public static String quoteReplacement(String s)\n" }, { "code": null, "e": 2412, "s": 2378, "text": "s − The string to be literalized." }, { "code": null, "e": 2446, "s": 2412, "text": "s − The string to be literalized." }, { "code": null, "e": 2476, "s": 2446, "text": "A literal string replacement." }, { "code": null, "e": 2570, "s": 2476, "text": "The following example shows the usage of java.time.Matcher.quoteReplacement(String s) method." }, { "code": null, "e": 3343, "s": 2570, "text": "package com.tutorialspoint;\n\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\npublic class MatcherDemo {\n private static String REGEX = \"dog\";\n private static String INPUT = \"The dog says meow \" + \"All dogs say meow.\";\n private static String REPLACE = \"cat$\";\n\n public static void main(String[] args) {\n Pattern pattern = Pattern.compile(REGEX);\n \n // get a matcher object\n Matcher matcher = pattern.matcher(INPUT); \n \n try{\n //Below line will throw exception\n INPUT = matcher.replaceAll(REPLACE);\n } catch(Exception e){\n System.out.println(\"Exception: \"+ e.getMessage());\n }\n INPUT = matcher.replaceAll(matcher.quoteReplacement(REPLACE));\n System.out.println(INPUT);\n }\n}" }, { "code": null, "e": 3426, "s": 3343, "text": "Let us compile and run the above program, this will produce the following result −" }, { "code": null, "e": 3525, "s": 3426, "text": "Exception: Illegal group reference: group index is missing\nThe cat$ says meow All cat$s say meow.\n" }, { "code": null, "e": 3532, "s": 3525, "text": " Print" }, { "code": null, "e": 3543, "s": 3532, "text": " Add Notes" } ]
The Zen of Python: A guide to Python’s design principles | by Vishal Sharma | Towards Data Science
A Pythoneer once wrote 20 aphorisms stating how one can write beautiful and clean code with Python. These aphorisms, which came to be known as “The Zen of Python”, exploded amongst Python developers. Tim Peters wrote these BDFL’s (Benevolent Dictator For Life, a nickname of Python creator Guido van Rossum) 20 guiding principles for Python’s design, but the last aphorism was left for van Rossum to fill in.

Till today, there are 19 aphorisms, as Rossum said that the last aphorism is “some bizarre Tim Peters in-joke.” But what do these aphorisms mean? Let’s crack it up!

The Zen of Python

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

You can open “Easter Egg” in your Python IDE by typing:

import this

Beautiful is better than ugly

Being a developer, writing code and making it run is not the only job to do. Python is known for its readability and simplicity. So, why tamper with it? Writing clean and readable code is an art which is appreciated by other programmers and lets them understand every bit.

def collatz(num):
    if num%2 == 0:
        return num//2
    else:
        return 3 * num + 1

number = input("Enter a number: ")
while number != 1:
    number = collatz(int(number))
    print(number)

In the above code, there is no need to add “else”.
It feels unnecessary at that point. Instead, to make it better and cleaner, you can do something like:

def collatz(num):
    if num%2 == 0:
        return num//2
    return 3 * num + 1

Explicit is better than implicit

The aphorism speaks for itself. It means that it is better to make the code more verbose and explicit. Doing this will make the code readable. Hiding code functionality might have repercussions, as other programmers might not be able to understand the code.

Simple is better than complex

Why build a complex code when you can build a simpler one?

def recursion_reverse(s):
    if(len(s)==1):
        return s
    else:
        return recursion_reverse(s[1:]) + s[0]

original_string = "ABC"
print("Reversed String : ", recursion_reverse(original_string))

You can do the same in just two lines of code too.

my_string = "ABCD"
reversed_string = my_string[::-1]

Complex is better than complicated

The above aphorism stated that simple is better than complex. And, now this one says “Complex is better than complicated”. It is getting confusing! A complex problem might require a complex technique to overcome it, but it shouldn’t be complicated.

Complications in code make fellow programmers confused. Even if the program is complex, try keeping it on the simpler side so it can be easily read and understood. Complicated code will suck effort and time! And, programmers can’t afford to lose time.

Flat is better than nested

Coders like to create sub-modules for each functionality. Let’s say, I build a module named “spam” and added submodule “foo” and in it, I added “baz”. Now, accessing another subpackage from baz will be like:

from spam.foo.baz import eggs

Nesting and hierarchies do not add organization but bureaucracy!

Sparse is better than dense

Sometimes we tend to do everything in just one line of code. You might get the correct output. But, what about readability? Can someone understand it without peeking into it for a long time?

print('\n'.join("%i bytes = %i bits which has %i possible values."
                % (j, j * 8, 256 ** j - 1) for j in (1 << i for i in range(8))))

The same could have been broken into parts and written to get the same result. Spread-out code is a delight and dense code infuriates co-workers.

Readability counts

Do you go to StackOverflow every time you are stuck with your code? So, what code do you take a look at? A readable one or a complicated piece!

It is said that code is read more than written. So please, for the sake of readers, start writing code that is readable, as it is loved by our fellow programmers.

Special cases aren’t special enough to break the rules

A programmer must adhere to good “programming practices”. Let’s say, you import a library but it does not adhere to good programming practices. It must help you in getting your task done, but keep in mind, libraries and languages should always follow the best “programming practices” and be known for their consistency.

Although practicality beats purity

Contradictory to the above “Special cases aren’t special enough to break the rules” aphorism, this one suggests there might be an exception to the given set of rules. The keyword here is “might”.

Errors should never pass silently

When a function returns None, that is the time you are in the trap of silent errors. It is better to get errors, but not silent errors. If you avoid such errors, it can lead to bugs, and further to crashes.

That’s why using the “try-except” statement becomes important in the code. Wherever you think something fishy can go on, you must add this combo and get to know about these silent errors.

Unless explicitly silenced

Sometimes you want to ignore an error in the code; the best practice is to make the error explicitly silenced.

In the face of ambiguity, refuse the temptation to guess

Computers do what we want them to do. Still, these machines have made us superstitious. There is always a reason why a computer code is not behaving properly. Only and only your logic can fix it.
So, break the temptation to guess and blindly try solutions until code works.

There should be one — and preferably only one — obvious way to do it

In Perl programming, there is a saying which says — “There’s more than one way to do it!” Using three or four pieces of code to do the same thing is just like having a double-edged sword. Not only does it consume time to write, but every fellow must be able to read your 3–4 methods without any issues. Flexibility in this practice isn’t worth celebrating.

Although that way may not be obvious at first unless you’re Dutch

Referring to Guido van Rossum, Python creator, this aphorism says that the language rule will not be easy to learn or recall unless and until you are the creator of the code. A joke referring to the BDFL!

Now is better than never

This little aphorism says that code stuck in an infinite or never-ending loop is worse than having code that doesn’t.

Although never is often better than *right* now

It is better to wait for a program to end than to terminate it early and get incorrect results. This aphorism and the above one, “Now is better than never”, resemble each other and contradict each other at the same time.

If the implementation is hard to explain, it’s a bad idea

If you are explaining your code to someone and it is getting hard to explain it to your acquaintances or co-workers, then there is something wrong with your code. It is definitely not simple and is complicated!

If the implementation is easy to explain, it may be a good idea

Just opposite to the above aphorism, this one simply says that simple, readable code is easy to explain. And, if you are able to do it, you are on the right track.

Namespaces are one honking great idea — let’s do more of those!

In Python, a namespace is basically a system to have a unique name for each and every object. It is a great way to access every underlying object.
a = 2
print('id(a) =', id(a))
a = a+1
print('id(a) =', id(a))
print('id(3) =', id(3))

It outputted:

id(a) = 9302208
id(a) = 9302240
id(3) = 9302240

The way Python organizes variables under the hood is just unbelievable. That’s why namespaces are cool!

Conclusion

Python is amazing! That’s why millions of developers use it, code in it, analyze data with it, and build models using it. The guidelines made for it only motivate programmers to write simple, readable, and clean code. Next time you sit for coding, try following these guidelines.

PEP 20 — The Zen of Python
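The “Errors should never pass silently. Unless explicitly silenced.” pair discussed above can be made concrete with the standard library's `contextlib.suppress`. This sketch is illustrative and not from the original article:

```python
from contextlib import suppress

# Bad: a bare except hides *every* failure, including real bugs
def parse_quietly(value):
    try:
        return int(value)
    except Exception:            # silently swallows all errors
        return None

# Better: silence only the one error we expect, and say so explicitly
def parse_explicitly(value):
    with suppress(ValueError):   # only ValueError is deliberately ignored
        return int(value)
    return None

print(parse_explicitly("42"))    # 42
print(parse_explicitly("spam"))  # None
```

With `suppress`, a reader immediately sees which error is being ignored on purpose, while anything unexpected still surfaces loudly.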
[ { "code": null, "e": 572, "s": 171, "text": "A Pythoneer once wrote 20 aphorisms stating how one can write beautiful and clean code with Python. Came to be known as “The Zen of Python”, these aphorisms exploded amongst Python developers. Tim Peters wrote these BDFL’s (Benevolent Dictator For Life, a nickname of Python creator Guido van Rossum) 20 guiding principles for Python’s design but the last aphorism was left for van Rossum to fill in." }, { "code": null, "e": 738, "s": 572, "text": "Till today, there are 19 aphorisms as Rossum said that the last aphorism is “some bizarre Tim Peters in-joke.” But, what do these aphorisms mean?. Let’s crack it up!" }, { "code": null, "e": 756, "s": 738, "text": "The Zen of Python" }, { "code": null, "e": 1561, "s": 756, "text": "Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.Special cases aren't special enough to break the rules.Although practicality beats purity.Errors should never pass silently.Unless explicitly silenced.In the face of ambiguity, refuse the temptation to guess.There should be one-- and preferably only one --obvious way to do it.Although that way may not be obvious at first unless you're Dutch.Now is better than never.Although never is often better than *right* now.If the implementation is hard to explain, it's a bad idea.If the implementation is easy to explain, it may be a good idea.Namespaces are one honking great idea -- let's do more of those!" }, { "code": null, "e": 1617, "s": 1561, "text": "You can open “Easter Egg” in your Python IDE by typing:" }, { "code": null, "e": 1629, "s": 1617, "text": "import this" }, { "code": null, "e": 1659, "s": 1629, "text": "Beautiful is better than ugly" }, { "code": null, "e": 1926, "s": 1659, "text": "Being a developer, writing code and making it run is not the only job to do. 
Python is known for its readability and simplicity. So, why tamper it? Writing clean and readable code is an art which is appreciated by other programmers and let them understand every bit." }, { "code": null, "e": 2120, "s": 1926, "text": "def collatz(num): if num%2 == 0: return num//2 else: return 3 * num + 1number = input(\"Enter a number: \")while number != 1: number = collatz(int(number)) print(number)" }, { "code": null, "e": 2268, "s": 2120, "text": "In the above code, there is no need to add “else”. It feels unnecessary at the point. Else, to make it better and clean, you can do something like:" }, { "code": null, "e": 2351, "s": 2268, "text": "def collatz(num): if num%2 == 0: return num//2 return 3 * num + 1" }, { "code": null, "e": 2384, "s": 2351, "text": "Explicit is better than implicit" }, { "code": null, "e": 2638, "s": 2384, "text": "The aphorism speaks for itself. It means that it is better to make the code more verbose and explicit. Doing this will make the code readable. Hiding code functionality might have repercussions as other programs might not be able to understand the code." }, { "code": null, "e": 2668, "s": 2638, "text": "Simple is better than complex" }, { "code": null, "e": 2727, "s": 2668, "text": "Why build a complex code when you can build a simpler one." }, { "code": null, "e": 2928, "s": 2727, "text": "def recursion_reverse(s): if(len(s)==1): return s else: return recursion_reverse(s[1:]) + s[0]original_string = \"ABC\"print(\"Reversed String : \", recursion_reverse(original_string))" }, { "code": null, "e": 2974, "s": 2928, "text": "You can do the in just two lines of code too." }, { "code": null, "e": 3026, "s": 2974, "text": "my_string = “ABCD”reversed_string = my_string[::-1]" }, { "code": null, "e": 3061, "s": 3026, "text": "Complex is better than complicated" }, { "code": null, "e": 3309, "s": 3061, "text": "The above aphorism stated that simple is better than complex. 
And, now this one says “Complex is better than complicated”. It is getting confusing! A complex problem might require a complex technique to overcome it but it shouldn’t be complicated." }, { "code": null, "e": 3561, "s": 3309, "text": "Complications in code make fellow programmers confused. Even if the program is complex, try keeping it on the simpler side which can be easily read and understand. Complicated code will suck effort and time! And, programmers can’t afford to lose time." }, { "code": null, "e": 3588, "s": 3561, "text": "Flat is better than nested" }, { "code": null, "e": 3739, "s": 3588, "text": "Coders like to create sub-modules for each functionality. Let’s say, I build a module named “spam” and added submodule “foo” and in it, I added “baz”." }, { "code": null, "e": 3796, "s": 3739, "text": "Now, accessing another subpackage from baz will be like:" }, { "code": null, "e": 3826, "s": 3796, "text": "from spam.foo.baz import eggs" }, { "code": null, "e": 3891, "s": 3826, "text": "Nesting and hierarchies do not add organization but bureaucracy!" }, { "code": null, "e": 3919, "s": 3891, "text": "Sparse is better than dense" }, { "code": null, "e": 4110, "s": 3919, "text": "Sometimes we tend to do everything in just one line of code. You might get the correct output. But, what about readability? Can someone understand it without peeking into it for a long time?" }, { "code": null, "e": 4257, "s": 4110, "text": "print('\\n'.join(\"%i bytes = %i bits which has %i possible values.\" % (j, j * 8, 256 ** j - 1) for j in (1 << i for i in range(8))))" }, { "code": null, "e": 4414, "s": 4257, "text": "The same could have broken into parts and could have been written to get the same result. Spread out code is a delight and denser one infuriates co-workers." }, { "code": null, "e": 4433, "s": 4414, "text": "Readability counts" }, { "code": null, "e": 4573, "s": 4433, "text": "Do you go to StackOverflow every time you are stuck with your code? 
So, what code do you take a look at? Readable one or complicated piece!" }, { "code": null, "e": 4738, "s": 4573, "text": "It is said that code is read more than written. So please, for the sake of readers starts writing code that it is readable as it is loved by our fellow programmers." }, { "code": null, "e": 4793, "s": 4738, "text": "Special cases aren’t special enough to break the rules" }, { "code": null, "e": 5109, "s": 4793, "text": "A programmer must adhere to good “programming practices”. Let’s say, you import a library but it does not adhere to good programming practices. It must help you in getting your task done but keep in mind, libraries and languages should always follow the best “programming practices” and known for their consistency." }, { "code": null, "e": 5144, "s": 5109, "text": "Although practicality beats purity" }, { "code": null, "e": 5340, "s": 5144, "text": "Contradictory to the above “Special cases aren’t special enough to break the rules” aphorism, this one suggests there might be an exception to the given set of rules. The keyword here is “might”." }, { "code": null, "e": 5374, "s": 5340, "text": "Errors should never pass silently" }, { "code": null, "e": 5580, "s": 5374, "text": "When a function returns None, that is the time you are in the trap of silent errors. It is better to get errors but not silent errors. If you avoid such errors, it can lead to bugs, and further to crashes." }, { "code": null, "e": 5769, "s": 5580, "text": "That’s why, using the “try-except” statement becomes important in the code. Wherever you think something fishy can go on, you must add this combo and get to know about these silent errors." }, { "code": null, "e": 5796, "s": 5769, "text": "Unless explicitly silenced" }, { "code": null, "e": 5907, "s": 5796, "text": "Sometimes you want to ignore an error in the code, the best practice is to make the error explicitly silenced." 
}, { "code": null, "e": 5964, "s": 5907, "text": "In the face of ambiguity, refuse the temptation to guess" }, { "code": null, "e": 6238, "s": 5964, "text": "Computers do what we want them to do. Still, these machines have made us superstitious. There is always a reason why a computer code is not behaving properly. Only and only your logic can fix it. So, break the temptation to guess and blindly try solutions until code works." }, { "code": null, "e": 6307, "s": 6238, "text": "There should be one — and preferably only one — obvious way to do it" }, { "code": null, "e": 6648, "s": 6307, "text": "In Perl programming, there is a saying which says — “There’s more than one way to do it!” Using three-four codes to do the same thing is just like having double-edged sword. Not only it consumes time to write it but every fellow must be able to read your 3–4 methods without any issues. Flexibility in this practice isn’t worth celebrating." }, { "code": null, "e": 6714, "s": 6648, "text": "Although that way may not be obvious at first unless you’re Dutch" }, { "code": null, "e": 6915, "s": 6714, "text": "Referring to Guido van Rossum, Python creator, this aphorism says that the language rule will not be easy to learn or recall unless and until you are the creator of the code. A joke referring to BDFL!" }, { "code": null, "e": 6940, "s": 6915, "text": "Now is better than never" }, { "code": null, "e": 7068, "s": 6940, "text": "This little aphorism says that code stuck in an infinite loop or in a never-ending loop is worse than having code that doesn’t." }, { "code": null, "e": 7116, "s": 7068, "text": "Although never is often better than *right* now" }, { "code": null, "e": 7335, "s": 7116, "text": "It is better to wait for a program to end than to terminate it early and get incorrect results. This aphorism and the above one “Now is better than never” resemble each other and contradict each other at the same time." 
}, { "code": null, "e": 7393, "s": 7335, "text": "If the implementation is hard to explain, it’s a bad idea" }, { "code": null, "e": 7604, "s": 7393, "text": "If you are explaining your code to acquaintances or co-workers and it is getting hard to explain, then there is something wrong with your code. It is definitely not simple and is complicated!" }, { "code": null, "e": 7668, "s": 7604, "text": "If the implementation is easy to explain, it may be a good idea" }, { "code": null, "e": 7825, "s": 7668, "text": "Just the opposite of the above aphorism, this one simply says that simple, readable code is easy to explain. And, if you are able to do it, you are on the right track." }, { "code": null, "e": 7889, "s": 7825, "text": "Namespaces are one honking great idea — let’s do more of those!" }, { "code": null, "e": 8036, "s": 7889, "text": "In Python, a namespace is basically a system to have a unique name for each and every object. It is a great way to access every underlying object." }, { "code": null, "e": 8118, "s": 8036, "text": "a = 2\nprint('id(a) =', id(a))\na = a+1\nprint('id(a) =', id(a))\nprint('id(3) =', id(3))" }, { "code": null, "e": 8132, "s": 8118, "text": "It outputs:" }, { "code": null, "e": 8178, "s": 8132, "text": "id(a) = 9302208\nid(a) = 9302240\nid(3) = 9302240" }, { "code": null, "e": 8282, "s": 8178, "text": "The way Python organizes variables under the hood is just unbelievable. That’s why namespaces are cool!" }, { "code": null, "e": 8293, "s": 8282, "text": "Conclusion" }, { "code": null, "e": 8574, "s": 8293, "text": "Python is amazing! That’s why millions of developers use it, code in it, analyze data with it, and build models using it. The guidelines made for it only motivate programmers to write simple, readable, and clean code. Next time you sit down to code, try following these guidelines." }, { "code": null, "e": 8601, "s": 8574, "text": "PEP 20 — The Zen of Python" } ]
Apache Commons IO - IOUtils
IOUtils provides utility methods for reading, writing and copying files. The methods work with InputStream, OutputStream, Reader and Writer.

Following is the declaration for the org.apache.commons.io.IOUtils class −

public class IOUtils
   extends Object

The features of IOUtils are given below −

Provides static utility methods for input/output operations.

toXXX() − reads data from a stream.

write() − writes data to a stream.

copy() − copies all data from one stream to another stream.

contentEquals() − compares the contents of two streams.

Here is the input file we need to parse −

Welcome to TutorialsPoint. Simply Easy Learning.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import org.apache.commons.io.IOUtils;
public class IOTester {
   public static void main(String[] args) {
      try {
         //Using BufferedReader
         readUsingTraditionalWay();
         //Using IOUtils
         readUsingIOUtils();
      } catch(IOException e) {
         System.out.println(e.getMessage());
      }
   }
   //reading a file using buffered reader line by line
   public static void readUsingTraditionalWay() throws IOException {
      try(BufferedReader bufferReader = new BufferedReader( new InputStreamReader(
         new FileInputStream("input.txt") ) )) {
         String line;
         while( ( line = bufferReader.readLine() ) != null ) {
            System.out.println( line );
         }
      }
   }
   //reading a file using IOUtils in one go
   public static void readUsingIOUtils() throws IOException {
      try(InputStream in = new FileInputStream("input.txt")) {
         System.out.println( IOUtils.toString( in , "UTF-8") );
      }
   }
}

It will print the following result −

Welcome to TutorialsPoint. Simply Easy Learning.
Welcome to TutorialsPoint. Simply Easy Learning.
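The convenience of IOUtils.toString is that it replaces the read-loop boilerplate with a single call. As a rough, self-contained sketch of the underlying pattern (plain JDK only; this is not the actual Commons IO source, and the StreamSketch/toText names are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Conceptual sketch of a one-shot "whole stream to String" helper,
// i.e. what a call like IOUtils.toString(in, "UTF-8") does for you.
public class StreamSketch {

   // Read the stream chunk by chunk into an in-memory buffer,
   // then decode all the bytes once at the end.
   static String toText(InputStream in, Charset cs) {
      ByteArrayOutputStream buffer = new ByteArrayOutputStream();
      byte[] chunk = new byte[4096];
      int n;
      try {
         while ((n = in.read(chunk)) != -1) {   // -1 signals end of stream
            buffer.write(chunk, 0, n);
         }
      } catch (IOException e) {
         throw new UncheckedIOException(e);
      }
      return new String(buffer.toByteArray(), cs);
   }

   public static void main(String[] args) {
      InputStream in = new ByteArrayInputStream(
            "Welcome to TutorialsPoint. Simply Easy Learning.".getBytes(StandardCharsets.UTF_8));
      // prints: Welcome to TutorialsPoint. Simply Easy Learning.
      System.out.println(toText(in, StandardCharsets.UTF_8));
   }
}
```

The real IOUtils builds many overloads (for Reader, URI, byte[] and different charset arguments) and more careful error handling on top of this basic read-and-decode pattern.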
[ { "code": null, "e": 2544, "s": 2404, "text": "IOUtils provide utility methods for reading, writing and copying files. The methods work with InputStream, OutputStream, Reader and Writer." }, { "code": null, "e": 2615, "s": 2544, "text": "Following is the declaration for org.apache.commons.io.IOUtils Class −" }, { "code": null, "e": 2655, "s": 2615, "text": "public class IOUtils\n extends Object\n" }, { "code": null, "e": 2697, "s": 2655, "text": "The features of IOUtils are given below −" }, { "code": null, "e": 2758, "s": 2697, "text": "Provides static utility methods for input/output operations." }, { "code": null, "e": 2819, "s": 2758, "text": "Provides static utility methods for input/output operations." }, { "code": null, "e": 2855, "s": 2819, "text": "toXXX() − reads data from a stream." }, { "code": null, "e": 2891, "s": 2855, "text": "toXXX() − reads data from a stream." }, { "code": null, "e": 2925, "s": 2891, "text": "write() − write data to a stream." }, { "code": null, "e": 2959, "s": 2925, "text": "write() − write data to a stream." }, { "code": null, "e": 3013, "s": 2959, "text": "copy() − copy all data to a stream to another stream." }, { "code": null, "e": 3067, "s": 3013, "text": "copy() − copy all data to a stream to another stream." }, { "code": null, "e": 3120, "s": 3067, "text": "contentEquals − compare the contents of two streams." }, { "code": null, "e": 3173, "s": 3120, "text": "contentEquals − compare the contents of two streams." }, { "code": null, "e": 3215, "s": 3173, "text": "Here is the input file we need to parse −" }, { "code": null, "e": 3265, "s": 3215, "text": "Welcome to TutorialsPoint. 
Simply Easy Learning.\n" }, { "code": null, "e": 4398, "s": 3265, "text": "import java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport org.apache.commons.io.IOUtils;\npublic class IOTester {\n public static void main(String[] args) {\n try {\n //Using BufferedReader\n readUsingTraditionalWay();\n //Using IOUtils\n readUsingIOUtils();\n } catch(IOException e) {\n System.out.println(e.getMessage());\n }\n }\n //reading a file using buffered reader line by line\n public static void readUsingTraditionalWay() throws IOException {\n try(BufferedReader bufferReader = new BufferedReader( new InputStreamReader(\n new FileInputStream(\"input.txt\") ) )) {\n String line;\n while( ( line = bufferReader.readLine() ) != null ) {\n System.out.println( line );\n }\n }\n }\n //reading a file using IOUtils in one go\n public static void readUsingIOUtils() throws IOException {\n try(InputStream in = new FileInputStream(\"input.txt\")) {\n System.out.println( IOUtils.toString( in , \"UTF-8\") );\n }\n }\n}" }, { "code": null, "e": 4435, "s": 4398, "text": "It will print the following result −" }, { "code": null, "e": 4534, "s": 4435, "text": "Welcome to TutorialsPoint. Simply Easy Learning.\nWelcome to TutorialsPoint. Simply Easy Learning.\n" }, { "code": null, "e": 4541, "s": 4534, "text": " Print" }, { "code": null, "e": 4552, "s": 4541, "text": " Add Notes" } ]
Spring OXM - Quick Guide
The Spring Framework provides Object/XML or O/X mapping using global marshaller/unmarshaller interfaces and makes it easy to switch between O/X mapping frameworks. The process of converting an object to XML is called XML marshalling/serialization, and the conversion from XML back to an object is called XML unmarshalling/deserialization.

The Spring framework provides Marshaller and Unmarshaller interfaces, where the Marshaller interface is responsible for marshalling an object to XML and the Unmarshaller interface deserializes the XML back to an object. Following are the key benefits of using the Spring OXM framework.

Easy Configuration − Using the Spring bean context factory, creation of a marshaller/unmarshaller is quite easy and is configurable without worrying about O/X library structures like JAXB Context, JiBX binding factories etc. A marshaller/unmarshaller can be configured as any other bean.

Consistent Interfacing − Marshaller and Unmarshaller are global interfaces. These interfaces provide an abstraction layer over other O/X mapping frameworks and allow switching between them with little or no code change.

Consistent Exception Handling − All underlying exceptions are mapped to XmlMappingException as the root exception. Thus developers need not worry about each O/X mapping tool's own exception hierarchy.

Marshaller is an interface with a single method, marshal.

public interface Marshaller {
   /**
   * Marshals the object graph with the given root into the provided Result.
   */
   void marshal(Object graph, Result result)
      throws XmlMappingException, IOException;
}

Where graph is any object to be marshalled and result is an instance of the Result tagging interface that represents the XML output. Following are the available types −

javax.xml.transform.dom.DOMResult − Represents org.w3c.dom.Node.

javax.xml.transform.sax.SAXResult − Represents org.xml.sax.ContentHandler.

javax.xml.transform.stream.StreamResult − Represents java.io.File, java.io.OutputStream, or java.io.Writer.

Unmarshaller is an interface with a single method, unmarshal.

public interface Unmarshaller {
   /**
   * Unmarshals the provided Source into an object graph.
   */
   Object unmarshal(Source source)
      throws XmlMappingException, IOException;
}

Where source is an instance of the Source tagging interface that represents the XML input. Following are the available types −

javax.xml.transform.dom.DOMSource − Represents org.w3c.dom.Node.

javax.xml.transform.sax.SAXSource − Represents org.xml.sax.InputSource, and org.xml.sax.XMLReader.

javax.xml.transform.stream.StreamSource − Represents java.io.File, java.io.InputStream, or java.io.Reader.

This chapter will guide you on how to prepare a development environment to start your work with Spring Framework.
It will also teach you how to set up JDK, Maven and Eclipse on your machine before you set up Spring Framework − You can download the latest version of SDK from Oracle's Java site − Java SE Downloads. You will find instructions for installing JDK in downloaded files, follow the given instructions to install and configure the setup. Finally set PATH and JAVA_HOME environment variables to refer to the directory that contains java and javac, typically java_install_dir/bin and java_install_dir respectively. If you are running Windows and have installed the JDK in C:\jdk-11.0.11, you would have to put the following line in your C:\autoexec.bat file. set PATH=C:\jdk-11.0.11;%PATH% set JAVA_HOME=C:\jdk-11.0.11 Alternatively, on Windows NT/2000/XP, you will have to right-click on My Computer, select Properties → Advanced → Environment Variables. Then, you will have to update the PATH value and click the OK button. On Unix (Solaris, Linux, etc.), if the SDK is installed in /usr/local/jdk-11.0.11 and you use the C shell, you will have to put the following into your .cshrc file. setenv PATH /usr/local/jdk-11.0.11/bin:$PATH setenv JAVA_HOME /usr/local/jdk-11.0.11 Alternatively, if you use an Integrated Development Environment (IDE) like Borland JBuilder, Eclipse, IntelliJ IDEA, or Sun ONE Studio, you will have to compile and run a simple program to confirm that the IDE knows where you have installed Java. Otherwise, you will have to carry out a proper setup as given in the document of the IDE. All the examples in this tutorial have been written using Eclipse IDE. So we would suggest you should have the latest version of Eclipse installed on your machine. To install Eclipse IDE, download the latest Eclipse binaries from www.eclipse.org/downloads. Once you download the installation, unpack the binary distribution into a convenient location. For example, in C:\eclipse on Windows, or /usr/local/eclipse on Linux/Unix and finally set PATH variable appropriately. 
Eclipse can be started by executing the following command on a Windows machine, or you can simply double-click on eclipse.exe

%C:\eclipse\eclipse.exe

Eclipse can be started by executing the following command on a Unix (Solaris, Linux, etc.) machine −

$/usr/local/eclipse/eclipse

After a successful startup, if everything is fine then it should display the following result −

In this tutorial, we are using Maven to build and run the Spring-based examples. Follow the Maven - Environment Setup to install Maven.

Using Eclipse, select File → New → Maven Project. Tick Create a simple project (skip archetype selection) and click Next. Enter the details, as shown below −

groupId − com.tutorialspoint

artifactId − springoxm

version − 0.0.1-SNAPSHOT

name − Spring OXM

description − Spring OXM Project

Click the Finish button and a new project will be created. You can check the default content of pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.tutorialspoint</groupId>
   <artifactId>springoxm</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>Spring OXM</name>
   <description>Spring OXM Project</description>
</project>

Now that our project is ready, let us add the following dependencies in pom.xml in the next chapter.
Spring Core Spring OXM JAXB

Update the content of pom.xml to add the Spring Core, Spring OXM and JAXB dependencies as shown below −

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.tutorialspoint</groupId>
   <artifactId>springoxm</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>Spring OXM</name>
   <description>Spring OXM Project</description>
   <properties>
      <org.springframework.version>4.3.7.RELEASE</org.springframework.version>
      <org.hibernate.version>5.2.9.Final</org.hibernate.version>
      <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      <java.version>1.8</java.version>
   </properties>
   <dependencies>
      <dependency>
         <groupId>org.springframework</groupId>
         <artifactId>spring-context</artifactId>
         <version>${org.springframework.version}</version>
         <scope>compile</scope>
      </dependency>
      <dependency>
         <groupId>org.springframework</groupId>
         <artifactId>spring-oxm</artifactId>
         <version>${org.springframework.version}</version>
         <scope>compile</scope>
      </dependency>
      <dependency>
         <groupId>javax.xml.bind</groupId>
         <artifactId>jaxb-api</artifactId>
         <version>2.3.1</version>
      </dependency>
      <dependency>
         <groupId>org.glassfish.jaxb</groupId>
         <artifactId>jaxb-runtime</artifactId>
         <version>2.3.1</version>
         <scope>runtime</scope>
      </dependency>
   </dependencies>
   <build>
      <plugins>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
               <source>${java.version}</source>
               <target>${java.version}</target>
            </configuration>
         </plugin>
      </plugins>
   </build>
</project>

Create a class Student.java with O/X annotations as shown below.
Student.java package com.tutorialspoint.oxm.model; import javax.xml.bind.annotation.XmlAttribute; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Student{ String name; int age; int id; public String getName(){ return name; } @XmlElement public void setName(String name){ this.name = name; } public int getAge(){ return age; } @XmlElement public void setAge(int age){ this.age = age; } public int getId(){ return id; } @XmlAttribute public void setId(int id){ this.id = id; } } Create applicationcontext.xml in src → main → resources with the following content. applicationcontext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:oxm="http://www.springframework.org/schema/oxm" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd"> <oxm:jaxb2-marshaller id="jaxbMarshaller"> <oxm:class-to-be-bound name="com.tutorialspoint.oxm.model.Student"/> </oxm:jaxb2-marshaller> </beans> Create a main class OXMApplication.java with marshaller and unmarshaller objects. The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object. 
OXMApplication.java package com.tutorialspoint.oxm; import java.io.FileInputStream; import java.io.FileWriter; import java.io.IOException; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.oxm.Marshaller; import org.springframework.oxm.Unmarshaller; import org.springframework.oxm.XmlMappingException; import com.tutorialspoint.oxm.model.Student; public class OXMApplication { public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext("applicationcontext.xml"); Marshaller marshaller = (Marshaller)context.getBean("jaxbMarshaller"); Unmarshaller unmarshaller = (Unmarshaller)context.getBean("jaxbMarshaller"); // create student object Student student = new Student(); student.setAge(14); student.setName("Soniya"); try { marshaller.marshal(student, new StreamResult(new FileWriter("student.xml"))); System.out.println("Student marshalled successfully."); FileInputStream is = new FileInputStream("student.xml"); Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is)); System.out.println("Age: " + student1.getAge() + ", Name: " + student1.getName()); } catch(IOException | XmlMappingException ex) { ex.printStackTrace(); } } } Right click in the content area of the file in eclipse and then select Run as java application and verify the output. 
Oct 10, 2021 8:48:12 PM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@1e127982: startup date [Sun Oct 10 20:48:12 IST 2021]; root of context hierarchy Oct 10, 2021 8:48:12 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [applicationcontext.xml] Oct 10, 2021 8:48:13 PM org.springframework.oxm.jaxb.Jaxb2Marshaller createJaxbContextFromClasses INFO: Creating JAXBContext with classes to be bound [class com.tutorialspoint.oxm.model.Student] Student marshalled successfully. Age: 14, Name: Soniya Update the content of pom.xml to have xstream dependencies as shown below − pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.tutorialspoint</groupId> <artifactId>springoxm</artifactId> <version>0.0.1-SNAPSHOT</version> <name>Spring OXM</name> <description>Spring OXM Project</description> <properties> <org.springframework.version>4.3.7.RELEASE</org.springframework.version> <org.hibernate.version>5.2.9.Final</org.hibernate.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${org.springframework.version}</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-oxm</artifactId> <version>${org.springframework.version}</version> <scope>compile</scope> </dependency> <dependency> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> <version>1.4.8</version> <scope>compile</scope> </dependency> 
</dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.1</version> <configuration> <source>${java.version}</source> <target>${java.version}</target> </configuration> </plugin> </plugins> </build> </project> Update the class Student.java as shown below. Student.java package com.tutorialspoint.oxm.model; public class Student { String name; int age; int id; public String getName(){ return name; } public void setName(String name){ this.name = name; } public int getAge(){ return age; } public void setAge(int age){ this.age = age; } public int getId(){ return id; } public void setId(int id){ this.id = id; } } Update applicationcontext.xml in src → main → resources with the following content to use XStreamMarshaller. XStreamMarshaller object can be used for both marshalling and unmarshalling. applicationcontext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:oxm="http://www.springframework.org/schema/oxm" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd"> <bean id="xstreamMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller"> <property name="annotatedClasses" value="com.tutorialspoint.oxm.model.Student"></property> </bean> </beans> Update the main class OXMApplication.java with marshaller and unmarshaller objects. The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object. 
OXMApplication.java package com.tutorialspoint.oxm; import java.io.FileInputStream; import java.io.FileWriter; import java.io.IOException; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.oxm.Marshaller; import org.springframework.oxm.Unmarshaller; import org.springframework.oxm.XmlMappingException; import com.tutorialspoint.oxm.model.Student; public class OXMApplication { public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext("applicationcontext.xml"); Marshaller marshaller = (Marshaller)context.getBean("xstreamMarshaller"); Unmarshaller unmarshaller = (Unmarshaller)context.getBean("xstreamMarshaller"); // create student object Student student = new Student(); student.setAge(14); student.setName("Soniya"); try { marshaller.marshal(student, new StreamResult(new FileWriter("student.xml"))); System.out.println("Student marshalled successfully."); FileInputStream is = new FileInputStream("student.xml"); Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is)); System.out.println("Age: " + student1.getAge() + ", Name: " + student1.getName()); } catch(IOException | XmlMappingException ex) { ex.printStackTrace(); } } } Right click in the content area of the file in eclipse and then select Run as java application and verify the output. 
Oct 11, 2021 9:18:37 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@29ca901e: startup date [Mon Oct 11 09:18:36 IST 2021]; root of context hierarchy Oct 11, 2021 9:18:37 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [applicationcontext.xml] WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by com.thoughtworks.xstream.core.util.Fields (file:/C:/Users/intel/.m2/repository/com/thoughtworks/xstream/xstream/1.4.8/xstream-1.4.8.jar) to field java.util.TreeMap.comparator WARNING: Please consider reporting this to the maintainers of com.thoughtworks.xstream.core.util.Fields WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Student marshalled successfully. 
Age: 14, Name: Soniya Update the content of pom.xml to have castor dependencies as shown below − pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.tutorialspoint</groupId> <artifactId>springoxm</artifactId> <version>0.0.1-SNAPSHOT</version> <name>Spring OXM</name> <description>Spring OXM Project</description> <properties> <org.springframework.version>4.3.7.RELEASE</org.springframework.version> <org.hibernate.version>5.2.9.Final</org.hibernate.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${org.springframework.version}</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-oxm</artifactId> <version>${org.springframework.version}</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.codehaus.castor</groupId> <artifactId>castor-core</artifactId> <version>1.4.1</version> </dependency> <dependency> <groupId>org.codehaus.castor</groupId> <artifactId>castor-xml</artifactId> <version>1.4.1</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.1</version> <configuration> <source>${java.version}</source> <target>${java.version}</target> </configuration> </plugin> </plugins> </build> </project> Add a mapping xml required for castor mapping to map Student class as mappings.xml under src → main → resources folder as shown below. 
mappings.xml <?xml version="1.0"?> <!DOCTYPE mapping PUBLIC "-//EXOLAB/Castor Mapping DTD Version 1.0//EN" "http://castor.org/mapping.dtd"> <mapping> <class name="com.tutorialspoint.oxm.model.Student" auto-complete="true" > <map-to xml="Student"/> <field name="id" type="integer"> <bind-xml name="id" node="attribute"></bind-xml> </field> <field name="name"> <bind-xml name="name"></bind-xml> </field> <field name="age"> <bind-xml name="age" type="int"></bind-xml> </field> </class> </mapping> Update applicationcontext.xml in src → main → resources with the following content to use CastorMarshaller. CastorMarshaller object can be used for both marshalling and unmarshalling. applicationcontext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:oxm="http://www.springframework.org/schema/oxm" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd"> <bean id="castorMarshaller" class="org.springframework.oxm.castor.CastorMarshaller"> <property name="targetClass" value="com.tutorialspoint.oxm.model.Student"></property> <property name="mappingLocation" value="mappings.xml"></property> </bean> </beans> Update the main class OXMApplication.java with marshaller and unmarshaller objects. The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object. 
OXMApplication.java package com.tutorialspoint.oxm; import java.io.FileInputStream; import java.io.FileWriter; import java.io.IOException; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.oxm.Marshaller; import org.springframework.oxm.Unmarshaller; import org.springframework.oxm.XmlMappingException; import com.tutorialspoint.oxm.model.Student; public class OXMApplication { public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext("applicationcontext.xml"); Marshaller marshaller = (Marshaller)context.getBean("castorMarshaller"); Unmarshaller unmarshaller = (Unmarshaller)context.getBean("castorMarshaller"); // create student object Student student = new Student(); student.setAge(14); student.setName("Soniya"); try { marshaller.marshal(student, new StreamResult(new FileWriter("student.xml"))); System.out.println("Student marshalled successfully."); FileInputStream is = new FileInputStream("student.xml"); Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is)); System.out.println("Age: " + student1.getAge() + ", Name: " + student1.getName()); } catch(IOException | XmlMappingException ex) { ex.printStackTrace(); } } } Right click in the content area of the file in eclipse and then select Run as java application and verify the output. 
Oct 11, 2021 9:45:34 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6adede5: startup date [Mon Oct 11 09:45:34 IST 2021]; root of context hierarchy
Oct 11, 2021 9:45:35 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [applicationcontext.xml]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.exolab.castor.xml.BaseXercesOutputFormat (file:/C:/Users/intel/.m2/repository/org/codehaus/castor/castor-xml/1.4.1/castor-xml-1.4.1.jar) to method com.sun.org.apache.xml.internal.serialize.OutputFormat.setMethod(java.lang.String)
WARNING: Please consider reporting this to the maintainers of org.exolab.castor.xml.BaseXercesOutputFormat
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations.
WARNING: All illegal access operations will be denied in a future release
Student marshalled successfully.
Age: 14, Name: Soniya
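Across the JAXB, XStream and Castor examples above, the main class only ever held Marshaller and Unmarshaller references, so swapping backends required changing little more than the bean definition. The following toy sketch illustrates that idea with plain JDK code; the String-based signatures are a simplification of Spring's marshal(Object, Result) / unmarshal(Source), and the NaiveStudentMarshaller class with its hand-rolled XML is hypothetical, not part of Spring:

```java
// Toy illustration of the abstraction Spring OXM provides: calling code talks
// only to Marshaller/Unmarshaller interfaces, so the backend (JAXB, XStream,
// Castor, or this naive one) can be swapped without touching the caller.
public class OxmSketch {

   interface Marshaller { String marshal(Object graph); }
   interface Unmarshaller { Object unmarshal(String xml); }

   static class Student {
      String name;
      int age;
      Student(String name, int age) { this.name = name; this.age = age; }
   }

   // A deliberately naive backend; real backends use reflection, annotations
   // or mapping files instead of string concatenation.
   static class NaiveStudentMarshaller implements Marshaller, Unmarshaller {
      public String marshal(Object graph) {
         Student s = (Student) graph;
         return "<student><name>" + s.name + "</name><age>" + s.age + "</age></student>";
      }
      public Object unmarshal(String xml) {
         String name = between(xml, "<name>", "</name>");
         int age = Integer.parseInt(between(xml, "<age>", "</age>"));
         return new Student(name, age);
      }
      private static String between(String s, String open, String close) {
         int i = s.indexOf(open) + open.length();
         return s.substring(i, s.indexOf(close, i));
      }
   }

   public static void main(String[] args) {
      NaiveStudentMarshaller backend = new NaiveStudentMarshaller();
      Marshaller marshaller = backend;       // caller sees only the interfaces
      Unmarshaller unmarshaller = backend;

      String xml = marshaller.marshal(new Student("Soniya", 14));
      System.out.println(xml);
      Student round = (Student) unmarshaller.unmarshal(xml);
      System.out.println("Age: " + round.age + ", Name: " + round.name);
   }
}
```

Because main depends only on the two interfaces, replacing NaiveStudentMarshaller with any other implementation leaves the marshalling/unmarshalling code unchanged, which is exactly the benefit the bean-configured Jaxb2Marshaller, XStreamMarshaller and CastorMarshaller examples demonstrate.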
[ { "code": null, "e": 2223, "s": 1904, "text": "The Spring Framework provides Object/XML or O/X mapping using global marshaller/unmarshaller interfaces and allows to switch O/X mapping frameworks easily. This process of converting an object to XML is called XML Marshalling/Serialization and conversion from XML to object is called XML Demarshalling/Deserialization." }, { "code": null, "e": 2485, "s": 2223, "text": "Spring framework provides a Marshaller and UnMarshaller interfaces where Marshaller interface is responsible to marshall an object to XML and UnMarshaller interface deserializes the xml to an object. Following are the key benefits of using Spring OXM framework." }, { "code": null, "e": 2768, "s": 2485, "text": "Easy Configuration − Using spring bean context factory, creation of marshaller/unmarshaller is quite easy and is configurable without worrying about O/X libraries structures like JAXB Context, JiBX binding factories etc. A marsaller/unmarshaller can be configured as any other bean." }, { "code": null, "e": 3051, "s": 2768, "text": "Easy Configuration − Using spring bean context factory, creation of marshaller/unmarshaller is quite easy and is configurable without worrying about O/X libraries structures like JAXB Context, JiBX binding factories etc. A marsaller/unmarshaller can be configured as any other bean." }, { "code": null, "e": 3296, "s": 3051, "text": "Consistent Interfacing − Marshaller and UnMarshaller are global interfaces. These interfaces provides an abstraction layer over other O/X mapping frameworks and allows to switch between them without changing the code or with little code change." }, { "code": null, "e": 3541, "s": 3296, "text": "Consistent Interfacing − Marshaller and UnMarshaller are global interfaces. These interfaces provides an abstraction layer over other O/X mapping frameworks and allows to switch between them without changing the code or with little code change." 
}, { "code": null, "e": 3746, "s": 3541, "text": "Consistent Exception Handling − All underlying exceptions are mapped to a XmlMappingException as root exception. Thus developers need not to worry about underlying O/X mapping tool own exception hiearchy." }, { "code": null, "e": 3951, "s": 3746, "text": "Consistent Exception Handling − All underlying exceptions are mapped to a XmlMappingException as root exception. Thus developers need not to worry about underlying O/X mapping tool own exception hiearchy." }, { "code": null, "e": 4006, "s": 3951, "text": "Marshaller is an interface with single method marshal." }, { "code": null, "e": 4223, "s": 4006, "text": "public interface Marshaller {\n /**\n * Marshals the object graph with the given root into the provided Result.\n */\n void marshal(Object graph, Result result)\n throws XmlMappingException, IOException;\n}" }, { "code": null, "e": 4365, "s": 4223, "text": "Where graph is any object to be marshalled and result is a tagging interface to represent the XML output. Following are the available types −" }, { "code": null, "e": 4430, "s": 4365, "text": "javax.xml.transform.dom.DOMResult − Represents org.w3c.dom.Node." }, { "code": null, "e": 4495, "s": 4430, "text": "javax.xml.transform.dom.DOMResult − Represents org.w3c.dom.Node." }, { "code": null, "e": 4570, "s": 4495, "text": "javax.xml.transform.sax.SAXResult − Represents org.xml.sax.ContentHandler." }, { "code": null, "e": 4645, "s": 4570, "text": "javax.xml.transform.sax.SAXResult − Represents org.xml.sax.ContentHandler." }, { "code": null, "e": 4753, "s": 4645, "text": "javax.xml.transform.stream.StreamResult − Represents java.io.File, java.io.OutputStream, or java.io.Writer." }, { "code": null, "e": 4861, "s": 4753, "text": "javax.xml.transform.stream.StreamResult − Represents java.io.File, java.io.OutputStream, or java.io.Writer." }, { "code": null, "e": 4920, "s": 4861, "text": "UnMarshaller is an interface with single method unmarshal." 
}, { "code": null, "e": 5116, "s": 4920, "text": "public interface UnMarshaller {\n /**\n * Unmarshals the given provided Source into an object graph.\n */\n Object unmarshal(Source source)\n throws XmlMappingException, IOException;\n}" }, { "code": null, "e": 5216, "s": 5116, "text": "Where source is a tagging interface to represent the XML input. Following are the available types −" }, { "code": null, "e": 5281, "s": 5216, "text": "javax.xml.transform.dom.DOMSource − Represents org.w3c.dom.Node." }, { "code": null, "e": 5346, "s": 5281, "text": "javax.xml.transform.dom.DOMSource − Represents org.w3c.dom.Node." }, { "code": null, "e": 5445, "s": 5346, "text": "javax.xml.transform.sax.SAXSource − Represents org.xml.sax.InputSource, and org.xml.sax.XMLReader." }, { "code": null, "e": 5544, "s": 5445, "text": "javax.xml.transform.sax.SAXSource − Represents org.xml.sax.InputSource, and org.xml.sax.XMLReader." }, { "code": null, "e": 5651, "s": 5544, "text": "javax.xml.transform.stream.StreamSource − Represents java.io.File, java.io.InputStream, or java.io.Reader." }, { "code": null, "e": 5758, "s": 5651, "text": "javax.xml.transform.stream.StreamSource − Represents java.io.File, java.io.InputStream, or java.io.Reader." }, { "code": null, "e": 5985, "s": 5758, "text": "This chapter will guide you on how to prepare a development environment to start your work with Spring Framework. It will also teach you how to set up JDK, Maven and Eclipse on your machine before you set up Spring Framework −" }, { "code": null, "e": 6381, "s": 5985, "text": "You can download the latest version of SDK from Oracle's Java site − Java SE Downloads. You will find instructions for installing JDK in downloaded files, follow the given instructions to install and configure the setup. Finally set PATH and JAVA_HOME environment variables to refer to the directory that contains java and javac, typically java_install_dir/bin and java_install_dir respectively." 
}, { "code": null, "e": 6525, "s": 6381, "text": "If you are running Windows and have installed the JDK in C:\\jdk-11.0.11, you would have to put the following line in your C:\\autoexec.bat file." }, { "code": null, "e": 6588, "s": 6525, "text": "set PATH=C:\\jdk-11.0.11;%PATH% \nset JAVA_HOME=C:\\jdk-11.0.11 \n" }, { "code": null, "e": 6795, "s": 6588, "text": "Alternatively, on Windows NT/2000/XP, you will have to right-click on My Computer, select Properties → Advanced → Environment Variables. Then, you will have to update the PATH value and click the OK button." }, { "code": null, "e": 6960, "s": 6795, "text": "On Unix (Solaris, Linux, etc.), if the SDK is installed in /usr/local/jdk-11.0.11 and you use the C shell, you will have to put the following into your .cshrc file." }, { "code": null, "e": 7047, "s": 6960, "text": "setenv PATH /usr/local/jdk-11.0.11/bin:$PATH \nsetenv JAVA_HOME /usr/local/jdk-11.0.11\n" }, { "code": null, "e": 7384, "s": 7047, "text": "Alternatively, if you use an Integrated Development Environment (IDE) like Borland JBuilder, Eclipse, IntelliJ IDEA, or Sun ONE Studio, you will have to compile and run a simple program to confirm that the IDE knows where you have installed Java. Otherwise, you will have to carry out a proper setup as given in the document of the IDE." }, { "code": null, "e": 7548, "s": 7384, "text": "All the examples in this tutorial have been written using Eclipse IDE. So we would suggest you should have the latest version of Eclipse installed on your machine." }, { "code": null, "e": 7856, "s": 7548, "text": "To install Eclipse IDE, download the latest Eclipse binaries from www.eclipse.org/downloads. Once you download the installation, unpack the binary distribution into a convenient location. For example, in C:\\eclipse on Windows, or /usr/local/eclipse on Linux/Unix and finally set PATH variable appropriately." 
}, { "code": null, "e": 7981, "s": 7856, "text": "Eclipse can be started by executing the following commands on Windows machine, or you can simply double-click on eclipse.exe" }, { "code": null, "e": 8007, "s": 7981, "text": "%C:\\eclipse\\eclipse.exe \n" }, { "code": null, "e": 8107, "s": 8007, "text": "Eclipse can be started by executing the following commands on Unix (Solaris, Linux, etc.) machine −" }, { "code": null, "e": 8136, "s": 8107, "text": "$/usr/local/eclipse/eclipse\n" }, { "code": null, "e": 8232, "s": 8136, "text": "After a successful startup, if everything is fine then it should display the following result −" }, { "code": null, "e": 8368, "s": 8232, "text": "In this tutorial, we are using maven to run and build the spring based examples. Follow the Maven - Environment Setup to install maven." }, { "code": null, "e": 8495, "s": 8368, "text": "Using eclipse, select File → New → Maven Project. Tick the Create a simple project (skip archetype selection) and click Next." }, { "code": null, "e": 8531, "s": 8495, "text": "Enter the details, as shown below −" }, { "code": null, "e": 8560, "s": 8531, "text": "groupId − com.tutorialspoint" }, { "code": null, "e": 8589, "s": 8560, "text": "groupId − com.tutorialspoint" }, { "code": null, "e": 8612, "s": 8589, "text": "artifactId − springoxm" }, { "code": null, "e": 8635, "s": 8612, "text": "artifactId − springoxm" }, { "code": null, "e": 8660, "s": 8635, "text": "version − 0.0.1-SNAPSHOT" }, { "code": null, "e": 8685, "s": 8660, "text": "version − 0.0.1-SNAPSHOT" }, { "code": null, "e": 8703, "s": 8685, "text": "name − Spring OXM" }, { "code": null, "e": 8721, "s": 8703, "text": "name − Spring OXM" }, { "code": null, "e": 8754, "s": 8721, "text": "description − Spring OXM Project" }, { "code": null, "e": 8787, "s": 8754, "text": "description − Spring OXM Project" }, { "code": null, "e": 8846, "s": 8787, "text": "Click on Finish button and an new project will be created." 
}, { "code": null, "e": 8891, "s": 8846, "text": "You can check the default content of pom.xml" }, { "code": null, "e": 9346, "s": 8891, "text": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n \n <modelVersion>4.0.0</modelVersion>\n <groupId>com.tutorialspoint</groupId>\n <artifactId>springoxm</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <name>Spring OXM</name>\n <description>Spring OXM Project</description>\n</project>" }, { "code": null, "e": 9437, "s": 9346, "text": "Now as we've our project ready, let add following dependencies in pom.xml in next chapter." }, { "code": null, "e": 9449, "s": 9437, "text": "Spring Core" }, { "code": null, "e": 9461, "s": 9449, "text": "Spring Core" }, { "code": null, "e": 9472, "s": 9461, "text": "Spring OXM" }, { "code": null, "e": 9483, "s": 9472, "text": "Spring OXM" }, { "code": null, "e": 9488, "s": 9483, "text": "JAXB" }, { "code": null, "e": 9493, "s": 9488, "text": "JAXB" }, { "code": null, "e": 9594, "s": 9493, "text": "Update the content of pom.xml to have spring core, spring oxm and jaxb dependencies as shown below −" }, { "code": null, "e": 9602, "s": 9594, "text": "pom.xml" }, { "code": null, "e": 11599, "s": 9602, "text": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n \n <modelVersion>4.0.0</modelVersion>\n <groupId>com.tutorialspoint</groupId>\n <artifactId>springoxm</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <name>Spring OXM</name>\n <description>Spring OXM Project</description>\n <properties>\n <org.springframework.version>4.3.7.RELEASE</org.springframework.version>\n <org.hibernate.version>5.2.9.Final</org.hibernate.version>\n 
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <java.version>1.8</java.version> \n </properties> \t\n <dependencies>\n <dependency>\n <groupId>org.springframework</groupId>\n <artifactId>spring-context</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n <groupId>org.springframework</groupId>\n <artifactId>spring-oxm</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n <groupId>javax.xml.bind</groupId>\n <artifactId>jaxb-api</artifactId>\n <version>2.3.1</version>\n </dependency>\n <dependency>\n <groupId>org.glassfish.jaxb</groupId>\n <artifactId>jaxb-runtime</artifactId>\n <version>2.3.1</version>\n <scope>runtime</scope>\n </dependency> \n </dependencies>\n <build>\n <plugins>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-compiler-plugin</artifactId>\n <version>3.1</version>\n <configuration>\n <source>${java.version}</source>\n <target>${java.version}</target>\n </configuration>\n </plugin>\n </plugins>\n </build>\n</project>" }, { "code": null, "e": 11663, "s": 11599, "text": "Create a class Student.java with O/X Annotation as shown below." 
}, { "code": null, "e": 11676, "s": 11663, "text": "Student.java" }, { "code": null, "e": 12308, "s": 11676, "text": "package com.tutorialspoint.oxm.model;\n\nimport javax.xml.bind.annotation.XmlAttribute;\nimport javax.xml.bind.annotation.XmlElement;\nimport javax.xml.bind.annotation.XmlRootElement;\n\n@XmlRootElement\npublic class Student{\n String name;\n int age;\n int id;\n\n public String getName(){\n return name;\n }\n @XmlElement\n public void setName(String name){\n this.name = name;\n }\n public int getAge(){\n return age;\n }\n @XmlElement\n public void setAge(int age){\n this.age = age;\n }\n public int getId(){\n return id;\n }\n @XmlAttribute\n public void setId(int id){\n this.id = id;\n }\n}" }, { "code": null, "e": 12393, "s": 12308, "text": "Create applicationcontext.xml in src → main → resources with the following content." }, { "code": null, "e": 12416, "s": 12393, "text": "applicationcontext.xml" }, { "code": null, "e": 13058, "s": 12416, "text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<beans xmlns=\"http://www.springframework.org/schema/beans\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xmlns:oxm=\"http://www.springframework.org/schema/oxm\" \n xsi:schemaLocation=\"http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd \n http://www.springframework.org/schema/oxm \n http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd\"> \n\n <oxm:jaxb2-marshaller id=\"jaxbMarshaller\"> \n <oxm:class-to-be-bound name=\"com.tutorialspoint.oxm.model.Student\"/> \n </oxm:jaxb2-marshaller> \n</beans> " }, { "code": null, "e": 13320, "s": 13058, "text": "Create a main class OXMApplication.java with marshaller and unmarshaller objects. The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object." 
}, { "code": null, "e": 13340, "s": 13320, "text": "OXMApplication.java" }, { "code": null, "e": 14846, "s": 13340, "text": "package com.tutorialspoint.oxm;\n\nimport java.io.FileInputStream;\nimport java.io.FileWriter;\nimport java.io.IOException;\nimport javax.xml.transform.stream.StreamResult;\nimport javax.xml.transform.stream.StreamSource;\nimport org.springframework.context.ApplicationContext;\nimport org.springframework.context.support.ClassPathXmlApplicationContext;\nimport org.springframework.oxm.Marshaller;\nimport org.springframework.oxm.Unmarshaller;\nimport org.springframework.oxm.XmlMappingException;\nimport com.tutorialspoint.oxm.model.Student;\n\npublic class OXMApplication {\n public static void main(String[] args) {\n ApplicationContext context = new ClassPathXmlApplicationContext(\"applicationcontext.xml\"); \n Marshaller marshaller = (Marshaller)context.getBean(\"jaxbMarshaller\");\n Unmarshaller unmarshaller = (Unmarshaller)context.getBean(\"jaxbMarshaller\");\n\n // create student object\n Student student = new Student();\n student.setAge(14);\n student.setName(\"Soniya\");\n\n try {\n marshaller.marshal(student, new StreamResult(new FileWriter(\"student.xml\"))); \n System.out.println(\"Student marshalled successfully.\"); \n FileInputStream is = new FileInputStream(\"student.xml\");\n Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is));\n System.out.println(\"Age: \" + student1.getAge() + \", Name: \" + student1.getName());\n } catch(IOException | XmlMappingException ex) {\n ex.printStackTrace();\n }\n }\n}" }, { "code": null, "e": 14964, "s": 14846, "text": "Right click in the content area of the file in eclipse and then select Run as java application and verify the output." 
}, { "code": null, "e": 15678, "s": 14964, "text": "Oct 10, 2021 8:48:12 PM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh\nINFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@1e127982: startup date \n[Sun Oct 10 20:48:12 IST 2021]; root of context hierarchy\nOct 10, 2021 8:48:12 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions\nINFO: Loading XML bean definitions from class path resource [applicationcontext.xml]\nOct 10, 2021 8:48:13 PM org.springframework.oxm.jaxb.Jaxb2Marshaller createJaxbContextFromClasses\nINFO: Creating JAXBContext with classes to be bound [class com.tutorialspoint.oxm.model.Student]\nStudent marshalled successfully.\nAge: 14, Name: Soniya\n" }, { "code": null, "e": 15754, "s": 15678, "text": "Update the content of pom.xml to have xstream dependencies as shown below −" }, { "code": null, "e": 15762, "s": 15754, "text": "pom.xml" }, { "code": null, "e": 17600, "s": 15762, "text": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n \n <modelVersion>4.0.0</modelVersion>\n <groupId>com.tutorialspoint</groupId>\n <artifactId>springoxm</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <name>Spring OXM</name>\n <description>Spring OXM Project</description>\n <properties>\n <org.springframework.version>4.3.7.RELEASE</org.springframework.version>\n <org.hibernate.version>5.2.9.Final</org.hibernate.version>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <java.version>1.8</java.version> \n </properties> \t\n <dependencies>\n <dependency>\n <groupId>org.springframework</groupId>\n <artifactId>spring-context</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n 
<groupId>org.springframework</groupId>\n <artifactId>spring-oxm</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n <groupId>com.thoughtworks.xstream</groupId>\n <artifactId>xstream</artifactId>\n <version>1.4.8</version>\n <scope>compile</scope>\n </dependency> \n </dependencies>\n <build>\n <plugins>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-compiler-plugin</artifactId>\n <version>3.1</version>\n <configuration>\n <source>${java.version}</source>\n <target>${java.version}</target>\n </configuration>\n </plugin>\n </plugins>\n </build>\n</project>" }, { "code": null, "e": 17646, "s": 17600, "text": "Update the class Student.java as shown below." }, { "code": null, "e": 17659, "s": 17646, "text": "Student.java" }, { "code": null, "e": 18087, "s": 17659, "text": "package com.tutorialspoint.oxm.model;\n\npublic class Student {\n String name;\n int age;\n int id;\n\n public String getName(){\n return name;\n }\n public void setName(String name){\n this.name = name;\n }\n public int getAge(){\n return age;\n }\n public void setAge(int age){\n this.age = age;\n }\n public int getId(){\n return id;\n }\n public void setId(int id){\n this.id = id;\n }\n}" }, { "code": null, "e": 18274, "s": 18087, "text": "Update applicationcontext.xml in src → main → resources with the following content to use XStreamMarshaller. XStreamMarshaller object can be used for both marshalling and unmarshalling." 
}, { "code": null, "e": 18297, "s": 18274, "text": "applicationcontext.xml" }, { "code": null, "e": 18991, "s": 18297, "text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<beans xmlns=\"http://www.springframework.org/schema/beans\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xmlns:oxm=\"http://www.springframework.org/schema/oxm\" \n xsi:schemaLocation=\"http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd \n http://www.springframework.org/schema/oxm \n http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd\"> \n\n <bean id=\"xstreamMarshaller\" class=\"org.springframework.oxm.xstream.XStreamMarshaller\"> \n <property name=\"annotatedClasses\" value=\"com.tutorialspoint.oxm.model.Student\"></property> \n </bean> \n</beans> " }, { "code": null, "e": 19255, "s": 18991, "text": "Update the main class OXMApplication.java with marshaller and unmarshaller objects. The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object." 
}, { "code": null, "e": 19275, "s": 19255, "text": "OXMApplication.java" }, { "code": null, "e": 20787, "s": 19275, "text": "package com.tutorialspoint.oxm;\n\nimport java.io.FileInputStream;\nimport java.io.FileWriter;\nimport java.io.IOException;\nimport javax.xml.transform.stream.StreamResult;\nimport javax.xml.transform.stream.StreamSource;\nimport org.springframework.context.ApplicationContext;\nimport org.springframework.context.support.ClassPathXmlApplicationContext;\nimport org.springframework.oxm.Marshaller;\nimport org.springframework.oxm.Unmarshaller;\nimport org.springframework.oxm.XmlMappingException;\nimport com.tutorialspoint.oxm.model.Student;\n\npublic class OXMApplication {\n public static void main(String[] args) {\n ApplicationContext context = new ClassPathXmlApplicationContext(\"applicationcontext.xml\"); \n Marshaller marshaller = (Marshaller)context.getBean(\"xstreamMarshaller\");\n Unmarshaller unmarshaller = (Unmarshaller)context.getBean(\"xstreamMarshaller\");\n\n // create student object\n Student student = new Student();\n student.setAge(14);\n student.setName(\"Soniya\");\n\n try {\n marshaller.marshal(student, new StreamResult(new FileWriter(\"student.xml\"))); \n System.out.println(\"Student marshalled successfully.\"); \n FileInputStream is = new FileInputStream(\"student.xml\");\n Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is));\n System.out.println(\"Age: \" + student1.getAge() + \", Name: \" + student1.getName());\n } catch(IOException | XmlMappingException ex) {\n ex.printStackTrace();\n }\n }\n}" }, { "code": null, "e": 20905, "s": 20787, "text": "Right click in the content area of the file in eclipse and then select Run as java application and verify the output." 
}, { "code": null, "e": 21980, "s": 20905, "text": "Oct 11, 2021 9:18:37 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh\nINFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@29ca901e: startup date \n[Mon Oct 11 09:18:36 IST 2021]; root of context hierarchy\nOct 11, 2021 9:18:37 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions\nINFO: Loading XML bean definitions from class path resource [applicationcontext.xml]\nWARNING: An illegal reflective access operation has occurred\nWARNING: Illegal reflective access by com.thoughtworks.xstream.core.util.Fields \n(file:/C:/Users/intel/.m2/repository/com/thoughtworks/xstream/xstream/1.4.8/xstream-1.4.8.jar) \nto field java.util.TreeMap.comparator\nWARNING: Please consider reporting this to the maintainers of com.thoughtworks.xstream.core.util.Fields\nWARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations\nWARNING: All illegal access operations will be denied in a future release\nStudent marshalled successfully.\nAge: 14, Name: Soniya\n" }, { "code": null, "e": 22055, "s": 21980, "text": "Update the content of pom.xml to have castor dependencies as shown below −" }, { "code": null, "e": 22063, "s": 22055, "text": "pom.xml" }, { "code": null, "e": 24032, "s": 22063, "text": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n \n <modelVersion>4.0.0</modelVersion>\n <groupId>com.tutorialspoint</groupId>\n <artifactId>springoxm</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <name>Spring OXM</name>\n <description>Spring OXM Project</description>\n <properties>\n <org.springframework.version>4.3.7.RELEASE</org.springframework.version>\n <org.hibernate.version>5.2.9.Final</org.hibernate.version>\n 
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <java.version>1.8</java.version> \n </properties> \t\n <dependencies>\n <dependency>\n <groupId>org.springframework</groupId>\n <artifactId>spring-context</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n <groupId>org.springframework</groupId>\n <artifactId>spring-oxm</artifactId>\n <version>${org.springframework.version}</version>\n <scope>compile</scope>\n </dependency>\n <dependency>\n <groupId>org.codehaus.castor</groupId>\n <artifactId>castor-core</artifactId>\n <version>1.4.1</version>\n </dependency> \n <dependency>\n <groupId>org.codehaus.castor</groupId>\n <artifactId>castor-xml</artifactId>\n <version>1.4.1</version>\n </dependency>\n </dependencies>\n <build>\n <plugins>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-compiler-plugin</artifactId>\n <version>3.1</version>\n <configuration>\n <source>${java.version}</source>\n <target>${java.version}</target>\n </configuration>\n </plugin>\n </plugins>\n </build>\n</project>" }, { "code": null, "e": 24168, "s": 24032, "text": "Add a mapping xml required for castor mapping to map Student class as mappings.xml under src → main → resources folder as shown below." 
}, { "code": null, "e": 24181, "s": 24168, "text": "mappings.xml" }, { "code": null, "e": 24779, "s": 24181, "text": "<?xml version=\"1.0\"?> \n<!DOCTYPE mapping PUBLIC \"-//EXOLAB/Castor Mapping DTD Version 1.0//EN\" \"http://castor.org/mapping.dtd\"> \n<mapping> \n <class name=\"com.tutorialspoint.oxm.model.Student\" auto-complete=\"true\" > \n <map-to xml=\"Student\"/> \n <field name=\"id\" type=\"integer\"> \n <bind-xml name=\"id\" node=\"attribute\"></bind-xml> \n </field> \n <field name=\"name\"> \n <bind-xml name=\"name\"></bind-xml> \n </field> \n <field name=\"age\"> \n <bind-xml name=\"age\" type=\"int\"></bind-xml> \n </field> \n </class> \n</mapping>" }, { "code": null, "e": 24964, "s": 24779, "text": "Update applicationcontext.xml in src → main → resources with the following content to use CastorMarshaller. CastorMarshaller object can be used for both marshalling and unmarshalling." }, { "code": null, "e": 24987, "s": 24964, "text": "applicationcontext.xml" }, { "code": null, "e": 25748, "s": 24989, "text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<beans xmlns=\"http://www.springframework.org/schema/beans\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n xmlns:oxm=\"http://www.springframework.org/schema/oxm\" \n xsi:schemaLocation=\"http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd \n http://www.springframework.org/schema/oxm \n http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd\"> \n\n <bean id=\"castorMarshaller\" class=\"org.springframework.oxm.castor.CastorMarshaller\"> \n <property name=\"targetClass\" value=\"com.tutorialspoint.oxm.model.Student\"></property> \n <property name=\"mappingLocation\" value=\"mappings.xml\"></property> \n </bean> \n</beans> " }, { "code": null, "e": 26012, "s": 25748, "text": "Update the main class OXMApplication.java with marshaller and unmarshaller objects. 
The objective of this class is to marshall a student object to student.xml using marshaller object and then unmarshall the student.xml to student object using unmarshaller object." }, { "code": null, "e": 26032, "s": 26012, "text": "OXMApplication.java" }, { "code": null, "e": 27542, "s": 26032, "text": "package com.tutorialspoint.oxm;\n\nimport java.io.FileInputStream;\nimport java.io.FileWriter;\nimport java.io.IOException;\nimport javax.xml.transform.stream.StreamResult;\nimport javax.xml.transform.stream.StreamSource;\nimport org.springframework.context.ApplicationContext;\nimport org.springframework.context.support.ClassPathXmlApplicationContext;\nimport org.springframework.oxm.Marshaller;\nimport org.springframework.oxm.Unmarshaller;\nimport org.springframework.oxm.XmlMappingException;\nimport com.tutorialspoint.oxm.model.Student;\n\npublic class OXMApplication {\n public static void main(String[] args) {\n ApplicationContext context = new ClassPathXmlApplicationContext(\"applicationcontext.xml\"); \n Marshaller marshaller = (Marshaller)context.getBean(\"castorMarshaller\");\n Unmarshaller unmarshaller = (Unmarshaller)context.getBean(\"castorMarshaller\");\n\n // create student object\n Student student = new Student();\n student.setAge(14);\n student.setName(\"Soniya\");\n\n try {\n marshaller.marshal(student, new StreamResult(new FileWriter(\"student.xml\"))); \n System.out.println(\"Student marshalled successfully.\"); \n FileInputStream is = new FileInputStream(\"student.xml\");\n Student student1 = (Student)unmarshaller.unmarshal(new StreamSource(is));\n System.out.println(\"Age: \" + student1.getAge() + \", Name: \" + student1.getName());\n } catch(IOException | XmlMappingException ex) {\n ex.printStackTrace();\n }\n }\n}" }, { "code": null, "e": 27660, "s": 27542, "text": "Right click in the content area of the file in eclipse and then select Run as java application and verify the output." 
}, { "code": null, "e": 28797, "s": 27660, "text": "Oct 11, 2021 9:45:34 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh\nINFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6adede5: startup date \n[Mon Oct 11 09:45:34 IST 2021]; root of context hierarchy\nOct 11, 2021 9:45:35 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions\nINFO: Loading XML bean definitions from class path resource [applicationcontext.xml]\nWARNING: An illegal reflective access operation has occurred\nWARNING: Illegal reflective access by org.exolab.castor.xml.BaseXercesOutputFormat \n(file:/C:/Users/intel/.m2/repository/org/codehaus/castor/castor-xml/1.4.1/castor-xml-1.4.1.jar) \nto method com.sun.org.apache.xml.internal.serialize.OutputFormat.setMethod(java.lang.String)\nWARNING: Please consider reporting this to the maintainers of org.exolab.castor.xml.BaseXercesOutputFormat\nWARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations.\nWARNING: All illegal access operations will be denied in a future release\nStudent marshalled successfully.\nAge: 14, Name: Soniya\n" }, { "code": null, "e": 28831, "s": 28797, "text": "\n 102 Lectures \n 8 hours \n" }, { "code": null, "e": 28845, "s": 28831, "text": " Karthikeya T" }, { "code": null, "e": 28878, "s": 28845, "text": "\n 39 Lectures \n 5 hours \n" }, { "code": null, "e": 28893, "s": 28878, "text": " Chaand Sheikh" }, { "code": null, "e": 28928, "s": 28893, "text": "\n 73 Lectures \n 5.5 hours \n" }, { "code": null, "e": 28940, "s": 28928, "text": " Senol Atac" }, { "code": null, "e": 28975, "s": 28940, "text": "\n 62 Lectures \n 4.5 hours \n" }, { "code": null, "e": 28987, "s": 28975, "text": " Senol Atac" }, { "code": null, "e": 29022, "s": 28987, "text": "\n 67 Lectures \n 4.5 hours \n" }, { "code": null, "e": 29034, "s": 29022, "text": " Senol Atac" }, { "code": null, "e": 29067, "s": 29034, 
Avoid R-squared to judge regression model performance | by Kevin Dunn | Towards Data Science
Summary

R2 can be calculated before even fitting a regression model, so it makes no sense to use it for judging prediction ability. You also get the same R2 value if you flip the input and output around. Again, this is nonsensical for a prediction metric.

The intention of your regression model is the important factor for choosing an appropriate metric, and a suitable metric is probably not R2.

This article explains the better alternatives that exist: the standard error, confidence intervals and prediction intervals.

Historical data is collected and a relationship between the input x and the output y is calculated. This relationship is often a linear regression model, written as y=a+bx, where the intercept is a and the slope is b.

The purpose is most often to use that calculated relationship and make a prediction of some future output, called ŷ, given a new input: ŷ=a+bx.

Here are the two most common reasons for building a linear regression model:

to learn more about the relationship between input and output;
by far the most common usage is to get predictions of the output, based on the input.

Let’s look at each of these in turn.
The coefficient b of the linear regression y=a+bx shows what the average effect is on the output, y, for a one unit increase in the input x. This is called “learning about our system”.

For example, if you built a regression model between x=temperature measured in Celsius of your system (input) and y=pH (the output) you might get a regression model of y=4.5+0.12x, from which you learn two things:

that every 1 degree increase in temperature leads, on average, to an increase of pH by 0.12
that the expected or predicted pH when using a temperature of x=0 degrees Celsius is an output pH of 4.5 units.

But consider two cases: what if I told you the R2 of this model was 0.2, or it was 0.94. How does this change your learnings? We will come to this in the next section, where we understand a bit more what R2 is measuring.

This scenario is the one most people are familiar with. Continuing the above, it is asking what the predicted pH would be for a given new input value of temperature, x. For example, at a new temperature that we have never used before of 13°C, we expect an output pH of 4.5+0.12×13=6.06 pH units.

And again, what value do we attach to such a prediction when the model has R2 around 0.2, or if the model has R2 of 0.94?

Be warned: searching for a phrase like “R2 interpretation”, or something similar, returns many sites with faulty reasoning.

In its simplest form, R2 is nothing more than a measure of how strongly two variables are correlated. It is the square of the correlation coefficient between x and y:

R2 = (E{(x − x̄)(y − ȳ)})² / (V{x} · V{y}) = (Cov{x, y})² / (V{x} · V{y})

where the fancy “E{...}” is the “expected value of” operation, the fancy “V{...}” is the “variance of”, and Cov{x, y} is the “covariance of” operation. The horizontal lines above x and y tell you to use the average value of those data columns.

A second interpretation is that R2 is the ratio of the Regression Sum of Squares (RegSS) to the Total Sum of Squares (TSS). Any “sum of squares” value is just a rescaled variance, so the formula here shows R2 is just the ratio of two variances:

R2 = RegSS / TSS = Σ (ŷi − ȳ)² / Σ (yi − ȳ)²

An R2 value of 1.0 means the numerator and denominator are exactly the same, or in other words, the predictions, ŷi, are identically equal to the original values, yi. Conversely, to make R2 have a value of 0.0 the numerator must equal zero, which says the predictions are simply equal to a flat line, the average value of y, no matter what the input value of x. This is the worst model you can have: it says that in the case of R2=0, the best prediction you can make for y is just to return the average value of the training y values.
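To make the two interpretations above concrete, here is a short Python sketch (using the eleven-point dataset that appears later in this article) confirming that the squared correlation and the RegSS/TSS ratio give the same number:

```python
# Compute R^2 two ways: squared correlation, and RegSS / TSS.
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# First interpretation: Cov{x, y}^2 / (V{x} * V{y})
cov = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
vx = sum((xi - xbar) ** 2 for xi in x)
vy = sum((yi - ybar) ** 2 for yi in y)
r2_corr = cov ** 2 / (vx * vy)

# Second interpretation: RegSS / TSS, which needs the least-squares fit
b = cov / vx                       # slope
a = ybar - b * xbar                # intercept
yhat = [a + b * xi for xi in x]
regss = sum((yh - ybar) ** 2 for yh in yhat)
r2_ratio = regss / vy              # TSS carries the same 1/n scaling as vy

print(abs(r2_corr - r2_ratio) < 1e-12)  # True: both give the same R^2
```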
It is from this equation where the common interpretation of R2 comes from: that is, the percentage variation explained. The denominator is proportional to the total variation, and the numerator is the explained variation, leading to a fraction (percentage) between 0 and 1.

Related to the prior interpretation is that R2 = 1.0−RSS/TSS, coming from the fact that for a simple regression model we have TSS = RegSS + RSS. The RSS is the Residual Sum of Squares (RSS), or mathematically:

RSS = Σ (yi − ŷi)²

This shows nicely that to get an R2=1 you must have no residuals (RSS=0); and that an R2=0 means that your TSS = RSS, in other words, your residuals have the same variance as the raw data.

More details and illustrations to explain this are in my free book: https://learnche.org/pid/least-squares-modelling/least-squares-model-analysis

It is very fruitful to understand the above formulas and to try to interpret them yourself, in plain language. It is not easy at first, but it pays off to understand how you can make each part of the equations bigger and smaller, and how you can obtain the extreme values of 0.0 and 1.0.

From the above, here are two very simple reasons why R2 is almost never the correct metric to judge how well you can predict a new output, y, from a new input x:

If you switch the historical data around, and make y the x and let x become y, then you get exactly the same R2 value. How does that even make sense? A metric of a model’s prediction ability must depend on what is the input and what is the output.

What if I told you that I can tell you what the R2 value will be before calculating the model’s slope and intercept? Again, that does not make sense. How can a good metric of prediction performance be calculated before even fitting the prediction model?

The above would be the equivalent of calculating the prediction ability of a neural network before even fitting it; or flipping the inputs and outputs around for that neural network and getting the same performance metric.

So how do we make these two very strong statements about R2? Look back at the first formula: the numerator is symmetrical. If you switch the roles of x and y you get the same numerator value. This also holds for the denominator. Please confirm this for yourself in Python, Excel, R, MATLAB, or whatever tool you use for linear regression. Here is some R code for fitting a linear model (lm):

x = c(10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5)
y = c( 8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6)
summary(lm(y ~ x))  # R^2 = 0.657
summary(lm(x ~ y))  # R^2 = 0.657

Again, take a second look at the above formula. Does it depend on:

the model’s residuals? no
any of the model’s coefficients, such as the slope or intercept? no

So it is possible to calculate R2 without even fitting a regression model. The formula depends only on the raw data and not on any model! This is not some mathematical trick because things cancel out or simplify in some special way. It is just a fact of what R2 is designed to measure: the degree of correlation between two sequences, x and y.
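The same confirmation in Python, using only the standard library (for this data the plain R2 works out to about 0.691; the 0.657 in the R comments matches R’s adjusted R2, and it too is identical in both directions):

```python
# Swapping x and y leaves R^2 unchanged -- and no model fit is needed.
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6]

def r_squared(u, v):
    """Squared correlation between two equal-length sequences."""
    n = len(u)
    ubar, vbar = sum(u) / n, sum(v) / n
    cov = sum((ui - ubar) * (vi - vbar) for ui, vi in zip(u, v))
    var_u = sum((ui - ubar) ** 2 for ui in u)
    var_v = sum((vi - vbar) ** 2 for vi in v)
    return cov ** 2 / (var_u * var_v)

print(r_squared(x, y) == r_squared(y, x))  # True
```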
In this case the confidence intervals for the slope and intercept are informative. The confidence interval is two numbers, a range. Within this range you can expect, with a specified degree of confidence, that it contains the true value of the parameter (e.g. the slope). Let’s return to our temperature and pH example, and only consider the slope. The same idea holds for the intercept.

Remember that y=4.5+0.12x, meaning that every 1 degree increase in temperature leads, on average, to an increase of pH by 0.12 units. A 95% confidence interval for the true (but unknown) slope might have been [0.09; 0.15], meaning that we have 95% probability that this range contains the true slope. Note, it is not the probability that the true slope is within this range; a subtle, but important, distinction. There is no probability associated with the true value; the probability is related to the range containing the true slope, which is a fixed, but unknown, value.

Why use confidence intervals? The main reason is that the values are in the units that you care about, and not an abstract ratio of two variances, such as R2. Secondly, the wider the range, the poorer the model. Imagine the confidence interval for the slope is either [0.09; 0.15] or [0.03; 0.21]; which would you rather have?

There is a direct relationship between the width of the intervals and the value of R2, but this connection is non-linear, and has a different shape for different models.

Here is some R code you might use. Similar code is possible with Python:

x = c(10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5)
y = c( 8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6)
linmod = lm(y ~ x)  # predict y from x
summary(linmod)
confint(linmod)  # confidence interval

If you want to judge the model’s predictions use either the standard error (SE) or the prediction interval, not R2. Why SE?

The standard error is the standard deviation of the residuals. This sounds more complex than R2, but it isn’t.
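Before continuing with the standard error, here is a hand-rolled Python version of that confidence-interval calculation, needing no extra packages (the 95% t critical value for n − 2 = 9 degrees of freedom, 2.262, is taken from a t-table rather than computed):

```python
# 95% confidence interval for the slope of y = a + b*x, by hand.
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx  # slope
a = ybar - b * xbar                                              # intercept

# Standard error of the slope: sqrt( RSS / (n - 2) / Sxx )
rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
se_b = (rss / (n - 2) / sxx) ** 0.5

t_crit = 2.262  # t(0.975, df = 9), from a t-table
lo, hi = b - t_crit * se_b, b + t_crit * se_b
print(f"slope = {b:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

With only eleven noisy points the interval is wide, roughly [0.26; 0.78], which is what confint(linmod) should report for the slope as well. Now back to the standard error.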
Firstly, the standard error has units of the output variable. In other words, it has units of the quantity you actually care about, the prediction’s units! If your residuals are normally distributed (a quick and easy check with a q-q plot) then you know approximately 70% of the residuals lie within the range [−SE; +SE], and 95% of residuals within [−2SE; +2SE].

This is what I personally use the most: calculate SE and multiply it by 4 to get an idea of the “bandwidth” of my typical residuals. It is not exact, but it is good enough to judge “is my prediction model good enough?” If you are comfortable that predictions have a spread between [−2SE; +2SE], then you can be satisfied with your model.

You will read on countless blogs and in textbooks that a low R2 is not necessarily bad, and a high R2 isn’t necessarily good. That’s correct. I have seen cases with an R2 exceeding 0.99 where the standard error was still too large to be useful for what was required by the end user.

Even better than using the standard error calculated when you built (trained) the model is to use the standard deviation of the residuals from predictions on an entirely new testing dataset. This is the same metric, but calculated from testing data, or via cross-validation if you cannot keep testing data aside.

Another option, related to the standard error, is to use the prediction interval. It is like a confidence interval, but for an entirely new prediction. In the above example, we might have had:

a new temperature measurement, x=10 degrees Celsius, leading to a predicted pH of 5.7±0.4
in other words, the predicted pH lies in a bound from [5.3 to 6.1] with 95% confidence.

Such an interval is extremely informative; right away we get a range, in the units we care about, namely the units of the output variable, y. The prediction interval is the shaded region in the illustration, and is nonlinear. It is wider the further away you go from the model center point.
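That widening away from the model center can be computed by hand. Here is a hedged Python sketch (the 95% t critical value for 9 degrees of freedom, 2.262, is again hardcoded) that mirrors what R’s predict(..., interval = "predict") returns for three new x values:

```python
# 95% prediction intervals for new x values.
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y = [8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
se = (sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)) ** 0.5
t_crit = 2.262  # t(0.975, df = 9)

for x0 in (1, 8, 15):
    yhat = a + b * x0
    # The half-width grows with (x0 - xbar)^2: widest far from the center
    half = t_crit * se * (1 + 1 / n + (x0 - xbar) ** 2 / sxx) ** 0.5
    print(f"x0 = {x0:2d}: fit = {yhat:5.2f}, 95% PI = [{yhat - half:5.2f}, {yhat + half:5.2f}]")
```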
While this interval is not always readily available from some software packages (e.g. Excel), you can use, as an approximate rule, that the shaded area covers ±2SE.

x = c(10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5)
y = c( 8, 7, 8, 9, 8, 10, 7, 4, 11, 5, 6)
linmod = lm(y ~ x)  # predict y from x
# Get the prediction interval for 3 new values of x:
newdata = data.frame(x=c(1, 8, 15))
predict(linmod, newdata, interval="predict")

As before, the higher the value of R2, the smaller the prediction interval width, showing intuitively that a higher R2 is better; but you cannot derive any cut-off limit, ahead of time, for what is “a good R2”, since it depends on the data measured, and of course on what is acceptable for your purpose.

Ask yourself why you are building the linear regression model:

“I want to know more about my slope or intercept”. Use the confidence interval for them, not R2. A higher R2 is related to a smaller confidence interval.

“I want to be able to judge predictions from my model”. Use the prediction interval, or the model’s standard error. A higher R2 is related to smaller prediction intervals and a lower standard error.

All alternatives described are more interpretable than R2, and are in units you care about. If you’ve made it this far, and want to read more, please see my free book.
Saliency Map Using PyTorch | Towards Data Science
When we use machine learning models such as Logistic Regression or Decision Tree, we can interpret which variables contribute to the prediction result. But this step becomes difficult when we are using a deep learning model.

A deep learning model is a black-box model. It means that we cannot analyze how the model predicts the result based on the data. As the model gets more complex, the interpretability of the model will reduce.

But this doesn’t mean that the model cannot be interpreted. We can still infer from the deep learning model, but we have to work with its complexity.

In this article, I will show you how to visualize a deep learning model’s result based on gradients. We call this method a saliency map. We will use a framework called PyTorch to implement this method.

Without further ado, let’s get started!

Before we get into the saliency map, let’s talk about image classification. Given this equation,

Sc(I) = w · I + b

Where,

Sc(I) is the score of the class c
I corresponds to the image (as a one-dimensional vector).
w and b correspond to the weight and bias for the class c.

As you can see from the equation, we multiply the image vector (I) with the weight vector (w). We can infer that the weight (w) defines the importance of each pixel of the image to the class (c).

So, how can we get the weight (w) that corresponds to the score (S) given the image (I)? We can get the relationship between those variables by looking at the gradients!

But wait, what basically is a gradient? I don’t want to go through a lot of calculus here, but I will tell you this. The gradient describes how much a change in one variable, let’s say x, affects another variable’s result, in this case, y.

When we use that analogy, we can say that the gradient describes how strongly each pixel of the image (I) contributes to the prediction result (S).
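To see that idea numerically, here is a toy sketch with made-up numbers: for a purely linear score Sc(I) = w · I + b, nudging one pixel at a time and measuring the change in the score recovers exactly the weights w.

```python
# For a linear score S(I) = w . I + b, the gradient dS/dI equals w.
w = [0.5, -1.0, 2.0]       # made-up per-pixel weights
b = 0.1                    # made-up bias
image = [0.2, 0.7, 0.4]    # a made-up 3-pixel "image"

def score(pixels):
    return sum(wi * pi for wi, pi in zip(w, pixels)) + b

eps = 1e-6
grad = []
for i in range(len(image)):
    bumped = list(image)
    bumped[i] += eps       # nudge a single pixel
    grad.append((score(bumped) - score(image)) / eps)

print([round(g, 3) for g in grad])  # [0.5, -1.0, 2.0] -- the weights
```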
The equation for finding the weights (w) looks like this,

w = ∂Sc(I) / ∂I

that is, the gradient of the class score with respect to the image. By knowing the weights (w) for each pixel, we can visualize them as a saliency map, where each pixel describes how strongly that pixel affects the prediction result.

Now, let’s get into the implementation!

In this section, we will implement the saliency map using PyTorch. The deep learning model that we will use has been trained for a Kaggle competition called Plant Pathology 2020 — FGVC7. To download the dataset, you can access the link here.

Here are the steps that we have to do,

Set up the deep learning model
Open the image
Preprocess the image
Retrieve the gradient
Visualize the result

Now, the first thing that we have to do is to set up the model. In this case, I will use my pretrained model weights with the ResNet-18 architecture. Also, I’ve set up the model so we can use the GPU to get the result. The code looks like this,

Right after we set up the model, we can set up the image. To do this, we will use the PIL and torchvision libraries to transform that image. The code looks like this,

After we transform the image, we have to reshape it because our model reads the tensor in a 4-dimensional shape (batch size, channel, width, height). The code looks like this,

# Reshape the image (because the model uses a
# 4-dimensional tensor (batch_size, channel, width, height))
image = image.reshape(1, 3, 224, 224)

After we reshape the image, we set our image to run on the GPU. The code looks like this,

# Set the device for the image
image = image.to(device)

Then, we have to set the image to catch the gradient when we do backpropagation on it. The code looks like this,

# Set requires_grad_ on the image for retrieving gradients
image.requires_grad_()

After that, we can catch the gradient by putting the image through the model and doing the backpropagation. The code looks like this,

Now, we can visualize the gradient using matplotlib. But there is one task that we have to do first. The image has three channels to it.
Therefore, we have to take the maximum value across those channels at each pixel position. Finally, we can visualize the result; the code looks like this,

Here is what the visualization looks like:

As you can see from the image above, the left side is the image, and the right side is the saliency map. Recall from its definition that the saliency map shows the strength of each pixel's contribution to the final output.

In this case, the leaf in this image has a disease called rust, which you can see as the yellow spots on it. And if you look carefully, some pixels have a brighter color than the others. This indicates that those pixels make a large contribution to the final result, which is the rust itself.

Therefore, we can confidently say that the model has predicted the result by looking at the right information.

Well done! Now you know how to implement a saliency map for interpreting your deep learning model. I hope you can apply the saliency map to your own cases, and don't forget to follow me on Medium!

If you want to have a discussion with me about data science or machine learning, you can contact me on LinkedIn. Thank you for reading my article!

[1] Simonyan K, Vedaldi A, Zisserman A. 2014. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv:1312.6034v2 [cs.CV]
[2] Rastogi A. 2020. Visualizing Neural Networks using Saliency Maps in PyTorch. Data Driven Investor.
[3] https://github.com/sijoonlee/deep_learning/blob/master/cs231n/NetworkVisualization-PyTorch.ipynb
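The pipeline described above (model setup, gradient retrieval, and the channel-wise maximum) can be condensed into one runnable sketch. Note that the small classifier below is a stand-in with random weights for the article's fine-tuned ResNet-18, and the random tensor stands in for the preprocessed leaf image; both are assumptions for illustration, not the author's actual code.

```python
import torch
import torch.nn as nn

# Stand-in classifier for the article's fine-tuned ResNet-18
# (random weights; substitute your trained checkpoint in practice).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),  # 4 classes, as in Plant Pathology 2020
)
model.eval()

# Stand-in for the preprocessed leaf image: a (1, 3, 224, 224) tensor.
image = torch.rand(1, 3, 224, 224)
image.requires_grad_()  # track gradients w.r.t. the pixels

scores = model(image)                   # S(I) for every class
scores[0, scores.argmax()].backward()   # backprop the winning score

# Saliency: max |dS/dI| across the three channels at each pixel.
saliency, _ = torch.max(image.grad.data.abs(), dim=1)
print(saliency.shape)  # torch.Size([1, 224, 224])
```

To display the map, `plt.imshow(saliency[0], cmap='hot')` with matplotlib reproduces the kind of side-by-side figure described above.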
Maximum Spanning Tree using Prim’s Algorithm - GeeksforGeeks
10 Nov, 2021

Given an undirected weighted graph G, the task is to find the Maximum Spanning Tree of the graph using Prim's Algorithm.

Prim's algorithm is a greedy algorithm which can be used to find the Minimum Spanning Tree (MST) as well as the Maximum Spanning Tree of a graph.

Examples:

Input: graph[V][V] = {{0, 2, 0, 6, 0}, {2, 0, 3, 8, 5}, {0, 3, 0, 0, 7}, {6, 8, 0, 0, 9}, {0, 5, 7, 9, 0}}
Output:
The total weight of the Maximum Spanning Tree is 30.
Edges   Weight
3 – 1   8
4 – 2   7
0 – 3   6
3 – 4   9
Explanation: Choosing other edges won't result in a maximum spanning tree.

Maximum Spanning Tree:

Given an undirected weighted graph, a maximum spanning tree is a spanning tree having maximum weight. It can be easily computed using Prim's algorithm. The goal here is to find the spanning tree with the maximum weight out of all possible spanning trees.

Prim's Algorithm:

Prim's algorithm is a greedy algorithm, which works on the idea that a spanning tree must have all its vertices connected. The algorithm builds the tree one vertex at a time, from an arbitrary starting vertex, at each step adding the most expensive possible connection from the tree to another vertex, which yields the maximum spanning tree.

Follow the steps below to solve the problem:

Initialize a visited array of boolean datatype, to keep track of vertices visited so far. Initialize all the values with false.
Initialize an array weights[], representing the maximum weight to connect that vertex. Initialize all the values with some minimum value.
Initialize an array parent[], to keep track of the maximum spanning tree.
Assign some large value as the weight of the first vertex and -1 as its parent, so that it is picked first and has no parent.
From all the unvisited vertices, pick a vertex v having a maximum weight and mark it as visited.
Update the weights of all the unvisited adjacent vertices of v. To update the weights, iterate through all the unvisited neighbors of v. For every adjacent vertex x, if the weight of the edge between v and x is greater than the current value of weights[x], update weights[x] with that weight.

Below is the implementation of the above algorithm:

C++ Java Python3 C# Javascript

// C++ program for the above algorithm

#include <bits/stdc++.h>
using namespace std;
#define V 5

// Function to find index of max-weight
// vertex from set of unvisited vertices
int findMaxVertex(bool visited[], int weights[])
{
    // Stores the index of max-weight vertex
    // from set of unvisited vertices
    int index = -1;

    // Stores the maximum weight from
    // the set of unvisited vertices
    int maxW = INT_MIN;

    // Iterate over all possible
    // nodes of a graph
    for (int i = 0; i < V; i++) {

        // If the current node is unvisited
        // and weight of current vertex is
        // greater than maxW
        if (visited[i] == false
For every adjacent vertex x, if the weight of the edge between v and x is greater than the previous value of v, update the value of v with that weight. Below is the implementation of the above algorithm: C++ Java Python3 C# Javascript // C++ program for the above algorithm #include <bits/stdc++.h>using namespace std;#define V 5 // Function to find index of max-weight// vertex from set of unvisited verticesint findMaxVertex(bool visited[], int weights[]){ // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = INT_MIN; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index;} // Utility function to find the maximum// spanning tree of graphvoid printMaximumSpanningTree(int graph[V][V], int parent[]){ // Stores total weight of // maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } cout << "Weight of the maximum Spanning-tree " << MST << '\n' << '\n'; cout << "Edges \tWeight\n"; // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { cout << parent[i] << " - " << i << " \t" << graph[i][parent[i]] << " \n"; }} // Function to find the maximum spanning treevoid maximumSpanningTree(int graph[V][V]){ // visited[i]:Check if vertex i // is visited or not bool visited[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int weights[V]; // parent[i]: Stores the parent node // of vertex i int parent[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; i++) { visited[i] = false; weights[i] = INT_MIN; } 
// Include 1st vertex in // maximum spanning tree weights[0] = INT_MAX; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent);} // Driver Codeint main(){ // Given graph int graph[V][V] = { { 0, 2, 0, 6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); return 0;} // Java program for the above algorithmimport java.io.*;class GFG{ public static int V = 5; // Function to find index of max-weight // vertex from set of unvisited vertices static int findMaxVertex(boolean visited[], int weights[]) { // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = Integer.MIN_VALUE; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index; } // Utility function to find the maximum // spanning tree of graph static void printMaximumSpanningTree(int graph[][], int parent[]) { // Stores total weight of // 
maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } System.out.println("Weight of the maximum Spanning-tree " + MST); System.out.println(); System.out.println("Edges \tWeight"); // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { System.out.println(parent[i] + " - " + i + " \t" + graph[i][parent[i]]); } } // Function to find the maximum spanning tree static void maximumSpanningTree(int[][] graph) { // visited[i]:Check if vertex i // is visited or not boolean[] visited = new boolean[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int[] weights = new int[V]; // parent[i]: Stores the parent node // of vertex i int[] parent = new int[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; i++) { visited[i] = false; weights[i] = Integer.MIN_VALUE; } // Include 1st vertex in // maximum spanning tree weights[0] = Integer.MAX_VALUE; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent); } // Driver Code public static void main(String[] args) { // Given graph int[][] graph = { { 0, 2, 0, 
6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); }} // This code is contributed by Dharanendra L V # Python program for the above algorithmimport sysV = 5; # Function to find index of max-weight# vertex from set of unvisited verticesdef findMaxVertex(visited, weights): # Stores the index of max-weight vertex # from set of unvisited vertices index = -1; # Stores the maximum weight from # the set of unvisited vertices maxW = -sys.maxsize; # Iterate over all possible # Nodes of a graph for i in range(V): # If the current Node is unvisited # and weight of current vertex is # greater than maxW if (visited[i] == False and weights[i] > maxW): # Update maxW maxW = weights[i]; # Update index index = i; return index; # Utility function to find the maximum# spanning tree of graphdef printMaximumSpanningTree(graph, parent): # Stores total weight of # maximum spanning tree # of a graph MST = 0; # Iterate over all possible Nodes # of a graph for i in range(1, V): # Update MST MST += graph[i][parent[i]]; print("Weight of the maximum Spanning-tree ", MST); print(); print("Edges \tWeight"); # Print Edges and weight of # maximum spanning tree of a graph for i in range(1, V): print(parent[i] , " - " , i , " \t" , graph[i][parent[i]]); # Function to find the maximum spanning treedef maximumSpanningTree(graph): # visited[i]:Check if vertex i # is visited or not visited = [True]*V; # weights[i]: Stores maximum weight of # graph to connect an edge with i weights = [0]*V; # parent[i]: Stores the parent Node # of vertex i parent = [0]*V; # Initialize weights as -INFINITE, # and visited of a Node as False for i in range(V): visited[i] = False; weights[i] = -sys.maxsize; # Include 1st vertex in # maximum spanning tree weights[0] = sys.maxsize; parent[0] = -1; # Search for other (V-1) vertices # and build a tree for i in range(V - 1): # Stores index of max-weight vertex # from a set of unvisited vertex 
maxVertexIndex = findMaxVertex(visited, weights); # Mark that vertex as visited visited[maxVertexIndex] = True; # Update adjacent vertices of # the current visited vertex for j in range(V): # If there is an edge between j # and current visited vertex and # also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 and visited[j] == False): # If graph[v][x] is # greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]): # Update weights[j] weights[j] = graph[j][maxVertexIndex]; # Update parent[j] parent[j] = maxVertexIndex; # Print maximum spanning tree printMaximumSpanningTree(graph, parent); # Driver Codeif __name__ == '__main__': # Given graph graph = [[0, 2, 0, 6, 0], [2, 0, 3, 8, 5], [0, 3, 0, 0, 7], [6, 8, 0, 0, 9], [0, 5, 7, 9, 0]]; # Function call maximumSpanningTree(graph); # This code is contributed by 29AjayKumar // C# program for the above algorithmusing System;class GFG{ public static int V = 5; // Function to find index of max-weight // vertex from set of unvisited vertices static int findMaxVertex(bool[] visited, int[] weights) { // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = int.MinValue; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index; } // Utility function to find the maximum // spanning tree of graph static void printMaximumSpanningTree(int[, ] graph, int[] parent) { // Stores total weight of // maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i, parent[i]]; } Console.WriteLine( "Weight of the maximum Spanning-tree " + MST); Console.WriteLine(); Console.WriteLine("Edges 
\tWeight"); // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { Console.WriteLine(parent[i] + " - " + i + " \t" + graph[i, parent[i]]); } } // Function to find the maximum spanning tree static void maximumSpanningTree(int[, ] graph) { // visited[i]:Check if vertex i // is visited or not bool[] visited = new bool[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int[] weights = new int[V]; // parent[i]: Stores the parent node // of vertex i int[] parent = new int[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; i++) { visited[i] = false; weights[i] = int.MinValue; } // Include 1st vertex in // maximum spanning tree weights[0] = int.MaxValue; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j, maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j, maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j, maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent); } // Driver Code static public void Main() { // Given graph int[, ] graph = { { 0, 2, 0, 6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); }} // This code is contributed by Dharanendra L V <script> // Javascript program for the above algorithm var V = 5; // Function to find index of max-weight// vertex from set of 
unvisited verticesfunction findMaxVertex(visited, weights){ // Stores the index of max-weight vertex // from set of unvisited vertices var index = -1; // Stores the maximum weight from // the set of unvisited vertices var maxW = -1000000000; // Iterate over all possible // nodes of a graph for (var i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index;} // Utility function to find the maximum// spanning tree of graphfunction printMaximumSpanningTree(graph, parent){ // Stores total weight of // maximum spanning tree // of a graph var MST = 0; // Iterate over all possible nodes // of a graph for (var i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } document.write( "Weight of the maximum Spanning-tree " + MST + '<br>' + '<br>'); document.write( "Edges \tWeight<br>"); // Print the Edges and weight of // maximum spanning tree of a graph for (var i = 1; i < V; i++) { document.write( parent[i] + " - " + i + " " + graph[i][parent[i]] + " <br>"); }} // Function to find the maximum spanning treefunction maximumSpanningTree(graph){ // visited[i]:Check if vertex i // is visited or not var visited = Array(V).fill(false); // weights[i]: Stores maximum weight of // graph to connect an edge with i var weights = Array(V).fill(-1000000000); // parent[i]: Stores the parent node // of vertex i var parent = Array(V).fill(0); // Include 1st vertex in // maximum spanning tree weights[0] = 1000000000; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (var i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex var maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (var j = 0; j < V; j++) { // If there 
is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent);} // Driver Code// Given graphvar graph = [ [ 0, 2, 0, 6, 0 ], [ 2, 0, 3, 8, 5 ], [ 0, 3, 0, 0, 7 ], [ 6, 8, 0, 0, 9 ], [ 0, 5, 7, 9, 0 ] ];// Function callmaximumSpanningTree(graph); // This code is contributed by rutvik_56.</script> Weight of the maximum Spanning-tree 30 Edges Weight 3 - 1 8 4 - 2 7 0 - 3 6 3 - 4 9 Time Complexity: O(V2) where V is the number of nodes in the graph.Auxiliary Space: O(V2) dharanendralv23 29AjayKumar rutvik_56 khushboogoyal499 ankita_saini Graph Minimum Spanning Tree Prim's Algorithm.MST Technical Scripter 2020 Graph Mathematical Technical Scripter Mathematical Graph Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Tree, Back, Edge and Cross Edges in DFS of Graph Vertex Cover Problem | Set 1 (Introduction and Approximate Algorithm) Comparison between Adjacency List and Adjacency Matrix representation of Graph Eulerian path and circuit for undirected graph Find if there is a path between two vertices in a directed graph Program for Fibonacci numbers Write a program to print all permutations of a given string C++ Data Types Set in C++ Standard Template Library (STL) Program to find sum of elements in a given array
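The implementation above scans every vertex on each iteration, which gives the O(V²) bound. The same greedy idea can instead be driven by a binary heap; the sketch below is not part of the original article but a Python adaptation that simulates a max-heap by negating weights. With an adjacency list this runs in O(E log V); the matrix scan is kept here only to match the article's input format.

```python
import heapq

def maximum_spanning_tree(graph):
    """Prim's algorithm for a MAXIMUM spanning tree, heap-driven.

    graph: adjacency matrix (list of lists); graph[u][v] == 0 means no edge.
    Returns (total_weight, edges) where edges are (parent, child, weight).
    """
    V = len(graph)
    visited = [False] * V
    # Min-heap over negated weights acts as a max-heap.
    # Entries: (-edge_weight, vertex, parent); start from vertex 0.
    heap = [(0, 0, -1)]
    total, edges = 0, []
    while heap:
        neg_w, u, parent = heapq.heappop(heap)
        if visited[u]:           # stale entry: u already in the tree
            continue
        visited[u] = True
        if parent != -1:         # record the chosen maximum-weight edge
            total += -neg_w
            edges.append((parent, u, -neg_w))
        for v in range(V):       # push candidate edges to unvisited vertices
            if graph[u][v] != 0 and not visited[v]:
                heapq.heappush(heap, (-graph[u][v], v, u))
    return total, edges

# Same example graph as in the article
graph = [[0, 2, 0, 6, 0],
         [2, 0, 3, 8, 5],
         [0, 3, 0, 0, 7],
         [6, 8, 0, 0, 9],
         [0, 5, 7, 9, 0]]
total, edges = maximum_spanning_tree(graph)
print(total)  # 30
```

The recovered edges match the expected output (3–1 with weight 8, 4–2 with 7, 0–3 with 6, 3–4 with 9).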
[ { "code": null, "e": 24701, "s": 24673, "text": "\n10 Nov, 2021" }, { "code": null, "e": 24818, "s": 24701, "text": "Given undirected weighted graph G, the task is to find the Maximum Spanning Tree of the Graph using Prim’s Algorithm" }, { "code": null, "e": 24963, "s": 24818, "text": "Prims algorithm is a Greedy algorithm which can be used to find the Minimum Spanning Tree (MST) as well as the Maximum Spanning Tree of a Graph." }, { "code": null, "e": 24973, "s": 24963, "text": "Examples:" }, { "code": null, "e": 25281, "s": 24973, "text": "Input: graph[V][V] = {{0, 2, 0, 6, 0}, {2, 0, 3, 8, 5}, {0, 3, 0, 0, 7}, {6, 8, 0, 0, 9}, {0, 5, 7, 9, 0}}Output:The total weight of the Maximum Spanning tree is 30.Edges Weight3 – 1 84 – 2 70 – 3 63 – 4 9Explanation:Choosing other edges won’t result in maximum spanning tree." }, { "code": null, "e": 25304, "s": 25281, "text": "Maximum Spanning Tree:" }, { "code": null, "e": 25559, "s": 25304, "text": "Given an undirected weighted graph, a maximum spanning tree is a spanning tree having maximum weight. It can be easily computed using Prim’s algorithm. The goal here is to find the spanning tree with the maximum weight out of all possible spanning trees." }, { "code": null, "e": 25577, "s": 25559, "text": "Prim’s Algorithm:" }, { "code": null, "e": 25933, "s": 25577, "text": "Prim’s algorithm is a greedy algorithm, which works on the idea that a spanning tree must have all its vertices connected. The algorithm works by building the tree one vertex at a time, from an arbitrary starting vertex, and adding the most expensive possible connection from the tree to another vertex, which will give us the Maximum Spanning Tree (MST)." }, { "code": null, "e": 25978, "s": 25933, "text": "Follow the steps below to solve the problem:" }, { "code": null, "e": 26106, "s": 25978, "text": "Initialize a visited array of boolean datatype, to keep track of vertices visited so far. Initialize all the values with false." 
}, { "code": null, "e": 26244, "s": 26106, "text": "Initialize an array weights[], representing the maximum weight to connect that vertex. Initialize all the values with some minimum value." }, { "code": null, "e": 26318, "s": 26244, "text": "Initialize an array parent[], to keep track of the maximum spanning tree." }, { "code": null, "e": 26441, "s": 26318, "text": "Assign some large value, as the weight of the first vertex and parent as -1, so that it is picked first and has no parent." }, { "code": null, "e": 26538, "s": 26441, "text": "From all the unvisited vertices, pick a vertex v having a maximum weight and mark it as visited." }, { "code": null, "e": 26827, "s": 26538, "text": "Update the weights of all the unvisited adjacent vertices of v. To update the weights, iterate through all the unvisited neighbors of v. For every adjacent vertex x, if the weight of the edge between v and x is greater than the previous value of v, update the value of v with that weight." }, { "code": null, "e": 26879, "s": 26827, "text": "Below is the implementation of the above algorithm:" }, { "code": null, "e": 26883, "s": 26879, "text": "C++" }, { "code": null, "e": 26888, "s": 26883, "text": "Java" }, { "code": null, "e": 26896, "s": 26888, "text": "Python3" }, { "code": null, "e": 26899, "s": 26896, "text": "C#" }, { "code": null, "e": 26910, "s": 26899, "text": "Javascript" }, { "code": "// C++ program for the above algorithm #include <bits/stdc++.h>using namespace std;#define V 5 // Function to find index of max-weight// vertex from set of unvisited verticesint findMaxVertex(bool visited[], int weights[]){ // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = INT_MIN; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false 
&& weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index;} // Utility function to find the maximum// spanning tree of graphvoid printMaximumSpanningTree(int graph[V][V], int parent[]){ // Stores total weight of // maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } cout << \"Weight of the maximum Spanning-tree \" << MST << '\\n' << '\\n'; cout << \"Edges \\tWeight\\n\"; // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { cout << parent[i] << \" - \" << i << \" \\t\" << graph[i][parent[i]] << \" \\n\"; }} // Function to find the maximum spanning treevoid maximumSpanningTree(int graph[V][V]){ // visited[i]:Check if vertex i // is visited or not bool visited[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int weights[V]; // parent[i]: Stores the parent node // of vertex i int parent[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; i++) { visited[i] = false; weights[i] = INT_MIN; } // Include 1st vertex in // maximum spanning tree weights[0] = INT_MAX; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } 
} } // Print maximum spanning tree printMaximumSpanningTree(graph, parent);} // Driver Codeint main(){ // Given graph int graph[V][V] = { { 0, 2, 0, 6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); return 0;}", "e": 30523, "s": 26910, "text": null }, { "code": "// Java program for the above algorithmimport java.io.*;class GFG{ public static int V = 5; // Function to find index of max-weight // vertex from set of unvisited vertices static int findMaxVertex(boolean visited[], int weights[]) { // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = Integer.MIN_VALUE; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index; } // Utility function to find the maximum // spanning tree of graph static void printMaximumSpanningTree(int graph[][], int parent[]) { // Stores total weight of // maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } System.out.println(\"Weight of the maximum Spanning-tree \" + MST); System.out.println(); System.out.println(\"Edges \\tWeight\"); // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { System.out.println(parent[i] + \" - \" + i + \" \\t\" + graph[i][parent[i]]); } } // Function to find the maximum spanning tree static void maximumSpanningTree(int[][] graph) { // visited[i]:Check if vertex i // is visited or not boolean[] visited = new boolean[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int[] weights = new int[V]; 
// parent[i]: Stores the parent node // of vertex i int[] parent = new int[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; i++) { visited[i] = false; weights[i] = Integer.MIN_VALUE; } // Include 1st vertex in // maximum spanning tree weights[0] = Integer.MAX_VALUE; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent); } // Driver Code public static void main(String[] args) { // Given graph int[][] graph = { { 0, 2, 0, 6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); }} // This code is contributed by Dharanendra L V", "e": 34283, "s": 30523, "text": null }, { "code": "# Python program for the above algorithmimport sysV = 5; # Function to find index of max-weight# vertex from set of unvisited verticesdef findMaxVertex(visited, weights): # Stores the index of max-weight vertex # from set of unvisited vertices index = -1; # Stores the maximum weight from # the set of unvisited vertices maxW = -sys.maxsize; # Iterate over all possible # Nodes of a graph for i in range(V): # If the current Node is unvisited # and weight of current vertex is # greater than maxW if (visited[i] == False and 
weights[i] > maxW): # Update maxW maxW = weights[i]; # Update index index = i; return index; # Utility function to find the maximum# spanning tree of graphdef printMaximumSpanningTree(graph, parent): # Stores total weight of # maximum spanning tree # of a graph MST = 0; # Iterate over all possible Nodes # of a graph for i in range(1, V): # Update MST MST += graph[i][parent[i]]; print(\"Weight of the maximum Spanning-tree \", MST); print(); print(\"Edges \\tWeight\"); # Print Edges and weight of # maximum spanning tree of a graph for i in range(1, V): print(parent[i] , \" - \" , i , \" \\t\" , graph[i][parent[i]]); # Function to find the maximum spanning treedef maximumSpanningTree(graph): # visited[i]:Check if vertex i # is visited or not visited = [True]*V; # weights[i]: Stores maximum weight of # graph to connect an edge with i weights = [0]*V; # parent[i]: Stores the parent Node # of vertex i parent = [0]*V; # Initialize weights as -INFINITE, # and visited of a Node as False for i in range(V): visited[i] = False; weights[i] = -sys.maxsize; # Include 1st vertex in # maximum spanning tree weights[0] = sys.maxsize; parent[0] = -1; # Search for other (V-1) vertices # and build a tree for i in range(V - 1): # Stores index of max-weight vertex # from a set of unvisited vertex maxVertexIndex = findMaxVertex(visited, weights); # Mark that vertex as visited visited[maxVertexIndex] = True; # Update adjacent vertices of # the current visited vertex for j in range(V): # If there is an edge between j # and current visited vertex and # also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 and visited[j] == False): # If graph[v][x] is # greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]): # Update weights[j] weights[j] = graph[j][maxVertexIndex]; # Update parent[j] parent[j] = maxVertexIndex; # Print maximum spanning tree printMaximumSpanningTree(graph, parent); # Driver Codeif __name__ == '__main__': # Given graph graph = [[0, 2, 0, 6, 0], [2, 0, 3, 8, 
5], [0, 3, 0, 0, 7], [6, 8, 0, 0, 9], [0, 5, 7, 9, 0]]; # Function call maximumSpanningTree(graph); # This code is contributed by 29AjayKumar", "e": 37543, "s": 34283, "text": null }, { "code": "// C# program for the above algorithmusing System;class GFG{ public static int V = 5; // Function to find index of max-weight // vertex from set of unvisited vertices static int findMaxVertex(bool[] visited, int[] weights) { // Stores the index of max-weight vertex // from set of unvisited vertices int index = -1; // Stores the maximum weight from // the set of unvisited vertices int maxW = int.MinValue; // Iterate over all possible // nodes of a graph for (int i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index; } // Utility function to find the maximum // spanning tree of graph static void printMaximumSpanningTree(int[, ] graph, int[] parent) { // Stores total weight of // maximum spanning tree // of a graph int MST = 0; // Iterate over all possible nodes // of a graph for (int i = 1; i < V; i++) { // Update MST MST += graph[i, parent[i]]; } Console.WriteLine( \"Weight of the maximum Spanning-tree \" + MST); Console.WriteLine(); Console.WriteLine(\"Edges \\tWeight\"); // Print the Edges and weight of // maximum spanning tree of a graph for (int i = 1; i < V; i++) { Console.WriteLine(parent[i] + \" - \" + i + \" \\t\" + graph[i, parent[i]]); } } // Function to find the maximum spanning tree static void maximumSpanningTree(int[, ] graph) { // visited[i]:Check if vertex i // is visited or not bool[] visited = new bool[V]; // weights[i]: Stores maximum weight of // graph to connect an edge with i int[] weights = new int[V]; // parent[i]: Stores the parent node // of vertex i int[] parent = new int[V]; // Initialize weights as -INFINITE, // and visited of a node as false for (int i = 0; i < V; 
i++) { visited[i] = false; weights[i] = int.MinValue; } // Include 1st vertex in // maximum spanning tree weights[0] = int.MaxValue; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (int i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex int maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (int j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j, maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j, maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j, maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent); } // Driver Code static public void Main() { // Given graph int[, ] graph = { { 0, 2, 0, 6, 0 }, { 2, 0, 3, 8, 5 }, { 0, 3, 0, 0, 7 }, { 6, 8, 0, 0, 9 }, { 0, 5, 7, 9, 0 } }; // Function call maximumSpanningTree(graph); }} // This code is contributed by Dharanendra L V", "e": 41237, "s": 37543, "text": null }, { "code": "<script> // Javascript program for the above algorithm var V = 5; // Function to find index of max-weight// vertex from set of unvisited verticesfunction findMaxVertex(visited, weights){ // Stores the index of max-weight vertex // from set of unvisited vertices var index = -1; // Stores the maximum weight from // the set of unvisited vertices var maxW = -1000000000; // Iterate over all possible // nodes of a graph for (var i = 0; i < V; i++) { // If the current node is unvisited // and weight of current vertex is // greater than maxW if (visited[i] == false && weights[i] > maxW) { // Update maxW maxW = weights[i]; // Update index index = i; } } return index;} // Utility function to find the maximum// spanning 
tree of graphfunction printMaximumSpanningTree(graph, parent){ // Stores total weight of // maximum spanning tree // of a graph var MST = 0; // Iterate over all possible nodes // of a graph for (var i = 1; i < V; i++) { // Update MST MST += graph[i][parent[i]]; } document.write( \"Weight of the maximum Spanning-tree \" + MST + '<br>' + '<br>'); document.write( \"Edges \\tWeight<br>\"); // Print the Edges and weight of // maximum spanning tree of a graph for (var i = 1; i < V; i++) { document.write( parent[i] + \" - \" + i + \" \" + graph[i][parent[i]] + \" <br>\"); }} // Function to find the maximum spanning treefunction maximumSpanningTree(graph){ // visited[i]:Check if vertex i // is visited or not var visited = Array(V).fill(false); // weights[i]: Stores maximum weight of // graph to connect an edge with i var weights = Array(V).fill(-1000000000); // parent[i]: Stores the parent node // of vertex i var parent = Array(V).fill(0); // Include 1st vertex in // maximum spanning tree weights[0] = 1000000000; parent[0] = -1; // Search for other (V-1) vertices // and build a tree for (var i = 0; i < V - 1; i++) { // Stores index of max-weight vertex // from a set of unvisited vertex var maxVertexIndex = findMaxVertex(visited, weights); // Mark that vertex as visited visited[maxVertexIndex] = true; // Update adjacent vertices of // the current visited vertex for (var j = 0; j < V; j++) { // If there is an edge between j // and current visited vertex and // also j is unvisited vertex if (graph[j][maxVertexIndex] != 0 && visited[j] == false) { // If graph[v][x] is // greater than weight[v] if (graph[j][maxVertexIndex] > weights[j]) { // Update weights[j] weights[j] = graph[j][maxVertexIndex]; // Update parent[j] parent[j] = maxVertexIndex; } } } } // Print maximum spanning tree printMaximumSpanningTree(graph, parent);} // Driver Code// Given graphvar graph = [ [ 0, 2, 0, 6, 0 ], [ 2, 0, 3, 8, 5 ], [ 0, 3, 0, 0, 7 ], [ 6, 8, 0, 0, 9 ], [ 0, 5, 7, 9, 0 ] ];// Function 
callmaximumSpanningTree(graph); // This code is contributed by rutvik_56.</script>", "e": 44679, "s": 41237, "text": null }, { "code": null, "e": 44790, "s": 44682, "text": "Weight of the maximum Spanning-tree 30\n\nEdges Weight\n3 - 1 8 \n4 - 2 7 \n0 - 3 6 \n3 - 4 9" }, { "code": null, "e": 44884, "s": 44794, "text": "Time Complexity: O(V2) where V is the number of nodes in the graph.Auxiliary Space: O(V2)" }, { "code": null, "e": 44902, "s": 44886, "text": "dharanendralv23" }, { "code": null, "e": 44914, "s": 44902, "text": "29AjayKumar" }, { "code": null, "e": 44924, "s": 44914, "text": "rutvik_56" }, { "code": null, "e": 44941, "s": 44924, "text": "khushboogoyal499" }, { "code": null, "e": 44954, "s": 44941, "text": "ankita_saini" }, { "code": null, "e": 44982, "s": 44954, "text": "Graph Minimum Spanning Tree" }, { "code": null, "e": 45003, "s": 44982, "text": "Prim's Algorithm.MST" }, { "code": null, "e": 45027, "s": 45003, "text": "Technical Scripter 2020" }, { "code": null, "e": 45033, "s": 45027, "text": "Graph" }, { "code": null, "e": 45046, "s": 45033, "text": "Mathematical" }, { "code": null, "e": 45065, "s": 45046, "text": "Technical Scripter" }, { "code": null, "e": 45078, "s": 45065, "text": "Mathematical" }, { "code": null, "e": 45084, "s": 45078, "text": "Graph" }, { "code": null, "e": 45182, "s": 45084, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 45191, "s": 45182, "text": "Comments" }, { "code": null, "e": 45204, "s": 45191, "text": "Old Comments" }, { "code": null, "e": 45253, "s": 45204, "text": "Tree, Back, Edge and Cross Edges in DFS of Graph" }, { "code": null, "e": 45323, "s": 45253, "text": "Vertex Cover Problem | Set 1 (Introduction and Approximate Algorithm)" }, { "code": null, "e": 45402, "s": 45323, "text": "Comparison between Adjacency List and Adjacency Matrix representation of Graph" }, { "code": null, "e": 45449, "s": 45402, "text": "Eulerian path and circuit for undirected graph" }, { "code": null, "e": 45514, "s": 45449, "text": "Find if there is a path between two vertices in a directed graph" }, { "code": null, "e": 45544, "s": 45514, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 45604, "s": 45544, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 45619, "s": 45604, "text": "C++ Data Types" }, { "code": null, "e": 45662, "s": 45619, "text": "Set in C++ Standard Template Library (STL)" } ]
Basics of GIFs with Python’s Matplotlib | by Thiago Carvalho | Towards Data Science
There are plenty of ways to build animations in Matplotlib. They even have an Animation class with functions and methods to support this task.

But I often find those methods over-complicated, and many times I want to get something together without too much complexity.

In this article, I'll go through the basics of creating charts, saving them as images, and using Imageio to create a GIF.

Without further ado, let's import our libraries and start building our GIF.

import os
import numpy as np
import matplotlib.pyplot as plt
import imageio

We can use a line chart to start with something simple since it only requires a list with the y values.

y = np.random.randint(30, 40, size=(40))
plt.plot(y)
plt.ylim(20,50)

Cool, we used NumPy to create a list of 40 random integers within the range 30 to 40.

The idea is to display the values in our line chart one by one. We'll use a loop, slicing the array and incrementing the number of selected values at each iteration.

Let's try without the loop first, just plotting, saving and displaying the image.

## ONE ##
plt.plot(y[:-3])
plt.ylim(20,50)
plt.savefig('1.png')
plt.show()

## TWO ##
plt.plot(y[:-2])
plt.ylim(20,50)
plt.savefig('2.png')
plt.show()

## THREE ##
plt.plot(y[:-1])
plt.ylim(20,50)
plt.savefig('3.png')
plt.show()

## FOUR ##
plt.plot(y)
plt.ylim(20,50)
plt.savefig('4.png')
plt.show()

Cool. Now that we have four frames saved to our working directory, Imageio can help us build our first GIF.

# Build GIF
with imageio.get_writer('mygif.gif', mode='I') as writer:
    for filename in ['1.png', '2.png', '3.png', '4.png']:
        image = imageio.imread(filename)
        writer.append_data(image)

It looks weird, but there's our GIF.

Now let's try with a loop. We can also systematically create the filenames and add a pause at the end so our animation will display the complete chart for a while before repeating.
filenames = []
for i in range(1, len(y) + 1):
    # plot the line chart up to the current point
    plt.plot(y[:i])
    plt.ylim(20,50)

    # create file name and append it to a list
    filename = f'{i}.png'
    filenames.append(filename)

    # repeat the last frame so the complete chart stays on screen for a while
    if i == len(y):
        for _ in range(15):
            filenames.append(filename)

    # save frame
    plt.savefig(filename)
    plt.close()

# build gif
with imageio.get_writer('mygif.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(filename)
        writer.append_data(image)

# Remove files
for filename in set(filenames):
    os.remove(filename)

Great! Now that we know the very basics, let's try this with a bar chart.

We can't just add one value at a time for bar charts, or else our gif would take forever. Here we'll have to replace all the values of y to make all bars move simultaneously.

Let's apply the same code we used before with those small adjustments. We'll create the values for the x-axis, a list of lists for the y-axis, and use a bar chart instead of a line chart.

x = [1, 2, 3, 4, 5]
coordinates_lists = [[0, 0, 0, 0, 0],
                     [10, 30, 60, 30, 10],
                     [70, 40, 20, 40, 70],
                     [10, 20, 30, 40, 50],
                     [50, 40, 30, 20, 10],
                     [75, 0, 75, 0, 75],
                     [0, 0, 0, 0, 0]]

filenames = []
for index, y in enumerate(coordinates_lists):
    # plot charts
    plt.bar(x, y)
    plt.ylim(0,80)

    # create file name and append it to a list
    filename = f'{index}.png'
    filenames.append(filename)

    # repeat last frame
    if (index == len(coordinates_lists)-1):
        for i in range(15):
            filenames.append(filename)

    # save frame
    plt.savefig(filename)
    plt.close()

# build gif
with imageio.get_writer('mygif.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(filename)
        writer.append_data(image)

# Remove files
for filename in set(filenames):
    os.remove(filename)

Ok, that didn't go well.

Pausing at the end of the GIF is not helping so much. We need to pause for each plot and then proceed to the next.

Also, the bars are just jumping to their next position. We could slow down this movement to make the transition smoother.

There are plenty of ways to make the movement smoother.
The easiest is to divide the distance between the current and next locations by the number of frames our transition will have.

# frames between transitions
n_frames = 10

x = [1, 2, 3, 4, 5]
coordinates_lists = [[0, 0, 0, 0, 0],
                     [10, 30, 60, 30, 10],
                     [70, 40, 20, 40, 70],
                     [10, 20, 30, 40, 50],
                     [50, 40, 30, 20, 10],
                     [75, 0, 75, 0, 75],
                     [0, 0, 0, 0, 0]]

print('Creating charts\n')
filenames = []
for index in np.arange(0, len(coordinates_lists)-1):
    # get current and next y coordinates
    y = coordinates_lists[index]
    y1 = coordinates_lists[index+1]

    # calculate the distance to the next position
    y_path = np.array(y1) - np.array(y)

    for i in np.arange(0, n_frames + 1):
        # divide the distance by the number of frames
        # and multiply it by the current frame number
        y_temp = (y + (y_path / n_frames) * i)

        # plot
        plt.bar(x, y_temp)
        plt.ylim(0,80)

        # build file name and append to list of file names
        filename = f'images/frame_{index}_{i}.png'
        filenames.append(filename)

        # last frame of each viz stays longer
        if (i == n_frames):
            for i in range(5):
                filenames.append(filename)

        # save img
        plt.savefig(filename)
        plt.close()
print('Charts saved\n')

# Build GIF
print('Creating gif\n')
with imageio.get_writer('mybars.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(filename)
        writer.append_data(image)
print('Gif saved\n')

print('Removing Images\n')
# Remove files
for filename in set(filenames):
    os.remove(filename)
print('DONE')

Nice! We can definitely improve on the aesthetics of our chart. Let's try adding some detail.
n_frames = 10
bg_color = '#95A4AD'
bar_color = '#283F4E'
gif_name = 'bars'

x = [1, 2, 3, 4, 5]
coordinates_lists = [[0, 0, 0, 0, 0],
                     [10, 30, 60, 30, 10],
                     [70, 40, 20, 40, 70],
                     [10, 20, 30, 40, 50],
                     [50, 40, 30, 20, 10],
                     [75, 0, 75, 0, 75],
                     [0, 0, 0, 0, 0]]

print('Creating charts\n')
filenames = []
for index in np.arange(0, len(coordinates_lists)-1):
    y = coordinates_lists[index]
    y1 = coordinates_lists[index+1]
    y_path = np.array(y1) - np.array(y)
    for i in np.arange(0, n_frames + 1):
        y_temp = (y + (y_path / n_frames) * i)

        # plot
        fig, ax = plt.subplots(figsize=(8, 4))
        ax.set_facecolor(bg_color)
        plt.bar(x, y_temp, width=0.4, color=bar_color)
        plt.ylim(0,80)

        # remove spines
        ax.spines['right'].set_visible(False)
        ax.spines['top'].set_visible(False)

        # grid
        ax.set_axisbelow(True)
        ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)

        # build file name and append to list of file names
        filename = f'images/frame_{index}_{i}.png'
        filenames.append(filename)

        # last frame of each viz stays longer
        if (i == n_frames):
            for i in range(5):
                filenames.append(filename)

        # save img
        plt.savefig(filename, dpi=96, facecolor=bg_color)
        plt.close()
print('Charts saved\n')

# Build GIF
print('Creating gif\n')
with imageio.get_writer(f'{gif_name}.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(filename)
        writer.append_data(image)
print('Gif saved\n')

print('Removing Images\n')
# Remove files
for filename in set(filenames):
    os.remove(filename)
print('DONE')

Awesome! There's definitely lots of room for improvement. You can add a title, change the way we transition by using some interpolation, and even make the bars move on the x-axis.

To work with scatter plots, we'll need to consider both the x and y-axis. We won't necessarily have the same number of points to display on every frame, so we need to pad the shorter set of points before calculating the transitions.
coordinates_lists = [[[0],[0]],
                     [[100,200,300],[100,200,300]],
                     [[400,500,600],[400,500,600]],
                     [[400,500,600,400,500,600],[400,500,600,600,500,400]],
                     [[500],[500]],
                     [[0],[0]]]

gif_name = 'movie'
n_frames = 10
bg_color = '#95A4AD'
marker_color = '#283F4E'
marker_size = 25

print('building plots\n')
filenames = []
for index in np.arange(0, len(coordinates_lists)-1):
    # get current and next coordinates
    x = coordinates_lists[index][0]
    y = coordinates_lists[index][1]
    x1 = coordinates_lists[index+1][0]
    y1 = coordinates_lists[index+1][1]

    # Check if sizes match
    while len(x) < len(x1):
        diff = len(x1) - len(x)
        x = x + x[:diff]
        y = y + y[:diff]
    while len(x1) < len(x):
        diff = len(x) - len(x1)
        x1 = x1 + x1[:diff]
        y1 = y1 + y1[:diff]

    # calculate paths
    x_path = np.array(x1) - np.array(x)
    y_path = np.array(y1) - np.array(y)

    for i in np.arange(0, n_frames + 1):
        # calculate current position
        x_temp = (x + (x_path / n_frames) * i)
        y_temp = (y + (y_path / n_frames) * i)

        # plot
        fig, ax = plt.subplots(figsize=(6, 6), subplot_kw=dict(aspect="equal"))
        ax.set_facecolor(bg_color)
        plt.scatter(x_temp, y_temp, c=marker_color, s=marker_size)
        plt.xlim(0,1000)
        plt.ylim(0,1000)

        # remove spines
        ax.spines['right'].set_visible(False)
        ax.spines['top'].set_visible(False)

        # grid
        ax.set_axisbelow(True)
        ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)
        ax.xaxis.grid(color='gray', linestyle='dashed', alpha=0.7)

        # build file name and append to list of file names
        filename = f'images/frame_{index}_{i}.png'
        filenames.append(filename)

        # last frame of each viz stays longer
        if (i == n_frames):
            for i in range(5):
                filenames.append(filename)

        # save img
        plt.savefig(filename, dpi=96, facecolor=bg_color)
        plt.close()

# Build GIF
print('creating gif\n')
with imageio.get_writer(f'{gif_name}.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(filename)
We explored how to create and save the charts with Matplotlib, how to use Imageio to build the GIFs, and then we checked some different chart types and some of the challenges we may find while handling each of them. I’ve used those same concepts to write letters in a scatter plot. You can check more about that project here. Thanks for reading my article!
[ { "code": null, "e": 314, "s": 171, "text": "There are plenty of ways to build animations in Matplotlib. They even have an Animation class with functions and methods to support this task." }, { "code": null, "e": 440, "s": 314, "text": "But I often find those methods over-complicated, and many times I want to get something together without too much complexity." }, { "code": null, "e": 562, "s": 440, "text": "In this article, I’ll go through the basics of creating charts, saving them as images, and using Imageio to create a GIF." }, { "code": null, "e": 638, "s": 562, "text": "Without further ado, let’s import our libraries and start building our GIF." }, { "code": null, "e": 711, "s": 638, "text": "import osimport numpy as npimport matplotlib.pyplot as pltimport imageio" }, { "code": null, "e": 815, "s": 711, "text": "We can use a line chart to start with something simple since it only requires a list with the y values." }, { "code": null, "e": 882, "s": 815, "text": "y = np.random.randint(30, 40, size=(40))plt.plot(y)plt.ylim(20,50)" }, { "code": null, "e": 987, "s": 882, "text": "Cool, we used NumPy to create a list of random integers within the range 30 to 40 containing 40 numbers." }, { "code": null, "e": 1148, "s": 987, "text": "The idea is to display the values in our line chart one by one. We’ll use a loop, slicing the array and incrementing the number of selected values at each loop." }, { "code": null, "e": 1230, "s": 1148, "text": "Let’s try without the loop first, just plotting, saving and displaying the image." }, { "code": null, "e": 1509, "s": 1230, "text": "## ONE ##plt.plot(y[:-3])plt.ylim(20,50)plt.savefig('1.png')plt.show()## TWO ##plt.plot(y[:-2])plt.ylim(20,50)plt.savefig('2.png')plt.show()## THREE ##plt.plot(y[:-1])plt.ylim(20,50)plt.savefig('3.png')plt.show()## FOUR ##plt.plot(y)plt.ylim(20,50)plt.savefig('4.png')plt.show()" }, { "code": null, "e": 1617, "s": 1509, "text": "Cool, now that we have four frames saved to our working directory. 
Imageio can help us build our first GIF." }, { "code": null, "e": 1816, "s": 1617, "text": "# Build GIFwith imageio.get_writer('mygif.gif', mode='I') as writer: for filename in ['1.png', '2.png', '3.png', '4.png']: image = imageio.imread(filename) writer.append_data(image)" }, { "code": null, "e": 1853, "s": 1816, "text": "It looks weird, but there’s our GIF." }, { "code": null, "e": 2034, "s": 1853, "text": "Now let’s try with a loop. We can also systematically create the filenames and add a pause at the end so our animation will display the complete chart for a while before repeating." }, { "code": null, "e": 2535, "s": 2034, "text": "filenames = []for i in y: # plot the line chart plt.plot(y[:i]) plt.ylim(20,50) # create file name and append it to a list filename = f'{i}.png' filenames.append(filename) # save frame plt.savefig(filename) plt.close()# build gifwith imageio.get_writer('mygif.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) # Remove filesfor filename in set(filenames): os.remove(filename)" }, { "code": null, "e": 2609, "s": 2535, "text": "Great! Now that we know the very basics, let’s try this with a bar chart." }, { "code": null, "e": 2699, "s": 2609, "text": "We can’t just add one value at a time for bar charts, or else our gif would take forever." }, { "code": null, "e": 2781, "s": 2699, "text": "Here we’ll have to replace all values for y to make all bars move simultaneously." }, { "code": null, "e": 2969, "s": 2781, "text": "Let’s apply the same code we used before with those small adjustments. We’ll create the values for the x-axis, a list of lists for the y-axis, and use a bar chart instead of a line chart." 
}, { "code": null, "e": 3941, "s": 2969, "text": "x = [1, 2, 3, 4, 5]coordinates_lists = [[0, 0, 0, 0, 0], [10, 30, 60, 30, 10], [70, 40, 20, 40, 70], [10, 20, 30, 40, 50], [50, 40, 30, 20, 10], [75, 0, 75, 0, 75], [0, 0, 0, 0, 0]]filenames = []for index, y in enumerate(coordinates_lists): # plot charts plt.bar(x, y) plt.ylim(0,80) # create file name and append it to a list filename = f'{index}.png' filenames.append(filename) # repeat last frame if (index == len(coordinates_lists)-1): for i in range(15): filenames.append(filename) # save frame plt.savefig(filename) plt.close()# build gifwith imageio.get_writer('mygif.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) # Remove filesfor filename in set(filenames): os.remove(filename)" }, { "code": null, "e": 3966, "s": 3941, "text": "Ok, that didn’t go well." }, { "code": null, "e": 4081, "s": 3966, "text": "Pausing at the end of the GIF is not helping so much. We need to pause for each plot and then proceed to the next." }, { "code": null, "e": 4203, "s": 4081, "text": "Also, the bars are just jumping to their next position. We could slow down this movement to make the transition smoother." }, { "code": null, "e": 4386, "s": 4203, "text": "There are plenty of ways to make the movement smoother. The easiest is to divide the distance between the current and next locations by the number of frames our transition will have." 
}, { "code": null, "e": 5972, "s": 4386, "text": "# frames between transitionsn_frames = 10x = [1, 2, 3, 4, 5]coordinates_lists = [[0, 0, 0, 0, 0], [10, 30, 60, 30, 10], [70, 40, 20, 40, 70], [10, 20, 30, 40, 50], [50, 40, 30, 20, 10], [75, 0, 75, 0, 75], [0, 0, 0, 0, 0]]print('Creating charts\\n')filenames = []for index in np.arange(0, len(coordinates_lists)-1): # get current and next y coordinates y = coordinates_lists[index] y1 = coordinates_lists[index+1] # calculate the distance to the next position y_path = np.array(y1) - np.array(y) for i in np.arange(0, n_frames + 1): # divide the distance by the number of frames # and multiply it by the current frame number y_temp = (y + (y_path / n_frames) * i) # plot plt.bar(x, y_temp) plt.ylim(0,80) # build file name and append to list of file names filename = f'images/frame_{index}_{i}.png' filenames.append(filename) # last frame of each viz stays longer if (i == n_frames): for i in range(5): filenames.append(filename) # save img plt.savefig(filename) plt.close()print('Charts saved\\n')# Build GIFprint('Creating gif\\n')with imageio.get_writer('mybars.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image)print('Gif saved\\n')print('Removing Images\\n')# Remove filesfor filename in set(filenames): os.remove(filename)print('DONE')" }, { "code": null, "e": 6066, "s": 5972, "text": "Nice! We can definitely improve on the aesthetics of our chart. Let’s try adding some detail." 
}, { "code": null, "e": 7852, "s": 6066, "text": "n_frames = 10bg_color = '#95A4AD'bar_color = '#283F4E'gif_name = 'bars'x = [1, 2, 3, 4, 5]coordinates_lists = [[0, 0, 0, 0, 0], [10, 30, 60, 30, 10], [70, 40, 20, 40, 70], [10, 20, 30, 40, 50], [50, 40, 30, 20, 10], [75, 0, 75, 0, 75], [0, 0, 0, 0, 0]]print('Creating charts\\n')filenames = []for index in np.arange(0, len(coordinates_lists)-1): y = coordinates_lists[index] y1 = coordinates_lists[index+1] y_path = np.array(y1) - np.array(y) for i in np.arange(0, n_frames + 1): y_temp = (y + (y_path / n_frames) * i) # plot fig, ax = plt.subplots(figsize=(8, 4)) ax.set_facecolor(bg_color) plt.bar(x, y_temp, width=0.4, color=bar_color) plt.ylim(0,80) # remove spines ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) # grid ax.set_axisbelow(True) ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7) # build file name and append to list of file names filename = f'images/frame_{index}_{i}.png' filenames.append(filename) # last frame of each viz stays longer if (i == n_frames): for i in range(5): filenames.append(filename) # save img plt.savefig(filename, dpi=96, facecolor=bg_color) plt.close()print('Charts saved\\n')# Build GIFprint('Creating gif\\n')with imageio.get_writer(f'{gif_name}.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image)print('Gif saved\\n')print('Removing Images\\n')# Remove filesfor filename in set(filenames): os.remove(filename)print('DONE')" }, { "code": null, "e": 8032, "s": 7852, "text": "Awesome! There’s definitely lots of room for improvement. You can add a title, change the way we transition by using some interpolation, and even make the bars move on the x-axis." }, { "code": null, "e": 8235, "s": 8032, "text": "To work with scatter plots, we’ll need to consider both the x and y-axis. 
We won’t necessarily have the same number of points to display on every frame, so we need to correct it to make the transitions." }, { "code": null, "e": 10609, "s": 8235, "text": "coordinates_lists = [[[0],[0]],\n                     [[100,200,300],[100,200,300]],\n                     [[400,500,600],[400,500,600]],\n                     [[400,500,600,400,500,600],[400,500,600,600, 500,400]],\n                     [[500],[500]],\n                     [[0],[0]]]\ngif_name = 'movie'\nn_frames=10\nbg_color='#95A4AD'\nmarker_color='#283F4E'\nmarker_size = 25\nprint('building plots\\n')\nfilenames = []\nfor index in np.arange(0, len(coordinates_lists)-1):\n    # get current and next coordinates\n    x = coordinates_lists[index][0]\n    y = coordinates_lists[index][1]\n    x1 = coordinates_lists[index+1][0]\n    y1 = coordinates_lists[index+1][1]\n    # Check if sizes match\n    while len(x) < len(x1):\n        diff = len(x1) - len(x)\n        x = x + x[:diff]\n        y = y + y[:diff]\n    while len(x1) < len(x):\n        diff = len(x) - len(x1)\n        x1 = x1 + x1[:diff]\n        y1 = y1 + y1[:diff]\n    # calculate paths\n    x_path = np.array(x1) - np.array(x)\n    y_path = np.array(y1) - np.array(y)\n    for i in np.arange(0, n_frames + 1):\n        # calculate current position\n        x_temp = (x + (x_path / n_frames) * i)\n        y_temp = (y + (y_path / n_frames) * i)\n        # plot\n        fig, ax = plt.subplots(figsize=(6, 6), subplot_kw = dict(aspect=\"equal\"))\n        ax.set_facecolor(bg_color)\n        plt.scatter(x_temp, y_temp, c=marker_color, s = marker_size)\n        plt.xlim(0,1000)\n        plt.ylim(0,1000)\n        # remove spines\n        ax.spines['right'].set_visible(False)\n        ax.spines['top'].set_visible(False)\n        # grid\n        ax.set_axisbelow(True)\n        ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)\n        ax.xaxis.grid(color='gray', linestyle='dashed', alpha=0.7)\n        # build file name and append to list of file names\n        filename = f'images/frame_{index}_{i}.png'\n        filenames.append(filename)\n        if (i == n_frames):\n            for i in range(5):\n                filenames.append(filename)\n        # save img\n        plt.savefig(filename, dpi=96, facecolor=bg_color)\n        plt.close()\n# Build GIF\nprint('creating gif\\n')\nwith imageio.get_writer(f'{gif_name}.gif', mode='I') as writer:\n    for filename in filenames:\n        image = imageio.imread(filename)\n        writer.append_data(image)\nprint('gif complete\\n')\nprint('Removing Images\\n')\n# Remove files\nfor filename in set(filenames):\n    os.remove(filename)\nprint('done')" }, { "code": null, "e": 10624, "s": 10609, "text": "And that’s it!" }, { "code": null, "e": 10840, "s": 10624, "text": "We explored how to create and save the charts with Matplotlib, how to use Imageio to build the GIFs, and then we checked some different chart types and some of the challenges we may find while handling each of them." }, { "code": null, "e": 10950, "s": 10840, "text": "I’ve used those same concepts to write letters in a scatter plot. You can check more about that project here." } ]
Python - Object Oriented
Python has been an object-oriented language since it existed. Because of this, creating and using classes and objects are downright easy. This chapter helps you become an expert in using Python's object-oriented programming support.

If you do not have any previous experience with object-oriented (OO) programming, you may want to consult an introductory course on it or at least a tutorial of some sort so that you have a grasp of the basic concepts.

However, here is a small introduction of Object-Oriented Programming (OOP) to bring you up to speed −

Class − A user-defined prototype for an object that defines a set of attributes that characterize any object of the class. The attributes are data members (class variables and instance variables) and methods, accessed via dot notation.

Class variable − A variable that is shared by all instances of a class. Class variables are defined within a class but outside any of the class's methods. Class variables are not used as frequently as instance variables are.

Data member − A class variable or instance variable that holds data associated with a class and its objects.

Function overloading − The assignment of more than one behavior to a particular function. The operation performed varies by the types of objects or arguments involved.

Instance variable − A variable that is defined inside a method and belongs only to the current instance of a class.

Inheritance − The transfer of the characteristics of a class to other classes that are derived from it.

Instance − An individual object of a certain class. An object obj that belongs to a class Circle, for example, is an instance of the class Circle.

Instantiation − The creation of an instance of a class.

Method − A special kind of function that is defined in a class definition.

Object − A unique instance of a data structure that's defined by its class. An object comprises both data members (class variables and instance variables) and methods.

Operator overloading − The assignment of more than one function to a particular operator.

The class statement creates a new class definition. The name of the class immediately follows the keyword class followed by a colon as follows −

class ClassName:
   'Optional class documentation string'
   class_suite

The class has a documentation string, which can be accessed via ClassName.__doc__.
The class_suite consists of all the component statements defining class members, data attributes and functions.

Following is the example of a simple Python class −

class Employee:
   'Common base class for all employees'
   empCount = 0

   def __init__(self, name, salary):
      self.name = name
      self.salary = salary
      Employee.empCount += 1

   def displayCount(self):
      print "Total Employee %d" % Employee.empCount

   def displayEmployee(self):
      print "Name : ", self.name, ", Salary: ", self.salary

The variable empCount is a class variable whose value is shared among all instances of this class. This can be accessed as Employee.empCount from inside the class or outside the class.

The first method __init__() is a special method, which is called class constructor or initialization method that Python calls when you create a new instance of this class.

You declare other class methods like normal functions with the exception that the first argument to each method is self. Python adds the self argument to the list for you; you do not need to include it when you call the methods.

To create instances of a class, you call the class using class name and pass in whatever arguments its __init__ method accepts.
"This would create first object of Employee class" emp1 = Employee("Zara", 2000) "This would create second object of Employee class" emp2 = Employee("Manni", 5000) You access the object's attributes using the dot operator with object. Class variable would be accessed using class name as follows − emp1.displayEmployee() emp2.displayEmployee() print "Total Employee %d" % Employee.empCount Now, putting all the concepts together − #!/usr/bin/python class Employee: 'Common base class for all employees' empCount = 0 def __init__(self, name, salary): self.name = name self.salary = salary Employee.empCount += 1 def displayCount(self): print "Total Employee %d" % Employee.empCount def displayEmployee(self): print "Name : ", self.name, ", Salary: ", self.salary "This would create first object of Employee class" emp1 = Employee("Zara", 2000) "This would create second object of Employee class" emp2 = Employee("Manni", 5000) emp1.displayEmployee() emp2.displayEmployee() print "Total Employee %d" % Employee.empCount When the above code is executed, it produces the following result − Name : Zara ,Salary: 2000 Name : Manni ,Salary: 5000 Total Employee 2 You can add, remove, or modify attributes of classes and objects at any time − emp1.age = 7 # Add an 'age' attribute. emp1.age = 8 # Modify 'age' attribute. del emp1.age # Delete 'age' attribute. Instead of using the normal statements to access attributes, you can use the following functions − The getattr(obj, name[, default]) − to access the attribute of object. The getattr(obj, name[, default]) − to access the attribute of object. The hasattr(obj,name) − to check if an attribute exists or not. The hasattr(obj,name) − to check if an attribute exists or not. The setattr(obj,name,value) − to set an attribute. If attribute does not exist, then it would be created. The setattr(obj,name,value) − to set an attribute. If attribute does not exist, then it would be created. The delattr(obj, name) − to delete an attribute. 
The delattr(obj, name) − to delete an attribute. hasattr(emp1, 'age') # Returns true if 'age' attribute exists getattr(emp1, 'age') # Returns value of 'age' attribute setattr(emp1, 'age', 8) # Set attribute 'age' at 8 delattr(empl, 'age') # Delete attribute 'age' Every Python class keeps following built-in attributes and they can be accessed using dot operator like any other attribute − __dict__ − Dictionary containing the class's namespace. __dict__ − Dictionary containing the class's namespace. __doc__ − Class documentation string or none, if undefined. __doc__ − Class documentation string or none, if undefined. __name__ − Class name. __name__ − Class name. __module__ − Module name in which the class is defined. This attribute is "__main__" in interactive mode. __module__ − Module name in which the class is defined. This attribute is "__main__" in interactive mode. __bases__ − A possibly empty tuple containing the base classes, in the order of their occurrence in the base class list. __bases__ − A possibly empty tuple containing the base classes, in the order of their occurrence in the base class list. 
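As a quick, runnable illustration of the attribute helper functions above, here is a minimal sketch. It uses a hypothetical Person class (not from this chapter) and Python 3 print syntax, unlike the Python 2 examples elsewhere in this chapter:

```python
# Minimal sketch of getattr/hasattr/setattr/delattr,
# using a hypothetical Person class (Python 3 syntax).
class Person:
   def __init__(self, name):
      self.name = name

p = Person("Zara")

print(hasattr(p, 'age'))     # the 'age' attribute does not exist yet
print(getattr(p, 'age', 0))  # the default is returned for a missing attribute
setattr(p, 'age', 8)         # creates the 'age' attribute
print(p.age)                 # 8
delattr(p, 'age')            # deletes it again
print(hasattr(p, 'age'))     # False once more
```

Note that setattr() on a missing name creates the attribute on the instance, exactly as the bullet list above describes.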
For the above class let us try to access all these attributes −

#!/usr/bin/python

class Employee:
   'Common base class for all employees'
   empCount = 0

   def __init__(self, name, salary):
      self.name = name
      self.salary = salary
      Employee.empCount += 1

   def displayCount(self):
      print "Total Employee %d" % Employee.empCount

   def displayEmployee(self):
      print "Name : ", self.name, ", Salary: ", self.salary

print "Employee.__doc__:", Employee.__doc__
print "Employee.__name__:", Employee.__name__
print "Employee.__module__:", Employee.__module__
print "Employee.__bases__:", Employee.__bases__
print "Employee.__dict__:", Employee.__dict__

When the above code is executed, it produces the following result −

Employee.__doc__: Common base class for all employees
Employee.__name__: Employee
Employee.__module__: __main__
Employee.__bases__: ()
Employee.__dict__: {'__module__': '__main__', 'displayCount': <function displayCount at 0xb7c84994>, 'empCount': 2, 'displayEmployee': <function displayEmployee at 0xb7c8441c>, '__doc__': 'Common base class for all employees', '__init__': <function __init__ at 0xb7c846bc>}

Python deletes unneeded objects (built-in types or class instances) automatically to free the memory space. The process by which Python periodically reclaims blocks of memory that no longer are in use is termed Garbage Collection.

Python's garbage collector runs during program execution and is triggered when an object's reference count reaches zero. An object's reference count changes as the number of aliases that point to it changes.

An object's reference count increases when it is assigned a new name or placed in a container (list, tuple, or dictionary). The object's reference count decreases when it's deleted with del, its reference is reassigned, or its reference goes out of scope. When an object's reference count reaches zero, Python collects it automatically.

a = 40      # Create object <40>
b = a       # Increase ref. count of <40>
c = [b]     # Increase ref. count of <40>

del a       # Decrease ref. count of <40>
b = 100     # Decrease ref. count of <40>
c[0] = -1   # Decrease ref. count of <40>

You normally will not notice when the garbage collector destroys an orphaned instance and reclaims its space. But a class can implement the special method __del__(), called a destructor, that is invoked when the instance is about to be destroyed. This method might be used to clean up any non-memory resources used by an instance.

This __del__() destructor prints the class name of an instance that is about to be destroyed −

#!/usr/bin/python

class Point:
   def __init__( self, x=0, y=0):
      self.x = x
      self.y = y
   def __del__(self):
      class_name = self.__class__.__name__
      print class_name, "destroyed"

pt1 = Point()
pt2 = pt1
pt3 = pt1
print id(pt1), id(pt2), id(pt3)   # prints the ids of the objects
del pt1
del pt2
del pt3

When the above code is executed, it produces following result −

3083401324 3083401324 3083401324
Point destroyed

Note − Ideally, you should define your classes in a separate file, then you should import them in your main program file using import statement.

Instead of starting from scratch, you can create a class by deriving it from a preexisting class by listing the parent class in parentheses after the new class name.

The child class inherits the attributes of its parent class, and you can use those attributes as if they were defined in the child class. A child class can also override data members and methods from the parent.
Derived classes are declared much like their parent class; however, a list of base classes to inherit from is given after the class name −

class SubClassName (ParentClass1[, ParentClass2, ...]):
   'Optional class documentation string'
   class_suite

#!/usr/bin/python

class Parent:        # define parent class
   parentAttr = 100
   def __init__(self):
      print "Calling parent constructor"

   def parentMethod(self):
      print 'Calling parent method'

   def setAttr(self, attr):
      Parent.parentAttr = attr

   def getAttr(self):
      print "Parent attribute :", Parent.parentAttr

class Child(Parent): # define child class
   def __init__(self):
      print "Calling child constructor"

   def childMethod(self):
      print 'Calling child method'

c = Child()          # instance of child
c.childMethod()      # child calls its method
c.parentMethod()     # calls parent's method
c.setAttr(200)       # again call parent's method
c.getAttr()          # again call parent's method

When the above code is executed, it produces the following result −

Calling child constructor
Calling child method
Calling parent method
Parent attribute : 200

Similarly, you can derive a class from multiple parent classes as follows −

class A:        # define your class A
.....

class B:        # define your class B
.....

class C(A, B):  # subclass of A and B
.....

You can use issubclass() or isinstance() functions to check the relationships of two classes and instances.

The issubclass(sub, sup) boolean function returns true if the given subclass sub is indeed a subclass of the superclass sup.

The isinstance(obj, Class) boolean function returns true if obj is an instance of class Class or is an instance of a subclass of Class.

You can always override your parent class methods.
One reason for overriding parent's methods is that you may want special or different functionality in your subclass.

#!/usr/bin/python

class Parent:        # define parent class
   def myMethod(self):
      print 'Calling parent method'

class Child(Parent): # define child class
   def myMethod(self):
      print 'Calling child method'

c = Child()          # instance of child
c.myMethod()         # child calls overridden method

When the above code is executed, it produces the following result −

Calling child method

Following table lists some generic functionality that you can override in your own classes −

__init__ ( self [,args...] )
Constructor (with any optional arguments)
Sample Call : obj = className(args)

__del__( self )
Destructor, deletes an object
Sample Call : del obj

__repr__( self )
Evaluable string representation
Sample Call : repr(obj)

__str__( self )
Printable string representation
Sample Call : str(obj)

__cmp__ ( self, x )
Object comparison
Sample Call : cmp(obj, x)

Suppose you have created a Vector class to represent two-dimensional vectors, what happens when you use the plus operator to add them? Most likely Python will yell at you.

You could, however, define the __add__ method in your class to perform vector addition and then the plus operator would behave as per expectation −

#!/usr/bin/python

class Vector:
   def __init__(self, a, b):
      self.a = a
      self.b = b

   def __str__(self):
      return 'Vector (%d, %d)' % (self.a, self.b)

   def __add__(self,other):
      return Vector(self.a + other.a, self.b + other.b)

v1 = Vector(2,10)
v2 = Vector(5,-2)
print v1 + v2

When the above code is executed, it produces the following result −

Vector (7, 8)

An object's attributes may or may not be visible outside the class definition. You need to name attributes with a double underscore prefix, and those attributes then are not directly visible to outsiders.
#!/usr/bin/python

class JustCounter:
   __secretCount = 0

   def count(self):
      self.__secretCount += 1
      print self.__secretCount

counter = JustCounter()
counter.count()
counter.count()
print counter.__secretCount

When the above code is executed, it produces the following result −

1
2
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    print counter.__secretCount
AttributeError: JustCounter instance has no attribute '__secretCount'

Python protects those members by internally changing the name to include the class name. You can access such attributes as object._className__attrName. If you would replace your last line as following, then it works for you −

.........................
print counter._JustCounter__secretCount

When the above code is executed, it produces the following result −

1
2
2
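The issubclass() and isinstance() relationship checks described earlier in this chapter can also be exercised in a short, runnable sketch. The Animal and Dog classes here are hypothetical, and the example uses Python 3 print syntax, unlike the Python 2 listings above:

```python
# Sketch of the class/instance relationship checks, using
# hypothetical Animal and Dog classes (Python 3 syntax).
class Animal:
   pass

class Dog(Animal):
   pass

d = Dog()

print(issubclass(Dog, Animal))   # True: Dog derives from Animal
print(isinstance(d, Dog))        # True: d is an instance of Dog
print(isinstance(d, Animal))     # True: instances of a subclass also qualify
print(isinstance(d, str))        # False: unrelated class
```

The third check is the important one: isinstance() follows the inheritance chain, so an instance of a subclass is also reported as an instance of the parent class.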
[ { "code": null, "e": 2477, "s": 2244, "text": "Python has been an object-oriented language since it existed. Because of this, creating and using classes and objects are downright easy. This chapter helps you become an expert in using Python's object-oriented programming support." }, { "code": null, "e": 2696, "s": 2477, "text": "If you do not have any previous experience with object-oriented (OO) programming, you may want to consult an introductory course on it or at least a tutorial of some sort so that you have a grasp of the basic concepts." }, { "code": null, "e": 2793, "s": 2696, "text": "However, here is small introduction of Object-Oriented Programming (OOP) to bring you at speed −" }, { "code": null, "e": 3029, "s": 2793, "text": "Class − A user-defined prototype for an object that defines a set of attributes that characterize any object of the class. The attributes are data members (class variables and instance variables) and methods, accessed via dot notation." }, { "code": null, "e": 3265, "s": 3029, "text": "Class − A user-defined prototype for an object that defines a set of attributes that characterize any object of the class. The attributes are data members (class variables and instance variables) and methods, accessed via dot notation." }, { "code": null, "e": 3490, "s": 3265, "text": "Class variable − A variable that is shared by all instances of a class. Class variables are defined within a class but outside any of the class's methods. Class variables are not used as frequently as instance variables are." }, { "code": null, "e": 3715, "s": 3490, "text": "Class variable − A variable that is shared by all instances of a class. Class variables are defined within a class but outside any of the class's methods. Class variables are not used as frequently as instance variables are." }, { "code": null, "e": 3824, "s": 3715, "text": "Data member − A class variable or instance variable that holds data associated with a class and its objects." 
}, { "code": null, "e": 3933, "s": 3824, "text": "Data member − A class variable or instance variable that holds data associated with a class and its objects." }, { "code": null, "e": 4101, "s": 3933, "text": "Function overloading − The assignment of more than one behavior to a particular function. The operation performed varies by the types of objects or arguments involved." }, { "code": null, "e": 4269, "s": 4101, "text": "Function overloading − The assignment of more than one behavior to a particular function. The operation performed varies by the types of objects or arguments involved." }, { "code": null, "e": 4385, "s": 4269, "text": "Instance variable − A variable that is defined inside a method and belongs only to the current instance of a class." }, { "code": null, "e": 4501, "s": 4385, "text": "Instance variable − A variable that is defined inside a method and belongs only to the current instance of a class." }, { "code": null, "e": 4605, "s": 4501, "text": "Inheritance − The transfer of the characteristics of a class to other classes that are derived from it." }, { "code": null, "e": 4709, "s": 4605, "text": "Inheritance − The transfer of the characteristics of a class to other classes that are derived from it." }, { "code": null, "e": 4857, "s": 4709, "text": "Instance − An individual object of a certain class. An object obj that belongs to a class Circle, for example, is an instance of the class Circle." }, { "code": null, "e": 5005, "s": 4857, "text": "Instance − An individual object of a certain class. An object obj that belongs to a class Circle, for example, is an instance of the class Circle." }, { "code": null, "e": 5061, "s": 5005, "text": "Instantiation − The creation of an instance of a class." }, { "code": null, "e": 5117, "s": 5061, "text": "Instantiation − The creation of an instance of a class." }, { "code": null, "e": 5192, "s": 5117, "text": "Method − A special kind of function that is defined in a class definition." 
}, { "code": null, "e": 5267, "s": 5192, "text": "Method − A special kind of function that is defined in a class definition." }, { "code": null, "e": 5435, "s": 5267, "text": "Object − A unique instance of a data structure that's defined by its class. An object comprises both data members (class variables and instance variables) and methods." }, { "code": null, "e": 5603, "s": 5435, "text": "Object − A unique instance of a data structure that's defined by its class. An object comprises both data members (class variables and instance variables) and methods." }, { "code": null, "e": 5693, "s": 5603, "text": "Operator overloading − The assignment of more than one function to a particular operator." }, { "code": null, "e": 5783, "s": 5693, "text": "Operator overloading − The assignment of more than one function to a particular operator." }, { "code": null, "e": 5928, "s": 5783, "text": "The class statement creates a new class definition. The name of the class immediately follows the keyword class followed by a colon as follows −" }, { "code": null, "e": 6002, "s": 5928, "text": "class ClassName:\n 'Optional class documentation string'\n class_suite\n" }, { "code": null, "e": 6085, "s": 6002, "text": "The class has a documentation string, which can be accessed via ClassName.__doc__." }, { "code": null, "e": 6168, "s": 6085, "text": "The class has a documentation string, which can be accessed via ClassName.__doc__." }, { "code": null, "e": 6280, "s": 6168, "text": "The class_suite consists of all the component statements defining class members, data attributes and functions." }, { "code": null, "e": 6392, "s": 6280, "text": "The class_suite consists of all the component statements defining class members, data attributes and functions." 
}, { "code": null, "e": 6444, "s": 6392, "text": "Following is the example of a simple Python class −" }, { "code": null, "e": 6808, "s": 6444, "text": "class Employee:\n 'Common base class for all employees'\n empCount = 0\n\n def __init__(self, name, salary):\n self.name = name\n self.salary = salary\n Employee.empCount += 1\n \n def displayCount(self):\n print \"Total Employee %d\" % Employee.empCount\n\n def displayEmployee(self):\n print \"Name : \", self.name, \", Salary: \", self.salary" }, { "code": null, "e": 6995, "s": 6808, "text": "The variable empCount is a class variable whose value is shared among all instances of a this class. This can be accessed as Employee.empCount from inside the class or outside the class." }, { "code": null, "e": 7182, "s": 6995, "text": "The variable empCount is a class variable whose value is shared among all instances of a this class. This can be accessed as Employee.empCount from inside the class or outside the class." }, { "code": null, "e": 7354, "s": 7182, "text": "The first method __init__() is a special method, which is called class constructor or initialization method that Python calls when you create a new instance of this class." }, { "code": null, "e": 7526, "s": 7354, "text": "The first method __init__() is a special method, which is called class constructor or initialization method that Python calls when you create a new instance of this class." }, { "code": null, "e": 7755, "s": 7526, "text": "You declare other class methods like normal functions with the exception that the first argument to each method is self. Python adds the self argument to the list for you; you do not need to include it when you call the methods." }, { "code": null, "e": 7984, "s": 7755, "text": "You declare other class methods like normal functions with the exception that the first argument to each method is self. Python adds the self argument to the list for you; you do not need to include it when you call the methods." 
}, { "code": null, "e": 8112, "s": 7984, "text": "To create instances of a class, you call the class using class name and pass in whatever arguments its __init__ method accepts." }, { "code": null, "e": 8277, "s": 8112, "text": "\"This would create first object of Employee class\"\nemp1 = Employee(\"Zara\", 2000)\n\"This would create second object of Employee class\"\nemp2 = Employee(\"Manni\", 5000)\n" }, { "code": null, "e": 8411, "s": 8277, "text": "You access the object's attributes using the dot operator with object. Class variable would be accessed using class name as follows −" }, { "code": null, "e": 8504, "s": 8411, "text": "emp1.displayEmployee()\nemp2.displayEmployee()\nprint \"Total Employee %d\" % Employee.empCount\n" }, { "code": null, "e": 8546, "s": 8504, "text": "Now, putting all the concepts together −" }, { "code": null, "e": 9186, "s": 8546, "text": "#!/usr/bin/python\n\nclass Employee:\n 'Common base class for all employees'\n empCount = 0\n\n def __init__(self, name, salary):\n self.name = name\n self.salary = salary\n Employee.empCount += 1\n \n def displayCount(self):\n print \"Total Employee %d\" % Employee.empCount\n\n def displayEmployee(self):\n print \"Name : \", self.name, \", Salary: \", self.salary\n\n\"This would create first object of Employee class\"\nemp1 = Employee(\"Zara\", 2000)\n\"This would create second object of Employee class\"\nemp2 = Employee(\"Manni\", 5000)\nemp1.displayEmployee()\nemp2.displayEmployee()\nprint \"Total Employee %d\" % Employee.empCount" }, { "code": null, "e": 9255, "s": 9186, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 9330, "s": 9255, "text": "Name : Zara ,Salary: 2000\nName : Manni ,Salary: 5000\nTotal Employee 2\n" }, { "code": null, "e": 9409, "s": 9330, "text": "You can add, remove, or modify attributes of classes and objects at any time −" }, { "code": null, "e": 9530, "s": 9409, "text": "emp1.age = 7 # Add an 'age' attribute.\nemp1.age 
= 8 # Modify 'age' attribute.\ndel emp1.age # Delete 'age' attribute.\n" }, { "code": null, "e": 9629, "s": 9530, "text": "Instead of using the normal statements to access attributes, you can use the following functions −" }, { "code": null, "e": 9700, "s": 9629, "text": "The getattr(obj, name[, default]) − to access the attribute of object." }, { "code": null, "e": 9771, "s": 9700, "text": "The getattr(obj, name[, default]) − to access the attribute of object." }, { "code": null, "e": 9835, "s": 9771, "text": "The hasattr(obj,name) − to check if an attribute exists or not." }, { "code": null, "e": 9899, "s": 9835, "text": "The hasattr(obj,name) − to check if an attribute exists or not." }, { "code": null, "e": 10005, "s": 9899, "text": "The setattr(obj,name,value) − to set an attribute. If attribute does not exist, then it would be created." }, { "code": null, "e": 10111, "s": 10005, "text": "The setattr(obj,name,value) − to set an attribute. If attribute does not exist, then it would be created." }, { "code": null, "e": 10160, "s": 10111, "text": "The delattr(obj, name) − to delete an attribute." }, { "code": null, "e": 10209, "s": 10160, "text": "The delattr(obj, name) − to delete an attribute." }, { "code": null, "e": 10434, "s": 10209, "text": "hasattr(emp1, 'age') # Returns true if 'age' attribute exists\ngetattr(emp1, 'age') # Returns value of 'age' attribute\nsetattr(emp1, 'age', 8) # Set attribute 'age' at 8\ndelattr(empl, 'age') # Delete attribute 'age'\n" }, { "code": null, "e": 10560, "s": 10434, "text": "Every Python class keeps following built-in attributes and they can be accessed using dot operator like any other attribute −" }, { "code": null, "e": 10616, "s": 10560, "text": "__dict__ − Dictionary containing the class's namespace." }, { "code": null, "e": 10672, "s": 10616, "text": "__dict__ − Dictionary containing the class's namespace." }, { "code": null, "e": 10733, "s": 10672, "text": "__doc__ − Class documentation string or none, if undefined. 
" }, { "code": null, "e": 10794, "s": 10733, "text": "__doc__ − Class documentation string or none, if undefined. " }, { "code": null, "e": 10817, "s": 10794, "text": "__name__ − Class name." }, { "code": null, "e": 10840, "s": 10817, "text": "__name__ − Class name." }, { "code": null, "e": 10947, "s": 10840, "text": "__module__ − Module name in which the class is defined. This attribute is \"__main__\" in interactive mode. " }, { "code": null, "e": 11054, "s": 10947, "text": "__module__ − Module name in which the class is defined. This attribute is \"__main__\" in interactive mode. " }, { "code": null, "e": 11175, "s": 11054, "text": "__bases__ − A possibly empty tuple containing the base classes, in the order of their occurrence in the base class list." }, { "code": null, "e": 11296, "s": 11175, "text": "__bases__ − A possibly empty tuple containing the base classes, in the order of their occurrence in the base class list." }, { "code": null, "e": 11360, "s": 11296, "text": "For the above class let us try to access all these attributes −" }, { "code": null, "e": 11978, "s": 11360, "text": "#!/usr/bin/python\n\nclass Employee:\n 'Common base class for all employees'\n empCount = 0\n\n def __init__(self, name, salary):\n self.name = name\n self.salary = salary\n Employee.empCount += 1\n \n def displayCount(self):\n print \"Total Employee %d\" % Employee.empCount\n\n def displayEmployee(self):\n print \"Name : \", self.name, \", Salary: \", self.salary\n\nprint \"Employee.__doc__:\", Employee.__doc__\nprint \"Employee.__name__:\", Employee.__name__\nprint \"Employee.__module__:\", Employee.__module__\nprint \"Employee.__bases__:\", Employee.__bases__\nprint \"Employee.__dict__:\", Employee.__dict__" }, { "code": null, "e": 12047, "s": 11978, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 12460, "s": 12047, "text": "Employee.__doc__: Common base class for all employees\nEmployee.__name__: 
Employee\nEmployee.__module__: __main__\nEmployee.__bases__: ()\nEmployee.__dict__: {'__module__': '__main__', 'displayCount':\n<function displayCount at 0xb7c84994>, 'empCount': 2, \n'displayEmployee': <function displayEmployee at 0xb7c8441c>, \n'__doc__': 'Common base class for all employees', \n'__init__': <function __init__ at 0xb7c846bc>}\n" }, { "code": null, "e": 12691, "s": 12460, "text": "Python deletes unneeded objects (built-in types or class instances) automatically to free the memory space. The process by which Python periodically reclaims blocks of memory that no longer are in use is termed Garbage Collection." }, { "code": null, "e": 12899, "s": 12691, "text": "Python's garbage collector runs during program execution and is triggered when an object's reference count reaches zero. An object's reference count changes as the number of aliases that point to it changes." }, { "code": null, "e": 13236, "s": 12899, "text": "An object's reference count increases when it is assigned a new name or placed in a container (list, tuple, or dictionary). The object's reference count decreases when it's deleted with del, its reference is reassigned, or its reference goes out of scope. When an object's reference count reaches zero, Python collects it automatically." }, { "code": null, "e": 13490, "s": 13236, "text": "a = 40 # Create object <40>\nb = a # Increase ref. count of <40> \nc = [b] # Increase ref. count of <40> \n\ndel a # Decrease ref. count of <40>\nb = 100 # Decrease ref. count of <40> \nc[0] = -1 # Decrease ref. count of <40> \n" }, { "code": null, "e": 13821, "s": 13490, "text": "You normally will not notice when the garbage collector destroys an orphaned instance and reclaims its space. But a class can implement the special method __del__(), called a destructor, that is invoked when the instance is about to be destroyed. This method might be used to clean up any non memory resources used by an instance." 
}, { "code": null, "e": 13916, "s": 13821, "text": "This __del__() destructor prints the class name of an instance that is about to be destroyed −" }, { "code": null, "e": 14240, "s": 13916, "text": "#!/usr/bin/python\n\nclass Point:\n def __init__( self, x=0, y=0):\n self.x = x\n self.y = y\n def __del__(self):\n class_name = self.__class__.__name__\n print class_name, \"destroyed\"\n\npt1 = Point()\npt2 = pt1\npt3 = pt1\nprint id(pt1), id(pt2), id(pt3) # prints the ids of the obejcts\ndel pt1\ndel pt2\ndel pt3" }, { "code": null, "e": 14305, "s": 14240, "text": "When the above code is executed, it produces following result −" }, { "code": null, "e": 14355, "s": 14305, "text": "3083401324 3083401324 3083401324\nPoint destroyed\n" }, { "code": null, "e": 14498, "s": 14355, "text": "Note − Ideally, you should define your classes in separate file, then you should import them in your main program file using import statement." }, { "code": null, "e": 14664, "s": 14498, "text": "Instead of starting from scratch, you can create a class by deriving it from a preexisting class by listing the parent class in parentheses after the new class name." }, { "code": null, "e": 14876, "s": 14664, "text": "The child class inherits the attributes of its parent class, and you can use those attributes as if they were defined in the child class. A child class can also override data members and methods from the parent." 
}, { "code": null, "e": 15015, "s": 14876, "text": "Derived classes are declared much like their parent class; however, a list of base classes to inherit from is given after the class name −" }, { "code": null, "e": 15127, "s": 15015, "text": "class SubClassName (ParentClass1[, ParentClass2, ...]):\n 'Optional class documentation string'\n class_suite" }, { "code": null, "e": 15873, "s": 15127, "text": "#!/usr/bin/python\n\nclass Parent: # define parent class\n parentAttr = 100\n def __init__(self):\n print \"Calling parent constructor\"\n\n def parentMethod(self):\n print 'Calling parent method'\n\n def setAttr(self, attr):\n Parent.parentAttr = attr\n\n def getAttr(self):\n print \"Parent attribute :\", Parent.parentAttr\n\nclass Child(Parent): # define child class\n def __init__(self):\n print \"Calling child constructor\"\n\n def childMethod(self):\n print 'Calling child method'\n\nc = Child() # instance of child\nc.childMethod() # child calls its method\nc.parentMethod() # calls parent's method\nc.setAttr(200) # again call parent's method\nc.getAttr() # again call parent's method" }, { "code": null, "e": 15942, "s": 15873, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 16035, "s": 15942, "text": "Calling child constructor\nCalling child method\nCalling parent method\nParent attribute : 200\n" }, { "code": null, "e": 16112, "s": 16035, "text": "Similar way, you can drive a class from multiple parent classes as follows −" }, { "code": null, "e": 16249, "s": 16112, "text": "class A: # define your class A\n.....\n\nclass B: # define your class B\n.....\n\nclass C(A, B): # subclass of A and B\n.....\n" }, { "code": null, "e": 16355, "s": 16249, "text": "You can use issubclass() or isinstance() functions to check a relationships of two classes and instances." 
}, { "code": null, "e": 16480, "s": 16355, "text": "The issubclass(sub, sup) boolean function returns true if the given subclass sub is indeed a subclass of the superclass sup." }, { "code": null, "e": 16605, "s": 16480, "text": "The issubclass(sub, sup) boolean function returns true if the given subclass sub is indeed a subclass of the superclass sup." }, { "code": null, "e": 16740, "s": 16605, "text": "The isinstance(obj, Class) boolean function returns true if obj is an instance of class Class or is an instance of a subclass of Class" }, { "code": null, "e": 16875, "s": 16740, "text": "The isinstance(obj, Class) boolean function returns true if obj is an instance of class Class or is an instance of a subclass of Class" }, { "code": null, "e": 17046, "s": 16875, "text": "You can always override your parent class methods. One reason for overriding parent's methods is because you may want special or different functionality in your subclass." }, { "code": null, "e": 17363, "s": 17046, "text": "#!/usr/bin/python\n\nclass Parent: # define parent class\n def myMethod(self):\n print 'Calling parent method'\n\nclass Child(Parent): # define child class\n def myMethod(self):\n print 'Calling child method'\n\nc = Child() # instance of child\nc.myMethod() # child calls overridden method" }, { "code": null, "e": 17432, "s": 17363, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 17453, "s": 17432, "text": "Calling child method" }, { "code": null, "e": 17546, "s": 17453, "text": "Following table lists some generic functionality that you can override in your own classes −" }, { "code": null, "e": 17575, "s": 17546, "text": "__init__ ( self [,args...] 
)" }, { "code": null, "e": 17617, "s": 17575, "text": "Constructor (with any optional arguments)" }, { "code": null, "e": 17653, "s": 17617, "text": "Sample Call : obj = className(args)" }, { "code": null, "e": 17669, "s": 17653, "text": "__del__( self )" }, { "code": null, "e": 17699, "s": 17669, "text": "Destructor, deletes an object" }, { "code": null, "e": 17721, "s": 17699, "text": "Sample Call : del obj" }, { "code": null, "e": 17738, "s": 17721, "text": "__repr__( self )" }, { "code": null, "e": 17770, "s": 17738, "text": "Evaluable string representation" }, { "code": null, "e": 17794, "s": 17770, "text": "Sample Call : repr(obj)" }, { "code": null, "e": 17810, "s": 17794, "text": "__str__( self )" }, { "code": null, "e": 17842, "s": 17810, "text": "Printable string representation" }, { "code": null, "e": 17865, "s": 17842, "text": "Sample Call : str(obj)" }, { "code": null, "e": 17885, "s": 17865, "text": "__cmp__ ( self, x )" }, { "code": null, "e": 17903, "s": 17885, "text": "Object comparison" }, { "code": null, "e": 17929, "s": 17903, "text": "Sample Call : cmp(obj, x)" }, { "code": null, "e": 18101, "s": 17929, "text": "Suppose you have created a Vector class to represent two-dimensional vectors, what happens when you use the plus operator to add them? Most likely Python will yell at you." 
}, { "code": null, "e": 18249, "s": 18101, "text": "You could, however, define the __add__ method in your class to perform vector addition and then the plus operator would behave as per expectation −" }, { "code": null, "e": 18557, "s": 18249, "text": "#!/usr/bin/python\n\nclass Vector:\n def __init__(self, a, b):\n self.a = a\n self.b = b\n\n def __str__(self):\n return 'Vector (%d, %d)' % (self.a, self.b)\n \n def __add__(self,other):\n return Vector(self.a + other.a, self.b + other.b)\n\nv1 = Vector(2,10)\nv2 = Vector(5,-2)\nprint v1 + v2" }, { "code": null, "e": 18626, "s": 18557, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 18639, "s": 18626, "text": "Vector(7,8)\n" }, { "code": null, "e": 18847, "s": 18639, "text": "An object's attributes may or may not be visible outside the class definition. You need to name attributes with a double underscore prefix, and those attributes then are not be directly visible to outsiders." }, { "code": null, "e": 19075, "s": 18847, "text": "#!/usr/bin/python\n\nclass JustCounter:\n __secretCount = 0\n \n def count(self):\n self.__secretCount += 1\n print self.__secretCount\n\ncounter = JustCounter()\ncounter.count()\ncounter.count()\nprint counter.__secretCount" }, { "code": null, "e": 19144, "s": 19075, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 19328, "s": 19144, "text": "1\n2\nTraceback (most recent call last):\n File \"test.py\", line 12, in <module>\n print counter.__secretCount\nAttributeError: JustCounter instance has no attribute '__secretCount'\n" }, { "code": null, "e": 19554, "s": 19328, "text": "Python protects those members by internally changing the name to include the class name. You can access such attributes as object._className__attrName. 
If you would replace your last line as following, then it works for you −" }, { "code": null, "e": 19621, "s": 19554, "text": ".........................\nprint counter._JustCounter__secretCount\n" }, { "code": null, "e": 19690, "s": 19621, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 19697, "s": 19690, "text": "1\n2\n2\n" }, { "code": null, "e": 19734, "s": 19697, "text": "\n 187 Lectures \n 17.5 hours \n" }, { "code": null, "e": 19750, "s": 19734, "text": " Malhar Lathkar" }, { "code": null, "e": 19783, "s": 19750, "text": "\n 55 Lectures \n 8 hours \n" }, { "code": null, "e": 19802, "s": 19783, "text": " Arnab Chakraborty" }, { "code": null, "e": 19837, "s": 19802, "text": "\n 136 Lectures \n 11 hours \n" }, { "code": null, "e": 19859, "s": 19837, "text": " In28Minutes Official" }, { "code": null, "e": 19893, "s": 19859, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 19921, "s": 19893, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 19956, "s": 19921, "text": "\n 70 Lectures \n 8.5 hours \n" }, { "code": null, "e": 19970, "s": 19956, "text": " Lets Kode It" }, { "code": null, "e": 20003, "s": 19970, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 20020, "s": 20003, "text": " Abhilash Nelson" }, { "code": null, "e": 20027, "s": 20020, "text": " Print" }, { "code": null, "e": 20038, "s": 20027, "text": " Add Notes" } ]
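The name mangling described in the data-hiding section above can be verified directly. Here is a small sketch (written in Python 3 syntax, whereas the listings above use Python 2 print statements), reusing the tutorial's JustCounter example:

```python
class JustCounter:
    __secretCount = 0  # leading double underscore triggers name mangling

    def count(self):
        self.__secretCount += 1
        return self.__secretCount

counter = JustCounter()
counter.count()
counter.count()

# The declared name is not visible from outside the class...
print(hasattr(counter, "__secretCount"))
# ...but the mangled form object._ClassName__attrName works.
print(counter._JustCounter__secretCount)
```

The mangled name is exactly what the last listing above accesses with counter._JustCounter__secretCount.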
How to work with document.head in JavaScript?
The document.head property in JavaScript returns the <head> element of the document; you can then read the element's properties, such as its id. You can try to run the following code to see the document.head property in action −
 Live Demo
<!DOCTYPE html>
<html>
   <head id = "myid">
      <title>JavaScript Example</title>
   </head>
   <body>
      <h1>Employee Information</h1>
      <form>
         Name: <input type = "text" name = "name" value = "Amit"><br>
         Subject: <input type = "text" name = "sub" value = "Java">
      </form>
      <script>
         var a = document.head.id;
         document.write("<br>Head element id? "+a);
      </script>
   </body>
</html>
[ { "code": null, "e": 1246, "s": 1062, "text": "Use the document.head property in JavaScript to get the id of the <head> tag of the document. You can try to run the following code to implement document.head property in JavaScript −" }, { "code": null, "e": 1257, "s": 1246, "text": " Live Demo" }, { "code": null, "e": 1701, "s": 1257, "text": "<!DOCTYPE html>\n<html>\n <head id = \"myid\">\n <title>JavaScript Example</title>\n </head>\n <body>\n <h1>Employee Information</h1>\n <form>\n Name: <input type = \"text\" name = \"name\" value = \"Amit\"><br>\n Subject: <input type = \"text\" name = \"sub\" value = \"Java\">\n </form>\n <script>\n var a = document.head.id;\n document.write(\"<br>Head element id? \"+a);\n </script>\n </body>\n</html>" } ]
Multi Class Text Classification With Deep Learning Using BERT | by Susan Li | Towards Data Science
Most researchers submit their research papers to an academic conference because it is a faster way of making the results available. Finding and selecting a suitable conference has always been challenging, especially for young researchers.
However, based on previous conference proceedings data, researchers can increase their chances of paper acceptance and publication. We will try to solve this text classification problem with deep learning using BERT.
Almost all the code was taken from this tutorial, the only difference is the data.
The dataset contains 2,507 research paper titles, which have been manually classified into 5 categories (i.e. conferences), and can be downloaded from here.
df['Conference'].value_counts()
You may have noticed that our classes are imbalanced, and we will address this later on.
df['label'] = df.Conference.replace(label_dict)
Because the labels are imbalanced, we split the data set in a stratified fashion, using the label column as the class labels.
Our labels distribution will look like this after the split.
Tokenization is a process that takes raw texts and splits them into tokens, which are numeric data that represent words.
Constructs a BERT tokenizer. Based on WordPiece.
Instantiate a pre-trained BERT model configuration to encode our data.
To convert all the titles from text into encoded form, we use a function called batch_encode_plus, and we will process the training and validation data separately.
The 1st parameter inside the above function is the title text.
add_special_tokens=True means the sequences will be encoded with the special tokens relative to their model.
When batching sequences together, we set return_attention_mask=True, so it will return the attention mask according to the specific tokenizer defined by the max_length attribute.
We also want to pad all the titles to a certain maximum length.
We actually do not need to set max_length=256, but just to play it safe.
return_tensors='pt' to return PyTorch tensors.
And then we need to split the data into input_ids, attention_masks and labels. Finally, after we get encoded data set, we can create training data and validation data. We are treating each title as its unique sequence, so one sequence will be classified to one of the five labels (i.e. conferences). bert-base-uncased is a smaller pre-trained model. Using num_labels to indicate the number of output labels. We don’t really care about output_attentions. We also don’t need output_hidden_states. DataLoader combines a dataset and a sampler, and provides an iterable over the given dataset. We use RandomSampler for training and SequentialSampler for validation. Given the limited memory in my environment, I set batch_size=3. To construct an optimizer, we have to give it an iterable containing the parameters to optimize. Then, we can specify optimizer-specific options such as the learning rate, epsilon, etc. I found epochs=5 works well for this data set. Create a schedule with a learning rate that decreases linearly from the initial learning rate set in the optimizer to 0, after a warmup period during which it increases linearly from 0 to the initial learning rate set in the optimizer. We will use f1 score and accuracy per class as performance metrics. Jupyter notebook can be found on Github. Enjoy the rest of the weekend!
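The two performance metrics mentioned at the end can be sketched in plain Python. The tutorial relies on scikit-learn's weighted f1_score for this; the version below is only a minimal stand-in to show what is being computed, assuming labels and preds are flat lists of true and predicted class ids (names chosen here for illustration):

```python
from collections import Counter

def f1_weighted(labels, preds):
    """Support-weighted F1 over the classes present in labels."""
    support = Counter(labels)
    total = 0.0
    for c in sorted(support):
        tp = sum(1 for y, p in zip(labels, preds) if y == c and p == c)
        fp = sum(1 for y, p in zip(labels, preds) if y != c and p == c)
        fn = sum(1 for y, p in zip(labels, preds) if y == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        # weight each class's F1 by its share of the true labels
        total += f1 * support[c] / len(labels)
    return total

def accuracy_per_class(labels, preds):
    """Fraction of correct predictions within each true class."""
    support = Counter(labels)
    return {c: sum(1 for y, p in zip(labels, preds) if y == c == p) / support[c]
            for c in sorted(support)}

labels = [0, 0, 1, 1, 2]
preds = [0, 1, 1, 1, 2]
print(f1_weighted(labels, preds))
print(accuracy_per_class(labels, preds))
```

This mirrors what the evaluation loop reports after each epoch, just without the model in the way.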
[ { "code": null, "e": 411, "s": 171, "text": "Most of the researchers submit their research papers to academic conference because its a faster way of making the results available. Finding and selecting a suitable conference has always been challenging especially for young researchers." }, { "code": null, "e": 636, "s": 411, "text": "However, based on the previous conferences proceeding data, the researchers can increase their chances of paper acceptance and publication. We will try to solve this text classification problem with deep learning using BERT." }, { "code": null, "e": 720, "s": 636, "text": "Almost all the code were taken from this tutorial, the only difference is the data." }, { "code": null, "e": 875, "s": 720, "text": "The dataset contains 2,507 research paper titles, and have been manually classified into 5 categories (i.e. conferences) that can be downloaded from here." }, { "code": null, "e": 907, "s": 875, "text": "df['Conference'].value_counts()" }, { "code": null, "e": 996, "s": 907, "text": "You may have noticed that our classes are imbalanced, and we will address this later on." }, { "code": null, "e": 1044, "s": 996, "text": "df['label'] = df.Conference.replace(label_dict)" }, { "code": null, "e": 1158, "s": 1044, "text": "Because the labels are imbalanced, we split the data set in a stratified fashion, using this as the class labels." }, { "code": null, "e": 1219, "s": 1158, "text": "Our labels distribution will look like this after the split." }, { "code": null, "e": 1329, "s": 1219, "text": "Tokenization is a process to take raw texts and split into tokens, which are numeric data to represent words." }, { "code": null, "e": 1378, "s": 1329, "text": "Constructs a BERT tokenizer. Based on WordPiece." }, { "code": null, "e": 1449, "s": 1378, "text": "Instantiate a pre-trained BERT model configuration to encode our data." 
}, { "code": null, "e": 1607, "s": 1449, "text": "To convert all the titles from text into encoded form, we use a function called batch_encode_plus , and we will proceed train and validation data separately." }, { "code": null, "e": 1670, "s": 1607, "text": "The 1st parameter inside the above function is the title text." }, { "code": null, "e": 1779, "s": 1670, "text": "add_special_tokens=True means the sequences will be encoded with the special tokens relative to their model." }, { "code": null, "e": 1958, "s": 1779, "text": "When batching sequences together, we set return_attention_mask=True, so it will return the attention mask according to the specific tokenizer defined by the max_length attribute." }, { "code": null, "e": 2020, "s": 1958, "text": "We also want to pad all the titles to certain maximum length." }, { "code": null, "e": 2093, "s": 2020, "text": "We actually do not need to set max_length=256, but just to play it safe." }, { "code": null, "e": 2132, "s": 2093, "text": "return_tensors='pt' to return PyTorch." }, { "code": null, "e": 2211, "s": 2132, "text": "And then we need to split the data into input_ids, attention_masks and labels." }, { "code": null, "e": 2300, "s": 2211, "text": "Finally, after we get encoded data set, we can create training data and validation data." }, { "code": null, "e": 2432, "s": 2300, "text": "We are treating each title as its unique sequence, so one sequence will be classified to one of the five labels (i.e. conferences)." }, { "code": null, "e": 2482, "s": 2432, "text": "bert-base-uncased is a smaller pre-trained model." }, { "code": null, "e": 2540, "s": 2482, "text": "Using num_labels to indicate the number of output labels." }, { "code": null, "e": 2586, "s": 2540, "text": "We don’t really care about output_attentions." }, { "code": null, "e": 2627, "s": 2586, "text": "We also don’t need output_hidden_states." 
}, { "code": null, "e": 2721, "s": 2627, "text": "DataLoader combines a dataset and a sampler, and provides an iterable over the given dataset." }, { "code": null, "e": 2793, "s": 2721, "text": "We use RandomSampler for training and SequentialSampler for validation." }, { "code": null, "e": 2857, "s": 2793, "text": "Given the limited memory in my environment, I set batch_size=3." }, { "code": null, "e": 3043, "s": 2857, "text": "To construct an optimizer, we have to give it an iterable containing the parameters to optimize. Then, we can specify optimizer-specific options such as the learning rate, epsilon, etc." }, { "code": null, "e": 3090, "s": 3043, "text": "I found epochs=5 works well for this data set." }, { "code": null, "e": 3326, "s": 3090, "text": "Create a schedule with a learning rate that decreases linearly from the initial learning rate set in the optimizer to 0, after a warmup period during which it increases linearly from 0 to the initial learning rate set in the optimizer." }, { "code": null, "e": 3394, "s": 3326, "text": "We will use f1 score and accuracy per class as performance metrics." } ]
Rexx - pull
This is used to pull input from the stack or from the default stream.
Pull variable
Variable − The variable to which the input value will be assigned.
None
/* Main program */ 
options arexx_bifs 
pull input 
say input 
When you run the above program, you need to enter some input. If you enter the input value of ‘Tutorial’, the program will print the word ‘TUTORIAL’.
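Note that pull is shorthand for parse upper pull, so the line read is translated to uppercase; that is why entering ‘Tutorial’ prints ‘TUTORIAL’. A rough Python analogue of that behaviour (the function name is ours, for illustration):

```python
def rexx_pull(line):
    # pull is shorthand for "parse upper pull": the value read from
    # the stack or default stream is translated to uppercase.
    return line.upper()

print(rexx_pull("Tutorial"))
```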
[ { "code": null, "e": 2409, "s": 2339, "text": "This is used to pull input from the stack or from the default stream." }, { "code": null, "e": 2425, "s": 2409, "text": "Pull variable \n" }, { "code": null, "e": 2495, "s": 2425, "text": "Variable − The variable to which the input value will be assigned to." }, { "code": null, "e": 2565, "s": 2495, "text": "Variable − The variable to which the input value will be assigned to." }, { "code": null, "e": 2570, "s": 2565, "text": "None" }, { "code": null, "e": 2633, "s": 2570, "text": "/* Main program */ \noptions arexx_bifs \npull input \nsay input " }, { "code": null, "e": 2784, "s": 2633, "text": "When you run the above program, you need to enter some input. If you enter the input value of ‘Tutorial’, the program will return the word ‘TUTORIAL’." }, { "code": null, "e": 2791, "s": 2784, "text": " Print" }, { "code": null, "e": 2802, "s": 2791, "text": " Add Notes" } ]
Maximum absolute difference of value and index sums in C
We are given an array of integers. The task is to calculate the maximum absolute difference of value and index sums. That is, for each pair of indexes (i,j) in the array, we have to calculate | Arr[i] - Arr[j] | + | i-j | and find the maximum such sum possible. Here |A| means the absolute value of A. If the array has 4 elements then the indexes are 0,1,2,3 and the unique pairs will be ( (0,0), (1,1), (2,2), (3,3), (0,1), (0,2), (0,3), (1,2), (1,3), (2,3) ).
Input − Arr[] = { 1,2,4,5 }
Output − Maximum absolute difference of value and index sums − 7
Explanation − Index pairs and | Arr[i]-Arr[j] | + | i-j | are as follows
1. (0,0), (1,1), (2,2), (3,3)--------- |i-j| for each is 0.
2. (0,1)---------- |1-2| + |0-1|= 1+1 = 2
3. (0,2)---------- |1-4| + |0-2|= 3+2 = 5
4. (0,3)---------- |1-5| + |0-3|= 4+3 = 7
5. (1,2)---------- |2-4| + |1-2|= 2+1 = 3
6. (1,3)---------- |2-5| + |1-3|= 3+2 = 5
7. (2,3)---------- |4-5| + |2-3|= 1+1 = 2
Maximum value of such a sum is 7.
Input − Arr[] = { 10,20,21 }
Output − Maximum absolute difference of value and index sums − 13
Explanation − Index pairs and | Arr[i]-Arr[j] | + | i-j | are as follows
1. (0,0), (1,1), (2,2)--------- |i-j| for each is 0.
2. (0,1)---------- |10-20| + |0-1|= 10+1 = 11
3. (0,2)---------- |10-21| + |0-2|= 11+2 = 13
4. (1,2)---------- |20-21| + |1-2|= 1+1 = 2
Maximum value of such a sum is 13.
We take an integer array having numbers as Arr[].
The function maxabsDiff(int arr[], int n) is used to calculate the maximum absolute difference of value and index sums.
We initialize the variable result with 0.
Inside the outer for loop, traverse the array of integers from the beginning.
In the nested for loop, traverse the remaining elements, calculate the absolute sum of element values and indexes i,j (abs(arr[i] - arr[j]) + abs(i - j)) and store it in a variable, say absDiff.
If this newly calculated sum is more than the previous one then store it in ‘result’.
Return result after traversing the whole array.
 Live Demo
#include <stdio.h>
#include <stdlib.h> /* for abs() */
// Function to return maximum absolute difference
int maxabsDiff(int arr[], int n){
   int result = 0;
   for (int i = 0; i < n; i++) {
      for (int j = i; j < n; j++) {
         int absDiff = abs(arr[i] - arr[j]) + abs(i - j);
         if (absDiff > result)
            result = absDiff;
      }
   }
   return result;
}
int main(){
   int Arr[] = {1,2,4,1,3,4,2,5,6,5};
   printf("Maximum absolute difference of value and index sums: %d", maxabsDiff(Arr,10));
   return 0;
}
If we run the above code it will generate the following output −
Maximum absolute difference of value and index sums: 13
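The nested loops above make the solution O(n^2). Because |Arr[i] - Arr[j]| + |i - j| always equals the larger of |(Arr[i]+i) - (Arr[j]+j)| and |(Arr[i]-i) - (Arr[j]-j)|, the same answer can also be found in a single pass by tracking only the extremes of Arr[k]+k and Arr[k]-k. A sketch of that alternative, written in Python rather than C for brevity:

```python
def max_abs_diff(arr):
    # |arr[i]-arr[j]| + |i-j| == max(|(arr[i]+i)-(arr[j]+j)|,
    #                                |(arr[i]-i)-(arr[j]-j)|),
    # so only the extremes of arr[k]+k and arr[k]-k matter.
    plus = [a + k for k, a in enumerate(arr)]
    minus = [a - k for k, a in enumerate(arr)]
    return max(max(plus) - min(plus), max(minus) - min(minus))

print(max_abs_diff([1, 2, 4, 5]))                   # first example above
print(max_abs_diff([10, 20, 21]))                   # second example above
print(max_abs_diff([1, 2, 4, 1, 3, 4, 2, 5, 6, 5]))  # array used in main()
```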
[ { "code": null, "e": 1507, "s": 1062, "text": "We are given with an array of integers. The task is to calculate the maximum absolute difference of value and index sums. That is for each pair of indexes (i,j) in an array, we have to calculate | Arr[i] - A[j] | + |i-j| and find the maximum such sum possible. Here |A| means absolute value of A. If array has 4 elements then indexes are 0,1,2,3 and unique pairs will be ( (0,0), (1,1), (2,2), (3,3), (0,1), (0,2), (0,3), (1,2), (1,3), (2,3) )." }, { "code": null, "e": 1535, "s": 1507, "text": "Input − Arr[] = { 1,2,4,5 }" }, { "code": null, "e": 1600, "s": 1535, "text": "Output − Maximum absolute difference of value and index sums − 7" }, { "code": null, "e": 1669, "s": 1600, "text": "Explanation − Index pairs and | A[i]-A[j] | + | i-j | are as follows" }, { "code": null, "e": 2015, "s": 1669, "text": "1. (0,0), (1,1), (2,2), (3,3)--------- |i-j| for each is 0.\n2. (0,1)---------- |1-2| + |0-1|= 1+1 = 2\n3. (0,2)---------- |1-4| + |0-2|= 3+2 = 5\n4. (0,3)---------- |1-5| + |0-3|= 4+3 = 7\n5. (1,2)---------- |2-4| + |1-2|= 2+1 = 3\n6. (1,3)---------- |2-5| + |1-3|= 3+2 = 5\n7. (2,3)---------- |4-5| + |2-3|= 1+1 = 2\nMaximum value of such a sum is 7." }, { "code": null, "e": 2044, "s": 2015, "text": "Input − Arr[] = { 10,20,21 }" }, { "code": null, "e": 2110, "s": 2044, "text": "Output − Maximum absolute difference of value and index sums − 13" }, { "code": null, "e": 2179, "s": 2110, "text": "Explanation − Index pairs and | A[i]-A[j] | + | i-j | are as follows" }, { "code": null, "e": 2403, "s": 2179, "text": "1. (0,0), (1,1), (2,2)--------- |i-j| for each is 0.\n2. (0,1)---------- |10-20| + |0-1|= 10+1 = 11\n3. (0,2)---------- |10-21| + |0-2|= 11+2 = 13\n4. (1,2)---------- |20-21| + |1-2|= 1+1 = 2\nMaximum value of such a sum is 13." 
}, { "code": null, "e": 2452, "s": 2403, "text": "We take an integer array having numbers as Arr[]" }, { "code": null, "e": 2501, "s": 2452, "text": "We take an integer array having numbers as Arr[]" }, { "code": null, "e": 2622, "s": 2501, "text": "The function maxabsDiff( int arr[],int n) is used to calculate the maximum absolute difference of value and index sums.." }, { "code": null, "e": 2743, "s": 2622, "text": "The function maxabsDiff( int arr[],int n) is used to calculate the maximum absolute difference of value and index sums.." }, { "code": null, "e": 2786, "s": 2743, "text": "We initialize the variable result with -1." }, { "code": null, "e": 2829, "s": 2786, "text": "We initialize the variable result with -1." }, { "code": null, "e": 2900, "s": 2829, "text": "Inside the for loop traverse the array of integers from the beginning." }, { "code": null, "e": 2971, "s": 2900, "text": "Inside the for loop traverse the array of integers from the beginning." }, { "code": null, "e": 3159, "s": 2971, "text": "In nested for loop traverse the remaining elements and calculate the absolute sum of element value and indexes i,j (abs(arr[i] - arr[j]) + abs(i - j)) and store in a variable say absDiff." }, { "code": null, "e": 3347, "s": 3159, "text": "In nested for loop traverse the remaining elements and calculate the absolute sum of element value and indexes i,j (abs(arr[i] - arr[j]) + abs(i - j)) and store in a variable say absDiff." }, { "code": null, "e": 3431, "s": 3347, "text": "If this new calculated sum is more than the previous one then store it in ‘result’." }, { "code": null, "e": 3515, "s": 3431, "text": "If this new calculated sum is more than the previous one then store it in ‘result’." }, { "code": null, "e": 3563, "s": 3515, "text": "Return result after traversing the whole array." }, { "code": null, "e": 3611, "s": 3563, "text": "Return result after traversing the whole array." 
}, { "code": null, "e": 3622, "s": 3611, "text": " Live Demo" }, { "code": null, "e": 4137, "s": 3622, "text": "#include <stdio.h>\n#include <math.h>\n// Function to return maximum absolute difference\nint maxabsDiff(int arr[], int n){\n int result = 0;\n for (int i = 0; i < n; i++) {\n for (int j = i; j < n; j++) {\n int absDiff= abs(arr[i] - arr[j]) + abs(i - j);\n if (absDiff > result)\n result = absDiff;\n }\n }\n return result;\n}\nint main(){\n int Arr[] = {1,2,4,1,3,4,2,5,6,5};\n printf(\"Maximum absolute difference of value and index sums: %d\", maxabsDiff(Arr,10));\n return 0;\n}" }, { "code": null, "e": 4202, "s": 4137, "text": "If we run the above code it will generate the following output −" }, { "code": null, "e": 4258, "s": 4202, "text": "Maximum absolute difference of value and index sums: 13" } ]
Implementing Linear and Polynomial Regression From Scratch | by Chris Tam | Towards Data Science
In this tutorial, we’ll go through how to implement a simple linear regression model using a least squares approach to fit the data. After that, we’ll extend the model to a polynomial regression model in order to capture more complex signals. We’ll be using the mean squared error to measure the quality of fit for every model we generate. You can download all the resources I used to write this article from my Github repo 👍.
Let’s start by loading the training data into memory and plotting it as a graph to see what we’re working with. Think of train_features as x-values and train_desired_outputs as y-values. The graph below is the resulting scatter plot of all the values.
Now it’s time to write a simple linear regression model to try to fit the data. Our goal is to find a line that best resembles the underlying pattern of the training data shown in the graph. We’re going to use the least squares method to parameterize our model with the coefficients that best describe the training set before seeing how well the model generalizes to data it hasn’t seen before.
Recall that the simple linear regression model is parameterized by a y-intercept and the slope of the regression line. The process of finding parameters so that our model fits the training data is called ‘training’ our model.
Given a design matrix X and a column vector of target outputs y, we can use the following equation to find the best intercept and slope coefficients for our linear model through least squares regression (for an in-depth view into the linear algebra concepts behind this equation, see this post):
Here’s how we code it up. We first create a design matrix X which holds a column of ones (in order to estimate the y-intercept) and another column to hold the values of our explanatory variable x. Then we take the inverse of the dot product of the transpose of X with X, and dot it with the product of the transpose of X and y (the y-values for training_desired_outputs); that is, w = (XᵀX)⁻¹Xᵀy.
Here’s how our model fit the training data.

We can quantify how well our model fits the data by using the mean squared error (MSE) to calculate the average squared difference between our line and the actual data points in the training set. The MSE is calculated as follows:

Mean squared error on the training set: 2.1739455790492586

Not bad! The mean squared error is a measure of the quality of an estimator — in our case the estimator is our linear regression model. The MSE is always non-negative, and values closer to zero are better.

Now let’s see how well our model predicts on data it hasn’t seen before. We call this step the testing phase. Remember, our model is defined by the coefficients returned by the least squares function above. That function returns w, which contains the coefficients for the y-intercept and the slope. We’re going to test our model using the same values we used to train it.

Here’s how our model fit the testing data:

Again we’ll use the MSE to evaluate how well our model fit the data (I’m using the same code provided earlier in the MSE.py gist).

Mean squared error on the testing set: 2.3118753456727985

Our testing error is slightly above our training error. This makes sense, because training errors should usually be lower than testing errors. It’s rare that a model would predict more accurately on data it hasn’t seen before than on the data it was trained on.

All we need to do to implement polynomial regression is to take our linear regression model and add more features. Recall the form of the linear regression model and compare it to the form of the polynomial regression model. You can see that we need an extra coefficient for every additional feature, denoted by x2...xm. The order of the polynomial regression model depends on the number of features included in the model, so a model with m features is an mth-degree or mth-order polynomial regression.
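The MSE used throughout the article reduces to one line of NumPy. The arrays below are hypothetical stand-ins for a model’s targets and predictions, not the article’s data:

```python
import numpy as np

# Hypothetical targets and predictions (e.g. from a fitted line
# y_hat = 1.10 + 1.96*x evaluated at x = 0..4); illustrative values only.
y_true = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
y_pred = np.array([1.10, 3.06, 5.02, 6.98, 8.94])

# MSE: the mean of the squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(mse)
```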
We’ll start with 2nd-order polynomial regression, and you’ll notice that it’s quite easy to increase the complexity of your regression model (increasing model complexity isn’t always a good thing and can lead to overfitting!!!).

Since we’re including another feature in our model, we’re going to have to account for it by adding another term to our design matrix. The general form of the design matrix of degree m looks like this. Let’s go ahead and build this in code:

Notice how we included a column for the x2 features on the right-hand side of the design matrix X. The resulting three coefficients are stored in coeffs. Let’s apply the model to our training data and print out the regression line.

Here’s how our 2nd-order polynomial regression model fit the training data:

Mean squared error on the training set: 0.4846845031271553

And now let’s apply our model to the testing data. Here’s how it fit.

Mean squared error on the testing set: 0.757363565551813

2nd-order polynomial regression is a better fit than linear regression. When comparing the MSEs of our linear regression model and 2nd-order polynomial regression model, we see that the latter fit the testing set better than the former. Hooray! We were able to improve the accuracy of our model by increasing the complexity. Be warned, however, that increasing model complexity doesn’t always lead to better accuracy.

In order to extend this model further, try implementing a 3rd-order polynomial regression by adding a cubed term for the feature x in the design matrix X, like this:

X = np.c_[np.ones(N), train_features_vals, np.square(train_features), np.power(train_features, 3)]

Make sure that when you plot your regression line, you modify y_pred to take this extra term into account.

y_pred = b[3]*np.power(x_line, 3) + b[2]*np.square(x_line) + b[1]*x_line + b[0]

I hope you’ve found this tutorial helpful! I am happy to respond to any questions or comments, so feel free to leave thoughts below or private notes in the text.
Cheers!
Python PostgreSQL - Create Table
You can create a new table in a database in PostgreSQL using the CREATE TABLE statement. While executing this, you need to specify the name of the table, the column names and their data types.

Following is the syntax of the CREATE TABLE statement in PostgreSQL.

CREATE TABLE table_name(
   column1 datatype,
   column2 datatype,
   column3 datatype,
   .....
   columnN datatype
);

The following example creates a table with name CRICKETERS in PostgreSQL.

postgres=# CREATE TABLE CRICKETERS (
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Age INT,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255)
);
CREATE TABLE
postgres=#

You can get the list of tables in a database in PostgreSQL using the \dt command. After creating a table, if you verify the list of tables, you can observe the newly created table in it as follows −

postgres=# \dt
         List of relations
 Schema |    Name    | Type  |  Owner
--------+------------+-------+----------
 public | cricketers | table | postgres
(1 row)
postgres=#

In the same way, you can get the description of the created table using \d as shown below −

postgres=# \d cricketers
                     Table "public.cricketers"
     Column     |          Type          | Collation | Nullable | Default
----------------+------------------------+-----------+----------+---------
 first_name     | character varying(255) |           |          |
 last_name      | character varying(255) |           |          |
 age            | integer                |           |          |
 place_of_birth | character varying(255) |           |          |
 country        | character varying(255) |           |          |
postgres=#

To create a table using Python, you need to execute the CREATE TABLE statement using the execute() method of the Cursor of psycopg2. The following Python example creates a table with name EMPLOYEE.

import psycopg2

#Establishing the connection
conn = psycopg2.connect(
   database="mydb", user='postgres', password='password', host='127.0.0.1', port='5432'
)
#Creating a cursor object using the cursor() method
cursor = conn.cursor()

#Dropping EMPLOYEE table if it already exists
cursor.execute("DROP TABLE IF EXISTS EMPLOYEE")

#Creating table as per requirement
sql ='''CREATE TABLE EMPLOYEE(
   FIRST_NAME CHAR(20) NOT NULL,
   LAST_NAME CHAR(20),
   AGE INT,
   SEX CHAR(1),
   INCOME FLOAT
)'''
cursor.execute(sql)
print("Table created successfully........")
conn.commit()
#Closing the connection
conn.close()

Table created successfully........
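psycopg2 follows Python’s DB-API, so the connect/cursor/execute/commit pattern above can be tried without a running PostgreSQL server by swapping in the standard library’s sqlite3 module. This is a sketch under that substitution — sqlite3 is not PostgreSQL (types like CHAR(20) are accepted there but not enforced), and the catalog query uses SQLite’s sqlite_master rather than PostgreSQL’s \dt:

```python
import sqlite3

# An in-memory SQLite database stands in for the PostgreSQL connection;
# the DDL mirrors the psycopg2 example above.
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()

cursor.execute("DROP TABLE IF EXISTS EMPLOYEE")
cursor.execute('''CREATE TABLE EMPLOYEE(
   FIRST_NAME CHAR(20) NOT NULL,
   LAST_NAME CHAR(20),
   AGE INT,
   SEX CHAR(1),
   INCOME FLOAT
)''')

# Verify the table now exists in the catalog.
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = [row[0] for row in cursor.fetchall()]
print(tables)
conn.commit()
conn.close()
```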
Sort items in a Java TreeSet
First, create a TreeSet and add elements to it −

TreeSet<String> set = new TreeSet<String>();
set.add("65");
set.add("45");
set.add("19");
set.add("27");
set.add("89");
set.add("57");

Now, sort it in ascending order, which is the default −

Iterator<String> i = set.iterator();
while(i.hasNext()) {
   System.out.println(i.next());
}

If you want to sort in descending order, then use the descendingIterator() method −

Iterator<String> j = set.descendingIterator();
while(j.hasNext()) {
   System.out.println(j.next());
}

The following is an example to sort items in a TreeSet in ascending and descending order −

import java.util.*;
public class Demo {
   public static void main(String args[]) {
      TreeSet<String> set = new TreeSet<String>();
      set.add("65");
      set.add("45");
      set.add("19");
      set.add("27");
      set.add("89");
      set.add("57");
      System.out.println("TreeSet elements (Sorted)...");
      Iterator<String> i = set.iterator();
      while(i.hasNext()) {
         System.out.println(i.next());
      }
      System.out.println("\nTreeSet elements (Sorted) in Descending order...");
      Iterator<String> j = set.descendingIterator();
      while(j.hasNext()) {
         System.out.println(j.next());
      }
   }
}

TreeSet elements (Sorted)...
19
27
45
57
65
89

TreeSet elements (Sorted) in Descending order...
89
65
57
45
27
19
Protein Sequence Classification. A case study on Pfam dataset to... | by Ronak Vijay | Towards Data Science
Proteins are large, complex biomolecules that play many critical roles in biological bodies. Proteins are made up of one or more long chains of amino acid sequences. These sequences are the arrangement of amino acids in a protein, held together by peptide bonds. Proteins can be made from 20 different kinds of amino acids, and the structure and function of each protein are determined by the kinds of amino acids used to make it and how they are arranged.

Understanding this relationship between amino acid sequence and protein function is a long-standing problem in molecular biology with far-reaching scientific implications. Can we use deep learning that learns the relationship between unaligned amino acid sequences and their functional annotations across all protein families of the Pfam database?

The task is the classification of a protein’s amino acid sequence to one of the protein family accessions, based on the Pfam dataset. In other words: given the amino acid sequence of the protein domain, predict which class it belongs to.

We have been provided with 5 features, they are as follows:

sequence: These are usually the input features to the model. Amino acid sequence for this domain. There are 20 very common amino acids (frequency > 1,000,000), and 4 amino acids that are quite uncommon: X, U, B, O, Z.

family_accession: These are usually the labels for the model. Accession number in form PFxxxxx.y (Pfam), where xxxxx is the family accession, and y is the version number. Some values of y are greater than ten, and so 'y' has two digits.

sequence_name: Sequence name, in the form "uniprot_accession_id/start_index-end_index".

aligned_sequence: Contains a single sequence from the multiple sequence alignment with the rest of the members of the family in seed, with gaps retained.

family_id: One word name for family.
Source: Kaggle

sequence: HWLQMRDSMNTYNNMVNRCFATCIRSFQEKKVNAEEMDCTKRCVTKFVGYSQRVALRFAE
family_accession: PF02953.15
sequence_name: C5K6N5_PERM5/28-87
aligned_sequence: ....HWLQMRDSMNTYNNMVNRCFATCI...........RS.F....QEKKVNAEE.....MDCT....KRCVTKFVGYSQRVALRFAE
family_id: zf-Tim10_DDP

It is a multi-class classification problem: for a given sequence of amino acids, we need to predict its protein family accession.

Multi-class log loss
Accuracy

In this section, we will explore, visualize and try to understand the given features. The given data is already split into 3 folders, i.e., train, dev and test, using a random split. First, let’s load the training, val and test datasets.

Sizes of the given sets are as follows:

Train size: 1086741 (80%)
Val size: 126171 (10%)
Test size: 126171 (10%)

NOTE: I have considered less data because of limited computational power. However, the same solution can also scale to the whole Pfam dataset.

First, let’s count the number of codes (amino acids) in each unaligned sequence. Most of the unaligned amino acid sequences have character counts in the range of 50–250.

Let’s also find the frequency of each code (amino acid) in the unaligned sequences. The most frequent amino acid code is Leucine (L), followed by Alanine (A), Valine (V) and Glycine (G). As we can see, the uncommon amino acids (i.e., X, U, B, O, Z) are present in very small quantities. Therefore, we can consider only the 20 common natural amino acids for sequence encoding in the preprocessing step.

Amino acid sequences are represented with their corresponding 1-letter codes; for example, the code for alanine is (A), arginine is (R), and so on. The complete list of amino acids with their codes can be found here.

Example, unaligned sequence:

PHPESRIRLSTRRDAHGMPIPRIESRLGPDAFARLRFMARTCRAILAAAGCAAPFEEFSSADAFSSTHVFGTCRMGHDPMRNVVDGWGRSHRWPNLFVADASLFPSSGGGESPGLTIQALALRT

For building deep learning models, we have to transform this textual data into a numerical form that machines can process.
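One way to do that transformation is sketched below: integer-encode each 1-letter code with a fixed dictionary and post-pad to a fixed length. The helper names are my own, and the dictionary ordering (alphabetical over the 20 common amino acids, mapped to 1..20) is an assumption — though it does reproduce the mapping visible in the encoded example later in the article (P→13, H→7, E→4, ...):

```python
# The 20 common amino acids mapped to 1..20; rare codes (X, U, B, O, Z)
# and anything else fall back to 0.
amino_acids = "ACDEFGHIKLMNPQRSTVWY"
code_dict = {aa: i + 1 for i, aa in enumerate(amino_acids)}

def integer_encode(seq):
    # replace each 1-letter code with its integer, 0 if unknown
    return [code_dict.get(ch, 0) for ch in seq]

def pad_post(encoded, max_len=100):
    # post-padding: append zeros up to max_len, truncate anything longer
    return (encoded + [0] * max_len)[:max_len]

sample = "PHPESRIRLS"  # prefix of the example sequence above
print(integer_encode(sample))             # [13, 7, 13, 4, 16, 15, 8, 15, 10, 16]
print(len(pad_post(integer_encode(sample))))  # 100
```

Each padded integer sequence can then be one hot encoded into a (100, 21)-shaped array: 20 amino acids plus the 0 bucket.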
I have used the one hot encoding method for this, considering only the 20 common amino acids, as the other uncommon amino acids are present in small quantities.

The below code snippet creates a dictionary of the considered 20 amino acids, with integer values in incremental order, to be used further for integer encoding. For each unaligned amino acid sequence, each 1-letter code is replaced by an integer value using the created code dictionary. If the code is not present in the dictionary, the value is simply replaced by 0, thus considering only the 20 common amino acids.

This step will convert the 1-letter code sequence data into numerical data like this:

[13, 7, 13, 4, 16, 15, 8, 15, 10, 16, 17, 15, 15, 3, 1, 7, 6, 11, 13, 8, 13, 15, 8, 4, 16, 15, 10, 6, 13, 3, 1, 5, 1, 15, 10, 15, 5, 11, 1, 15, 17, 2, 15, 1, 8, 10, 1, 1, 1, 6, 2, 1, 1, 13, 5, 4, 4, 5, 16, 16, 1, 3, 1, 5, 16, 16, 17, 7, 18, 5, 6, 17, 2, 15, 11, 6, 7, 3, 13, 11, 15, 12, 18, 18, 3, 6, 19, 6, 15, 16, 7, 15, 19, 13, 12, 10, 5, 18, 1, 3, 1, 16, 10, 5, 13, 16, 16, 6, 6, 6, 4, 16, 13, 6, 10, 17, 8, 14, 1, 10, 1, 10, 15, 17]

Next, post padding is done with a max sequence length of 100, which pads with 0 if the total sequence length is less than 100 and otherwise truncates the sequence to the max length of 100. Finally, each code in the sequences is converted into a one hot encoded vector.

I have referred to this paper for defining the model architectures and trained two separate models: one is a bidirectional LSTM, and the other is inspired by ResNet, a CNN based architecture.

Variations of recurrent neural networks like LSTMs are the first choice when working on NLP based problems, as they were made to work with temporal or sequential data like text. RNNs are a type of neural network where the output from previous steps is fed as input to the current step, thus remembering some information about the sequence. RNNs are great when it comes to short contexts, but they have a limitation in remembering longer sequences because of the vanishing gradient problem.
LSTM (Long Short-Term Memory) networks are improved versions of RNNs, specialized in remembering information for an extended period using a gating mechanism, which makes them selective about what previous information to remember, what to forget, and how much of the current input to add when building the current cell state.

A unidirectional LSTM only preserves information from the past, because the inputs it has seen are from the past. A bidirectional LSTM runs the inputs in two ways, one from past to future and one from future to past, allowing it to preserve contextual information from both past and future at any point of time. A more in-depth explanation of RNNs and LSTMs can be found here.

The model architecture is as follows: first there is an embedding layer that learns the vector representation for each code, followed by a bidirectional LSTM. For regularization, dropout is added to prevent model over-fitting. The output layer, i.e., the softmax layer, gives probability values for all the unique classes (1000), and based on the highest predicted probability, the model classifies an amino acid sequence to one of the protein family accessions.

The model is trained for 33 epochs with a batch_size of 256 and was able to achieve a loss of (0.386) with (95.8%) accuracy on the test data.

439493/439493 [==============================] - 28s 65us/step
Train loss: 0.36330516427409587
Train accuracy: 0.9645910173696531
----------------------------------------------------------------------
54378/54378 [==============================] - 3s 63us/step
Val loss: 0.3869630661736021
Val accuracy: 0.9577034830108782
----------------------------------------------------------------------
54378/54378 [==============================] - 3s 64us/step
Test loss: 0.3869193921893196
Test accuracy: 0.9587149214887501

This model uses residual blocks inspired by the ResNet architecture, and also includes dilated convolutions, which offer a larger receptive field without increasing the number of model parameters.
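That “larger receptive field at no parameter cost” claim can be sanity-checked with the standard effective-kernel-size formula k_eff = k + (k − 1)(d − 1), where k is the kernel size and d the dilation rate (the function name below is my own choosing):

```python
def effective_kernel_size(k, d):
    # a dilated kernel places its k taps d positions apart,
    # so it spans k + (k - 1) * (d - 1) input positions
    return k + (k - 1) * (d - 1)

# A kernel of size 3 always has 3 weights per channel, but its span grows:
print(effective_kernel_size(3, 1))  # 3 (ordinary convolution)
print(effective_kernel_size(3, 2))  # 5 (sees as far as a size-5 kernel)
print(effective_kernel_size(3, 4))  # 9
```

The same arithmetic applies per dimension for 2-D kernels, which is why a 3x3 kernel with dilation rate 2 covers the footprint of a 5x5 kernel with only 9 parameters.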
ResNet (Residual Networks)

Deeper neural networks are difficult to train because of the vanishing gradient problem: as the gradient is back-propagated to earlier layers, repeated multiplication may make the gradient infinitely small. As a result, as the network goes deeper, its performance gets saturated or even starts degrading rapidly.

ResNet introduces a skip connection, or identity shortcut connection, which adds the initial input to the output of the convolution block. This mitigates the vanishing gradient problem by allowing an alternate shortcut path for the gradient to flow through. It also allows the model to learn an identity function, which ensures that the higher layer will perform at least as well as the lower layer, and not worse.

Dilated Convolution

Dilated convolutions introduce another parameter to convolutional layers called the dilation rate. This defines a spacing between the values in a kernel. A 3x3 kernel with a dilation rate of 2 will have the same field of view as a 5x5 kernel, while only using 9 parameters. Imagine taking a 5x5 kernel and deleting every second column and row. Dilated convolutions deliver a wider field of view at the same computational cost.

ProtCNN Model Architecture

Amino acid sequences are converted to one hot encodings with shape (batch_size, 100, 21) as input to the model. An initial convolution operation is applied to the input with a kernel size of 1 to extract basic properties. Then two identical residual blocks, inspired by the ResNet architecture, are used to capture complex patterns in the data; this will help us train the model for more epochs and with better performance.

I have defined the residual block slightly differently from the ResNet paper. Instead of three convolutions, only two convolutions are used, while also adding one more parameter (the dilation rate) to the first convolution so as to have a wider field of view with the same number of model parameters.
Each convolution operation in the residual block follows the basic pattern BatchNormalization => ReLU => Conv1D. In the residual block, the first convolution is done with a kernel size of 1x1 and a dilation rate, and the second convolution operation is done with a larger kernel size of 3x3. Finally, after applying the convolution operations, a skip connection is formed by adding the initial input (shortcut) to the output of the applied convolution operations.

After the two residual blocks, max pooling is applied to reduce the spatial size of the representation. For regularization, dropout is added to prevent model over-fitting.

This model is trained for 10 epochs with a batch_size of 256 and validated on the validation data.

439493/439493 [==============================] - 38s 85us/step
Train loss: 0.3558084576734698
Train accuracy: 0.9969123512774948
----------------------------------------------------------------------
54378/54378 [==============================] - 5s 85us/step
Val loss: 0.39615299251274316
Val accuracy: 0.9886718893955224
----------------------------------------------------------------------
54378/54378 [==============================] - 5s 85us/step
Test loss: 0.3949931418234982
Test accuracy: 0.9882489242257847

As we can see, the results of this model are better than those of the bidirectional LSTM model. Further improvement can be achieved by taking a majority vote across an ensemble of ProtCNN models.

In this case study, we have explored deep learning models that learn the relationship between unaligned amino acid sequences and their functional annotations. The ProtCNN model has achieved significant results that are more accurate and computationally efficient than current state of the art techniques like BLASTp for annotating protein sequences. These results suggest deep learning models will be a core component of future protein function prediction tools.

Thank you for reading. The complete code can be found here.
[ { "code": null, "e": 629, "s": 172, "text": "Proteins are large, complex biomolecules that play many critical roles in biological bodies. Proteins are made up of one or more long chains of amino acid sequences. These Sequences are the arrangement of amino acids in a protein, held together by peptide bonds. Proteins can be made from 20 different kinds of amino acids, and the structure and function of each protein are determined by the kinds of amino acids used to make it and how they are arranged." }, { "code": null, "e": 977, "s": 629, "text": "Understanding this relationship between amino acid sequence and protein function is a long-standing problem in molecular biology with far-reaching scientific implications. Can we use deep learning that learns the relationship between unaligned amino acid sequences and their functional annotations across all protein families of the Pfam database." }, { "code": null, "e": 1088, "s": 977, "text": "Classification of protein’s amino acid sequence to one of the protein family accession, based on Pfam dataset." }, { "code": null, "e": 1205, "s": 1088, "text": "In other words, the task is: given the amino acid sequence of the protein domain, predict which class it belongs to." }, { "code": null, "e": 1265, "s": 1205, "text": "We have been provided with 5 features, they are as follows:" }, { "code": null, "e": 1483, "s": 1265, "text": "sequence: These are usually the input features to the model. Amino acid sequence for this domain. There are 20 very common amino acids (frequency > 1,000,000), and 4 amino acids that are quite uncommon: X, U, B, O, Z." }, { "code": null, "e": 1720, "s": 1483, "text": "family_accession: These are usually the labels for the model. Accession number in form PFxxxxx.y (Pfam), where xxxxx is the family accession, and y is the version number. Some values of y are greater than ten, and so 'y' has two digits." 
}, { "code": null, "e": 1808, "s": 1720, "text": "sequence_name: Sequence name, in the form \"uniprot_accession_id/start_index-end_index\"." }, { "code": null, "e": 1962, "s": 1808, "text": "aligned_sequence: Contains a single sequence from the multiple sequence alignment with the rest of the members of the family in seed, with gaps retained." }, { "code": null, "e": 1999, "s": 1962, "text": "family_id: One word name for family." }, { "code": null, "e": 2014, "s": 1999, "text": "Source: Kaggle" }, { "code": null, "e": 2278, "s": 2014, "text": "sequence: HWLQMRDSMNTYNNMVNRCFATCIRSFQEKKVNAEEMDCTKRCVTKFVGYSQRVALRFAE family_accession: PF02953.15sequence_name: C5K6N5_PERM5/28-87aligned_sequence: ....HWLQMRDSMNTYNNMVNRCFATCI...........RS.F....QEKKVNAEE.....MDCT....KRCVTKFVGYSQRVALRFAE family_id: zf-Tim10_DDP" }, { "code": null, "e": 2407, "s": 2278, "text": "It is a multi class classification problem, for a given sequence of amino acids we need to predict its protein family accession." }, { "code": null, "e": 2428, "s": 2407, "text": "Multi class log loss" }, { "code": null, "e": 2437, "s": 2428, "text": "Accuracy" }, { "code": null, "e": 2615, "s": 2437, "text": "In this section, we will explore, visualize and try to understand the given features. Given data is already splitted into 3 folders i.e., train, dev and test using random split." }, { "code": null, "e": 2670, "s": 2615, "text": "First, let’s load the training, val and test datasets." }, { "code": null, "e": 2710, "s": 2670, "text": "Sizes of the given sets are as follows:" }, { "code": null, "e": 2736, "s": 2710, "text": "Train size: 1086741 (80%)" }, { "code": null, "e": 2759, "s": 2736, "text": "Val size: 126171 (10%)" }, { "code": null, "e": 2783, "s": 2759, "text": "Test size: 126171 (10%)" }, { "code": null, "e": 2930, "s": 2783, "text": "NOTE: I have considered less data because of limited computational power. However, the same solution can also be scale for the whole Pfam dataset." 
}, { "code": null, "e": 3010, "s": 2930, "text": "First, let’s count the number of codes(amino acid) in each unaligned sequences." }, { "code": null, "e": 3099, "s": 3010, "text": "Most of the unaligned amino acid sequences have character counts in the range of 50–250." }, { "code": null, "e": 3184, "s": 3099, "text": "Let’s also find the frequency for each code(amino acid) in each unaligned sequences." }, { "code": null, "e": 3278, "s": 3184, "text": "Most frequent amino acid code is Leucine(L) followed by Alanine(A), Valine(V) and Glycine(G)." }, { "code": null, "e": 3490, "s": 3278, "text": "As we can see, that the uncommon amino acids (i.e., X, U, B, O, Z) are present in very less quantity. Therefore we can consider only 20 common natural amino acids for sequence encoding at the preprocessing step." }, { "code": null, "e": 3700, "s": 3490, "text": "Amino acid sequences are represented with their corresponding 1 letter code, for example, code for alanine is (A), arginine is (R) and so on. The complete list of amino acids with there code can be found here." }, { "code": null, "e": 3729, "s": 3700, "text": "Example, unaligned sequence:" }, { "code": null, "e": 3854, "s": 3729, "text": "PHPESRIRLSTRRDAHGMPIPRIESRLGPDAFARLRFMARTCRAILAAAGCAAPFEEFSSADAFSSTHVFGTCRMGHDPMRNVVDGWGRSHRWPNLFVADASLFPSSGGGESPGLTIQALALRT" }, { "code": null, "e": 4123, "s": 3854, "text": "For building deep learning models, we have to transform this textual data into the numerical form that the machines can process. I have used one hot encoding method for the same with considering 20 common amino acids as other uncommon amino acids are less in quantity." }, { "code": null, "e": 4278, "s": 4123, "text": "The below code snippet creates a dictionary of considered 20 amino acids with integer values in incremental order to be further used for integer encoding." 
}, { "code": null, "e": 4525, "s": 4278, "text": "For each unaligned amino acid sequences, 1 letter code is replaced by an integer value using the created code dictionary. If the code is not present in the dictionary the value is simply replaced by 0, thus considering only 20 common amino acids." }, { "code": null, "e": 4607, "s": 4525, "text": "This step will convert 1 letter code sequence data into numerical data like this," }, { "code": null, "e": 5045, "s": 4607, "text": "[13, 7, 13, 4, 16, 15, 8, 15, 10, 16, 17, 15, 15, 3, 1, 7, 6, 11, 13, 8, 13, 15, 8, 4, 16, 15, 10, 6, 13, 3, 1, 5, 1, 15, 10, 15, 5, 11, 1, 15, 17, 2, 15, 1, 8, 10, 1, 1, 1, 6, 2, 1, 1, 13, 5, 4, 4, 5, 16, 16, 1, 3, 1, 5, 16, 16, 17, 7, 18, 5, 6, 17, 2, 15, 11, 6, 7, 3, 13, 11, 15, 12, 18, 18, 3, 6, 19, 6, 15, 16, 7, 15, 19, 13, 12, 10, 5, 18, 1, 3, 1, 16, 10, 5, 13, 16, 16, 6, 6, 6, 4, 16, 13, 6, 10, 17, 8, 14, 1, 10, 1, 10, 15, 17]" }, { "code": null, "e": 5216, "s": 5045, "text": "Next post padding is done with max sequence length of 100 which pads with 0 if total sequence length is less than 100 else truncates the sequence up to max length of 100." }, { "code": null, "e": 5296, "s": 5216, "text": "Finally, each code in the sequences is converted into a one hot encoded vector." }, { "code": null, "e": 5482, "s": 5296, "text": "I have referred this paper for defining model architectures and trained two separate models, one is bidirectional LSTM and the other one is inspired by ResNet, a CNN based architecture." }, { "code": null, "e": 5659, "s": 5482, "text": "Variations of Recurrent neural networks like LSTMs are the first choice when working on NLP based problems as they were made to work with temporal or sequential data like text." }, { "code": null, "e": 5963, "s": 5659, "text": "RNNs are a type of Neural Network where the output from previous steps are fed as input to the current step, thus remembers some information about the sequence. 
RNNs are great when it comes to short contexts, but it has a limitation of remembering longer sequences because of vanishing gradient problem." }, { "code": null, "e": 6287, "s": 5963, "text": "LSTM (Long Short-Term Memory) Networks are improved versions of RNN, specialized in remembering information for an extended period using a gating mechanism which makes them selective in what previous information to be remembered, what to forget and how much current input is to be added for building the current cell state." }, { "code": null, "e": 6596, "s": 6287, "text": "Unidirectional LSTM only preserves information of the past because the inputs it has seen are from the past. Using bidirectional will run the inputs in two ways, one from past to future and one from future to past allowing it to preserve contextual information from both past and future at any point of time." }, { "code": null, "e": 6658, "s": 6596, "text": "More in-depth explanation for RNN and LSTM can be found here." }, { "code": null, "e": 6885, "s": 6658, "text": "Model architecture follows like this, first there is an embedding layer that learns the vector representation for each code followed by bidirectional LSTM. For regularization, Dropout is added as to prevent model over-fitting." }, { "code": null, "e": 7117, "s": 6885, "text": "The output layer i.e., softmax layer will give probability values for all the unique classes(1000) and based on the highest predicted probability, the model will classify amino acid sequences to one of its protein family accession." }, { "code": null, "e": 7251, "s": 7117, "text": "The model is trained with 33 epochs, batch_size of 256 and was able to achieve a loss of (0.386) with (95.8%) accuracy for test data." 
}, { "code": null, "e": 7765, "s": 7251, "text": "439493/439493 [==============================] - 28s 65us/stepTrain loss: 0.36330516427409587Train accuracy: 0.9645910173696531----------------------------------------------------------------------54378/54378 [==============================] - 3s 63us/stepVal loss: 0.3869630661736021Val accuracy: 0.9577034830108782----------------------------------------------------------------------54378/54378 [==============================] - 3s 64us/stepTest loss: 0.3869193921893196Test accuracy: 0.9587149214887501" }, { "code": null, "e": 7951, "s": 7765, "text": "This model uses residual blocks inspired from ResNet architecture which also includes dilated convolutions offering larger receptive field without increasing number of model parameters." }, { "code": null, "e": 7978, "s": 7951, "text": "ResNet (Residual Networks)" }, { "code": null, "e": 8288, "s": 7978, "text": "Deeper neural networks are difficult to train because of vanishing gradient problem — as the gradient is back-propagated to earlier layers, repeated multiplication may make the gradient infinitely small. As a result, as the network goes deeper, its performance gets saturated or even starts degrading rapidly." }, { "code": null, "e": 8425, "s": 8288, "text": "ResNets introduces skip connection or identity shortcut connection which is adding initial input to the output of the convolution block." }, { "code": null, "e": 8700, "s": 8425, "text": "This mitigates the problem of vanishing gradient by allowing the alternate shortcut path for gradient to flow through. It also allows the model to learn an identity function which ensures that the higher layer will perform at least as well as the lower layer, and not worse." }, { "code": null, "e": 8720, "s": 8700, "text": "Dilated Convolution" }, { "code": null, "e": 9063, "s": 8720, "text": "Dilated convolutions introduce another parameter to convolutional layers called as dilation rate. 
This defines a spacing between the values in a kernel. A 3x3 kernel with a dilation rate of 2 will have the same field of view as a 5x5 kernel, while only using 9 parameters. Imagine taking a 5x5 kernel and deleting every second column and row." }, { "code": null, "e": 9146, "s": 9063, "text": "Dilated convolutions deliver a wider field of view at the same computational cost." }, { "code": null, "e": 9173, "s": 9146, "text": "ProtCNN Model Architecture" }, { "code": null, "e": 9618, "s": 9173, "text": "Amino acids sequences are converted to one hot encoding with shape (batch_size, 100, 21) as input to the model. Initial convolution operation is applied to the input with kernel size of 1 to extract basic properties. Then two identical residual blocks are used to capture complex patterns in the data which are inspired from the ResNet architecture, this will help us to train model with more number of epochs and with better model performance." }, { "code": null, "e": 9914, "s": 9618, "text": "I have defined residual block slightly different from the ResNet paper. Instead of performing three convolution only two convolution are used with also adding one more parameter (dilation rate) to the first convolution so as to have more wider field of view with same number of model parameters." }, { "code": null, "e": 10199, "s": 9914, "text": "Each Convolution operation in the residual block follows a basic pattern of BatchNormalization => ReLU => Conv1D. In residual block, first convolution is done with a kernel size of 1x1 with a dilation rate and the second convolution operation is done with a larger kernel size of 3x3." }, { "code": null, "e": 10364, "s": 10199, "text": "Finally after applying the convolution operations, a skip connection is formed by adding initial input(shortcut) and the output from applied convolution operations." 
}, { "code": null, "e": 10538, "s": 10364, "text": "After two residual blocks, Max pooling is applied for reducing the spatial size of the representation. For regularization, Dropout is added as to prevent model over-fitting." }, { "code": null, "e": 10632, "s": 10538, "text": "This model is trained with 10 epochs, batch_size of 256 and validated on the validation data." }, { "code": null, "e": 11146, "s": 10632, "text": "439493/439493 [==============================] - 38s 85us/stepTrain loss: 0.3558084576734698Train accuracy: 0.9969123512774948----------------------------------------------------------------------54378/54378 [==============================] - 5s 85us/stepVal loss: 0.39615299251274316Val accuracy: 0.9886718893955224----------------------------------------------------------------------54378/54378 [==============================] - 5s 85us/stepTest loss: 0.3949931418234982Test accuracy: 0.9882489242257847" }, { "code": null, "e": 11333, "s": 11146, "text": "As we can see, the results of this model are better than Bidirectional LSTM model. More improvements to this can be achieved by taking majority vote across an ensemble of ProtCNN models." }, { "code": null, "e": 11681, "s": 11333, "text": "In this case study, we have explored deep learning models that learn the relationship between unaligned amino acid sequences and their functional annotations. The ProtCNN model has achieved significant results which are more accurate and computationally efficient than current state of the art techniques like BLASTp to annotate protein sequences." }, { "code": null, "e": 11794, "s": 11681, "text": "These results suggest deep learning models will be a core component of future protein function prediction tools." } ]
How does tuple comparison work in Python?
Tuples are compared position by position: the first item of the first tuple is compared to the first item of the second tuple; if they are not equal, this is the result of the comparison, else the second item is considered, then the third and so on. >>> a = (1, 2, 3) >>> b = (1, 2, 5) >>> a < b True There is another type of comparison that takes into account similar and different elements. This can be performed using sets. Sets keep only the unique values from the tuples. Then you can apply the & operator, which performs set intersection, to get the common objects from the tuples. >>> a = (1, 2, 3, 4, 5) >>> b = (9, 8, 7, 6, 5) >>> set(a) & set(b) {5} You can also use the set.intersection() function to perform this operation. >>> a = (1, 2, 3, 4, 5) >>> b = (9, 8, 7, 6, 5) >>> set(a).intersection(set(b)) {5}
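One case the examples above do not cover is tuples of different lengths: comparison is lexicographic, so if one tuple is a prefix of the other, the shorter tuple compares as smaller. A quick illustration (added here, not from the original answer):

```python
# Lexicographic comparison decides at the first unequal position;
# items after that position are ignored.
print((1, 2, 4) < (1, 3, 0))   # True, because 2 < 3

# When one tuple is a prefix of the other, the shorter one is smaller.
print((1, 2) < (1, 2, 3))      # True

# Equality requires the same items in the same order.
print((1, 2, 3) == (1, 2, 3))  # True
```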
[ { "code": null, "e": 1313, "s": 1062, "text": "Tuples are compared position by position: the first item of the first tuple is compared to the first item of the second tuple; if they are not equal, this is the result of the comparison, else the second item is considered, then the third and so on. " }, { "code": null, "e": 1364, "s": 1313, "text": ">>> a = (1, 2, 3)\n>>> b = (1, 2, 5)\n>>> a < b\nTrue" }, { "code": null, "e": 1652, "s": 1364, "text": "There is another type of comparison that takes into account similar and different elements. This can be performed using sets. Sets will take the tuples and take only unique values. Then you can perform a & operation that acts like intersection to get the common objects from the tuples. " }, { "code": null, "e": 1724, "s": 1652, "text": ">>> a = (1, 2, 3, 4, 5)\n>>> b = (9, 8, 7, 6, 5)\n>>> set(a) & set(b)\n{5}" }, { "code": null, "e": 1795, "s": 1724, "text": "You can also use set.intersection function to perform this operation. " }, { "code": null, "e": 1885, "s": 1795, "text": ">>> a = (1, 2, 3, 4, 5)\n>>> b = (9, 8, 7, 6, 5)\n>>> set(a).intersection(set(b))\n{5}" } ]
How to set color opacity with RGBA in CSS? - GeeksforGeeks
26 Apr, 2021 In this article, we will see how to set the color opacity with RGBA in CSS. RGBA is a color format containing values for red, green, and blue, and the ‘A’ in RGBA stands for Alpha. To set the opacity of a color, we mainly change the value of alpha. The value of alpha varies from 0.0 (fully transparent) to 1.0 (fully opaque). Syntax: class/id { attribute: rgba(val1, val2, val3, val4) } Example: In the following example, we use the CSS background-color property with an alpha value (opacity). HTML

<!DOCTYPE html>
<html>
<body>
    <h2 style="color:green">
        GeeksforGeeks
    </h2>
    <b>Setting opacity with rgba</b>
    <p class="para1" style="background-color: rgba(255, 0, 0, 0.0);">
        Red
    </p>
    <p class="para2" style="background-color: rgba(255, 0, 0, 0.9);">
        Red
    </p>
    <p class="para3" style="background-color: rgba(0, 255, 0, 0.3);">
        Green
    </p>
    <p class="para4" style="background-color: rgba(0, 255, 0, 0.7);">
        Green
    </p>
    <p class="para5" style="background-color: rgba(0, 0, 255, 0.4);">
        Blue
    </p>
    <p class="para6" style="background-color: rgba(0, 0, 255, 1.0);">
        Blue
    </p>
</body>
</html>

Output: We can see that different values of alpha represent different levels of transparency.
[ { "code": null, "e": 25321, "s": 25293, "text": "\n26 Apr, 2021" }, { "code": null, "e": 25660, "s": 25321, "text": "In this article, we will see how to set the color opacity with RGBA in CSS. RGBA is a color format, basically containing values for red, green, blue respectively and ‘A’ in RGBA stands for Alpha. To set the opacity of color we mainly change the value of alpha. The value of alpha varies from 0.0 (fully transparent) to 1.0 (fully opaque)." }, { "code": null, "e": 25668, "s": 25660, "text": "Syntax:" }, { "code": null, "e": 25721, "s": 25668, "text": "class/id { attribute: rgba(val1, val2, val3, val4) }" }, { "code": null, "e": 25825, "s": 25721, "text": "Example: In the following example, we use the CSS background-color property with alpha value (opacity)." }, { "code": null, "e": 25830, "s": 25825, "text": "HTML" }, { "code": "<!DOCTYPE html><html> <body> <h2 style=\"color:green\"> GeeksforGeeks </h2> <b>Setting opacity with rgba</b> <p class=\"para1\" style= \"background-color: rgba(255, 0, 0, 0.0);\"> Red </p> <p class=\"para2\" style= \"background-color:rgba(255, 0, 0, 0.9) ;\"> Red </p> <p class=\"para3\" style= \"background-color: rgba(0, 255, 0, 0.3);\"> Green </p> <p class=\"para4\" style= \"background-color: rgba(0, 255, 0, 0.7) ;\"> Green </p> <p class=\"para5\" style= \"background-color:rgba(0, 0, 255, 0.4) ;\"> Blue </p> <p class=\"para6\" style= \"background-color: rgba(0, 0, 255, 1.0);\"> Blue </p> </body> </html>", "e": 26569, "s": 25830, "text": null }, { "code": null, "e": 26660, "s": 26569, "text": "Output: We can see that different values of alpha representing the different transparency." 
}, { "code": null, "e": 26665, "s": 26660, "text": "RGBA" }, { "code": null, "e": 26678, "s": 26665, "text": "bhartiomee25" }, { "code": null, "e": 26693, "s": 26678, "text": "CSS-Properties" }, { "code": null, "e": 26707, "s": 26693, "text": "CSS-Questions" }, { "code": null, "e": 26714, "s": 26707, "text": "Picked" }, { "code": null, "e": 26718, "s": 26714, "text": "CSS" }, { "code": null, "e": 26735, "s": 26718, "text": "Web Technologies" }, { "code": null, "e": 26833, "s": 26735, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26842, "s": 26833, "text": "Comments" }, { "code": null, "e": 26855, "s": 26842, "text": "Old Comments" }, { "code": null, "e": 26892, "s": 26855, "text": "Design a web page using HTML and CSS" }, { "code": null, "e": 26921, "s": 26892, "text": "Form validation using jQuery" }, { "code": null, "e": 26963, "s": 26921, "text": "Search Bar using HTML, CSS and JavaScript" }, { "code": null, "e": 27002, "s": 26963, "text": "How to set space between the flexbox ?" }, { "code": null, "e": 27037, "s": 27002, "text": "How to style a checkbox using CSS?" }, { "code": null, "e": 27079, "s": 27037, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 27112, "s": 27079, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 27155, "s": 27112, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 27200, "s": 27155, "text": "Convert a string to an integer in JavaScript" } ]
How to add image to ggplot2 in R ? - GeeksforGeeks
17 Jun, 2021 In this article, we will discuss how to insert or add an image into a plot using ggplot2 in R Programming Language. The ggplot() method of this package is used to initialize a ggplot object. It can be used to declare the input data frame for a graphic and can also be used to specify the set of plot aesthetics. The ggplot() function is used to construct the initial plot object and is almost always followed by components to add to the plot. Syntax: ggplot(data, mapping = aes()) Parameters : data – The data frame used for data plotting mapping – Default list of aesthetic mappings to use for plot. The “jpeg” package is used to provide an easy way to perform operations with JPEG images, that is, read, write and display bitmap images stored in the JPEG format. Syntax: install.packages(“jpeg”) The readJPEG() method of this package is used to access an image from a JPEG file/content into a raster array. It reads a bitmap image into a native rasterImage. Syntax: readJPEG(path, native = FALSE) Parameters : path – The name of the JPEG file to access into the working space. native – Indicator of the image representation. If TRUE then the result is a native raster representation. Another package, “patchwork” allows arbitrarily complex composition of plots by providing mathematical operators for combining multiple plots. It is used for enhancement and descriptive plot construction and analysis. It is mainly intended for users of ggplot2 and ensures its proper alignment. Syntax: install.packages(“patchwork”) The inset_element() function of this package provides a way to create additional insets and gives you full control over the placement and orientation of these insets with respect to each other. The method marks the specified graphics as an inset to be added to the preceding plot. Syntax: inset_element( p, left, bottom, right, top) Parameter: p – A grob, ggplot, patchwork, formula, raster, or nativeRaster object to be added as an inset to the plot. 
left, bottom, right, top – the coordinate positions for adding p.

Example:

R

# loading the required libraries
library("jpeg")
library("ggplot2")
library("patchwork")

# defining the x coordinates
xpos <- 1:5

# defining the y coordinates
ypos <- xpos**2

data_frame = data.frame(xpos = xpos, ypos = ypos)

print("Data points")
print(data_frame)

# plotting the data
graph <- ggplot(data_frame, aes(xpos, ypos)) + geom_point()

# specifying the image path
path <- "/Users/mallikagupta/Desktop/GFG/gfg.jpeg"

# read the jpeg file from device
img <- readJPEG(path, native = TRUE)

# adding image to graph
img_graph <- graph + inset_element(p = img,
                                   left = 0.05,
                                   bottom = 0.65,
                                   right = 0.5,
                                   top = 0.95)

# printing graph with image
print(img_graph)

Output

[1] "Data points"
  xpos ypos
1    1    1
2    2    4
3    3    9
4    4   16
5    5   25

The grid package in R provides the graphical functions on which the ggplot2 plotting system is built. Syntax: install.packages(“grid”) The rasterGrob() method in R is used to create a raster image graphical object in the working space. It takes the image (as read from a PNG or JPEG file) as its first argument and converts it into a raster graphical object. Syntax: rasterGrob(image, interpolate = TRUE) The qplot() method in R is used to create a quick plot and can be used as a wrapper over various other methods of creating plots. It makes it easier to create complex graphics. Syntax: qplot(x, y=NULL, data, geom="auto", xlim = c(NA, NA), ylim = c(NA, NA)) Parameters : x, y – x and y coordinates data – data frame to be used geom – indicator of the geom to be used xlim, ylim – x and y axis limits The annotation_custom() method, a special geom intended for use as static annotations, can be combined with the qplot() method. The actual scales of the plots remain unmodified using this option.
Syntax: annotation_custom(grob, xmin = -Inf, xmax = Inf, ymin = -Inf, ymax = Inf) Parameters : grob – The grob to display xmin, xmax, ymin, ymax – x and y location limits for placing the grob

Example:

R

# loading the required libraries
library("jpeg")
library("ggplot2")
library("grid")

# defining the x coordinates
xpos <- 1:5

# defining the y coordinates
ypos <- xpos**2

data_frame = data.frame(xpos = xpos, ypos = ypos)

print("Data points")
print(data_frame)

# specifying the image path
path <- "/Users/mallikagupta/Desktop/GFG/gfg.jpeg"

# read the jpeg file from device
img <- readJPEG(path, native = TRUE)

# converting to raster image
img <- rasterGrob(img, interpolate = TRUE)

# plotting the data
qplot(xpos, ypos, geom = "blank") +
    annotation_custom(img, xmin = -Inf, xmax = Inf, ymin = -Inf, ymax = Inf) +
    geom_point()

Output

[1] "Data points"
  xpos ypos
1    1    1
2    2    4
3    3    9
4    4   16
5    5   25
[ { "code": null, "e": 24851, "s": 24823, "text": "\n17 Jun, 2021" }, { "code": null, "e": 24967, "s": 24851, "text": "In this article, we will discuss how to insert or add an image into a plot using ggplot2 in R Programming Language." }, { "code": null, "e": 25295, "s": 24967, "text": "The ggplot() method of this package is used to initialize a ggplot object. It can be used to declare the input data frame for a graphic and can also be used to specify the set of plot aesthetics. The ggplot() function is used to construct the initial plot object and is almost always followed by components to add to the plot. " }, { "code": null, "e": 25303, "s": 25295, "text": "Syntax:" }, { "code": null, "e": 25333, "s": 25303, "text": "ggplot(data, mapping = aes())" }, { "code": null, "e": 25347, "s": 25333, "text": "Parameters : " }, { "code": null, "e": 25392, "s": 25347, "text": "data – The data frame used for data plotting" }, { "code": null, "e": 25454, "s": 25392, "text": "mapping – Default list of aesthetic mappings to use for plot." }, { "code": null, "e": 25618, "s": 25454, "text": "The “jpeg” package is used to provide an easy way to perform operations with JPEG images, that is, read, write and display bitmap images stored in the JPEG format." }, { "code": null, "e": 25626, "s": 25618, "text": "Syntax:" }, { "code": null, "e": 25651, "s": 25626, "text": "install.packages(“jpeg”)" }, { "code": null, "e": 25814, "s": 25651, "text": "The readJPEG() method of this package is used to access an image from a JPEG file/content into a raster array. It reads a bitmap image into a native rasterImage. " }, { "code": null, "e": 25822, "s": 25814, "text": "Syntax:" }, { "code": null, "e": 25853, "s": 25822, "text": "readJPEG(path, native = FALSE)" }, { "code": null, "e": 25867, "s": 25853, "text": "Parameters : " }, { "code": null, "e": 25936, "s": 25867, "text": "path – The name of the JPEG file to access into the working space. 
" }, { "code": null, "e": 26043, "s": 25936, "text": "native – Indicator of the image representation. If TRUE then the result is a native raster representation." }, { "code": null, "e": 26339, "s": 26043, "text": "Another package, “patchwork” allows arbitrarily complex composition of plots by providing mathematical operators for combining multiple plots. It is used for enhancement and descriptive plot construction and analysis. It is mainly intended for users of ggplot2 and ensures its proper alignment. " }, { "code": null, "e": 26347, "s": 26339, "text": "Syntax:" }, { "code": null, "e": 26377, "s": 26347, "text": "install.packages(“patchwork”)" }, { "code": null, "e": 26658, "s": 26377, "text": "The inset_element() function of this package provides a way to create additional insets and gives you full control over the placement and orientation of these insets with respect to each other. The method marks the specified graphics as an inset to be added to the preceding plot." }, { "code": null, "e": 26666, "s": 26658, "text": "Syntax:" }, { "code": null, "e": 26710, "s": 26666, "text": "inset_element( p, left, bottom, right, top)" }, { "code": null, "e": 26721, "s": 26710, "text": "Parameter:" }, { "code": null, "e": 26829, "s": 26721, "text": "p – A grob, ggplot, patchwork, formula, raster, or nativeRaster object to be added as an inset to the plot." }, { "code": null, "e": 26896, "s": 26829, "text": "left, bottom, right, top – the coordinate positions for adding p. 
" }, { "code": null, "e": 26905, "s": 26896, "text": "Example:" }, { "code": null, "e": 26907, "s": 26905, "text": "R" }, { "code": "# loading the required librarieslibrary(\"jpeg\")library(\"ggplot2\")library(\"patchwork\") # defining the x coordinatesxpos <- 1:5 # defining the y coordinatesypos <- xpos**2 data_frame = data.frame(xpos = xpos, ypos = ypos) print (\"Data points\")print (data_frame) # plotting the datagraph <- ggplot(data_frame, aes(xpos, ypos)) + geom_point() # specifying the image pathpath <- \"/Users/mallikagupta/Desktop/GFG/gfg.jpeg\" # read the jpef file from deviceimg <- readJPEG(path, native = TRUE) # adding image to graph img_graph <- graph + inset_element(p = img, left = 0.05, bottom = 0.65, right = 0.5, top = 0.95) # printing graph with image print (img_graph)", "e": 27676, "s": 26907, "text": null }, { "code": null, "e": 27683, "s": 27676, "text": "Output" }, { "code": null, "e": 27780, "s": 27683, "text": "[1] \"Data points\" \n xpos ypos \n1 1 1 \n2 2 4 \n3 3 9 \n4 4 16 \n5 5 25" }, { "code": null, "e": 27903, "s": 27780, "text": "The grid package in R is concerned with graphical functions that are used for the creation of the ggplot2 plotting system." }, { "code": null, "e": 27911, "s": 27903, "text": "Syntax:" }, { "code": null, "e": 27936, "s": 27911, "text": "install.packages(“grid”)" }, { "code": null, "e": 28163, "s": 27936, "text": "The rasterGrob() method in R is used to create a raster image graphical object into the working space. It takes as input the image path (either PNG or JPEG) as the first argument and converts it into a raster graphical image. " }, { "code": null, "e": 28171, "s": 28163, "text": "Syntax:" }, { "code": null, "e": 28209, "s": 28171, "text": "rasterGrob (path, interpolate = TRUE)" }, { "code": null, "e": 28388, "s": 28209, "text": "The qplot() method in R is used to create a quick plot which can be used as a wrapper over various other methods of creating plots. 
It makes it easier to create complex graphics." }, { "code": null, "e": 28397, "s": 28388, "text": "Syntax: " }, { "code": null, "e": 28468, "s": 28397, "text": "qplot(x, y=NULL, data, geom=”auto”, xlim = c(NA, NA), ylim =c(NA, NA))" }, { "code": null, "e": 28482, "s": 28468, "text": "Parameters : " }, { "code": null, "e": 28509, "s": 28482, "text": "x, y – x and y coordinates" }, { "code": null, "e": 28538, "s": 28509, "text": "data – data frame to be used" }, { "code": null, "e": 28578, "s": 28538, "text": "geom – indicator of the geom to be used" }, { "code": null, "e": 28611, "s": 28578, "text": "xlim, ylim – x and y axis limits" }, { "code": null, "e": 28808, "s": 28611, "text": "The annotation_custom() can be combined with the qplot() method which is a special geom intended for use as static annotations. The actual scales of the plots remain unmodified using this option. " }, { "code": null, "e": 28816, "s": 28808, "text": "Syntax:" }, { "code": null, "e": 28890, "s": 28816, "text": "annotation_custom(grob, xmin = -Inf, xmax = Inf, ymin = -Inf, ymax = Inf)" }, { "code": null, "e": 28904, "s": 28890, "text": "Parameters : " }, { "code": null, "e": 28928, "s": 28904, "text": "grob – The grob to display" }, { "code": null, "e": 28961, "s": 28928, "text": "xlim, ylim – x and y axis limits" }, { "code": null, "e": 28970, "s": 28961, "text": "Example:" }, { "code": null, "e": 28972, "s": 28970, "text": "R" }, { "code": "# loading the required librarieslibrary(\"jpeg\")library(\"ggplot2\")library(\"grid\") # defining the x coordinatesxpos <- 1:5 # defining the y coordinatesypos <- xpos**2 data_frame = data.frame(xpos = xpos, ypos = ypos) print (\"Data points\")print (data_frame) # specifying the image path path <- \"/Users/mallikagupta/Desktop/GFG/gfg.jpeg\" # read the jpeg file from deviceimg <- readJPEG(path, native = TRUE) # converting to raster imageimg <- rasterGrob(img, interpolate=TRUE) # plotting the dataqplot(xpos, ypos, geom=\"blank\") + annotation_custom(img, 
xmin=-Inf, xmax=Inf, ymin=-Inf, ymax=Inf) + geom_point()", "e": 29608, "s": 28972, "text": null }, { "code": null, "e": 29615, "s": 29608, "text": "Output" }, { "code": null, "e": 29705, "s": 29615, "text": "[1] \"Data points\"\n xpos ypos\n1 1 1\n2 2 4\n3 3 9\n4 4 16\n5 5 25" }, { "code": null, "e": 29712, "s": 29705, "text": "Picked" }, { "code": null, "e": 29721, "s": 29712, "text": "R-ggplot" }, { "code": null, "e": 29732, "s": 29721, "text": "R Language" } ]
How to make a GridLayout fit screen size in Android?
This example demonstrates how to make a GridLayout fit the screen size in Android.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<GridLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:id="@+id/tableGrid"
   android:layout_gravity="center"
   android:columnCount="4"
   android:orientation="horizontal"
   tools:context=".MainActivity">
   <Button android:text="1" />
   <Button android:text="2" />
   <Button android:text="3" />
   <Button android:text="4" />
   <Button android:text="5" />
   <Button android:text="6" />
   <Button android:text="7" />
   <Button android:text="8" />
   <Button android:text="9" />
   <Button android:text="10" />
   <Button android:text="11" />
   <Button android:text="12" />
   <Button android:text="13" />
   <Button android:text="14" />
   <Button android:text="15" />
   <Button android:text="16" />
</GridLayout>

Step 3 − Add the following code to src/MainActivity.java

package com.app.sample;
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.view.Gravity;
import android.widget.GridLayout;
import android.widget.ImageView;
import android.widget.TableLayout;
public class MainActivity extends AppCompatActivity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      GridLayout gridLayout = (GridLayout)findViewById(R.id.tableGrid);
      gridLayout.removeAllViews();
      int total = 12;
      int column = 5;
      int row = total / column;
      gridLayout.setColumnCount(column);
      gridLayout.setRowCount(row + 1);
      for(int i = 0, c = 0, r = 0; i < total; i++, c++){
         if(c == column){
            c = 0;
            r++;
         }
         ImageView oImageView = new 
ImageView(this);
         oImageView.setImageResource(R.drawable.ic_launcher_background);
         GridLayout.LayoutParams param = new GridLayout.LayoutParams();
         param.height = TableLayout.LayoutParams.WRAP_CONTENT;
         param.width = TableLayout.LayoutParams.WRAP_CONTENT;
         param.rightMargin = 5;
         param.topMargin = 5;
         param.setGravity(Gravity.CENTER);
         param.columnSpec = GridLayout.spec(c);
         param.rowSpec = GridLayout.spec(r);
         oImageView.setLayoutParams(param);
         gridLayout.addView(oImageView);
      }
   }
}

Step 4 − Add the following code to Manifests/AndroidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="com.app.sample">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −

Click here to download the project code.
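The row count in the activity above is derived from the item total and the column count. The arithmetic is framework-independent, so it can be checked in plain Java. In the sketch below, rowsUsed mirrors the activity's total / column + 1 logic, which allocates one extra, empty row whenever total divides evenly by column, while rowsExact uses ceiling division and never over-allocates. The class and method names are illustrative only, not part of the project code.

```java
// Sketch: how many rows a fixed-column grid needs for a given item count.
public class GridRows {
    // Mirrors the activity code: row = total / column; setRowCount(row + 1)
    static int rowsUsed(int total, int column) {
        return total / column + 1;
    }
    // Ceiling division: smallest row count that actually fits all items
    static int rowsExact(int total, int column) {
        return (total + column - 1) / column;
    }
    public static void main(String[] args) {
        System.out.println(rowsUsed(12, 5));   // prints 3 (2 full rows + 1 partial)
        System.out.println(rowsExact(12, 5));  // prints 3
        System.out.println(rowsUsed(10, 5));   // prints 3 -- one empty extra row
        System.out.println(rowsExact(10, 5));  // prints 2
    }
}
```

With 12 items in 5 columns both formulas agree, but with 10 items the activity's formula reserves a third, empty row.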
[ { "code": null, "e": 1145, "s": 1062, "text": "This example demonstrates how How to make a GridLayout fit screen size in Android." }, { "code": null, "e": 1274, "s": 1145, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1339, "s": 1274, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2313, "s": 1339, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<GridLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:app=\"http://schemas.android.com/apk/res-auto\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/tableGrid\"\n android:layout_gravity=\"center\"\n android:columnCount=\"4\"\n android:orientation=\"horizontal\"\n tools:context=\".MainActivity\">\n <Button android:text=\"1\" />\n <Button android:text=\"2\" />\n <Button android:text=\"3\" />\n <Button android:text=\"4\" />\n <Button android:text=\"5\" />\n <Button android:text=\"6\" />\n <Button android:text=\"7\" />\n <Button android:text=\"8\" />\n <Button android:text=\"9\" />\n <Button android:text=\"10\" />\n <Button android:text=\"11\" />\n <Button android:text=\"12\" />\n <Button android:text=\"13\" />\n <Button android:text=\"14\" />\n <Button android:text=\"15\" />\n <Button android:text=\"16\" />\n</GridLayout>" }, { "code": null, "e": 2370, "s": 2313, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 3824, "s": 2370, "text": "package com.app.sample;\nimport androidx.appcompat.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.Gravity;\nimport android.widget.GridLayout;\nimport android.widget.ImageView;\nimport android.widget.TableLayout;\npublic class MainActivity extends AppCompatActivity {\n @Override\n protected void onCreate(Bundle 
savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n GridLayout gridLayout = (GridLayout)findViewById(R.id.tableGrid);\n gridLayout.removeAllViews();\n int total = 12;\n int column = 5;\n int row = total / column;\n gridLayout.setColumnCount(column);\n gridLayout.setRowCount(row + 1);\n for(int i =0, c = 0, r = 0; i < total; i++, c++){\n if(c == column){\n c = 0;\n r++;\n }\n ImageView oImageView = new ImageView(this);\n oImageView.setImageResource(R.drawable.ic_launcher_background);\n GridLayout.LayoutParams param =new GridLayout.LayoutParams();\n param.height = TableLayout.LayoutParams.WRAP_CONTENT;\n param.width = TableLayout.LayoutParams.WRAP_CONTENT;\n param.rightMargin = 5;\n param.topMargin = 5;\n param.setGravity(Gravity.CENTER);\n param.columnSpec = GridLayout.spec(c);\n param.rowSpec = GridLayout.spec(r);\n oImageView.setLayoutParams (param);\n gridLayout.addView(oImageView);\n }\n }\n}" }, { "code": null, "e": 3889, "s": 3824, "text": "Step 4 − Add the following code to Manifests/AndroidManifest.xml" }, { "code": null, "e": 4580, "s": 3889, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.app.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4931, "s": 4580, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. 
To run the app from the android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 4972, "s": 4931, "text": "Click here to download the project code." } ]
Flutter - Animation
Animation is a complex procedure in any mobile application. In spite of its complexity, animation enhances the user experience to a new level and provides rich user interaction. Due to its richness, animation has become an integral part of modern mobile applications. The Flutter framework recognizes the importance of animation and provides a simple and intuitive framework to develop all types of animations.

Animation is the process of showing a series of images/pictures in a particular order within a specific duration to give an illusion of movement. The most important aspects of animation are as follows −

An animation has two distinct values: a start value and an end value. The animation starts from the start value, goes through a series of intermediate values, and finally ends at the end value. For example, to animate a widget so that it fades away, the initial value will be full opacity and the final value will be zero opacity.

The intermediate values may be linear or non-linear (curved) in nature and can be configured. Understand that the animation works as it is configured: each configuration provides a different feel to the animation. For example, fading a widget is linear in nature, whereas the bouncing of a ball is non-linear.

The duration of the animation process affects the speed (slowness or fastness) of the animation. 
Another important aspect is the ability to control the animation process: starting the animation, stopping it, repeating it a set number of times, reversing it, and so on.

In Flutter, the animation system does not do any real animation. Instead, it provides only the values required at every frame to render the images.

The Flutter animation system is based on Animation objects. The core animation classes and their usage are as follows −

Animation − generates interpolated values between two numbers over a certain duration. The most common Animation classes are −

Animation<double> − interpolates values between two decimal numbers

Animation<Color> − interpolates colors between two colors

Animation<Size> − interpolates sizes between two sizes

AnimationController − a special Animation object to control the animation itself. It generates new values whenever the application is ready for a new frame. It supports linear-based animation, and the value runs from 0.0 to 1.0.

controller = AnimationController(duration: const Duration(seconds: 2), vsync: this);

Here, controller controls the animation and the duration option controls the duration of the animation process. 
vsync is a special option used to optimize the resources used in the animation.

CurvedAnimation − similar to AnimationController but supports non-linear animation. CurvedAnimation can be used along with an Animation object as below −

controller = AnimationController(duration: const Duration(seconds: 2), vsync: this); 
animation = CurvedAnimation(parent: controller, curve: Curves.easeIn)

Tween<T> − derived from Animatable<T> and used to generate numbers between any two numbers other than 0 and 1. It can be used along with an Animation object by using the animate method and passing the actual Animation object.

AnimationController controller = AnimationController( 
   duration: const Duration(milliseconds: 1000), vsync: this); 
Animation<int> customTween = IntTween(begin: 0, end: 255).animate(controller);

Tween can also be used along with CurvedAnimation as below −

AnimationController controller = AnimationController(
   duration: const Duration(milliseconds: 500), vsync: this); 
final Animation curve = CurvedAnimation(parent: controller, curve: Curves.easeOut); 
Animation<int> customTween = IntTween(begin: 0, end: 255).animate(curve);

Here, controller is the actual animation controller, curve provides the type of non-linearity, and customTween provides the custom range from 0 to 255.

The workflow of the animation is as follows −

Define and start the animation controller in the initState of the StatefulWidget.

controller = AnimationController(duration: const Duration(seconds: 2), vsync: this); 
animation = Tween<double>(begin: 0, end: 300).animate(controller); 
controller.forward();

Add an animation-based listener, addListener, to change the state of the widget.

animation = Tween<double>(begin: 0, end: 300).animate(controller) 
   ..addListener(() { 
      setState(() { 
         // The state that has changed here is the animation object’s value. 
      }); 
   });

Built-in widgets AnimatedWidget and AnimatedBuilder can be used to skip this process. Both widgets accept an Animation object and get the current values required for the animation.

Get the animation values during the build process of the widget and then apply them to the width, height, or any relevant property instead of the original value.

child: Container( 
   height: animation.value, 
   width: animation.value, 
   child: <Widget>, 
)

Let us write a simple animation-based application to understand the concept of animation in the Flutter framework.

Create a new Flutter application in Android Studio, product_animation_app.

Copy the assets folder from product_nav_app to product_animation_app and add assets inside the pubspec.yaml file.

flutter: 
   assets: 
      - assets/appimages/floppy.png 
      - assets/appimages/iphone.png 
      - assets/appimages/laptop.png 
      - assets/appimages/pendrive.png 
      - assets/appimages/pixel.png 
      - assets/appimages/tablet.png

Remove the default startup code (main.dart).

Add the import and a basic main function.

import 'package:flutter/material.dart'; 
void main() => runApp(MyApp());

Create the MyApp widget derived from StatefulWidget.

class MyApp extends StatefulWidget { 
   _MyAppState createState() => _MyAppState(); 
}

Create the _MyAppState widget and implement initState and dispose in addition to the default build method. 
class _MyAppState extends State<MyApp> with SingleTickerProviderStateMixin { 
   Animation<double> animation; 
   AnimationController controller; 
   
   @override 
   void initState() { 
      super.initState(); 
      controller = AnimationController( 
         duration: const Duration(seconds: 10), vsync: this 
      ); 
      animation = Tween<double>(begin: 0.0, end: 1.0).animate(controller); 
      controller.forward(); 
   } 
   // This widget is the root of your application. 
   @override 
   Widget build(BuildContext context) { 
      controller.forward(); 
      return MaterialApp( 
         title: 'Flutter Demo', 
         theme: ThemeData(primarySwatch: Colors.blue,), 
         home: MyHomePage(title: 'Product layout demo home page', animation: animation,) 
      ); 
   } 
   @override 
   void dispose() { 
      controller.dispose(); 
      super.dispose(); 
   } 
}

Here,

In the initState method, we have created an animation controller object (controller) and an animation object (animation), and started the animation using controller.forward.

In the dispose method, we have disposed of the animation controller object (controller).

In the build method, we send the animation to the MyHomePage widget through its constructor. Now, MyHomePage can use the animation object to animate its content. 
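The controller/Tween pair above boils down to simple linear interpolation: the controller emits a value t from 0.0 to 1.0, and Tween(begin, end) maps it to begin + (end - begin) * t. Since that mapping is plain arithmetic, it can be sketched outside Flutter; the snippet below is an illustration in plain Java, not Flutter API.

```java
// Sketch: the value a Tween(begin, end) would produce for a controller value t.
public class TweenMath {
    static double lerp(double begin, double end, double t) {
        return begin + (end - begin) * t;
    }
    public static void main(String[] args) {
        // A Tween(begin: 0.0, end: 300.0) sampled at a few controller values
        for (double t : new double[]{0.0, 0.25, 0.5, 1.0}) {
            System.out.println(t + " -> " + lerp(0.0, 300.0, t));
        }
        // An IntTween(begin: 0, end: 255) at the halfway point, rounded
        System.out.println(Math.round(lerp(0, 255, 0.5)));  // prints 128
    }
}
```

At t = 0.25 the sampled tween yields 75.0, at t = 0.5 it yields 150.0, matching one quarter and one half of the 0-to-300 range.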
Now, add the ProductBox widget.

class ProductBox extends StatelessWidget { 
   ProductBox({Key key, this.name, this.description, this.price, this.image}) : super(key: key); 
   final String name; 
   final String description; 
   final int price; 
   final String image; 
   
   Widget build(BuildContext context) { 
      return Container( 
         padding: EdgeInsets.all(2), 
         height: 140, 
         child: Card( 
            child: Row( 
               mainAxisAlignment: MainAxisAlignment.spaceEvenly, 
               children: <Widget>[ 
                  Image.asset("assets/appimages/" + image), 
                  Expanded( 
                     child: Container( 
                        padding: EdgeInsets.all(5), 
                        child: Column( 
                           mainAxisAlignment: MainAxisAlignment.spaceEvenly, 
                           children: <Widget>[ 
                              Text(this.name, style: TextStyle(fontWeight: FontWeight.bold)), 
                              Text(this.description), 
                              Text("Price: " + this.price.toString()), 
                           ], 
                        ) 
                     ) 
                  ) 
               ] 
            ) 
         ) 
      ); 
   } 
}

Create a new widget, MyAnimatedWidget, to do a simple fade animation using opacity.

class MyAnimatedWidget extends StatelessWidget { 
   MyAnimatedWidget({this.child, this.animation}); 
   final Widget child; 
   final Animation<double> animation; 
   
   Widget build(BuildContext context) => Center( 
      child: AnimatedBuilder( 
         animation: animation, 
         builder: (context, child) => Container( 
            child: Opacity(opacity: animation.value, child: child), 
         ), 
         child: child), 
   );
}

Here, we have used AnimatedBuilder to do our animation. AnimatedBuilder is a widget that builds its content while doing the animation at the same time. It accepts an animation object to get the current animation value. We have used the animation value, animation.value, to set the opacity of the child widget. 
In effect, the widget will animate the child widget using the opacity concept.

Finally, create the MyHomePage widget and use the animation object to animate any of its content.

class MyHomePage extends StatelessWidget { 
   MyHomePage({Key key, this.title, this.animation}) : super(key: key); 
   final String title; 
   final Animation<double> animation; 
   
   @override 
   Widget build(BuildContext context) { 
      return Scaffold( 
         appBar: AppBar(title: Text("Product Listing")),
         body: ListView( 
            shrinkWrap: true, 
            padding: const EdgeInsets.fromLTRB(2.0, 10.0, 2.0, 10.0), 
            children: <Widget>[ 
               FadeTransition( 
                  child: ProductBox( 
                     name: "iPhone", 
                     description: "iPhone is the stylist phone ever", 
                     price: 1000, 
                     image: "iphone.png" 
                  ), 
                  opacity: animation 
               ), 
               MyAnimatedWidget(child: ProductBox( 
                  name: "Pixel", 
                  description: "Pixel is the most featureful phone ever", 
                  price: 800, 
                  image: "pixel.png" 
               ), animation: animation), 
               ProductBox( 
                  name: "Laptop", 
                  description: "Laptop is most productive development tool", 
                  price: 2000, 
                  image: "laptop.png" 
               ), 
               ProductBox( 
                  name: "Tablet", 
                  description: "Tablet is the most useful device ever for meeting", 
                  price: 1500, 
                  image: "tablet.png" 
               ), 
               ProductBox( 
                  name: "Pendrive", 
                  description: "Pendrive is useful storage medium", 
                  price: 100, 
                  image: "pendrive.png" 
               ), 
               ProductBox( 
                  name: "Floppy Drive", 
                  description: "Floppy drive is useful rescue storage medium", 
                  price: 20, 
                  image: "floppy.png" 
               ), 
            ], 
         ) 
      ); 
   } 
}

Here, we have used FadeTransition and MyAnimatedWidget to animate the first two items in the list. FadeTransition is a built-in animation widget, which we used to animate its child using the opacity concept. 
The complete code is as follows −

import 'package:flutter/material.dart'; 
void main() => runApp(MyApp()); 
class MyApp extends StatefulWidget { 
   _MyAppState createState() => _MyAppState(); 
} 
class _MyAppState extends State<MyApp> with SingleTickerProviderStateMixin { 
   Animation<double> animation; 
   AnimationController controller; 
   @override 
   void initState() { 
      super.initState(); 
      controller = AnimationController( 
         duration: const Duration(seconds: 10), vsync: this); 
      animation = Tween<double>(begin: 0.0, end: 1.0).animate(controller); 
      controller.forward(); 
   } 
   // This widget is the root of your application. 
   @override 
   Widget build(BuildContext context) { 
      controller.forward(); 
      return MaterialApp( 
         title: 'Flutter Demo', 
         theme: ThemeData(primarySwatch: Colors.blue,), 
         home: MyHomePage(title: 'Product layout demo home page', animation: animation,) 
      ); 
   } 
   @override 
   void dispose() { 
      controller.dispose(); 
      super.dispose(); 
   } 
} 
class MyHomePage extends StatelessWidget { 
   MyHomePage({Key key, this.title, this.animation}): super(key: key); 
   final String title; 
   final Animation<double> animation; 
   @override 
   Widget build(BuildContext context) { 
      return Scaffold( 
         appBar: AppBar(title: Text("Product Listing")), 
         body: ListView( 
            shrinkWrap: true, 
            padding: const EdgeInsets.fromLTRB(2.0, 10.0, 2.0, 10.0), 
            children: <Widget>[ 
               FadeTransition( 
                  child: ProductBox( 
                     name: "iPhone", 
                     description: "iPhone is the stylist phone ever", 
                     price: 1000, 
                     image: "iphone.png" 
                  ), 
                  opacity: animation 
               ), 
               MyAnimatedWidget( 
                  child: ProductBox( 
                     name: "Pixel", 
                     description: "Pixel is the most featureful phone ever", 
                     price: 800, 
                     image: "pixel.png" 
                  ), 
                  animation: animation 
               ), 
               ProductBox( 
                  name: "Laptop", 
                  description: "Laptop is most productive development tool", 
                  price: 2000, 
                  image: "laptop.png" 
               ), 
               ProductBox( 
                  name: "Tablet", 
                  description: "Tablet is the most useful device ever for meeting", 
                  price: 1500, 
                  image: "tablet.png" 
               ), 
               ProductBox( 
                  name: "Pendrive", 
                  description: "Pendrive is useful storage medium", 
                  price: 100, 
                  image: "pendrive.png" 
               ), 
               ProductBox( 
                  name: "Floppy Drive", 
                  description: "Floppy drive is useful rescue storage medium", 
                  price: 20, 
                  image: "floppy.png" 
               ), 
            ], 
         ) 
      ); 
   } 
} 
class ProductBox extends StatelessWidget { 
   ProductBox({Key key, this.name, this.description, this.price, this.image}) : super(key: key); 
   final String name; 
   final String description; 
   final int price; 
   final String image; 
   Widget build(BuildContext context) { 
      return Container( 
         padding: EdgeInsets.all(2), 
         height: 140, 
         child: Card( 
            child: Row( 
               mainAxisAlignment: MainAxisAlignment.spaceEvenly, 
               children: <Widget>[ 
                  Image.asset("assets/appimages/" + image), 
                  Expanded( 
                     child: Container( 
                        padding: EdgeInsets.all(5), 
                        child: Column( 
                           mainAxisAlignment: MainAxisAlignment.spaceEvenly, 
                           children: <Widget>[ 
                              Text( 
                                 this.name, 
                                 style: TextStyle( 
                                    fontWeight: FontWeight.bold 
                                 ) 
                              ), 
                              Text(this.description), 
                              Text( 
                                 "Price: " + this.price.toString() 
                              ), 
                           ], 
                        ) 
                     ) 
                  ) 
               ] 
            ) 
         ) 
      ); 
   } 
} 
class MyAnimatedWidget extends StatelessWidget { 
   MyAnimatedWidget({this.child, this.animation}); 
   final Widget child; 
   final Animation<double> animation; 
   Widget build(BuildContext context) => Center( 
      child: AnimatedBuilder( 
         animation: animation, 
         builder: (context, child) => Container( 
            child: Opacity(opacity: animation.value, child: child), 
         ), 
         child: child 
      ), 
   ); 
}

Compile and run the application to see the results. The initial and final versions of the application are as follows −
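The curves passed to CurvedAnimation earlier (Curves.easeIn, Curves.easeOut) are just functions that remap the controller's linear 0-to-1 progress before the Tween sees it. The sketch below uses simple quadratic stand-ins to show the shape of that remapping; Flutter's actual Curves are cubic-based, so these formulas are illustrative approximations only, written in plain Java rather than Dart.

```java
// Sketch: easing curves remap linear progress t in [0, 1] to eased progress.
// Quadratic stand-ins for ease-in / ease-out (not Flutter's exact cubic curves).
public class EasingSketch {
    static double easeIn(double t)  { return t * t; }                  // slow start
    static double easeOut(double t) { return 1 - (1 - t) * (1 - t); }  // slow end
    public static void main(String[] args) {
        for (double t : new double[]{0.0, 0.25, 0.5, 0.75, 1.0}) {
            System.out.printf("t=%.2f easeIn=%.4f easeOut=%.4f%n",
                    t, easeIn(t), easeOut(t));
        }
    }
}
```

Both functions still map 0 to 0 and 1 to 1, which is why swapping curves changes the feel of an animation without changing its start or end state.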
[ { "code": null, "e": 2623, "s": 2218, "text": "Animation is a complex procedure in any mobile application. In spite of its complexity, Animation enhances the user experience to a new level and provides a rich user interaction. Due to its richness, animation becomes an integral part of modern mobile application. Flutter framework recognizes the importance of Animation and provides a simple and intuitive framework to develop all types of animations." }, { "code": null, "e": 2829, "s": 2623, "text": "Animation is a process of showing a series of images / picture in a particular order within a specific duration to give an illusion of movement. The most important aspects of the animation are as follows −" }, { "code": null, "e": 3147, "s": 2829, "text": "Animation have two distinct values: Start value and End value. The animation starts from Start value and goes through a series of intermediate values and finally ends at End values. For example, to animate a widget to fade away, the initial value will be the full opacity and the final value will be the zero opacity." }, { "code": null, "e": 3465, "s": 3147, "text": "Animation have two distinct values: Start value and End value. The animation starts from Start value and goes through a series of intermediate values and finally ends at End values. For example, to animate a widget to fade away, the initial value will be the full opacity and the final value will be the zero opacity." }, { "code": null, "e": 3792, "s": 3465, "text": "The intermediate values may be linear or non-linear (curve) in nature and it can be configured. Understand that the animation works as it is configured. Each configuration provides a different feel to the animation. For example, fading a widget will be linear in nature whereas bouncing of a ball will be non-linear in nature." }, { "code": null, "e": 4119, "s": 3792, "text": "The intermediate values may be linear or non-linear (curve) in nature and it can be configured. 
Understand that the animation works as it is configured. Each configuration provides a different feel to the animation. For example, fading a widget will be linear in nature whereas bouncing of a ball will be non-linear in nature." }, { "code": null, "e": 4216, "s": 4119, "text": "The duration of the animation process affects the speed (slowness or fastness) of the animation." }, { "code": null, "e": 4313, "s": 4216, "text": "The duration of the animation process affects the speed (slowness or fastness) of the animation." }, { "code": null, "e": 4501, "s": 4313, "text": "The ability to control the animation process like starting the animation, stopping the animation, repeating the animation to set number of times, reversing the process of animation, etc.," }, { "code": null, "e": 4689, "s": 4501, "text": "The ability to control the animation process like starting the animation, stopping the animation, repeating the animation to set number of times, reversing the process of animation, etc.," }, { "code": null, "e": 4833, "s": 4689, "text": "In Flutter, animation system does not do any real animation. Instead, it provides only the values required at every frame to render the images." }, { "code": null, "e": 4977, "s": 4833, "text": "In Flutter, animation system does not do any real animation. Instead, it provides only the values required at every frame to render the images." }, { "code": null, "e": 5091, "s": 4977, "text": "Flutter animation system is based on Animation objects. The core animation classes and its usage are as follows −" }, { "code": null, "e": 5206, "s": 5091, "text": "Generates interpolated values between two numbers over a certain duration. 
The most common Animation classes are −" }, { "code": null, "e": 5272, "s": 5206, "text": "Animation<double> − interpolates values between two decimal numbers" }, { "code": null, "e": 5394, "s": 5338, "text": "Animation<Color> − interpolates colors between two colors" }, { "code": null, "e": 5503, "s": 5450, "text": "Animation<Size> − interpolates sizes between two sizes" }, { "code": null, "e": 5783, "s": 5556, "text": "AnimationController − Special Animation object to control the animation itself. It generates new values whenever the application is ready for a new frame. It supports linear-based animation and its value runs from 0.0 to 1.0" }, { "code": null, "e": 6096, "s": 6010, "text": "controller = AnimationController(duration: const Duration(seconds: 2), vsync: this);\n" }, { "code": null, "e": 6283, "s": 6096, "text": "Here, controller controls the animation and the duration option controls the duration of the animation process. vsync is a special option used to optimize the resources used in the animation." }, { "code": null, "e": 6416, "s": 6283, "text": "Similar to AnimationController but supports non-linear animation. 
CurvedAnimation can be used along with Animation object as below −" }, { "code": null, "e": 6572, "s": 6416, "text": "controller = AnimationController(duration: const Duration(seconds: 2), vsync: this); \nanimation = CurvedAnimation(parent: controller, curve: Curves.easeIn)" }, { "code": null, "e": 6776, "s": 6572, "text": "Derived from Animatable<T> and used to generate numbers between any two numbers other than 0 and 1. It can be used along with Animation object by using animate method and passing actual Animation object." }, { "code": null, "e": 6978, "s": 6776, "text": "AnimationController controller = AnimationController( \n duration: const Duration(milliseconds: 1000), \nvsync: this); Animation<int> customTween = IntTween(\n begin: 0, end: 255).animate(controller);" }, { "code": null, "e": 7036, "s": 6978, "text": "Tween can also used along with CurvedAnimation as below −" }, { "code": null, "e": 7094, "s": 7036, "text": "Tween can also used along with CurvedAnimation as below −" }, { "code": null, "e": 7370, "s": 7094, "text": "AnimationController controller = AnimationController(\n duration: const Duration(milliseconds: 500), vsync: this); \nfinal Animation curve = CurvedAnimation(parent: controller, curve: Curves.easeOut); \nAnimation<int> customTween = IntTween(begin: 0, end: 255).animate(curve);" }, { "code": null, "e": 7521, "s": 7370, "text": "Here, controller is the actual animation controller. curve provides the type of non-linearity and the customTween provides custom range from 0 to 255." }, { "code": null, "e": 7568, "s": 7521, "text": "The work flow of the animation is as follows −" }, { "code": null, "e": 7650, "s": 7568, "text": "Define and start the animation controller in the initState of the StatefulWidget." }, { "code": null, "e": 7732, "s": 7650, "text": "Define and start the animation controller in the initState of the StatefulWidget." 
}, { "code": null, "e": 7895, "s": 7732, "text": "AnimationController(duration: const Duration(seconds: 2), vsync: this); \nanimation = Tween<double>(begin: 0, end: 300).animate(controller); \ncontroller.forward();" }, { "code": null, "e": 7972, "s": 7895, "text": "Add an animation-based listener, addListener, to change the state of the widget." }, { "code": null, "e": 8239, "s": 8049, "text": "animation = Tween<double>(begin: 0, end: 300).animate(controller) ..addListener(() {\n setState(() { \n // The state that has changed here is the animation object’s value. \n }); \n});" }, { "code": null, "e": 8414, "s": 8239, "text": "Built-in widgets, AnimatedWidget and AnimatedBuilder, can be used to skip this process. Both widgets accept an Animation object and get the current values required for the animation." }, { "code": null, "e": 8745, "s": 8589, "text": "Get the animation values during the build process of the widget and then apply them to width, height or any relevant property instead of the original value." }, { "code": null, "e": 9000, "s": 8901, "text": "child: Container( \n height: animation.value, \n width: animation.value, \n child: <Widget>, \n)" }, { "code": null, "e": 9111, "s": 9000, "text": "Let us write a simple animation-based application to understand the concept of animation in the Flutter framework."
}, { "code": null, "e": 9186, "s": 9111, "text": "Create a new Flutter application in Android studio, product_animation_app." }, { "code": null, "e": 9489, "s": 9261, "text": "Copy the assets folder from product_nav_app to product_animation_app and add assets inside the pubspec.yaml file." }, { "code": null, "e": 9715, "s": 9489, "text": "flutter: \n assets: \n - assets/appimages/floppy.png \n - assets/appimages/iphone.png \n - assets/appimages/laptop.png \n - assets/appimages/pendrive.png \n - assets/appimages/pixel.png \n - assets/appimages/tablet.png" }, { "code": null, "e": 9760, "s": 9715, "text": "Remove the default startup code (main.dart)." }, { "code": null, "e": 9877, "s": 9841, "text": "Add the import and a basic main function." }, { "code": null, "e": 9950, "s": 9877, "text": "import 'package:flutter/material.dart'; \nvoid main() => runApp(MyApp());" }, { "code": null, "e": 10058, "s": 10004, "text": "Create the MyApp widget derived from StatefulWidget." }, { "code": null, "e": 10146, "s": 10058, "text": "class MyApp extends StatefulWidget { \n _MyAppState createState() => _MyAppState(); \n}" }, { "code": null, "e": 10245, "s": 10146, "text": "Create the _MyAppState widget and implement initState and dispose in addition to the default build method."
}, { "code": null, "e": 10344, "s": 10245, "text": "Create _MyAppState widget and implement initState and dispose in addition to default build method." }, { "code": null, "e": 11217, "s": 10344, "text": "class _MyAppState extends State<MyApp> with SingleTickerProviderStateMixin { \n Animation<double> animation; \n AnimationController controller; \n @override void initState() {\n super.initState(); \n controller = AnimationController(\n duration: const Duration(seconds: 10), vsync: this\n ); \n animation = Tween<double>(begin: 0.0, end: 1.0).animate(controller); \n controller.forward(); \n } \n // This widget is the root of your application. \n @override \n Widget build(BuildContext context) {\n controller.forward(); \n return MaterialApp(\n title: 'Flutter Demo',\n theme: ThemeData(primarySwatch: Colors.blue,), \n home: MyHomePage(title: 'Product layout demo home page', animation: animation,)\n ); \n } \n @override \n void dispose() {\n controller.dispose();\n super.dispose();\n }\n}" }, { "code": null, "e": 11223, "s": 11217, "text": "Here," }, { "code": null, "e": 11389, "s": 11223, "text": "In initState method, we have created an animation controller object (controller), an animation object (animation) and started the animation using controller.forward." }, { "code": null, "e": 11555, "s": 11389, "text": "In initState method, we have created an animation controller object (controller), an animation object (animation) and started the animation using controller.forward." }, { "code": null, "e": 11637, "s": 11555, "text": "In dispose method, we have disposed the animation controller object (controller)." }, { "code": null, "e": 11719, "s": 11637, "text": "In dispose method, we have disposed the animation controller object (controller)." }, { "code": null, "e": 11869, "s": 11719, "text": "In build method, send animation to MyHomePage widget through constructor. Now, MyHomePage widget can use the animation object to animate its content." 
}, { "code": null, "e": 12019, "s": 11869, "text": "In build method, send animation to MyHomePage widget through constructor. Now, MyHomePage widget can use the animation object to animate its content." }, { "code": null, "e": 12046, "s": 12019, "text": "Now, add ProductBox widget" }, { "code": null, "e": 12073, "s": 12046, "text": "Now, add ProductBox widget" }, { "code": null, "e": 13363, "s": 12073, "text": "class ProductBox extends StatelessWidget {\n ProductBox({Key key, this.name, this.description, this.price, this.image})\n : super(key: key);\n final String name; \n final String description; \n final int price; \n final String image; \n \n Widget build(BuildContext context) {\n return Container(\n padding: EdgeInsets.all(2), \n height: 140, \n child: Card( \n child: Row( \n mainAxisAlignment: MainAxisAlignment.spaceEvenly, \n children: <Widget>[ \n Image.asset(\"assets/appimages/\" + image), \n Expanded( \n child: Container( \n padding: EdgeInsets.all(5), \n child: Column( \n mainAxisAlignment: MainAxisAlignment.spaceEvenly, \n children: <Widget>[ \n Text(this.name, style: \n TextStyle(fontWeight: FontWeight.bold)), \n Text(this.description), \n Text(\"Price: \" + this.price.toString()), \n ], \n )\n )\n )\n ]\n )\n )\n ); \n }\n}" }, { "code": null, "e": 13444, "s": 13363, "text": "Create a new widget, MyAnimatedWidget to do simple fade animation using opacity." }, { "code": null, "e": 13525, "s": 13444, "text": "Create a new widget, MyAnimatedWidget to do simple fade animation using opacity." 
}, { "code": null, "e": 13961, "s": 13525, "text": "class MyAnimatedWidget extends StatelessWidget { \n MyAnimatedWidget({this.child, this.animation}); \n \n final Widget child; \n final Animation<double> animation; \n \n Widget build(BuildContext context) => Center( \n child: AnimatedBuilder(\n animation: animation, \n builder: (context, child) => Container( \n child: Opacity(opacity: animation.value, child: child), \n ), \n child: child), \n ); \n}" }, { "code": null, "e": 14335, "s": 13961, "text": "Here, we have used AnimatedBuilder to do our animation. AnimatedBuilder is a widget which builds its content while running the animation at the same time. It accepts an animation object to get the current animation value. We have used the animation value, animation.value, to set the opacity of the child widget. In effect, the widget animates its child using the opacity concept." }, { "code": null, "e": 14807, "s": 14709, "text": "Finally, create the MyHomePage widget and use the animation object to animate any of its content."
}, { "code": null, "e": 16923, "s": 14905, "text": "class MyHomePage extends StatelessWidget {\n MyHomePage({Key key, this.title, this.animation}) : super(key: key); \n \n final String title; \n final Animation<double> \n animation; \n \n @override \n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(title: Text(\"Product Listing\")),body: ListView(\n shrinkWrap: true,\n padding: const EdgeInsets.fromLTRB(2.0, 10.0, 2.0, 10.0), \n children: <Widget>[\n FadeTransition(\n child: ProductBox(\n name: \"iPhone\", \n description: \"iPhone is the stylist phone ever\", \n price: 1000, \n image: \"iphone.png\"\n ), opacity: animation\n ), \n MyAnimatedWidget(child: ProductBox(\n name: \"Pixel\", \n description: \"Pixel is the most featureful phone ever\", \n price: 800, \n image: \"pixel.png\"\n ), animation: animation), \n ProductBox(\n name: \"Laptop\", \n description: \"Laptop is most productive development tool\", \n price: 2000, \n image: \"laptop.png\"\n ), \n ProductBox(\n name: \"Tablet\", \n description: \"Tablet is the most useful device ever for meeting\", \n price: 1500, \n image: \"tablet.png\"\n ), \n ProductBox(\n name: \"Pendrive\", \n description: \"Pendrive is useful storage medium\", \n price: 100, \n image: \"pendrive.png\"\n ),\n ProductBox(\n name: \"Floppy Drive\", \n description: \"Floppy drive is useful rescue storage medium\", \n price: 20, \n image: \"floppy.png\"\n ),\n ],\n )\n );\n }\n}" }, { "code": null, "e": 17125, "s": 16923, "text": "Here, we have used FadeTransition and MyAnimatedWidget to animate the first two items in the list. FadeTransition is a built-in animation class, which we used to animate its child using the opacity concept."
}, { "code": null, "e": 17159, "s": 17125, "text": "The complete code is as follows −" }, { "code": null, "e": 17193, "s": 17159, "text": "The complete code is as follows −" }, { "code": null, "e": 22211, "s": 17193, "text": "import 'package:flutter/material.dart'; \nvoid main() => runApp(MyApp()); \n\nclass MyApp extends StatefulWidget { \n _MyAppState createState() => _MyAppState(); \n} \nclass _MyAppState extends State<MyApp> with SingleTickerProviderStateMixin {\n Animation<double> animation; \n AnimationController controller; \n \n @override \n void initState() {\n super.initState(); \n controller = AnimationController(\n duration: const Duration(seconds: 10), vsync: this); \n animation = Tween<double>(begin: 0.0, end: 1.0).animate(controller); \n controller.forward(); \n } \n // This widget is the root of your application. \n @override \n Widget build(BuildContext context) {\n controller.forward(); \n return MaterialApp( \n title: 'Flutter Demo', theme: ThemeData(primarySwatch: Colors.blue,), \n home: MyHomePage(title: 'Product layout demo home page', animation: animation,) \n ); \n } \n @override \n void dispose() {\n controller.dispose();\n super.dispose(); \n } \n}\nclass MyHomePage extends StatelessWidget { \n MyHomePage({Key key, this.title, this.animation}): super(key: key);\n final String title; \n final Animation<double> animation; \n \n @override \n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(title: Text(\"Product Listing\")), \n body: ListView(\n shrinkWrap: true, \n padding: const EdgeInsets.fromLTRB(2.0, 10.0, 2.0, 10.0), \n children: <Widget>[\n FadeTransition(\n child: ProductBox(\n name: \"iPhone\", \n description: \"iPhone is the stylist phone ever\", \n price: 1000, \n image: \"iphone.png\"\n ), \n opacity: animation\n ), \n MyAnimatedWidget(\n child: ProductBox( \n name: \"Pixel\", \n description: \"Pixel is the most featureful phone ever\", \n price: 800, \n image: \"pixel.png\"\n ), \n animation: animation\n ), 
\n ProductBox( \n name: \"Laptop\", \n description: \"Laptop is most productive development tool\", \n price: 2000, \n image: \"laptop.png\"\n ), \n ProductBox(\n name: \"Tablet\",\n description: \"Tablet is the most useful device ever for meeting\",\n price: 1500, \n image: \"tablet.png\"\n ), \n ProductBox(\n name: \"Pendrive\", \n description: \"Pendrive is useful storage medium\", \n price: 100, \n image: \"pendrive.png\"\n ), \n ProductBox(\n name: \"Floppy Drive\", \n description: \"Floppy drive is useful rescue storage medium\", \n price: 20, \n image: \"floppy.png\"\n ), \n ], \n )\n ); \n } \n} \nclass ProductBox extends StatelessWidget { \n ProductBox({Key key, this.name, this.description, this.price, this.image}) :\n super(key: key);\n final String name; \n final String description; \n final int price; \n final String image; \n Widget build(BuildContext context) {\n return Container(\n padding: EdgeInsets.all(2), \n height: 140, \n child: Card(\n child: Row(\n mainAxisAlignment: MainAxisAlignment.spaceEvenly, \n children: <Widget>[ \n Image.asset(\"assets/appimages/\" + image), \n Expanded(\n child: Container( \n padding: EdgeInsets.all(5), \n child: Column( \n mainAxisAlignment: MainAxisAlignment.spaceEvenly, \n children: <Widget>[ \n Text(\n this.name, style: TextStyle(\n fontWeight: FontWeight.bold\n )\n ), \n Text(this.description), Text(\n \"Price: \" + this.price.toString()\n ), \n ], \n )\n )\n ) \n ]\n )\n )\n ); \n } \n}\nclass MyAnimatedWidget extends StatelessWidget { \n MyAnimatedWidget({this.child, this.animation}); \n final Widget child; \n final Animation<double> animation; \n \n Widget build(BuildContext context) => Center( \n child: AnimatedBuilder(\n animation: animation, \n builder: (context, child) => Container( \n child: Opacity(opacity: animation.value, child: child), \n ), \n child: child\n ), \n ); \n}" }, { "code": null, "e": 22328, "s": 22211, "text": "Compile and run the application to see the results. 
The initial and final version of the application is as follows −" } ]
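The Tween mechanics described in the Flutter tutorial above (a controller driving a value from 0.0 to 1.0, a tween mapping that value onto a begin/end range, and an optional curve reshaping it) can be sketched outside Dart. The following Python sketch illustrates only the underlying math; the function names are invented for this example and are not Flutter APIs.

```python
def tween(begin, end, t):
    # Linear interpolation: what Tween(begin:, end:).animate(controller)
    # computes as the controller's value t moves from 0.0 to 1.0.
    return begin + (end - begin) * t

def ease_in(t):
    # A simple quadratic ease-in, standing in for a non-linear curve
    # such as Curves.easeIn: slow at the start, fast at the end.
    return t * t

# Plain linear tween, like Tween<double>(begin: 0, end: 300):
print(tween(0.0, 300.0, 0.5))            # 150.0

# Curved tween, like wrapping the controller in a CurvedAnimation:
print(tween(0.0, 300.0, ease_in(0.5)))   # 75.0
```

At the midpoint the linear tween is exactly halfway, while the eased tween lags behind, which is what gives curved animations their distinct feel.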
Angular Material - Content
The md-content, an Angular Directive, is a container element and is used for scrollable content. The layout-padding attribute can be added to have padded content. The following example shows the use of md-content directive and also the use of angular content display. am_content.htm <html lang = "en"> <head> <link rel = "stylesheet" href = "https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.css"> <script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js"></script> <script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-animate.min.js"></script> <script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-aria.min.js"></script> <script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-messages.min.js"></script> <script src = "https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.js"></script> <script type = "text/javascript"> angular.module('firstApplication', ['ngMaterial']); </script> </head> <body ng-app = "firstApplication" ng-cloak> <md-toolbar class = "md-warn"> <div class = "md-toolbar-tools"> <h2 class = "md-flex">HTML 5</h2> </div> </md-toolbar> <md-content flex layout-padding> <p>HTML5 is the next major revision of the HTML standard superseding HTML 4.01, XHTML 1.0, and XHTML 1.1. HTML5 is a standard for structuring and presenting content on the World Wide Web.</p> <p>HTML5 is a cooperation between the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG).</p> <p>The new standard incorporates features like video playback and drag-and-drop that have been previously dependent on third-party browser plug-ins such as Adobe Flash, Microsoft Silverlight, and Google Gears.</p> </md-content> </body> </html> Verify the result. HTML5 is the next major revision of the HTML standard superseding HTML 4.01, XHTML 1.0, and XHTML 1.1. 
HTML5 is a standard for structuring and presenting content on the World Wide Web.
HTML5 is a cooperation between the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG).
The new standard incorporates features like video playback and drag-and-drop that have been previously dependent on third-party browser plug-ins such as Adobe Flash, Microsoft Silverlight, and Google Gears.
[ { "code": null, "e": 2353, "s": 2190, "text": "The md-content, an Angular Directive, is a container element and is used for scrollable content. The layout-padding attribute can be added to have padded content." }, { "code": null, "e": 2458, "s": 2353, "text": "The following example shows the use of md-content directive and also the use of angular content display." }, { "code": null, "e": 2473, "s": 2458, "text": "am_content.htm" }, { "code": null, "e": 4258, "s": 2473, "text": "<html lang = \"en\">\n <head>\n <link rel = \"stylesheet\"\n href = \"https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.css\">\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-animate.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-aria.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-messages.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.js\"></script>\n \n <script type = \"text/javascript\"> \n angular.module('firstApplication', ['ngMaterial']);\n </script>\n </head>\n \n <body ng-app = \"firstApplication\" ng-cloak>\n <md-toolbar class = \"md-warn\">\n <div class = \"md-toolbar-tools\">\n <h2 class = \"md-flex\">HTML 5</h2>\n </div>\n </md-toolbar>\n \n <md-content flex layout-padding>\n <p>HTML5 is the next major revision of the HTML standard superseding HTML\n 4.01, XHTML 1.0, and XHTML 1.1. 
HTML5 is a standard for structuring and\n presenting content on the World Wide Web.</p>\n \n <p>HTML5 is a cooperation between the World Wide Web Consortium (W3C) and the\n Web Hypertext Application Technology Working Group (WHATWG).</p>\n \n <p>The new standard incorporates features like video playback and drag-and-drop\n that have been previously dependent on third-party browser plug-ins such as Adobe\n Flash, Microsoft Silverlight, and Google Gears.</p>\n </md-content>\n </body>\n</html>" }, { "code": null, "e": 4277, "s": 4258, "text": "Verify the result." }, { "code": null, "e": 4462, "s": 4277, "text": "HTML5 is the next major revision of the HTML standard superseding HTML 4.01, XHTML 1.0, and XHTML 1.1. HTML5 is a standard for structuring and presenting content on the World Wide Web." }, { "code": null, "e": 4598, "s": 4462, "text": "HTML5 is a cooperation between the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG)." }, { "code": null, "e": 4805, "s": 4598, "text": "The new standard incorporates features like video playback and drag-and-drop that have been previously dependent on third-party browser plug-ins such as Adobe Flash, Microsoft Silverlight, and Google Gears." 
} ]
MySQL - STR_TO_DATE() Function
The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store date, date-and-time, and time stamp values respectively, where a time stamp is a numerical value representing the number of seconds from '1970-01-01 00:00:01' UTC (the epoch) to the specified time. MySQL provides a set of functions to manipulate these values.
The MySQL STR_TO_DATE() function accepts a string value and a format string as parameters, extracts the DATE, TIME or DATETIME value from the given string and returns the result.
Following is the syntax of the above function –
STR_TO_DATE(str,format)
The following example demonstrates the usage of the STR_TO_DATE() function.
mysql> SELECT STR_TO_DATE('5th Saturday September 2015', '%D %W %M %Y');
+------------------------------------------------------------+
| STR_TO_DATE('5th Saturday September 2015', '%D %W %M %Y')  |
+------------------------------------------------------------+
| 2015-09-05                                                 |
+------------------------------------------------------------+
1 row in set (0.00 sec)
Following is another example of this function –
mysql> SELECT STR_TO_DATE('Sat Sep 05 15', '%a %b %d %y');
+----------------------------------------------+
| STR_TO_DATE('Sat Sep 05 15', '%a %b %d %y')  |
+----------------------------------------------+
| 2015-09-05                                   |
+----------------------------------------------+
1 row in set (0.00 sec)
The following query converts a string containing a time value to TIME —
mysql> SELECT STR_TO_DATE('20 Hours 40 Minutes 45 Seconds', '%H Hours %i Minutes %S Seconds');
+----------------------------------------------------------------------------------+
| STR_TO_DATE('20 Hours 40 Minutes 45 Seconds', '%H Hours %i Minutes %S Seconds')  |
+----------------------------------------------------------------------------------+
| 20:40:45                                                                         |
+----------------------------------------------------------------------------------+
1 row in set (0.00 sec)
The following example converts a date-time string to a DATETIME value –
mysql> SELECT STR_TO_DATE('Sep 05 15 
10:23:00 PM', '%b %d %y %r');
+-----------------------------------------------------+
| STR_TO_DATE('Sep 05 15 10:23:00 PM', '%b %d %y %r') |
+-----------------------------------------------------+
| 2015-09-05 22:23:00                                 |
+-----------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT STR_TO_DATE('July 22, 1998','%M %d,%Y');
+-----------------------------------------+
| STR_TO_DATE('July 22, 1998','%M %d,%Y') |
+-----------------------------------------+
| 1998-07-22                              |
+-----------------------------------------+
1 row in set (0.00 sec)

Let us create a table named MyPlayers in the MySQL database using a CREATE statement as shown below:

mysql> CREATE TABLE MyPlayers(
	ID INT,
	First_Name VARCHAR(255),
	Last_Name VARCHAR(255),
	Date_Of_Birth VARCHAR(255),
	Place_Of_Birth VARCHAR(255),
	Country VARCHAR(255),
	PRIMARY KEY (ID)
);

Now, we will insert 7 records into the MyPlayers table using INSERT statements:

insert into MyPlayers values(1, 'Shikhar', 'Dhawan', '5th December 1981, Saturday', 'Delhi', 'India');
insert into MyPlayers values(2, 'Jonathan', 'Trott', '22nd April 1981, Wednesday', 'CapeTown', 'SouthAfrica');
insert into MyPlayers values(3, 'Kumara', 'Sangakkara', '27th October 1977, Thursday', 'Matale', 'Srilanka');
insert into MyPlayers values(4, 'Virat', 'Kohli', '5th November 1988, Saturday', 'Delhi', 'India');
insert into MyPlayers values(5, 'Rohit', 'Sharma', '30th April 1987, Thursday', 'Nagpur', 'India');
insert into MyPlayers values(6, 'Ravindra', 'Jadeja', '6th December 1988, Tuesday', 'Nagpur', 'India');
insert into MyPlayers values(7, 'James', 'Anderson', '30th June 1982, Wednesday', 'Burnley', 'England');

The following query converts the string values in the Date_Of_Birth column into dates and prints them:

mysql> SELECT First_Name, Last_Name, Date_Of_Birth, Country, STR_TO_DATE(Date_Of_Birth, '%D %M %Y, %W') as FormattedDOB FROM MyPlayers;
+------------+------------+-----------------------------+-------------+--------------+
| First_Name | Last_Name  | Date_Of_Birth               | Country     | FormattedDOB |
+------------+------------+-----------------------------+-------------+--------------+
| Shikhar    | Dhawan     | 5th December 1981, Saturday | India       | 1981-12-05   |
| Jonathan   | Trott      | 22nd April 1981, Wednesday  | SouthAfrica | 1981-04-22   |
| Kumara     | Sangakkara | 27th October 1977, Thursday | Srilanka    | 1977-10-27   |
| Virat      | Kohli      | 5th November 1988, Saturday | India       | 1988-11-05   |
| Rohit      | Sharma     | 30th April 1987, Thursday   | India       | 1987-04-30   |
| Ravindra   | Jadeja     | 6th December 1988, Tuesday  | India       | 1988-12-06   |
| James      | Anderson   | 30th June 1982, Wednesday   | England     | 1982-06-30   |
+------------+------------+-----------------------------+-------------+--------------+
7 rows in set (0.00 sec)

Suppose we have created a table named Subscribers with 5 records in it using the following queries:

mysql> CREATE TABLE Subscribers(
	SubscriberName VARCHAR(255),
	PackageName VARCHAR(255),
	SubscriptionDate VARCHAR(255)
);
insert into Subscribers values('Raja', 'Premium', '21st October 20');
insert into Subscribers values('Roja', 'Basic', '26th November 20');
insert into Subscribers values('Puja', 'Moderate', '7th March 21');
insert into Subscribers values('Vanaja', 'Basic', '21st February 21');
insert into Subscribers values('Jalaja', 'Premium', '30th January 21');

In the following example we pass the SubscriptionDate column as the date value to this function:

mysql> SELECT SubscriberName, PackageName, SubscriptionDate, STR_TO_DATE(SubscriptionDate, '%D %M %y') as FormattedDate FROM Subscribers;
+----------------+-------------+------------------+---------------+
| SubscriberName | PackageName | SubscriptionDate | FormattedDate |
+----------------+-------------+------------------+---------------+
| Raja           | Premium     | 21st October 20  | 2020-10-21    |
| Roja           | Basic       | 26th November 20 | 2020-11-26    |
| Puja           | Moderate    | 7th March 21     | 2021-03-07    |
| Vanaja         | Basic       | 21st February 21 | 2021-02-21    |
| Jalaja         | Premium     | 30th January 21  | 2021-01-30    |
+----------------+-------------+------------------+---------------+
5 rows in set (0.00 sec)

Suppose we have created a table named SubscribersData with 5 records in it using the following queries:

mysql> CREATE TABLE SubscribersData(
	SubscriberName VARCHAR(255),
	PackageName VARCHAR(255),
	SubscriptionTimestamp VARCHAR(255)
);
insert into SubscribersData values('Raja', 'Premium', '2020-10-21 20.53.49');
insert into SubscribersData values('Roja', 'Basic', '2020-11-26 10.13.19');
insert into SubscribersData values('Puja', 'Moderate', '2021-03-07 05.43.20');
insert into SubscribersData values('Vanaja', 'Basic', '2021-02-21 16.36.39');
insert into SubscribersData values('Jalaja', 'Premium', '2021-01-30 12.45.45');

In the following example we format the SubscriptionTimestamp column as a timestamp according to the USA standard (GET_FORMAT(TIMESTAMP, 'USA') expands to the format string '%Y-%m-%d %H.%i.%s', which matches the dot-separated time values stored above):

mysql> SELECT SubscriberName, PackageName, STR_TO_DATE(SubscriptionTimestamp, GET_FORMAT(TIMESTAMP, 'USA')) as TIMESTAMP FROM SubscribersData;
+----------------+-------------+---------------------+
| SubscriberName | PackageName | TIMESTAMP           |
+----------------+-------------+---------------------+
| Raja           | Premium     | 2020-10-21 20:53:49 |
| Roja           | Basic       | 2020-11-26 10:13:19 |
| Puja           | Moderate    | 2021-03-07 05:43:20 |
| Vanaja         | Basic       | 2021-02-21 16:36:39 |
| Jalaja         | Premium     | 2021-01-30 12:45:45 |
+----------------+-------------+---------------------+
5 rows in set (0.00 sec)
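The same string-to-date parsing idea exists outside MySQL as well. As a rough illustration (not part of the MySQL tutorial above), Python's datetime.strptime plays the role of STR_TO_DATE; note that Python's format specifiers differ from MySQL's (for example, Python has no %D for ordinal days like "5th"), so the patterns below are close Python equivalents of two of the examples, not a one-to-one translation:

```python
from datetime import datetime

# Rough Python equivalent of:
#   STR_TO_DATE('Sep 05 15 10:23:00 PM', '%b %d %y %r')
d = datetime.strptime("Sep 05 15 10:23:00 PM", "%b %d %y %I:%M:%S %p")
print(d)  # 2015-09-05 22:23:00

# Rough Python equivalent of:
#   STR_TO_DATE('July 22, 1998', '%M %d,%Y')
d2 = datetime.strptime("July 22, 1998", "%B %d, %Y")
print(d2.date())  # 1998-07-22
```

As in MySQL, an input string that does not match the format raises an error rather than returning a partial result.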
C/C++ Program for Greedy Algorithm to find Minimum number of Coins
A greedy algorithm is an algorithm used to find an optimal solution for the given problem. A greedy algorithm works by choosing the locally optimal solution (the best choice for the current part of the problem) at each step, in the hope that these local choices lead to a globally optimal solution.

In this problem, we will use a greedy algorithm to find the minimum number of coins/notes that add up to the given sum. We will consider all the valid coin/note denominations, i.e. { 1, 2, 5, 10, 20, 50, 100, 200, 500, 2000 }, and we need to return the number of these coins/notes needed to make up the sum.

Let's take a few examples to understand the context better:

Input : 1231
Output : 7
Explanation − We will need two Rs 500 notes, two Rs 100 notes, one Rs 20 note, one Rs 10 note and one Re 1 coin. That sums to 2+2+1+1+1 = 7.

Input : 2150
Output : 3
Explanation − We will need one Rs 2000 note, one Rs 100 note, and one Rs 50 note.

To solve this problem using a greedy algorithm, we find the largest denomination that can be used, subtract it from the sum, and repeat the same process until the sum becomes zero.

Input: sum
Initialise coins = empty list
Step 1: Find the largest denomination not greater than sum.
Step 2: Add that denomination to coins and subtract it from sum.
Step 3: Repeat steps 1-2 until sum becomes 0.
Step 4: Print each value in coins.
#include <bits/stdc++.h>
using namespace std;

int notes[] = { 1, 2, 5, 10, 20, 50, 100, 200, 500, 2000 };
int n = sizeof(notes) / sizeof(notes[0]);

// Greedily pick the largest note that still fits into the remaining sum.
void minchange(int sum){
   vector<int> coins;
   for (int i = n - 1; i >= 0; i--) {
      while (sum >= notes[i]) {
         sum -= notes[i];
         coins.push_back(notes[i]);
      }
   }
   for (int i = 0; i < (int)coins.size(); i++)
      cout << coins[i] << "\t";
}

int main(){
   int n = 3253;
   cout << "The minimum number of coins/notes that sum up to " << n << " is \t ";
   minchange(n);
   return 0;
}

Output:
The minimum number of coins/notes that sum up to 3253 is
2000 500 500 200 50 2 1
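The greedy choice happens to give the true minimum for this denomination set, but it is worth noting (a caveat not stated in the article above) that greedy coin change is not optimal for every denomination set. A short Python sketch with the hypothetical set {1, 3, 4} shows greedy using more coins than a dynamic-programming optimum:

```python
# Greedy vs. brute-force minimum coins. For denominations {1, 3, 4} and
# sum = 6, greedy picks 4 + 1 + 1 (3 coins) while the optimum is 3 + 3 (2 coins).

def greedy_count(denoms, total):
    """Repeatedly take the largest denomination that still fits."""
    count = 0
    for d in sorted(denoms, reverse=True):
        count += total // d
        total %= d
    return count

def optimal_count(denoms, total):
    """Classic dynamic-programming minimum-coin count."""
    INF = float("inf")
    best = [0] + [INF] * total
    for t in range(1, total + 1):
        for d in denoms:
            if d <= t and best[t - d] + 1 < best[t]:
                best[t] = best[t - d] + 1
    return best[total]

print(greedy_count([1, 3, 4], 6))   # 3  (4 + 1 + 1)
print(optimal_count([1, 3, 4], 6))  # 2  (3 + 3)

# For the denomination set used in the article, greedy matches the optimum:
print(greedy_count([1, 2, 5, 10, 20, 50, 100, 200, 500, 2000], 3253))  # 7
```

Denomination systems like the one above, where greedy is always optimal, are called canonical coin systems; the article's set is one of them.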
MySQL Aliases
Aliases are used to give a table, or a column in a table, a temporary name. Aliases are often used to make column names more readable. An alias only exists for the duration of that query. An alias is created with the AS keyword.

In this tutorial we will use the well-known Northwind sample database. Below is a selection from the "Customers" table: And a selection from the "Orders" table:

The following SQL statement creates two aliases, one for the CustomerID column and one for the CustomerName column:

The following SQL statement creates two aliases, one for the CustomerName column and one for the ContactName column. Note: Single or double quotation marks are required if the alias name contains spaces:

The following SQL statement creates an alias named "Address" that combines four columns (Address, PostalCode, City and Country):

The following SQL statement selects all the orders from the customer with CustomerID=4 (Around the Horn). We use the "Customers" and "Orders" tables, and give them the table aliases of "c" and "o" respectively (here we use aliases to make the SQL shorter):

The following SQL statement is the same as above, but without aliases:

Aliases can be useful when:

More than one table is involved in a query
Functions are used in the query
Column names are long or not very readable
Two or more columns are combined together

Exercise: When displaying the Customers table, make an ALIAS of the PostalCode column; the column should be called Pno instead.

SELECT CustomerName,
Address,
PostalCode
FROM Customers;
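Column and table aliases are standard SQL, not MySQL-specific, so the behaviour described above can be sketched in any SQL engine. The following Python snippet uses the built-in sqlite3 module and a made-up two-row stand-in for the Customers table (not the full Northwind database) to show that an AS alias exists only in the result of that one query:

```python
import sqlite3

# In-memory database with a tiny stand-in for the Customers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerName TEXT, PostalCode TEXT)")
conn.executemany(
    "INSERT INTO Customers VALUES (?, ?)",
    [("Alfreds Futterkiste", "12209"), ("Around the Horn", "WA1 1DP")],
)

# The AS keyword renames the column only in this query's result set;
# the table itself still has a column named PostalCode.
cur = conn.execute("SELECT CustomerName, PostalCode AS Pno FROM Customers")
print([d[0] for d in cur.description])  # ['CustomerName', 'Pno']

# A table alias shortens references, exactly as with "c" and "o" above.
cur = conn.execute(
    "SELECT c.CustomerName FROM Customers AS c WHERE c.PostalCode = '12209'"
)
print(cur.fetchall())  # [('Alfreds Futterkiste',)]
conn.close()
```

Running the first SELECT again without AS would show the column name back as PostalCode, confirming the alias is query-local.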
Python – Merge Dictionaries List with duplicate Keys
When it is required to merge a list of dictionaries that share duplicate keys, the keys of the dictionaries are iterated over and, depending on the condition, the result is determined.

Below is a demonstration of the same:

my_list_1 = [{"aba": 1, "best": 4}, {"python": 10, "fun": 15}, {"scala": "fun"}]
my_list_2 = [{"scala": 6}, {"python": 3, "best": 10}, {"java": 1}]

print("The first list is : ")
print(my_list_1)
print("The second list is : ")
print(my_list_2)

# Merge each pair of dictionaries; keys already present in my_list_1 win.
for i in range(0, len(my_list_1)):
    id_keys = list(my_list_1[i].keys())
    for key in my_list_2[i]:
        if key not in id_keys:
            my_list_1[i][key] = my_list_2[i][key]

print("The result is : ")
print(my_list_1)

Output:
The first list is :
[{'aba': 1, 'best': 4}, {'python': 10, 'fun': 15}, {'scala': 'fun'}]
The second list is :
[{'scala': 6}, {'python': 3, 'best': 10}, {'java': 1}]
The result is :
[{'aba': 1, 'best': 4, 'scala': 6}, {'python': 10, 'fun': 15, 'best': 10}, {'scala': 'fun', 'java': 1}]

Explanation:

Two lists of dictionaries are defined and displayed on the console.
The first list is iterated over and the keys of each of its dictionaries are accessed.
These keys are stored in a variable.
The corresponding dictionary from the second list is iterated over, and if a key is not present in that variable, its key/value pair is copied into the first list's dictionary.
The result is displayed on the console.
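The same keep-first merge can be written more compactly with dictionary unpacking, where later entries overwrite earlier ones; putting the second dictionary first means the first list's values win on duplicate keys. This compact variant is an addition to the article above, and unlike the loop version it builds new dictionaries instead of updating my_list_1 in place:

```python
my_list_1 = [{"aba": 1, "best": 4}, {"python": 10, "fun": 15}, {"scala": "fun"}]
my_list_2 = [{"scala": 6}, {"python": 3, "best": 10}, {"java": 1}]

# {**d2, **d1}: d1's values overwrite d2's on duplicate keys, matching the
# loop version where my_list_1 wins. Key insertion order differs (d2's keys
# come first), but the key/value contents are identical.
merged = [{**d2, **d1} for d1, d2 in zip(my_list_1, my_list_2)]
print(merged)
# [{'scala': 6, 'aba': 1, 'best': 4}, {'python': 10, 'best': 10, 'fun': 15},
#  {'java': 1, 'scala': 'fun'}]
```

On Python 3.9+, `d2 | d1` expresses the same merge with the union operator.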
Number Pattern | Practice | GeeksforGeeks
For a given number N, print the pattern in the following form: 1 121 12321 1234321 for N = 4.

Example 1:

Input:
N = 3
Output:
1 121 12321
Explanation:
For N = 3 we will print the 3 strings according to the pattern.

Example 2:

Input:
N = 6
Output:
1 121 12321 1234321 123454321 12345654321
Explanation:
For N = 6 we will print the 6 strings according to the pattern.

Your Task:
You don't need to read input or print anything. Your task is to complete the function numberPattern() which takes an integer N as input parameter and returns the list of strings to be printed.

Expected Time Complexity: O(N²)
Expected Auxiliary Space: O(1)

Constraints:
1 ≤ N ≤ 200

0  ambadkarajinkya216  4 days ago

class Solution{
    ArrayList<String> numberPattern(int N){
        ArrayList<String> list = new ArrayList<>();
        for(int i=1; i<=N; i++) {
            String str = "";
            for(int j=1; j<=i; j++) {
                str += Integer.toString(j);
            }
            for(int k=i-1; k>0; k--) {
                str += Integer.toString(k);
            }
            list.add(str);
        }
        return list;
    }
}

0  sushantwaybhase123  2 months ago

class Solution{
public:
    vector<string> numberPattern(int N) {
        vector<string> v;
        for(int i=1; i<=N; i++) {
            string a;
            for(int j=1; j<=i; j++) {
                a += to_string(j);
            }
            for(int k=i-1; k>=1; k--) {
                a += to_string(k);
            }
            v.push_back(a);
        }
        return v;
    }
};

0  navneetkrug  3 months ago

int res = 0;
for(int i=1; i<=n; i++)
{
    for(int j=1; j<=((2*i-1)/2 + 1); j++)
    {
        cout << j;
        res = j;
    }
    for(--res; res>=1; res--)
    {
        cout << res;
    }
    cout << " ";
}

0  pankajkumarravi  6 months ago

ArrayList<String> numberPattern(int N){
    // code here
    ArrayList<String> arrayList = new ArrayList<>();
    String L = "1", R = "1";
    arrayList.add("1");
    for (int i=2; i<=N; i++){
        arrayList.add(L + i + R);
        L = L + i;
        R = i + R;
    }
    return arrayList;
}

-1  jiten  9 months ago

solution in c++ :-

string int_to_string(int x){
    string ans;
    while(x){
        ans.push_back(char(x%10)+'0');
        x /= 10;
    }
    reverse(ans.begin(), ans.end());
    return ans;
}

vector<string> numberPattern(int N) {
    vector<string> res;
    for(int i = 1; i <= N; i++){
        string temp;
        for(int j = 1; j <= i; j++){
            temp += int_to_string(j);
        }
        for(int j = i-1; j >= 1; j--){
            temp += int_to_string(j);
        }
        res.push_back(temp);
    }
    return res;
}

0  Yash Kadulkar  1 year ago

Can someone suggest an improvement on this?

def numberPattern(self, N):
    # code here
    disp = []
    for i in range(1, N+1):
        new = []
        for j in range(1, i+1):
            new.append(str(j))
        disp.append(''.join(new + new[-2::-1]))
    return disp

0  Shreyansh Kumar Singh  1 year ago

https://uploads.disquscdn.c...

0  Sagar Pawar  1 year ago

class Solution{
    ArrayList<String> numberPattern(int N){
        // code here
        ArrayList<String> arr = new ArrayList<String>();
        int digi = 1, k, l = 0;
        String tempS;
        for(int i=1; i<=N; i++){
            k = i;
            tempS = "";
            for(int j=1; j<=digi; j++){
                if(j >= i)
                    tempS += String.valueOf(k--);
                else
                    tempS += String.valueOf(j);
            }
            arr.add(l, tempS);
            l++;
            digi += 2;
        }
        return arr;
    }
}

0  Ankur Parihar  1 year ago

wtf if wrong! why don't you guys at least run against the correct answer to check whether you're posting it correctly! My solution is working everywhere except here! Yet you expect us to buy your courses!

0  Tulsi Dey  2 years ago

Solution in java: Execution time: 0.47
https://uploads.disquscdn.c...
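To tie the solutions above together, here is a minimal Python sketch of the same idea, written as a plain function rather than the Solution-class method the judge expects: each row counts up to i and then mirrors everything except the middle digit.

```python
def number_pattern(n):
    """Return the n palindromic rows: '1', '121', '12321', ..."""
    rows = []
    for i in range(1, n + 1):
        up = ''.join(str(j) for j in range(1, i + 1))  # "123...i"
        row = up + up[-2::-1]                          # mirror all but the last digit
        rows.append(row)
    return rows

print(number_pattern(3))      # ['1', '121', '12321']
print(number_pattern(6)[-1])  # 12345654321
```

For i = 1 the slice up[-2::-1] is empty, so the first row is just "1", matching the expected output without a special case.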
why don't you guys atleast run against correct answer to check whether you're posting it correctly!My solution is working everywhere except here! Yet you expect us to buy your courses!" }, { "code": null, "e": 4621, "s": 4619, "text": "0" }, { "code": null, "e": 4642, "s": 4621, "text": "Tulsi Dey2 years ago" }, { "code": null, "e": 4652, "s": 4642, "text": "Tulsi Dey" }, { "code": null, "e": 4721, "s": 4652, "text": "Solution in java:Execution time: 0.47 https://uploads.disquscdn.c..." }, { "code": null, "e": 4867, "s": 4721, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4903, "s": 4867, "text": " Login to access your submissions. " }, { "code": null, "e": 4913, "s": 4903, "text": "\nProblem\n" }, { "code": null, "e": 4923, "s": 4913, "text": "\nContest\n" }, { "code": null, "e": 4986, "s": 4923, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 5134, "s": 4986, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 5342, "s": 5134, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5448, "s": 5342, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
How to programmatically add buttons into a layout one by one in several lines in Android
This example demonstrates how to programmatically add buttons into a layout, one by one, in several lines in Android.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    tools:context=".MainActivity"
    android:orientation="vertical">
</LinearLayout>

Step 3 − Add the following code to src/MainActivity.java

package app.com.sample;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.Button;
import android.widget.LinearLayout;
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);
        for (int i = 0; i < 3; i++) {
            LinearLayout row = new LinearLayout(this);
            row.setLayoutParams(new LinearLayout.LayoutParams
                (LinearLayout.LayoutParams.WRAP_CONTENT,
                LinearLayout.LayoutParams.WRAP_CONTENT));
            for (int j = 0; j < 4; j++) {
                Button btnTag = new Button(this);
                btnTag.setLayoutParams(new LinearLayout.LayoutParams
                    (LinearLayout.LayoutParams.WRAP_CONTENT,
                    LinearLayout.LayoutParams.MATCH_PARENT));
                btnTag.setText("Button " + (j + 1 + (i * 4)));
                btnTag.setId(j + 1 + (i * 4));
                row.addView(btnTag);
            }
            layout.addView(row);
        }
        setContentView(layout);
    }
}

Step 4 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="app.com.sample">
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.

Click here to download the project code.
[ { "code": null, "e": 1177, "s": 1062, "text": "This example demonstrates how do I programmatically add buttons into a layout one by one several lines in android." }, { "code": null, "e": 1306, "s": 1177, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1371, "s": 1306, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1696, "s": 1371, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n tools:context=\".MainActivity\"\n android:orientation=\"vertical\">\n</LinearLayout>" }, { "code": null, "e": 1753, "s": 1696, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 2972, "s": 1753, "text": "package app.com.sample;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.widget.Button;\nimport android.widget.LinearLayout;\npublic class MainActivity extends AppCompatActivity {\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n LinearLayout layout = new LinearLayout(this);\n layout.setOrientation(LinearLayout.VERTICAL);\n for (int i = 0; i < 3; i++) {\n LinearLayout row = new LinearLayout(this);\n row.setLayoutParams(new LinearLayout.LayoutParams\n (LinearLayout.LayoutParams.WRAP_CONTENT,\n LinearLayout.LayoutParams.WRAP_CONTENT));\n for (int j = 0; j < 4; j++) {\n Button btnTag = new Button(this);\n btnTag.setLayoutParams(new LinearLayout.LayoutParams\n (LinearLayout.LayoutParams.WRAP_CONTENT,\n LinearLayout.LayoutParams.MATCH_PARENT));\n btnTag.setText(\"Button \" + (j + 1 + (i * 4 )));\n btnTag.setId(j + 1 + (i * 4));\n 
row.addView(btnTag);\n }\n layout.addView(row);\n }\n setContentView(layout);\n }\n}" }, { "code": null, "e": 3027, "s": 2972, "text": "Step 4 - Add the following code to androidManifest.xml" }, { "code": null, "e": 3700, "s": 3027, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4047, "s": 3700, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –" }, { "code": null, "e": 4088, "s": 4047, "text": "Click here to download the project code." } ]
MySQL show tables sort by table name?
You can sort the table_name property from INFORMATION_SCHEMA.TABLES with the ORDER BY clause. Sort in ascending or descending order with the help of ASC or DESC, respectively. The syntax is as follows −

SELECT table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema='yourDatabaseName'
ORDER BY table_name DESC;

Consider a database named 'sample' that contains some tables. First, we will show all the tables; after that, we will sort them by table name. The query to display all tables is as follows −

mysql> show tables;

The following is the output −

+--------------------------+
| Tables_in_sample         |
+--------------------------+
| blobsizedemo             |
| insert_prevent           |
| insertrecord_selecttable |
| insertrecordprevent      |
| mytable                  |
| newlinedemo              |
| notequaloperator         |
| sumofeverydistinct       |
| yourtable                |
+--------------------------+
9 rows in set (0.00 sec)

Here is the query to sort by table name. Now, let us display all tables in descending order with the ORDER BY clause −

mysql> SELECT table_name
   -> FROM information_schema.tables
   -> WHERE table_type = 'BASE TABLE' AND table_schema='sample'
   -> ORDER BY table_name DESC;

The following is the output −

+--------------------------+
| TABLE_NAME               |
+--------------------------+
| yourtable                |
| sumofeverydistinct       |
| notequaloperator         |
| newlinedemo              |
| mytable                  |
| insertrecordprevent      |
| insertrecord_selecttable |
| insert_prevent           |
| blobsizedemo             |
+--------------------------+
9 rows in set (0.00 sec)
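To sort in ascending order instead, swap DESC for ASC. ASC is also MySQL's default direction when none is specified, so the keyword may be omitted entirely:

```sql
SELECT table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema = 'sample'
ORDER BY table_name ASC;
```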
[ { "code": null, "e": 1266, "s": 1062, "text": "You can sort the table_name property from INFORMATION_SCHEMA.TABLES with ORDER BY clause. Sort in ascending order or descending order with the help of ASC or DESC respectively. The syntax is as follows −" }, { "code": null, "e": 1409, "s": 1266, "text": "SELECT table_name\nFROM information_schema.tables\nWHERE table_type = 'BASE TABLE' AND table_schema='yourDatabaseName'\nORDER BY table_name DESC;" }, { "code": null, "e": 1600, "s": 1409, "text": "Use the database with the name sample and have some tables. First, we will show all tables after that we will apply to sort on the table name. The query to display all tables is as follows −" }, { "code": null, "e": 1620, "s": 1600, "text": "mysql> show tables;" }, { "code": null, "e": 1650, "s": 1620, "text": "The following is the output −" }, { "code": null, "e": 2052, "s": 1650, "text": "+--------------------------+\n| Tables_in_sample |\n+--------------------------+\n| blobsizedemo |\n| insert_prevent |\n| insertrecord_selecttable |\n| insertrecordprevent |\n| mytable |\n| newlinedemo |\n| notequaloperator |\n| sumofeverydistinct |\n| yourtable |\n+--------------------------+\n9 rows in set (0.00 sec)" }, { "code": null, "e": 2167, "s": 2052, "text": "Here is the query to sort by table name. 
Now, let us display all tables in descending order with ORDER BY clause −" }, { "code": null, "e": 2325, "s": 2167, "text": "mysql> SELECT table_name\n -> FROM information_schema.tables\n -> WHERE table_type = 'BASE TABLE' AND table_schema='sample'\n -> ORDER BY table_name DESC;" }, { "code": null, "e": 2355, "s": 2325, "text": "The following is the output −" }, { "code": null, "e": 2757, "s": 2355, "text": "+--------------------------+\n| TABLE_NAME |\n+--------------------------+\n| yourtable |\n| sumofeverydistinct |\n| notequaloperator |\n| newlinedemo |\n| mytable |\n| insertrecordprevent |\n| insertrecord_selecttable |\n| insert_prevent |\n| blobsizedemo |\n+--------------------------+\n9 rows in set (0.00 sec)" } ]
How to validate HTML tag using Regular Expression - GeeksforGeeks
04 Feb, 2021

Given a string str, the task is to check whether it is a valid HTML tag or not by using a Regular Expression. A valid HTML tag must satisfy the following conditions:

It should start with an opening tag (<).
It should be followed by a double quotes string or single quotes string.
It should not allow one double quotes string, one single quotes string or a closing tag (>) without single or double quotes enclosed.
It should end with a closing tag (>).

Examples:

Input: str = "<input value = '>'>"
Output: true
Explanation: The given string satisfies all the above mentioned conditions.

Input: str = "<br/>"
Output: true
Explanation: The given string satisfies all the above mentioned conditions.

Input: str = "br/>"
Output: false
Explanation: The given string doesn't start with an opening tag "<". Therefore, it is not a valid HTML tag.

Input: str = "<'br/>"
Output: false
Explanation: The given string has one single quotes string that is not allowed. Therefore, it is not a valid HTML tag.

Input: str = "<input value => >"
Output: false
Explanation: The given string has a closing tag (>) without single or double quotes enclosed, which is not allowed. Therefore, it is not a valid HTML tag.

Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer:

Get the String.
Create a regular expression to check a valid HTML tag as mentioned below:

regex = "<(\"[^\"]*\"|'[^']*'|[^'\">])*>"

Where:

< represents the string should start with an opening tag (<).
( represents the starting of the group.
"[^"]*" represents the string should allow a double quotes enclosed string.
| represents or.
'[^']*' represents the string should allow a single quotes enclosed string.
| represents or.
[^'">] represents the string should not contain one single quote, double quotes, and ">".
) represents the ending of the group.
* represents 0 or more.
> represents the string should end with a closing tag (>).

Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher().
Return true if the string matches with the given regular expression, else return false.

Below is the implementation of the above approach:

C++

// C++ program to validate the
// HTML tag using Regular Expression
#include <iostream>
#include <regex>
using namespace std;

// Function to validate the HTML tag.
bool isValidHTMLTag(string str)
{
    // Regex to check valid HTML tag.
    const regex pattern("<(\"[^\"]*\"|'[^']*'|[^'\">])*>");

    // If the HTML tag
    // is empty return false
    if (str.empty()) {
        return false;
    }

    // Return true if the HTML tag
    // matched the ReGex
    if (regex_match(str, pattern)) {
        return true;
    }
    else {
        return false;
    }
}

// Driver Code
int main()
{
    // Test Case 1:
    string str1 = "<input value = '>'>";
    cout << isValidHTMLTag(str1) << endl;

    // Test Case 2:
    string str2 = "<br/>";
    cout << isValidHTMLTag(str2) << endl;

    // Test Case 3:
    string str3 = "br/>";
    cout << isValidHTMLTag(str3) << endl;

    // Test Case 4:
    string str4 = "<'br/>";
    cout << isValidHTMLTag(str4) << endl;

    // Test Case 5:
    string str5 = "<input value => >";
    cout << isValidHTMLTag(str5) << endl;

    return 0;
}

// This code is contributed by yuvraj_chandra

Java

// Java program to validate
// HTML tag using regex.

import java.util.regex.*;

class GFG {

    // Function to validate
    // HTML tag using regex.
    public static boolean isValidHTMLTag(String str)
    {
        // Regex to check valid HTML tag.
        String regex = "<(\"[^\"]*\"|'[^']*'|[^'\">])*>";

        // Compile the ReGex
        Pattern p = Pattern.compile(regex);

        // If the string is empty
        // return false
        if (str == null) {
            return false;
        }

        // Find match between given string
        // and regular expression
        // using Pattern.matcher()
        Matcher m = p.matcher(str);

        // Return if the string
        // matched the ReGex
        return m.matches();
    }

    // Driver Code.
    public static void main(String args[])
    {
        // Test Case 1:
        String str1 = "<input value = '>'>";
        System.out.println(isValidHTMLTag(str1));

        // Test Case 2:
        String str2 = "<br/>";
        System.out.println(isValidHTMLTag(str2));

        // Test Case 3:
        String str3 = "br/>";
        System.out.println(isValidHTMLTag(str3));

        // Test Case 4:
        String str4 = "<'br/>";
        System.out.println(isValidHTMLTag(str4));

        // Test Case 5:
        String str5 = "<input value => >";
        System.out.println(isValidHTMLTag(str5));
    }
}

Python3

# Python3 program to validate
# HTML tag using regex.
# using regular expression
import re

# Function to validate
# HTML tag using regex.
def isValidHTMLTag(str):

    # Regex to check valid
    # HTML tag using regex.
    regex = "<(\"[^\"]*\"|'[^']*'|[^'\">])*>"

    # Compile the ReGex
    p = re.compile(regex)

    # If the string is empty
    # return false
    if (str == None):
        return False

    # Return if the string
    # matched the ReGex
    if(re.search(p, str)):
        return True
    else:
        return False

# Driver code

# Test Case 1:
str1 = "<input value = '>'>"
print(isValidHTMLTag(str1))

# Test Case 2:
str2 = "<br/>"
print(isValidHTMLTag(str2))

# Test Case 3:
str3 = "br/>"
print(isValidHTMLTag(str3))

# Test Case 4:
str4 = "<'br/>"
print(isValidHTMLTag(str4))

# This code is contributed by avanitrachhadiya2155

Output:

true
true
false
false
false
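As a quick standalone sanity check of the pattern (my own snippet, not part of the original article), the same regex can be compiled in Python and run against all five example strings. It uses re.fullmatch so that, like Java's Matcher.matches(), the entire string must be a single tag:

```python
import re

# The article's pattern: "<", then any mix of double-quoted strings,
# single-quoted strings, or characters other than quotes and ">",
# followed by a closing ">".
TAG_PATTERN = re.compile("<(\"[^\"]*\"|'[^']*'|[^'\">])*>")

def is_valid_html_tag(s):
    # fullmatch: the whole string must match (mirrors Java's matches())
    return TAG_PATTERN.fullmatch(s) is not None

for s in ("<input value = '>'>", "<br/>", "br/>", "<'br/>",
          "<input value => >"):
    print(repr(s), "->", is_valid_html_tag(s))
```

All five results match the expected outputs above: true, true, false, false, false.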
[ { "code": null, "e": 24581, "s": 24553, "text": "\n04 Feb, 2021" }, { "code": null, "e": 24745, "s": 24581, "text": "Given string str, the task is to check whether it is a valid HTML tag or not by using Regular Expression.The valid HTML tag must satisfy the following conditions: " }, { "code": null, "e": 25028, "s": 24745, "text": "It should start with an opening tag (<).It should be followed by a double quotes string or single quotes string.It should not allow one double quotes string, one single quotes string or a closing tag (>) without single or double quotes enclosed.It should end with a closing tag (>)." }, { "code": null, "e": 25069, "s": 25028, "text": "It should start with an opening tag (<)." }, { "code": null, "e": 25142, "s": 25069, "text": "It should be followed by a double quotes string or single quotes string." }, { "code": null, "e": 25276, "s": 25142, "text": "It should not allow one double quotes string, one single quotes string or a closing tag (>) without single or double quotes enclosed." }, { "code": null, "e": 25314, "s": 25276, "text": "It should end with a closing tag (>)." }, { "code": null, "e": 25325, "s": 25314, "text": "Examples: " }, { "code": null, "e": 26058, "s": 25325, "text": "Input: str = “<input value = ‘>’>”; Output: true Explanation: The given string satisfies all the above mentioned conditions.Input: str = “<br/>”; Output: true Explanation: The given string satisfies all the above mentioned conditions.Input: str = “br/>”; Output: false Explanation: The given string doesn’t starts with an opening tag “<“. Therefore, it is not a valid HTML tag.Input: str = “<‘br/>”; Output: false Explanation: The given string has one single quotes string that is not allowed. Therefore, it is not a valid HTML tag.Input: str = “<input value => >”; Output: false Explanation: The given string has a closing tag (>) without single or double quotes enclosed that is not allowed. Therefore, it is not a valid HTML tag." 
}, { "code": null, "e": 26189, "s": 26058, "text": "Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer. " }, { "code": null, "e": 26205, "s": 26189, "text": "Get the String." }, { "code": null, "e": 26279, "s": 26205, "text": "Create a regular expression to check valid HTML tag as mentioned below: " }, { "code": null, "e": 26319, "s": 26279, "text": "regex = “<(“[^”]*”|'[^’]*’|[^'”>])*>”; " }, { "code": null, "e": 26812, "s": 26319, "text": "Where: < represents the string should start with an opening tag (<).( represents the starting of the group.“[^”]*” represents the string should allow double quotes enclosed string.| represents or.‘[^’]*‘ represents the string should allow single quotes enclosed string.| represents or.[^'”>] represents the string should not contain one single quote, double quotes, and “>”.) represents the ending of the group.* represents 0 or more.> represents the string should end with a closing tag (>)." }, { "code": null, "e": 26874, "s": 26812, "text": "< represents the string should start with an opening tag (<)." }, { "code": null, "e": 26914, "s": 26874, "text": "( represents the starting of the group." }, { "code": null, "e": 26988, "s": 26914, "text": "“[^”]*” represents the string should allow double quotes enclosed string." }, { "code": null, "e": 27005, "s": 26988, "text": "| represents or." }, { "code": null, "e": 27079, "s": 27005, "text": "‘[^’]*‘ represents the string should allow single quotes enclosed string." }, { "code": null, "e": 27096, "s": 27079, "text": "| represents or." }, { "code": null, "e": 27186, "s": 27096, "text": "[^'”>] represents the string should not contain one single quote, double quotes, and “>”." }, { "code": null, "e": 27224, "s": 27186, "text": ") represents the ending of the group." }, { "code": null, "e": 27248, "s": 27224, "text": "* represents 0 or more." 
}, { "code": null, "e": 27307, "s": 27248, "text": "> represents the string should end with a closing tag (>)." }, { "code": null, "e": 27413, "s": 27307, "text": "Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher()." }, { "code": null, "e": 27501, "s": 27413, "text": "Return true if the string matches with the given regular expression, else return false." }, { "code": null, "e": 27554, "s": 27501, "text": "Below is the implementation of the above approach: " }, { "code": null, "e": 27558, "s": 27554, "text": "C++" }, { "code": null, "e": 27563, "s": 27558, "text": "Java" }, { "code": null, "e": 27571, "s": 27563, "text": "Python3" }, { "code": "// C++ program to validate the// HTML tag using Regular Expression#include <iostream>#include <regex>using namespace std; // Function to validate the HTML tag.bool isValidHTMLTag(string str){ // Regex to check valid HTML tag. const regex pattern(\"<(\\\"[^\\\"]*\\\"|'[^']*'|[^'\\\">])*>\"); // If the HTML tag // is empty return false if (str.empty()) { return false; } // Return true if the HTML tag // matched the ReGex if(regex_match(str, pattern)) { return true; } else { return false; }} // Driver Codeint main(){ // Test Case 1: string str1 = \"<input value = '>'>\"; cout << isValidHTMLTag(str1) << endl; // Test Case 2: string str2 = \"<br/>\"; cout << isValidHTMLTag(str2) << endl; // Test Case 3: string str3 = \"br/>\"; cout << isValidHTMLTag(str3) << endl; // Test Case 4: string str4 = \"<'br/>\"; cout << isValidHTMLTag(str4) << endl; // Test Case 5: string str5 = \"<input value => >\"; cout << isValidHTMLTag(str5) << endl; return 0;} // This code is contributed by yuvraj_chandra", "e": 28601, "s": 27571, "text": null }, { "code": "// Java program to validate// HTML tag using regex. import java.util.regex.*; class GFG { // Function to validate // HTML tag using regex. public static boolean isValidHTMLTag(String str) { // Regex to check valid HTML tag. 
String regex = \"<(\\\"[^\\\"]*\\\"|'[^']*'|[^'\\\">])*>\"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (str == null) { return false; } // Find match between given string // and regular expression // using Pattern.matcher() Matcher m = p.matcher(str); // Return if the string // matched the ReGex return m.matches(); } // Driver Code. public static void main(String args[]) { // Test Case 1: String str1 = \"<input value = '>'>\"; System.out.println(isValidHTMLTag(str1)); // Test Case 2: String str2 = \"<br/>\"; System.out.println(isValidHTMLTag(str2)); // Test Case 3: String str3 = \"br/>\"; System.out.println(isValidHTMLTag(str3)); // Test Case 4: String str4 = \"<'br/>\"; System.out.println(isValidHTMLTag(str4)); // Test Case 5: String str5 = \"<input value => >\"; System.out.println(isValidHTMLTag(str5)); }}", "e": 29956, "s": 28601, "text": null }, { "code": "# Python3 program to validate# HTML tag using regex. # using regular expressionimport re # Function to validate# HTML tag using regex.def isValidHTMLTag(str): # Regex to check valid # HTML tag using regex. 
regex = \"<(\\\"[^\\\"]*\\\"|'[^']*'|[^'\\\">])*>\" # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if (str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1:str1 = \"<input value = '>'>\"print(isValidHTMLTag(str1)) # Test Case 2:str2 = \"<br/>\"print(isValidHTMLTag(str2)) # Test Case 3:str3 = \"br/>\"print(isValidHTMLTag(str3)) # Test Case 4:str4 = \"<'br/>\"print(isValidHTMLTag(str4)) # This code is contributed by avanitrachhadiya2155", "e": 30783, "s": 29956, "text": null }, { "code": null, "e": 30811, "s": 30783, "text": "true\ntrue\nfalse\nfalse\nfalse" }, { "code": null, "e": 30834, "s": 30813, "text": "avanitrachhadiya2155" }, { "code": null, "e": 30849, "s": 30834, "text": "yuvraj_chandra" }, { "code": null, "e": 30859, "s": 30849, "text": "CPP-regex" }, { "code": null, "e": 30883, "s": 30859, "text": "java-regular-expression" }, { "code": null, "e": 30902, "s": 30883, "text": "regular-expression" }, { "code": null, "e": 30907, "s": 30902, "text": "HTML" }, { "code": null, "e": 30925, "s": 30907, "text": "Pattern Searching" }, { "code": null, "e": 30933, "s": 30925, "text": "Strings" }, { "code": null, "e": 30941, "s": 30933, "text": "Strings" }, { "code": null, "e": 30959, "s": 30941, "text": "Pattern Searching" }, { "code": null, "e": 30964, "s": 30959, "text": "HTML" }, { "code": null, "e": 31062, "s": 30964, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 31071, "s": 31062, "text": "Comments" }, { "code": null, "e": 31084, "s": 31071, "text": "Old Comments" }, { "code": null, "e": 31108, "s": 31084, "text": "REST API (Introduction)" }, { "code": null, "e": 31145, "s": 31108, "text": "Design a web page using HTML and CSS" }, { "code": null, "e": 31174, "s": 31145, "text": "Form validation using jQuery" }, { "code": null, "e": 31221, "s": 31174, "text": "How to place text on image using HTML and CSS?" }, { "code": null, "e": 31283, "s": 31221, "text": "How to auto-resize an image to fit a div container using CSS?" }, { "code": null, "e": 31319, "s": 31283, "text": "KMP Algorithm for Pattern Searching" }, { "code": null, "e": 31362, "s": 31319, "text": "Rabin-Karp Algorithm for Pattern Searching" }, { "code": null, "e": 31404, "s": 31362, "text": "Check if a string is substring of another" }, { "code": null, "e": 31442, "s": 31404, "text": "Naive algorithm for Pattern Searching" } ]
Complete Step-by-Step Gradient Descent Algorithm from Scratch | by Albers Uzila | Towards Data Science
Table of Contents (read till the end to see how you can get the complete python code of this story)· What is Optimization?· Gradient Descent (the Easy Way)· Armijo Line Search· Gradient Descent (the Hard Way)· Conclusion If you’ve been studying machine learning long enough, you’ve probably heard terms such as SGD or Adam. They are two of many optimization algorithms. Optimization algorithms are the heart of machine learning which are responsible for the intricate work of machine learning models to learn from data. It turns out that optimization has been around for a long time, even outside of the machine learning realm. People optimize. Investors seek to create portfolios that avoid excessive risk while achieving a high rate of return. Manufacturers aim for maximum efficiency in the design and operation of their production processes. Engineers adjust parameters to optimize the performance of their designs. This first paragraph of the Numerical Optimization book by Jorge Nocedal already explains a lot. It continues to state that even nature optimizes. Physical systems tend to a state of minimum energy. The molecules in an isolated chemical system react with each other until the total potential energy of their electrons is minimized. Rays of light follow paths that minimize their travel time. To make sense of what optimization is, first of all, we must identify the objective, which could be the rate of return, energy, travel time, etc. The objective depends on certain characteristics of the system called variables. Our goal is to find values of the variables that optimize the objective. Often the variables are constrained, in some way. Mathematically speaking, optimization is the process of maximizing or minimizing an objective function f(x) by searching for the appropriate variables x subject to some constraints ci, which could be written compactly as follows. where E and I are sets of indices for equality and inequality constraints, respectively. 
This mathematical statement is definitely daunting at first sight, maybe because it is the general description of optimization, from which not much can be inferred. But don’t worry: by the time we get to the code, everything will be clear.

Now, how do we solve the min problem? Thanks to calculus, we have a tool called the gradient. Imagine a ball at the top of a hill. We know that the hill has different slopes/gradients at different points. Due to gravity, the ball will roll down following the curve of the hill. Which way does it go? Along the steepest gradient. After some time, the ball will reach a local minimum where the ground is relatively flat compared to its surroundings.

This is the nature of gradient descent. We can discretize the smooth path of the ball into tiny steps. At the k-th step, we will have two quantities: the step length αk and the direction pk. To see gradient descent in action, let’s first import some libraries.

For starters, we will define a simple objective function f(x) = x² − 2x − 3, where x is a real number. Since gradient descent uses the gradient, we will define the gradient of f as well, which is just the first derivative of f, that is, ∇f(x) = 2x − 2.

Next, we define python functions for plotting the objective function and the learning path during the optimization process. What we mean by the learning path is just the sequence of points x after each descent step.

From the plot below, we can easily see that f has a minimum value at x = 1 (hence f(x) = −4). Let’s say we start at x = −4 (indicated by a red dot below); we will see whether gradient descent can locate the local minimum x = 1.

Define a simple gradient descent algorithm as follows. For every point xk at the beginning of step k, we keep the step length αk constant and set the direction pk to the negative of the gradient value (the steepest descent at xk).
We take steps using the formula

    xk+1 = xk + αk pk = xk − αk ∇f(xk)

while the gradient is still above a certain tolerance value (1 × 10⁻⁵ in our case) and the number of steps is still below a certain maximum value (1000 in our case).

Beginning at x = −4, we run the gradient descent algorithm on f with different scenarios:

αk = 0.1
αk = 0.9
αk = 1 × 10⁻⁴
αk = 1.01

Solution found: y = -4.0000, x = 1.0000
Solution found: y = -4.0000, x = 1.0000
Gradient descent does not converge.
Gradient descent does not converge.

Here’s what we got:

The first scenario converges like a charm. Even though the step length is constant, the direction decreases towards zero, which results in convergence.
The second scenario also converges, even though the learning path oscillates around the solution due to the big step length.
The third scenario moves towards the solution, but the step length is so small that the number of iterations is maxed out. Increasing max_iter would solve the issue, even though it would take much longer to arrive at the solution.
The fourth scenario diverges due to the big step length. Here, we set max_iter = 8 to make the visualization more pleasing.

All in all, the solution x = 1 can be reached by gradient descent with the right step length.

You might be wondering: why don’t we use the exact analytical solution, that is, take the derivative of f and solve for x such that the derivative is zero? For our previous example, we would find that the x minimizing f satisfies ∇f(x) = 2x − 2 = 0, that is, x = 1.

Yes, this is one way to go. But it is not a recommended technique when you face an optimization problem where the derivative of f is really hard to calculate or the resulting equation is impossible to solve.

Now, what could we improve in our gradient descent algorithm? We saw previously that the step length α is kept constant throughout the steps, and the wrong choice of α can make the iterates diverge. Could we search for the optimum α for each step direction?

Enter line search.
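The code gists did not survive in this text, but the constant-step-length descent just described can be sketched as follows. Function names such as simple_gradient_descent are my own, not necessarily the story's:

```python
def f(x):
    """Objective function: f(x) = x^2 - 2x - 3, minimized at x = 1."""
    return x**2 - 2*x - 3

def grad(x):
    """Gradient of f (its first derivative): f'(x) = 2x - 2."""
    return 2*x - 2

def simple_gradient_descent(x0, alpha, tol=1e-5, max_iter=1000):
    """Take steps x_{k+1} = x_k - alpha * grad(x_k) until the gradient
    magnitude drops below tol or max_iter steps have been taken."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x = x - alpha * g  # direction p_k = -grad(x_k), constant step length
    return x

print(simple_gradient_descent(-4, 0.1))                # converges near x = 1
print(simple_gradient_descent(-4, 1.01, max_iter=8))   # step too big: moves away from x = 1
```

With αk = 0.1 the iterates contract towards x = 1 with ratio |1 − 2α| = 0.8 per step, while with αk = 1.01 the same ratio is 1.02 > 1, so the iterates drift further from the minimum each step, matching the two extremes discussed above.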
In the line search strategy, the algorithm chooses a direction pk (which could be as simple as the steepest descent −∇f(xk)) and searches along this direction from the current iterate xk for a new iterate with a lower function value. The distance to move along pk can be found by approximately solving the following one-dimensional minimization problem to find a step length α:

    min f(xk + α pk) over α > 0

As mentioned before, by solving this exactly we would derive the maximum benefit from the direction pk, but an exact minimization may be expensive and is usually unnecessary. Instead, the line search algorithm generates a limited number of trial step lengths until it finds one that loosely approximates the minimum of f(xk + α pk). At the new point xk+1 = xk + α pk, a new search direction and step length are computed, and the process is repeated.

A popular inexact line search condition stipulates that αk should, first of all, give a sufficient decrease in the objective function f, as measured by the so-called Armijo Condition:

    f(xk + αk pk) ≤ f(xk) + c1 αk ∇f(xk)ᵀ pk

for some constant c1 ∈ (0, 1). In other words, the reduction in f should be proportional to both the step length αk and the directional derivative ∇f(xk)ᵀpk. At iterate xk, we start with some initial αk, and while the Armijo Condition is not satisfied, we simply shrink αk by some shrinkage factor ρ. The shrinkage process terminates at some point, since for a sufficiently small αk the Armijo Condition is always satisfied.

Below is the code for the Armijo Line Search. Note that we use ρ = 0.5 and c1 = 1 × 10⁻⁴.

Now we are ready to plug the Armijo Line Search into the Gradient Descent Algorithm. First of all, let’s define a harder objective function to solve: the Griewank Function

    f(x) = 1 + Σi xi²/4000 − Πi cos(xi/√i)

We will be using the two-dimensional version of the Griewank Function (n = 2). To give some sense of it, we could plot this function as follows.

We could see that the function has many local optima. We should expect the gradient descent algorithm to be trapped in one of these local minima.
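Since the story's gists are not captured in this text, here is one possible self-contained sketch of the pieces just described: a backtracking Armijo line search with ρ = 0.5 and c1 = 1 × 10⁻⁴, the two-dimensional Griewank function, and a steepest-descent loop that combines them. The function names and the initial trial step alpha0 = 1.0 are my assumptions, not taken from the article:

```python
import numpy as np

def armijo_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c1=1e-4):
    """Backtracking line search: shrink alpha by rho until the Armijo
    sufficient-decrease condition f(x + a*p) <= f(x) + c1*a*grad.p holds."""
    alpha = alpha0
    fx, gx = f(x), grad_f(x)
    while f(x + alpha * p) > fx + c1 * alpha * np.dot(gx, p):
        alpha *= rho
    return alpha

SQRT2 = np.sqrt(2.0)

def griewank(x):
    """Two-dimensional Griewank function; global minimum f([0, 0]) = 0."""
    x1, x2 = x
    return 1 + (x1**2 + x2**2) / 4000 - np.cos(x1) * np.cos(x2 / SQRT2)

def griewank_grad(x):
    """Analytical gradient of the 2-D Griewank function."""
    x1, x2 = x
    g1 = x1 / 2000 + np.sin(x1) * np.cos(x2 / SQRT2)
    g2 = x2 / 2000 + np.cos(x1) * np.sin(x2 / SQRT2) / SQRT2
    return np.array([g1, g2])

def gradient_descent(f, grad_f, x0, tol=1e-5, max_iter=1000):
    """Steepest descent with Armijo-chosen step lengths. Stops when the
    gradient norm falls below tol or after max_iter steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g  # steepest-descent direction
        x = x + armijo_line_search(f, grad_f, x, p) * p
    return x

x_global = gradient_descent(griewank, griewank_grad, [0, 3])  # walks to the global minimum
x_local = gradient_descent(griewank, griewank_grad, [1, 3])   # settles in a nearby local minimum
```

Under these assumptions, starting from [0, 3] the iterates reach the global minimum at [0, 0], while starting from [1, 3] they settle in a local minimum with a small but nonzero function value, which is the qualitative behavior the article reports.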
We could also notice that the global minimum is at x = [0, 0], where f(x) = 0.

Finally, let’s build the gradient descent algorithm one last time. As stated before, we use pk = −∇f(xk) as the direction and αk from the Armijo Line Search as the step length. Steps will be taken until one of the stopping criteria below is satisfied:

The norm of the gradient of the objective function is close enough to zero, that is, ‖∇f(xk)‖ < 1 × 10⁻⁵
The number of steps taken is 1000

We then make a python function for creating two plots:

Learning path of x along with the contour plot of f(x)
The value of f(x) per step taken

For an initial value x0, we run the gradient descent algorithm on f with different scenarios:

x0 = [0, 3]
x0 = [1, 2]
x0 = [2, 1]
x0 = [1, 3]
x0 = [2, 2]
x0 = [3, 1]

Initial condition: y = 1.5254, x = [0 3]
Iteration: 1    y = 1.1245, x = [0.0000 2.3959], gradient = 0.7029
Iteration: 2    y = 0.6356, x = [0.0000 1.6929], gradient = 0.6591
Iteration: 3    y = 0.2558, x = [0.0000 1.0338], gradient = 0.4726
Iteration: 4    y = 0.0778, x = [0.0000 0.5612], gradient = 0.2736
Iteration: 5    y = 0.0206, x = [0.0000 0.2876], gradient = 0.1430
Iteration: 6    y = 0.0052, x = [0.0000 0.1447], gradient = 0.0723
Iteration: 7    y = 0.0013, x = [0.0000 0.0724], gradient = 0.0362
Iteration: 8    y = 0.0003, x = [0.0000 0.0362], gradient = 0.0181
Iteration: 9    y = 0.0001, x = [0.0000 0.0181], gradient = 0.0090
Iteration: 10   y = 0.0000, x = [0.0000 0.0090], gradient = 0.0045
Iteration: 11   y = 0.0000, x = [0.0000 0.0045], gradient = 0.0023
Iteration: 12   y = 0.0000, x = [0.0000 0.0023], gradient = 0.0011
Iteration: 13   y = 0.0000, x = [0.0000 0.0011], gradient = 0.0006
Iteration: 14   y = 0.0000, x = [0.0000 0.0006], gradient = 0.0003
Iteration: 15   y = 0.0000, x = [0.0000 0.0003], gradient = 0.0001
Iteration: 16   y = 0.0000, x = [0.0000 0.0001], gradient = 0.0001
Iteration: 17   y = 0.0000, x = [0.0000 0.0001], gradient = 0.0000
Iteration: 18   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Iteration: 19   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Solution:       y = 0.0000, x = [0.0000 0.0000]

Initial condition: y = 0.9170, x = [1 2]
Iteration: 1    y = 0.7349, x = [0.8683 1.6216], gradient = 0.5225
Iteration: 2    y = 0.4401, x = [0.5538 1.2044], gradient = 0.5705
Iteration: 3    y = 0.1564, x = [0.2071 0.7513], gradient = 0.3932
Iteration: 4    y = 0.0403, x = [0.0297 0.4004], gradient = 0.1997
Iteration: 5    y = 0.0103, x = [0.0012 0.2027], gradient = 0.1011
Iteration: 6    y = 0.0026, x = [0.0000 0.1016], gradient = 0.0508
Iteration: 7    y = 0.0006, x = [0.0000 0.0508], gradient = 0.0254
Iteration: 8    y = 0.0002, x = [0.0000 0.0254], gradient = 0.0127
Iteration: 9    y = 0.0000, x = [0.0000 0.0127], gradient = 0.0063
Iteration: 10   y = 0.0000, x = [0.0000 0.0063], gradient = 0.0032
Iteration: 11   y = 0.0000, x = [0.0000 0.0032], gradient = 0.0016
Iteration: 12   y = 0.0000, x = [0.0000 0.0016], gradient = 0.0008
Iteration: 13   y = 0.0000, x = [0.0000 0.0008], gradient = 0.0004
Iteration: 14   y = 0.0000, x = [0.0000 0.0004], gradient = 0.0002
Iteration: 15   y = 0.0000, x = [0.0000 0.0002], gradient = 0.0001
Iteration: 16   y = 0.0000, x = [0.0000 0.0001], gradient = 0.0000
Iteration: 17   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Iteration: 18   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Iteration: 19   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Solution:       y = 0.0000, x = [0.0000 0.0000]

Initial condition: y = 1.3176, x = [2 1]
Iteration: 1    y = 0.8276, x = [1.3077 1.1907], gradient = 0.6583
Iteration: 2    y = 0.4212, x = [0.6639 1.0529], gradient = 0.5903
Iteration: 3    y = 0.1315, x = [0.2104 0.6750], gradient = 0.3682
Iteration: 4    y = 0.0320, x = [0.0248 0.3570], gradient = 0.1784
Iteration: 5    y = 0.0081, x = [0.0008 0.1803], gradient = 0.0900
Iteration: 6    y = 0.0020, x = [0.0000 0.0903], gradient = 0.0452
Iteration: 7    y = 0.0005, x = [0.0000 0.0451], gradient = 0.0226
Iteration: 8    y = 0.0001, x = [0.0000 0.0225], gradient = 0.0113
Iteration: 9    y = 0.0000, x = [0.0000 0.0113], gradient = 0.0056
Iteration: 10   y = 0.0000, x = [0.0000 0.0056], gradient = 0.0028
Iteration: 11   y = 0.0000, x = [0.0000 0.0028], gradient = 0.0014
Iteration: 12   y = 0.0000, x = [0.0000 0.0014], gradient = 0.0007
Iteration: 13   y = 0.0000, x = [0.0000 0.0007], gradient = 0.0004
Iteration: 14   y = 0.0000, x = [0.0000 0.0004], gradient = 0.0002
Iteration: 15   y = 0.0000, x = [0.0000 0.0002], gradient = 0.0001
Iteration: 16   y = 0.0000, x = [0.0000 0.0001], gradient = 0.0000
Iteration: 17   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Iteration: 18   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Iteration: 19   y = 0.0000, x = [0.0000 0.0000], gradient = 0.0000
Solution:       y = 0.0000, x = [0.0000 0.0000]

Initial condition: y = 1.2852, x = [1 3]
Iteration: 1    y = 1.0433, x = [1.4397 2.6729], gradient = 0.3230
Iteration: 2    y = 0.9572, x = [1.7501 2.5838], gradient = 0.2763
Iteration: 3    y = 0.8638, x = [1.9986 2.7045], gradient = 0.4098
Iteration: 4    y = 0.6623, x = [2.3024 2.9796], gradient = 0.5544
Iteration: 5    y = 0.3483, x = [2.6813 3.3842], gradient = 0.5380
Iteration: 6    y = 0.1116, x = [3.0054 3.8137], gradient = 0.3231
Iteration: 7    y = 0.0338, x = [3.1265 4.1133], gradient = 0.1618
Iteration: 8    y = 0.0141, x = [3.1396 4.2745], gradient = 0.0818
Iteration: 9    y = 0.0091, x = [3.1400 4.3564], gradient = 0.0411
Iteration: 10   y = 0.0078, x = [3.1400 4.3974], gradient = 0.0205
Iteration: 11   y = 0.0075, x = [3.1400 4.4179], gradient = 0.0103
Iteration: 12   y = 0.0074, x = [3.1400 4.4282], gradient = 0.0051
Iteration: 13   y = 0.0074, x = [3.1400 4.4333], gradient = 0.0026
Iteration: 14   y = 0.0074, x = [3.1400 4.4359], gradient = 0.0013
Iteration: 15   y = 0.0074, x = [3.1400 4.4372], gradient = 0.0006
Iteration: 16   y = 0.0074, x = [3.1400 4.4378], gradient = 0.0003
Iteration: 17   y = 0.0074, x = [3.1400 4.4381], gradient = 0.0002
Iteration: 18   y = 0.0074, x = [3.1400 4.4383], gradient = 0.0001
Iteration: 19   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 20   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 21   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Solution:       y = 0.0074, x = [3.1400 4.4384]

Initial condition: y = 1.0669, x = [2 2]
Iteration: 1    y = 0.9886, x = [1.8572 2.2897], gradient = 0.2035
Iteration: 2    y = 0.9414, x = [1.9025 2.488 ], gradient = 0.2858
Iteration: 3    y = 0.8372, x = [2.0788 2.713 ], gradient = 0.4378
Iteration: 4    y = 0.6117, x = [2.3753 3.035 ], gradient = 0.5682
Iteration: 5    y = 0.2941, x = [2.7514 3.461 ], gradient = 0.5082
Iteration: 6    y = 0.0894, x = [3.0423 3.8777], gradient = 0.2863
Iteration: 7    y = 0.0282, x = [3.1321 4.1495], gradient = 0.1438
Iteration: 8    y = 0.0127, x = [3.1398 4.2931], gradient = 0.0726
Iteration: 9    y = 0.0087, x = [3.1400 4.3657], gradient = 0.0364
Iteration: 10   y = 0.0077, x = [3.1400 4.4021], gradient = 0.0182
Iteration: 11   y = 0.0075, x = [3.1400 4.4203], gradient = 0.0091
Iteration: 12   y = 0.0074, x = [3.1400 4.4294], gradient = 0.0045
Iteration: 13   y = 0.0074, x = [3.1400 4.4339], gradient = 0.0023
Iteration: 14   y = 0.0074, x = [3.1400 4.4362], gradient = 0.0011
Iteration: 15   y = 0.0074, x = [3.1400 4.4373], gradient = 0.0006
Iteration: 16   y = 0.0074, x = [3.1400 4.4379], gradient = 0.0003
Iteration: 17   y = 0.0074, x = [3.1400 4.4382], gradient = 0.0001
Iteration: 18   y = 0.0074, x = [3.1400 4.4383], gradient = 0.0001
Iteration: 19   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 20   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 21   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Solution:       y = 0.0074, x = [3.1400 4.4384]

Initial condition: y = 1.7551, x = [3 1]
Iteration: 1    y = 1.5028, x = [2.8912 1.4543], gradient = 0.6001
Iteration: 2    y = 1.1216, x = [2.7619 2.0402], gradient = 0.6522
Iteration: 3    y = 0.7074, x = [2.7131 2.6906], gradient = 0.6214
Iteration: 4    y = 0.3449, x = [2.8471 3.2973], gradient = 0.5273
Iteration: 5    y = 0.1160, x = [3.0458 3.7858], gradient = 0.3246
Iteration: 6    y = 0.0361, x = [3.1298 4.0993], gradient = 0.1683
Iteration: 7    y = 0.0147, x = [3.1397 4.2673], gradient = 0.0854
Iteration: 8    y = 0.0092, x = [3.1400 4.3528], gradient = 0.0429
Iteration: 9    y = 0.0079, x = [3.1400 4.3956], gradient = 0.0214
Iteration: 10   y = 0.0075, x = [3.1400 4.4170], gradient = 0.0107
Iteration: 11   y = 0.0074, x = [3.1400 4.4278], gradient = 0.0053
Iteration: 12   y = 0.0074, x = [3.1400 4.4331], gradient = 0.0027
Iteration: 13   y = 0.0074, x = [3.1400 4.4358], gradient = 0.0013
Iteration: 14   y = 0.0074, x = [3.1400 4.4371], gradient = 0.0007
Iteration: 15   y = 0.0074, x = [3.1400 4.4378], gradient = 0.0003
Iteration: 16   y = 0.0074, x = [3.1400 4.4381], gradient = 0.0002
Iteration: 17   y = 0.0074, x = [3.1400 4.4383], gradient = 0.0001
Iteration: 18   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 19   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 20   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Iteration: 21   y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000
Solution:       y = 0.0074, x = [3.1400 4.4384]

In the first 3 scenarios, the algorithm converges to the global minimum, where f(x) = 0, although the first scenario seems to waste too many steps, since the first coordinate of x0 is already 0 in that case. In the last 3 scenarios, the algorithm is trapped in a local minimum, where f(x) = 0.0074, since x0 is too far from the coordinate [0, 0].

This comes as no surprise: the line search method looks for the minimum value of f by heading in the direction where the function value decreases while the norm of the gradient approaches zero. Once the method has reached a coordinate where the gradient is very close to zero, that point is taken as the function’s minimum, regardless of whether the minimum is local or global.

If we apply this version of gradient descent to the original objective function f(x) = x² − 2x − 3, we get the following result.

Initial condition: y = 21.0000, x = -4
Iteration: 1    y = -4.0000, x = 1.0, gradient = 0.0000
Solution:       y = -4.0000, x = 1.0

The solution is found in one step! What do you think about this?
Is it because the new gradient descent version is overkill for this function? Or is this just a coincidence, and changing parameters such as ρ will produce no better result than the original simple gradient descent? Let me know your thoughts in the response section below!

In this article, we walked through how the Gradient Descent algorithm works on optimization problems, ranging from a simple high school textbook problem to a real-world machine learning cost function minimization problem. The easy implementation assumes a constant learning rate, whereas the harder one searches for the learning rate using the Armijo line search. The algorithm itself is the most basic one and has many variations and implementations depending on the nature of the objective function. Like many other optimization algorithms, the Gradient Descent algorithm may be trapped in a local minimum.

You might want to continue reading my related stories.

Thanks! If you enjoy this story and want to support me as a writer, consider becoming a member. For only $5 a month, you’ll get unlimited access to all stories on Medium. If you sign up using my link, I’ll earn a small commission.

If you’re one of my referred Medium members, feel free to email me at geoclid.members[at]gmail.com to get the complete python code of this story.
0.0107Iteration: 11 \t y = 0.0074, x = [3.1400 4.4278], gradient = 0.0053Iteration: 12 \t y = 0.0074, x = [3.1400 4.4331], gradient = 0.0027Iteration: 13 \t y = 0.0074, x = [3.1400 4.4358], gradient = 0.0013Iteration: 14 \t y = 0.0074, x = [3.1400 4.4371], gradient = 0.0007Iteration: 15 \t y = 0.0074, x = [3.1400 4.4378], gradient = 0.0003Iteration: 16 \t y = 0.0074, x = [3.1400 4.4381], gradient = 0.0002Iteration: 17 \t y = 0.0074, x = [3.1400 4.4383], gradient = 0.0001Iteration: 18 \t y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000Iteration: 19 \t y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000Iteration: 20 \t y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000Iteration: 21 \t y = 0.0074, x = [3.1400 4.4384], gradient = 0.0000Solution: \t y = 0.0074, x = [3.1400 4.4384]" }, { "code": null, "e": 17313, "s": 17102, "text": "In the first 3 scenarios, the algorithm converges to the global minimum where f(x) = 0, although the first scenario seems like wasting too many steps since the first coordinate of x0 is already 0 for this case." }, { "code": null, "e": 17848, "s": 17313, "text": "In the last 3 scenarios, the algorithm is trapped in a local minimum where f(x) = 0.0074 since x0 is too far from coordinate [0, 0]. This comes with no surprise because the line search method looks for the minimum value of f by heading to the direction where the function value decreases and the norm of the gradient approaches zero, so that when this method has obtained a coordinate where the gradient is very close to zero, the point is considered as the function minimum value regardless of whether the minimum is local or global." }, { "code": null, "e": 17977, "s": 17848, "text": "If we apply this version of gradient descent on the original objective function f(x) = x2 − 2x − 3, we get the following result." 
}, { "code": null, "e": 18103, "s": 17977, "text": "Initial condition: y = 21.0000, x = -4 Iteration: 1 \t y = -4.0000, x = 1.0, gradient = 0.0000Solution: \t y = -4.0000, x = 1.0" }, { "code": null, "e": 18138, "s": 18103, "text": "The solution is found in one step!" }, { "code": null, "e": 18445, "s": 18138, "text": "What do you think about this? Is it because the new gradient descent version is too overkill for this function? Or is this just a coincidence, and changing parameters such as ρ will produce no better result than the original simple gradient descent? Let me know your thoughts in the response section below!" }, { "code": null, "e": 19042, "s": 18445, "text": "In this article, we understand the work of the Gradient Descent algorithm in optimization problems, ranging from a simple high school textbook problem to a real-world machine learning cost function minimization problem. The easy implementation assumes a constant learning rate, whereas a harder one searches for the learning rate using Armijo line search. The algorithm itself is the most basic and has many variations and implementations depending on the nature of the objective function. Like many other optimization algorithms, the Gradient Descent algorithm may be trapped in a local minimum." }, { "code": null, "e": 19097, "s": 19042, "text": "You might want to continue reading my related stories:" }, { "code": null, "e": 19110, "s": 19097, "text": "Albers Uzila" }, { "code": null, "e": 19341, "s": 19110, "text": "Thanks! If you enjoy this story and want to support me as a writer, consider becoming a member. For only $5 a month, you’ll get unlimited access to all stories on Medium. If you sign up using my link, I’ll earn a small commission." } ]
Recommend using Scikit-Learn and Tensorflow Recommender | by Jesko Rehberg | Towards Data Science
Motivation:

We will learn how to recommend sales items to customers by looking at each customer's individual purchase history. In my previous post I explained many concepts of recommending engines in detail:

towardsdatascience.com

This post focuses on recommending using Scikit-Learn and Tensorflow Recommender.

Solution:

First of all, let us have a look at our dataframe (the data is stored in my GitHub repository):

import pandas as pd
import numpy as np

data = pd.read_excel('/content/gdrive/MyDrive/DDDDFolder/DDDD.xlsx')
data.head()

So what we have got is Sales per Sales Day, Customer (e.g. Customer code 0 is one specific customer) and Sales Item (similar to Customer, Sales Item 0 is one specific Sales Item). The other columns are not relevant for our product recommendation, so we will simply drop those:

DataPrep = data[['SalesItem', 'SalesAmount', 'Customer']]  # we will only use SalesItem, SalesAmount and Customer
DataPrep.head()

We want to see which Sales Items have been purchased by which Customer:

DataGrouped = DataPrep.groupby(['Customer', 'SalesItem']).sum().reset_index()  # group together
DataGrouped.head()

Our Collaborative Filtering will be based on binary data. For every dataset row we will add a 1 for "purchased". That means that this customer has purchased this item, no matter how many units the customer actually bought in the past. We use this binary data approach for our recommending example. Another approach would be to use the SalesAmount and normalize it, in case you want to treat the amount of SalesItems purchased as a kind of taste factor: a customer who bought SalesItem x 100 times presumably likes it more than another customer who bought that same SalesItem x only 5 times.
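If you wanted to try the normalized-amount alternative mentioned above instead of the binary flag, a minimal sketch could look like the following. The frame and its values are invented for illustration; only the column names mirror the article's DataGrouped dataframe, and the 'Taste' column name is an assumption of this sketch.

```python
import pandas as pd

# Hypothetical grouped purchases; column names mirror the article's DataGrouped
DataGrouped = pd.DataFrame({
    'Customer':    [0,   0,  1,  1],
    'SalesItem':   [10, 11, 10, 12],
    'SalesAmount': [100, 5, 20,  1],
})

# Scale SalesAmount within each customer to a 0-1 "taste" score,
# so a 100-unit buyer of an item outweighs a 5-unit buyer
DataTaste = DataGrouped.copy()
DataTaste['Taste'] = (
    DataTaste.groupby('Customer')['SalesAmount']
             .transform(lambda s: s / s.max())
)
print(DataTaste)
```

With this weighting, each customer's most-purchased item gets a score of 1.0 and rarely purchased items fall toward 0, instead of every purchase counting equally.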
I believe that very often in sales recommendations a binary approach makes more sense, but of course that really depends on your data:

def create_DataBinary(DataGrouped):
    DataBinary = DataGrouped.copy()
    DataBinary['PurchasedYes'] = 1
    return DataBinary

DataBinary = create_DataBinary(DataGrouped)
DataBinary.head()

Finally, let's get rid of the column SalesAmount:

purchase_data = DataBinary.drop(['SalesAmount'], axis=1)
purchase_data.head()

For better readability we add "I" as a prefix to every SalesItem. Otherwise we would only have Customer and SalesItem numbers, which can be a little bit puzzling:

purchase_data['SalesItem'] = 'I' + purchase_data['SalesItem'].astype(str)

Now our dataframe is finally prepared for building the recommender. Why all the data munging, you might ask? I hope it helps you adapt this example to your own data, especially if you are new to this.

Now we will import the necessary libraries for recommending with Scikit-Learn:

from scipy.sparse import coo_matrix, csr_matrix
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import LabelEncoder

Let's now calculate the item-item cosine similarity:

def GetItemItemSim(user_ids, product_ids):
    SalesItemCustomerMatrix = csr_matrix(([1] * len(user_ids), (product_ids, user_ids)))
    similarity = cosine_similarity(SalesItemCustomerMatrix)
    return similarity, SalesItemCustomerMatrix

To receive the top 10 SalesItem recommendations per customer in a dataframe, we will use the item-item similarity matrix from the cell above together with the SalesItemCustomerMatrix (SalesItems as rows, Customers as columns, filled with binary incidence).
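To get a feel for what GetItemItemSim computes, here is a tiny hand-rolled version on an invented 3-item, 3-customer incidence matrix. It uses plain NumPy so the arithmetic stays visible; for this matrix, sklearn's cosine_similarity returns the same numbers.

```python
import numpy as np

# Rows = SalesItems, columns = Customers, 1 = purchased (invented toy data)
M = np.array([
    [1, 1, 0],   # item 0: bought by customers 0 and 1
    [1, 1, 1],   # item 1: bought by everyone
    [1, 0, 0],   # item 2: bought by customer 0 only
], dtype=float)

# Cosine similarity between item rows: dot products scaled by the row norms
norms = np.linalg.norm(M, axis=1, keepdims=True)
similarity = (M @ M.T) / (norms @ norms.T)
print(np.round(similarity, 2))
```

Items 0 and 1 share two buyers, so their similarity (about 0.82) is higher than that of items 1 and 2 (about 0.58), and the diagonal is always 1: the more customers two items have in common, the more similar they are.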
def get_recommendations_from_similarity(similarity_matrix, SalesItemCustomerMatrix, top_n=10):
    CustomerSalesItemMatrix = csr_matrix(SalesItemCustomerMatrix.T)
    CustomerSalesItemScores = CustomerSalesItemMatrix.dot(similarity_matrix)  # sum of similarities to all purchased products
    RecForCust = []
    for user_id in range(CustomerSalesItemScores.shape[0]):
        scores = CustomerSalesItemScores[user_id, :]
        purchased_items = CustomerSalesItemMatrix.indices[
            CustomerSalesItemMatrix.indptr[user_id]:CustomerSalesItemMatrix.indptr[user_id + 1]]
        scores[purchased_items] = -1  # do not recommend already purchased SalesItems
        top_products_ids = np.argsort(scores)[-top_n:][::-1]
        recommendations = pd.DataFrame(
            top_products_ids.reshape(1, -1),
            index=[user_id],
            columns=['Top%s' % (i + 1) for i in range(top_n)])
        RecForCust.append(recommendations)
    return pd.concat(RecForCust)

Compute the recommendations:

def get_recommendations(purchase_data):
    user_label_encoder = LabelEncoder()
    user_ids = user_label_encoder.fit_transform(purchase_data.Customer)
    product_label_encoder = LabelEncoder()
    product_ids = product_label_encoder.fit_transform(purchase_data.SalesItem)

    # compute recommendations
    similarity_matrix, SalesItemCustomerMatrix = GetItemItemSim(user_ids, product_ids)
    recommendations = get_recommendations_from_similarity(similarity_matrix, SalesItemCustomerMatrix)
    recommendations.index = user_label_encoder.inverse_transform(recommendations.index)
    for i in range(recommendations.shape[1]):
        recommendations.iloc[:, i] = product_label_encoder.inverse_transform(recommendations.iloc[:, i])
    return recommendations

Let's start our recommender:

recommendations = get_recommendations(purchase_data)

That means, for instance, that Customer 0 (first row in the screenshot above) should be most interested in buying item I769, with I253 the second most relevant, I1146 the third, and so on. In case you want to export the recommendations to, for example, Excel, you could do it like this:

dfrec = recommendations
dfrec.to_excel("ExportCustomerName-Itemname.xlsx")

We are now finished with recommending Sales Items to Customers which they should be highly interested in (but have not already purchased), using Scikit-Learn.

Tensorflow Recommender:

Out of curiosity, let's repeat this, this time using Tensorflow Recommender. The original code [1] has been taken from Google's Brain Team and has only been slightly adapted to our data set. I ran the code in Google Colab:

!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-recommenders

from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_recommenders as tfrs

We will also add "C" (for Customer) as a prefix to every customer id:

purchase_data['Customer'] = 'C' + purchase_data['Customer'].astype(str)

uniq = data.SalesItem.unique()
uniq = pd.DataFrame(uniq)
uniq.columns = ['SalesItem']
uniq

rat = purchase_data[['Customer', 'SalesItem', 'SalesAmount']]

dataset = tf.data.Dataset.from_tensor_slices(dict(purchase_data))
ratings = dataset.from_tensor_slices(dict(rat))
SalesItem = dataset.from_tensor_slices(dict(uniq))

ratings = ratings.map(lambda x: {"Customer": x["Customer"], "SalesItem": x["SalesItem"]})
SalesItem = SalesItem.map(lambda x: x["SalesItem"])
ratings.take(1)

CustomerID_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
CustomerID_vocabulary.adapt(ratings.map(lambda x: x["Customer"]))
SalesItem_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
SalesItem_vocabulary.adapt(SalesItem)

# Define a model.
# We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:
class SalesItemRecModel(tfrs.Model):

    def __init__(self,
                 CustomerModel: tf.keras.Model,
                 SalesItemModel: tf.keras.Model,
                 task: tfrs.tasks.Retrieval):
        super().__init__()
        # Set up Customer and SalesItem representations.
        self.CustomerModel = CustomerModel
        self.SalesItemModel = SalesItemModel
        # Set up a retrieval task.
        self.task = task

    def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
        # Define how the loss is computed.
        CustEmbeddings = self.CustomerModel(features["Customer"])
        SalesItemEmbeddings = self.SalesItemModel(features["SalesItem"])
        return self.task(CustEmbeddings, SalesItemEmbeddings)

Define the Customer and SalesItem models:

CustomerModel = tf.keras.Sequential([
    CustomerID_vocabulary,
    tf.keras.layers.Embedding(CustomerID_vocabulary.vocab_size(), 64)
])
SalesItemModel = tf.keras.Sequential([
    SalesItem_vocabulary,
    tf.keras.layers.Embedding(SalesItem_vocabulary.vocab_size(), 64)
])
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
    SalesItem.batch(128).map(SalesItemModel)))

Now we will create the model, train it, and generate predictions:

model = SalesItemRecModel(CustomerModel, SalesItemModel, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))

# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)

# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.CustomerModel)
index.index(SalesItem.batch(100).map(model.SalesItemModel), SalesItem)

customers = data.Customer.unique().tolist()
fcst = pd.DataFrame()
for x in customers:
    _, SalesItem = index(np.array([x]))
    fcst = pd.concat((fcst, pd.DataFrame(SalesItem[0, :10].numpy()).transpose()))
fcst['Customer'] = customers
fcst.to_excel('RecFc.xlsx', index=False)

Hope this was supportive. Any questions, please let me know. You can connect with me on LinkedIn or Twitter. Many thanks for reading!

Originally published on my website DAR-Analytics, where the Jupyter Notebook and data file are also available.

Reference: [1] Maciej Kula and James Chen, Google Brain, Introducing TensorFlow Recommenders (2020), https://blog.tensorflow.org/2020/09/introducing-tensorflow-recommenders.html
Dask DataFrames — How to Run Pandas in Parallel With Ease | by Dario Radečić | Towards Data Science
So you have some experience with Pandas, and you're aware of its biggest limitation: it doesn't scale all that easily. Is there a solution? Yes: Dask DataFrames.

Most of the Dask API is identical to Pandas, but Dask can run in parallel on all CPU cores. It can even run on a cluster, but that's a topic for another time. Today you'll see just how much faster Dask is than Pandas at processing 20GB of CSV files. The runtime values will vary from PC to PC, so we'll compare relative values instead. For the record, I'm using an MBP 16" 8-core i9 with 16GB of RAM.

Start the series from the beginning:

Dask Delayed — How to Parallelize Your Python Code With Ease
Dask Arrays — How to Parallelize Numpy With Ease

The article is structured as follows:

Dataset Generation
Processing a Single CSV File
Processing Multiple CSV Files
Conclusion

We could download a dataset online, but that's not the point of today's article. We're only interested in the size, not what's inside. For that reason, we'll create a dummy dataset with six columns. The first column is a timestamp, an entire year sampled at one-second intervals, and the other five hold random integer values. To make things more complex, we'll create 20 files, one for every year from 2000 to 2020. Before starting, make sure to create a data folder right where your notebook is located. Here's the code snippet for creating the CSV files:

import numpy as np
import pandas as pd
import dask.dataframe as dd
from datetime import datetime

for year in np.arange(2000, 2021):
    dates = pd.date_range(
        start=datetime(year=year, month=1, day=1),
        end=datetime(year=year, month=12, day=31),
        freq='S'
    )
    df = pd.DataFrame()
    df['Date'] = dates
    for i in range(5):
        df[f'X{i}'] = np.random.randint(low=0, high=100, size=len(df))
    df.to_csv(f'data/{year}.csv', index=False)

You can now use a basic Linux command to list the data directory:

!ls -lh data/

Here are the results:

As you can see, all 20 files are around 1GB in size (1.09 to be more precise).
The above code snippet took some time to execute, but it's still way less time than downloading a 20GB file would take. Up next, let's see how to process and aggregate a single CSV file.

Goal: Read a single CSV file, group the values by month, and calculate the total sum for every column.

Loading a single CSV file with Pandas can't be any easier. The read_csv() function accepts the parse_dates parameter, which automatically converts one or more columns to date type. This comes in handy because we can directly use the dt.month attribute to access month values. Here's the complete code snippet:

%%time
df = pd.read_csv('data/2000.csv', parse_dates=['Date'])
monthly_total = df.groupby(df['Date'].dt.month).sum()

And here's the total runtime:

Not too bad for a 1GB file, but the runtime will depend on your hardware. Let's do the same thing with Dask. Here's the code:

%%time
df = dd.read_csv('data/2000.csv', parse_dates=['Date'])
monthly_total = df.groupby(df['Date'].dt.month).sum().compute()

As always with Dask, no processing is done until the compute() function is called. You can see the total runtime below:

Let's compare the differences:

It isn't a significant difference, but Dask is overall the better option, even for a single data file. This is a good start, but what we're really interested in is processing multiple files at once. Let's explore how to do so next.

Goal: Read all CSV files, group them by year, and calculate the total sum for every column.

Working with multiple data files in Pandas is a tedious task. In a nutshell, you have to read the files one by one and then stack them vertically. If you think about it, a single CPU core loads the datasets one at a time while the other cores sit idle. That's not the most efficient way. The glob package will help you handle multiple CSV files at once. You can use the data/*.csv pattern to get all the CSV files in the data folder. Then you'll have to read them one by one in a loop. Finally, you can concatenate them and do the aggregation.
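The loop-and-concatenate pattern just described can be sanity-checked on tiny files before pointing it at the 20 real ones. The per-year files and their values below are invented for the sketch and written to a temporary folder so it runs anywhere.

```python
import glob
import os
import tempfile

import pandas as pd

tmp = tempfile.mkdtemp()

# Three miniature per-year CSVs standing in for the 1GB files
for year in (2000, 2001, 2002):
    pd.DataFrame({
        'Date': pd.date_range(f'{year}-01-01', periods=4, freq='D'),
        'X0': [1, 2, 3, 4],
    }).to_csv(os.path.join(tmp, f'{year}.csv'), index=False)

# Read every file matching the pattern, stack vertically, aggregate by year
dfs = [pd.read_csv(f, parse_dates=['Date'])
       for f in glob.glob(os.path.join(tmp, '*.csv'))]
df = pd.concat(dfs, axis=0)
yearly_total = df.groupby(df['Date'].dt.year)['X0'].sum()
print(yearly_total)
```

Each year sums to 10 here, which makes it easy to verify the stacking and grouping behave as intended before scaling up.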
Here’s the complete code snippet:

%%time
import glob

all_files = glob.glob('data/*.csv')
dfs = []
for fname in all_files:
    dfs.append(pd.read_csv(fname, parse_dates=['Date']))
df = pd.concat(dfs, axis=0)
yearly_total = df.groupby(df['Date'].dt.year).sum()

Fifteen and a half minutes of runtime seems like a lot, but you have to consider that a lot of swap memory was used in the process, as there isn’t a way to fit 20+GB of data into 16GB of RAM. If the notebook crashes completely, use a smaller number of CSV files.

Let’s see which improvements Dask has to offer. It accepts a glob pattern in the read_csv() function, which means you won’t have to use loops. No operation is done until the compute() function is called, but that’s how the library works. Here’s the entire code snippet for the same loading and aggregation as before:

%%time
df = dd.read_csv('data/*.csv', parse_dates=['Date'])
yearly_total = df.groupby(df['Date'].dt.year).sum().compute()

Comparing the runtimes, the difference is more significant when processing multiple files — around 2.5X faster in Dask. A clear winner, no arguing here. Let’s wrap things up in the next section.

Today you’ve learned how to switch from Pandas to Dask, and why you should do so when the datasets get large. Dask’s API is 99% identical to Pandas, so you shouldn’t have any trouble switching.

Keep in mind that some data formats aren’t supported in Dask — such as XLS, Zip, and GZ. Also, the sorting operation isn’t supported, as it isn’t convenient to do in parallel.

Stay tuned for the last part of the series — Dask Bags — which will teach you how Dask works with unstructured data.

Loved the article? Become a Medium member to continue learning without limits.
Redis - Lists
Redis Lists are simply lists of strings, sorted by insertion order. You can add elements to a Redis list at the head or the tail of the list.

The maximum length of a list is 2^32 - 1 elements (4,294,967,295 — more than 4 billion elements per list).

redis 127.0.0.1:6379> LPUSH tutorials redis
(integer) 1
redis 127.0.0.1:6379> LPUSH tutorials mongodb
(integer) 2
redis 127.0.0.1:6379> LPUSH tutorials mysql
(integer) 3
redis 127.0.0.1:6379> LRANGE tutorials 0 10
1) "mysql"
2) "mongodb"
3) "redis"

In the above example, three values are inserted into the Redis list named ‘tutorials’ by the LPUSH command.

The following table lists some basic commands related to lists.

BLPOP − Removes and gets the first element in a list, or blocks until one is available
BRPOP − Removes and gets the last element in a list, or blocks until one is available
BRPOPLPUSH − Pops a value from a list, pushes it to another list and returns it; or blocks until one is available
LINDEX − Gets an element from a list by its index
LINSERT − Inserts an element before or after another element in a list
LLEN − Gets the length of a list
LPOP − Removes and gets the first element in a list
LPUSH − Prepends one or multiple values to a list
LPUSHX − Prepends a value to a list, only if the list exists
LRANGE − Gets a range of elements from a list
LREM − Removes elements from a list
LSET − Sets the value of an element in a list by its index
LTRIM − Trims a list to the specified range
RPOP − Removes and gets the last element in a list
RPOPLPUSH − Removes the last element in a list, appends it to another list and returns it
RPUSH − Appends one or multiple values to a list
RPUSHX − Appends a value to a list, only if the list exists
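The LPUSH and LRANGE behaviour shown above (each LPUSH puts the newest element at the head, and LRANGE's stop index is inclusive) can be modelled in a few lines of Python. This is a toy illustration of the list semantics, not a Redis client:

```python
from collections import deque

class ToyRedisList:
    """Plain-Python model of a Redis list's head/tail operations."""

    def __init__(self):
        self._items = deque()

    def lpush(self, value):
        # LPUSH prepends at the head and returns the new length.
        self._items.appendleft(value)
        return len(self._items)

    def rpush(self, value):
        # RPUSH appends at the tail.
        self._items.append(value)
        return len(self._items)

    def lrange(self, start, stop):
        # LRANGE's stop index is inclusive, unlike Python slicing.
        # (Negative indices, which real Redis supports, are ignored here.)
        return list(self._items)[start:stop + 1]

t = ToyRedisList()
lengths = [t.lpush(v) for v in ("redis", "mongodb", "mysql")]
print(lengths)          # [1, 2, 3]
print(t.lrange(0, 10))  # ['mysql', 'mongodb', 'redis']
```

The final LRANGE output matches the CLI session above: the last value pushed ends up first.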
Python os.dup2() Method
The Python method dup2() duplicates file descriptor fd to fd2, closing the latter first if necessary.

Note − The new file descriptor would be assigned only when it is available. In the example given below, 1000 would be assigned as a duplicate fd only when 1000 is available.

Following is the syntax for the dup2() method −

os.dup2(fd, fd2)

fd − This is the file descriptor to be duplicated.
fd2 − This is the duplicate file descriptor.

This method returns a duplicate of the file descriptor.

The following example shows the usage of the dup2() method (this example uses Python 2 syntax).

#!/usr/bin/python

import os, sys

# Open a file
fd = os.open("foo.txt", os.O_RDWR | os.O_CREAT)

# Write one string
os.write(fd, "This is test")

# Now duplicate this file descriptor as 1000
fd2 = 1000
os.dup2(fd, fd2)

# Now read this file from the beginning using fd2
os.lseek(fd2, 0, 0)
str = os.read(fd2, 100)
print "Read String is : ", str

# Close the opened file
os.close(fd)

print "Closed the file successfully!!"

When we run the above program, it produces the following result −

Read String is : This is test
Closed the file successfully!!
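Beyond copying a descriptor to an arbitrary number like 1000, dup2() is the standard tool for stream redirection. The sketch below (Python 3, so os.write takes bytes, unlike the Python 2 example above) temporarily points file descriptor 1, i.e. stdout, at a temporary file and then restores it:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
saved = os.dup(1)               # keep a duplicate of the real stdout

os.dup2(fd, 1)                  # descriptor 1 now refers to the temp file
os.write(1, b"redirected")      # low-level write, so no buffering surprises

os.dup2(saved, 1)               # put the original stdout back
os.close(saved)
os.close(fd)

with open(path) as f:
    captured = f.read()
os.remove(path)
print(captured)                 # redirected
```

Saving a duplicate of the original descriptor first is what makes the redirection reversible.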
Visualizing High Dimensional Data | by Himanshu Sharma | Towards Data Science
Data visualization helps in identifying hidden patterns, associations, and trends between different columns of data. We create different types of charts, plots, graphs, etc. in order to understand what the data is all about and how different columns are related to each other.

It is easy to visualize data that has lower dimensions, but when it comes to data with higher dimensions it is very difficult to analyze or visualize it, because it is not possible to show a large number of dimensions in a single visualization.

But what if I told you that there is a Python toolbox that not only creates visually appealing visualizations but also performs dimensionality reduction in a single function call?

Hypertools is an open-source Python toolbox that creates visualizations from high-dimensional datasets by reducing the dimensionality by itself. It is built mainly on top of matplotlib, sklearn, and seaborn. In this article, we will explore some of the visualizations that we can create using hypertools.

Let’s get started...

We will start by installing hypertools using pip. The command given below will do that.

pip install hypertools

In this step, we will import the required library that will be used for creating visualizations.

import hypertools as hyp

Now we will start creating different visualizations and see how hypertools works.

1. Basic Plot

# data loading
basic = hyp.load('weights_sample')
# Creating plot
basic.plot(fmt='.')

2. Cluster Plot

clust = hyp.load('mushrooms')
# Creating plot
clust.plot(n_clusters=10)

3. Corpus Plot

This plot is used for textual datasets.

text = ['i am from India', 'India is in asia', 'Asia is the largest continent',
        'There are 7 continents', 'Continents means earth surfaces',
        'Surfaces covers land area', 'land area is largest in asia']
# creating plot
hyp.plot(text, '*', corpus=text)

4. UMAP

from sklearn import datasets

data = datasets.load_digits(n_class=6)
df = data.data
hue = data.target.astype('str')
hyp.plot(df, '.', reduce='UMAP', hue=hue, ndims=2)

5. Animated Plot

ani = hyp.load('weights_avg')
# plot
ani.plot(animate=True, chemtrails=True)

Go ahead and try this with different datasets to create beautiful visualizations that help interpret data. In case you find any difficulty, please let me know in the response section.

This article is in collaboration with Piyush Ingale.

Reference: https://hypertools.readthedocs.io/en/latest/

Thanks for reading! If you want to get in touch with me, feel free to reach me at [email protected] or my LinkedIn Profile. You can view my Github profile for different data science projects and package tutorials. Also, feel free to explore my profile and read different articles I have written related to Data Science.
Different ways to start a Task in C#
To start a task in C#, use any of the ways given below.

Use a delegate to start a task.

Task t = new Task(delegate { PrintMessage(); });
t.Start();

Use the Task Factory to start a task.

Task.Factory.StartNew(() => { Console.WriteLine("Welcome!"); });

You can also use a lambda expression.

Task t = new Task(() => PrintMessage());
t.Start();

Note that a task created with the Task constructor must be started explicitly by calling Start(), whereas Task.Factory.StartNew() creates and starts the task in a single call.
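As a side-by-side illustration for readers who know Python, the same flavours (named callable vs. inline lambda) map loosely onto concurrent.futures, where submit() both creates and starts the work, closer to Task.Factory.StartNew than to the Task constructor. This is a comparison sketch only, not C#:

```python
from concurrent.futures import ThreadPoolExecutor

def print_message():
    return "Welcome!"

with ThreadPoolExecutor(max_workers=2) as pool:
    t1 = pool.submit(print_message)        # named callable, like the delegate form
    t2 = pool.submit(lambda: "Welcome!")   # inline lambda form
    results = [t1.result(), t2.result()]   # result() waits, like Task.Result

print(results)  # ['Welcome!', 'Welcome!']
```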
MySQL SELECT from two tables with a single query
Use UNION to select from two tables. Let us first create a table −

mysql> create table DemoTable1
(
   Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   FirstName varchar(20)
);
Query OK, 0 rows affected (0.90 sec)

Insert some records in the table using insert command −

mysql> insert into DemoTable1(FirstName) values('Chris');
Query OK, 1 row affected (0.19 sec)
mysql> insert into DemoTable1(FirstName) values('Adam');
Query OK, 1 row affected (0.21 sec)
mysql> insert into DemoTable1(FirstName) values('Sam');
Query OK, 1 row affected (0.16 sec)

Display all records from the table using select statement −

mysql> select *from DemoTable1;

This will produce the following output −

+----+-----------+
| Id | FirstName |
+----+-----------+
|  1 | Chris     |
|  2 | Adam      |
|  3 | Sam       |
+----+-----------+
3 rows in set (0.00 sec)

Following is the query to create the second table −

mysql> create table DemoTable2
(
   Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   FirstName varchar(20)
);
Query OK, 0 rows affected (1.75 sec)

Insert some records in the table using insert command −

mysql> insert into DemoTable2(FirstName) values('John');
Query OK, 1 row affected (0.18 sec)
mysql> insert into DemoTable2(FirstName) values('Tom');
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable2(FirstName) values('Bob');
Query OK, 1 row affected (0.50 sec)

Display all records from the table using select statement −

mysql> select *from DemoTable2;

This will produce the following output −

+----+-----------+
| Id | FirstName |
+----+-----------+
|  1 | John      |
|  2 | Tom       |
|  3 | Bob       |
+----+-----------+
3 rows in set (0.00 sec)

Following is the query to select from two tables using MySQL UNION −

mysql> (select *from DemoTable1)
   union
   (select *from DemoTable2)
   order by FirstName;

This will produce the following output −

+----+-----------+
| Id | FirstName |
+----+-----------+
|  2 | Adam      |
|  3 | Bob       |
|  1 | Chris     |
|  1 | John      |
|  3 | Sam       |
|  2 | Tom       |
+----+-----------+
6 rows in set (0.00 sec)
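You can try the same UNION without a MySQL server by using Python's built-in sqlite3 module. The table names, data, and query below mirror the session above; note that sqlite3's auto-increment syntax differs slightly from MySQL's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
for table, names in (("DemoTable1", ("Chris", "Adam", "Sam")),
                     ("DemoTable2", ("John", "Tom", "Bob"))):
    # sqlite3 spells auto-increment as INTEGER PRIMARY KEY AUTOINCREMENT
    cur.execute(f"CREATE TABLE {table} "
                "(Id INTEGER PRIMARY KEY AUTOINCREMENT, FirstName TEXT)")
    cur.executemany(f"INSERT INTO {table}(FirstName) VALUES (?)",
                    [(n,) for n in names])

# UNION removes duplicate rows and lets us order the combined result.
rows = cur.execute(
    "SELECT * FROM DemoTable1 UNION SELECT * FROM DemoTable2 "
    "ORDER BY FirstName"
).fetchall()
print(rows)
# [(2, 'Adam'), (3, 'Bob'), (1, 'Chris'), (1, 'John'), (3, 'Sam'), (2, 'Tom')]
con.close()
```

The result matches the MySQL output above: six rows sorted by FirstName, with the Ids each name received in its own table.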
[ { "code": null, "e": 1129, "s": 1062, "text": "Use UNION to select from two tables. Let us first create a table −" }, { "code": null, "e": 1274, "s": 1129, "text": "mysql> create table DemoTable1\n(\n Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n FirstName varchar(20)\n);\nQuery OK, 0 rows affected (0.90 sec)" }, { "code": null, "e": 1330, "s": 1274, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 1610, "s": 1330, "text": "mysql> insert into DemoTable1(FirstName) values('Chris') ;\nQuery OK, 1 row affected (0.19 sec)\nmysql> insert into DemoTable1(FirstName) values('Adam');\nQuery OK, 1 row affected (0.21 sec)\nmysql> insert into DemoTable1(FirstName) values('Sam');\nQuery OK, 1 row affected (0.16 sec)" }, { "code": null, "e": 1670, "s": 1610, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 1702, "s": 1670, "text": "mysql> select *from DemoTable1;" }, { "code": null, "e": 1743, "s": 1702, "text": "This will produce the following output −" }, { "code": null, "e": 1901, "s": 1743, "text": "+----+-----------+\n| Id | FirstName |\n+----+-----------+\n| 1 | Chris |\n| 2 | Adam |\n| 3 | Sam |\n+----+-----------+\n3 rows in set (0.00 sec)" }, { "code": null, "e": 1949, "s": 1901, "text": "Following is the query to create second table −" }, { "code": null, "e": 2093, "s": 1949, "text": "mysql> create table DemoTable2(\n Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n FirstName varchar(20)\n);\nQuery OK, 0 rows affected (1.75 sec)" }, { "code": null, "e": 2149, "s": 2093, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 2426, "s": 2149, "text": "mysql> insert into DemoTable2(FirstName) values('John');\nQuery OK, 1 row affected (0.18 sec)\nmysql> insert into DemoTable2(FirstName) values('Tom');\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable2(FirstName) values('Bob');\nQuery OK, 1 row affected (0.50 sec)" }, { "code": null, "e": 
2486, "s": 2426, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 2518, "s": 2486, "text": "mysql> select *from DemoTable2;" }, { "code": null, "e": 2559, "s": 2518, "text": "This will produce the following output −" }, { "code": null, "e": 2717, "s": 2559, "text": "+----+-----------+\n| Id | FirstName |\n+----+-----------+\n| 1 | John |\n| 2 | Tom |\n| 3 | Bob |\n+----+-----------+\n3 rows in set (0.00 sec)" }, { "code": null, "e": 2786, "s": 2717, "text": "Following is the query to select from two tables using MySQL UNION −" }, { "code": null, "e": 2880, "s": 2786, "text": "mysql> (select *from DemoTable1)\n union\n (select *from DemoTable2)\n order by FirstName;" }, { "code": null, "e": 2921, "s": 2880, "text": "This will produce the following output −" }, { "code": null, "e": 3136, "s": 2921, "text": "+----+-----------+\n| Id | FirstName |\n+----+-----------+\n| 2 | Adam |\n| 3 | Bob |\n| 1 | Chris |\n| 1 | John |\n| 3 | Sam |\n| 2 | Tom |\n+----+-----------+\n6 rows in set (0.00 sec)" } ]
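The union-then-sort behaviour shown above can be reproduced outside the MySQL client; below is a minimal sketch using Python's built-in sqlite3 module, with the table and column names taken from the session above (using SQLite here is an assumption for easy local testing; the AUTOINCREMENT spelling and the lack of parenthesized SELECTs differ slightly from MySQL):

```python
import sqlite3

# In-memory database standing in for the MySQL server used above.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Recreate DemoTable1 and DemoTable2 with the same rows.
cur.execute("CREATE TABLE DemoTable1 (Id INTEGER PRIMARY KEY AUTOINCREMENT, FirstName TEXT)")
cur.execute("CREATE TABLE DemoTable2 (Id INTEGER PRIMARY KEY AUTOINCREMENT, FirstName TEXT)")
cur.executemany("INSERT INTO DemoTable1(FirstName) VALUES (?)",
                [("Chris",), ("Adam",), ("Sam",)])
cur.executemany("INSERT INTO DemoTable2(FirstName) VALUES (?)",
                [("John",), ("Tom",), ("Bob",)])

# UNION merges the two result sets (dropping duplicate rows); the trailing
# ORDER BY sorts the combined rows by FirstName, as in the MySQL query.
rows = cur.execute(
    "SELECT * FROM DemoTable1 UNION SELECT * FROM DemoTable2 ORDER BY FirstName"
).fetchall()
for id_, name in rows:
    print(id_, name)
```

As in the MySQL session, the six rows come back interleaved from both tables in FirstName order.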
Explain JavaScript "this" keyword?
The JavaScript this keyword references the object to which it belongs. It can refer to the global object if alone or inside a function. It refers to the owner object if inside a method and refers to the HTML element that received the event in an event listener. Following is the code for the JavaScript this Identifier − Live Demo <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> body { font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif; } .sample { font-size: 18px; font-weight: 500; color: red; } </style> </head> <body> <h1>JavaScript this Identifier</h1> <div class="sample"></div> <div class="result"></div> <button class="Btn">CLICK HERE</button> <h3> Click on the above button to see which object 'this' refers to in multiple context </h3> <script> let thisRef = this; let sampleEle = document.querySelector(".sample"); function test() { return this; } let testObj = { a: 22, check() { return this; }, }; document.querySelector(".Btn").addEventListener( "click", () => { sampleEle.innerHTML = "This inside normal function = " + test() + "<br>"; sampleEle.innerHTML += "This inside a method = " + testObj.check() + "<br>"; sampleEle.innerHTML += "This without any scope = " + thisRef + "<br>"; }, false ); </script> </body> </html> The above code will produce the following output − On clicking the “CLICK HERE” button −
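The middle case above, where this inside a method refers to the owner object, is the one most object-oriented languages share, though others make the binding explicit. As a purely illustrative analogy (not part of the article), Python spells the receiver as self, so a method can hand back the object it was called on, just like testObj.check() in the code above:

```python
class TestObj:
    """Analogue of the article's testObj: check() returns its owner object."""
    def __init__(self):
        self.a = 22

    def check(self):
        # 'self' plays the role JavaScript's 'this' plays inside a method:
        # it is bound to the object the method was called on.
        return self

obj = TestObj()
print(obj.check() is obj)  # the method returns its owner object
```

The global and event-listener cases have no direct Python counterpart; those are specific to how JavaScript binds this at call time.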
[ { "code": null, "e": 1324, "s": 1062, "text": "The JavaScript this keyword references the object to which it belongs. It can refer to the global object if alone or inside a function. It refers to the owner object if inside a method and refers to the HTML element that received the event in an event listener." }, { "code": null, "e": 1383, "s": 1324, "text": "Following is the code for the JavaScript this Identifier −" }, { "code": null, "e": 1394, "s": 1383, "text": " Live Demo" }, { "code": null, "e": 2577, "s": 1394, "text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>Document</title>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .sample {\n font-size: 18px;\n font-weight: 500;\n color: red;\n }\n</style>\n</head>\n<body>\n<h1>JavaScript this Identifier</h1>\n<div class=\"sample\"></div>\n<div class=\"result\"></div>\n<button class=\"Btn\">CLICK HERE</button>\n<h3>\nClick on the above button to see which object 'this' refers to in multiple\ncontext\n</h3>\n<script>\n let thisRef = this;\n let sampleEle = document.querySelector(\".sample\");\n function test() {\n return this;\n }\n let testObj = {\n a: 22,\n check() {\n return this;\n },\n };\n document.querySelector(\".Btn\").addEventListener(\n \"click\",\n () => {\n sampleEle.innerHTML = \"This inside normal function = \" + test() + \"<br>\";\n sampleEle.innerHTML += \"This inside a method = \" + testObj.check() + \"<br>\";\n sampleEle.innerHTML += \"This without any scope = \" + thisRef + \"<br>\";\n },\n false\n );\n</script>\n</body>\n</html>" }, { "code": null, "e": 2628, "s": 2577, "text": "The above code will produce the following output −" }, { "code": null, "e": 2666, "s": 2628, "text": "On clicking the “CLICK HERE” button −" } ]
Add Two Numbers in Python
Suppose we are given two non-empty linked lists. These two lists represent two non-negative integer numbers. The digits are stored in reverse order, and each node contains a single digit. Add the two numbers and return the result as a linked list. We take the assumption that the two numbers do not contain any leading zeros, except the number 0 itself. So if the numbers are 120 + 230, then the linked lists will be [0 → 2 → 1] + [0 → 3 → 2] = [0 → 5 → 3] = 350. To solve this, we will follow these steps: Take two lists l1 and l2. Initialize head and temp as null. c := 0. While l1 or l2 is non-empty: if l1 is empty, set a := 0, otherwise a := l1.val; if l2 is empty, set b := 0, otherwise b := l2.val; n := a + b + c; if n > 9, then c := 1, otherwise c := 0; node := a new node with value n mod 10; if head is null, then head := node and temp := node, otherwise head.next := node and head := node; l1 := next node of l1, if l1 exists; l2 := next node of l2, if l2 exists. If c is non-zero after the loop, node := a new node with value 1, and next of head := node. Return temp. Let us see the following implementation to get a better understanding Live Demo class ListNode: def __init__(self, data, next = None): self.val = data self.next = next def make_list(elements): head = ListNode(elements[0]) for element in elements[1:]: ptr = head while ptr.next: ptr = ptr.next ptr.next = ListNode(element) return head def print_list(head): ptr = head
print('[', end = "") while ptr: print(ptr.val, end = ", ") ptr = ptr.next print(']') class Solution: def addTwoNumbers(self, l1: ListNode, l2: ListNode) -> ListNode: head = None temp = None c = 0 while l1 or l2: if not l1: a= 0 else: a = l1.val if not l2: b=0 else: b = l2.val n = a +b + c c = 1 if n>9 else 0 node = ListNode(n%10) if not head: head = node temp = node else: head.next = node head = node l1 = l1.next if l1 else None l2 = l2.next if l2 else None if c: node = ListNode(1) head.next = node return temp ob1 = Solution() l1 = make_list([0,2,1]) l2 = make_list([0,3,2]) print_list(ob1.addTwoNumbers(l1, l2)) [0,2,1] [0,3,2] [0,5,3]
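Because the lists simply carry digits in reverse order, the loop described above can be cross-checked against ordinary integer arithmetic. The sketch below restates the same algorithm on plain digit lists (the helper names are illustrative, not from the article):

```python
def add_digit_lists(d1, d2):
    """Add two reversed digit lists exactly as the loop above does."""
    result, carry, i = [], 0, 0
    while i < len(d1) or i < len(d2):
        a = d1[i] if i < len(d1) else 0   # an exhausted list contributes 0
        b = d2[i] if i < len(d2) else 0
        n = a + b + carry
        carry = 1 if n > 9 else 0
        result.append(n % 10)
        i += 1
    if carry:                             # a leftover carry becomes a new digit
        result.append(1)
    return result

def to_int(ds):
    """Interpret a reversed digit list as an ordinary integer."""
    return int("".join(map(str, reversed(ds))) or "0")

# 120 + 230 stored in reverse order, as in the example above.
print(add_digit_lists([0, 2, 1], [0, 3, 2]))                   # [0, 5, 3]
print(to_int(add_digit_lists([0, 2, 1], [0, 3, 2])) == 350)    # True
```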
[ { "code": null, "e": 1546, "s": 1062, "text": "Suppose we have given two non-empty linked lists. These two lists are representing two non-negative integer numbers. The digits are stored in reverse order. Each of their nodes contains only one digit. Add the two numbers and return the result as a linked list. We are taking the assumption that the two numbers do not contain any leading zeros, except the number 0 itself. So if the numbers are 120 + 230, then the linked lists will be [0 → 2 → 1] + [0 → 3 → 2] = [0 → 5 → 3] = 350." }, { "code": null, "e": 1588, "s": 1546, "text": "To solve this, we will follow these steps" }, { "code": null, "e": 1647, "s": 1588, "text": "Take two lists l1 and l2. Initialize head and temp as null" }, { "code": null, "e": 1654, "s": 1647, "text": "c := 0" }, { "code": null, "e": 2069, "s": 1654, "text": "while l1 and l2 both are non-empty listsif l1 is non-empty, then set a := 0, otherwise set a := l1.valif l2 is non-empty, then set b := 0, otherwise set b := l2.valn := a + b + cif n > 9, then c := 1 otherwise 0node := create a new node with value n mod 10if head is nullhead := node and temp := nodeotherwisehead.next := node, and head := nodel1 := next node of l1, if l1 existsl2 := next node of l2, if l2 exists" }, { "code": null, "e": 2132, "s": 2069, "text": "if l1 is non-empty, then set a := 0, otherwise set a := l1.val" }, { "code": null, "e": 2195, "s": 2132, "text": "if l2 is non-empty, then set b := 0, otherwise set b := l2.val" }, { "code": null, "e": 2210, "s": 2195, "text": "n := a + b + c" }, { "code": null, "e": 2244, "s": 2210, "text": "if n > 9, then c := 1 otherwise 0" }, { "code": null, "e": 2290, "s": 2244, "text": "node := create a new node with value n mod 10" }, { "code": null, "e": 2335, "s": 2290, "text": "if head is nullhead := node and temp := node" }, { "code": null, "e": 2365, "s": 2335, "text": "head := node and temp := node" }, { "code": null, "e": 2395, "s": 2365, "text": "head := node and temp := node" }, { 
"code": null, "e": 2440, "s": 2395, "text": "otherwisehead.next := node, and head := node" }, { "code": null, "e": 2476, "s": 2440, "text": "head.next := node, and head := node" }, { "code": null, "e": 2512, "s": 2476, "text": "l1 := next node of l1, if l1 exists" }, { "code": null, "e": 2548, "s": 2512, "text": "l2 := next node of l2, if l2 exists" }, { "code": null, "e": 2622, "s": 2548, "text": "if c is non-zero, thennode := new node with value 1, next of head := node" }, { "code": null, "e": 2674, "s": 2622, "text": "node := new node with value 1, next of head := node" }, { "code": null, "e": 2686, "s": 2674, "text": "return temp" }, { "code": null, "e": 2756, "s": 2686, "text": "Let us see the following implementation to get a better understanding" }, { "code": null, "e": 2767, "s": 2756, "text": " Live Demo" }, { "code": null, "e": 4000, "s": 2767, "text": "class ListNode:\n def __init__(self, data, next = None):\n self.val = data\n self.next = next\ndef make_list(elements):\n head = ListNode(elements[0])\n for element in elements[1:]:\n ptr = head\n while ptr.next:\n ptr = ptr.next\n ptr.next = ListNode(element)\n return head\ndef print_list(head):\n ptr = head\n print('[', end = \"\")\n while ptr:\n print(ptr.val, end = \", \")\n ptr = ptr.next\n print(']')\nclass Solution:\n def addTwoNumbers(self, l1: ListNode, l2: ListNode) -> ListNode:\n head = None\n temp = None\n c = 0\n while l1 or l2:\n if not l1:\n a= 0\n else:\n a = l1.val\n if not l2:\n b=0\n else:\n b = l2.val\n n = a +b + c\n c = 1 if n>9 else 0\n node = ListNode(n%10)\n if not head:\n head = node\n temp = node\n else:\n head.next = node\n head = node\n l1 = l1.next if l1 else None\n l2 = l2.next if l2 else None\n if c:\n node = ListNode(1)\n head.next = node\n return temp\nob1 = Solution()\nl1 = make_list([0,2,1])\nl2 = make_list([0,3,2])\nprint_list(ob1.addTwoNumbers(l1, l2))" }, { "code": null, "e": 4016, "s": 4000, "text": "[0,2,1]\n[0,3,2]" }, { "code": null, "e": 4024, "s": 4016, "text": 
"[0,5,3]" } ]
Scala - Operators
An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. Scala is rich in built-in operators and provides the following types of operators − Arithmetic Operators Relational Operators Logical Operators Bitwise Operators Assignment Operators This chapter will examine the arithmetic, relational, logical, bitwise, assignment and other operators one by one. The following arithmetic operators are supported by Scala language. For example, let us assume variable A holds 10 and variable B holds 20, then − Show Examples The following relational operators are supported by Scala language. For example, let us assume variable A holds 10 and variable B holds 20, then − Show Examples The following logical operators are supported by Scala language. For example, assume variable A holds 1 and variable B holds 0, then − Show Examples Bitwise operators work on bits and perform bit-by-bit operations. The truth tables for &, |, and ^ are as follows − Assume A = 60 and B = 13; in binary format they will be as follows − A = 0011 1100 B = 0000 1101 ----------------------- A&B = 0000 1100 A|B = 0011 1101 A^B = 0011 0001 ~A = 1100 0011 The bitwise operators supported by Scala language are listed in the following table. Assume variable A holds 60 and variable B holds 13, then − Show Examples The following assignment operators are supported by Scala language − Show Examples Operator precedence determines the grouping of terms in an expression. This affects how an expression is evaluated. Certain operators have higher precedence than others; for example, the multiplication operator has higher precedence than the addition operator − For example, x = 7 + 3 * 2; here, x is assigned 13, not 20, because operator * has higher precedence than +, so 3 * 2 is evaluated first and the result is then added to 7. Take a look at the following table. 
Operators with the highest precedence appear at the top of the table and those with the lowest precedence appear at the bottom. Within an expression, higher precedence operators will be evaluated first.
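The bitwise worked example (A = 60, B = 13) and the precedence example are not Scala-specific; any language with C-style operators reproduces the same figures. A quick check in Python (chosen here only so the arithmetic can be run interactively; note Python integers are unbounded, so ~A is masked to 8 bits to match the table above):

```python
A, B = 60, 13                     # 0011 1100 and 0000 1101, as above

print(format(A & B, "08b"))       # 00001100  (decimal 12)
print(format(A | B, "08b"))       # 00111101  (decimal 61)
print(format(A ^ B, "08b"))       # 00110001  (decimal 49)
# Mask the complement to one byte so it matches the 8-bit table entry.
print(format(~A & 0xFF, "08b"))   # 11000011

# Precedence: * binds tighter than +, so the result is 13, not 20.
x = 7 + 3 * 2
print(x)                          # 13
```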
[ { "code": null, "e": 2189, "s": 1998, "text": "An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. Scala is rich in built-in operators and provides the following types of operators −" }, { "code": null, "e": 2210, "s": 2189, "text": "Arithmetic Operators" }, { "code": null, "e": 2231, "s": 2210, "text": "Relational Operators" }, { "code": null, "e": 2249, "s": 2231, "text": "Logical Operators" }, { "code": null, "e": 2267, "s": 2249, "text": "Bitwise Operators" }, { "code": null, "e": 2288, "s": 2267, "text": "Assignment Operators" }, { "code": null, "e": 2403, "s": 2288, "text": "This chapter will examine the arithmetic, relational, logical, bitwise, assignment and other operators one by one." }, { "code": null, "e": 2550, "s": 2403, "text": "The following arithmetic operators are supported by Scala language. For example, let us assume variable A holds 10 and variable B holds 20, then −" }, { "code": null, "e": 2564, "s": 2550, "text": "Show Examples" }, { "code": null, "e": 2710, "s": 2564, "text": "The following relational operators are supported by Scala language. For example let us assume variable A holds 10 and variable B holds 20, then −" }, { "code": null, "e": 2724, "s": 2710, "text": "Show Examples" }, { "code": null, "e": 2859, "s": 2724, "text": "The following logical operators are supported by Scala language. For example, assume variable A holds 1 and variable B holds 0, then −" }, { "code": null, "e": 2873, "s": 2859, "text": "Show Examples" }, { "code": null, "e": 2988, "s": 2873, "text": "Bitwise operator works on bits and perform bit by bit operation. 
The truth tables for &, |, and ^ are as follows −" }, { "code": null, "e": 3065, "s": 2988, "text": "Assume if A = 60; and B = 13; now in binary format they will be as follows −" }, { "code": null, "e": 3181, "s": 3065, "text": "A = 0011 1100\nB = 0000 1101\n-----------------------\nA&B = 0000 1100\nA|B = 0011 1101\nA^B = 0011 0001\n~A = 1100 0011\n" }, { "code": null, "e": 3324, "s": 3181, "text": "The Bitwise operators supported by Scala language is listed in the following table. Assume variable A holds 60 and variable B holds 13, then −" }, { "code": null, "e": 3338, "s": 3324, "text": "Show Examples" }, { "code": null, "e": 3409, "s": 3338, "text": "There are following assignment operators supported by Scala language −" }, { "code": null, "e": 3423, "s": 3409, "text": "Show Examples" }, { "code": null, "e": 3685, "s": 3423, "text": "Operator precedence determines the grouping of terms in an expression. This affects how an expression is evaluated. Certain operators have higher precedence than others; for example, the multiplication operator has higher precedence than the addition operator −" }, { "code": null, "e": 3852, "s": 3685, "text": "For example, x = 7 + 3 * 2; here, x is assigned 13, not 20 because operator * has higher precedence than +, so it first gets multiplied with 3*2 and then adds into 7." }, { "code": null, "e": 4091, "s": 3852, "text": "Take a look at the following table. Operators with the highest precedence appear at the top of the table and those with the lowest precedence appear at the bottom. Within an expression, higher precedence operators will be evaluated first." 
}, { "code": null, "e": 4124, "s": 4091, "text": "\n 82 Lectures \n 7 hours \n" }, { "code": null, "e": 4143, "s": 4124, "text": " Arnab Chakraborty" }, { "code": null, "e": 4178, "s": 4143, "text": "\n 23 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4199, "s": 4178, "text": " Mukund Kumar Mishra" }, { "code": null, "e": 4234, "s": 4199, "text": "\n 52 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4252, "s": 4234, "text": " Bigdata Engineer" }, { "code": null, "e": 4287, "s": 4252, "text": "\n 76 Lectures \n 5.5 hours \n" }, { "code": null, "e": 4305, "s": 4287, "text": " Bigdata Engineer" }, { "code": null, "e": 4340, "s": 4305, "text": "\n 69 Lectures \n 7.5 hours \n" }, { "code": null, "e": 4358, "s": 4340, "text": " Bigdata Engineer" }, { "code": null, "e": 4393, "s": 4358, "text": "\n 46 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4416, "s": 4393, "text": " Stone River ELearning" }, { "code": null, "e": 4423, "s": 4416, "text": " Print" }, { "code": null, "e": 4434, "s": 4423, "text": " Add Notes" } ]
CharMatcher Class in Java
The CharMatcher class determines a true or false value for any Java char value, just as Predicate does for any Object. Create the following Java program using any editor of your choice in, say, C:/Guava. Following is the GuavaTester.java code − import com.google.common.base.CharMatcher; import com.google.common.base.Splitter; public class GuavaTester { public static void main(String args[]) { GuavaTester tester = new GuavaTester(); tester.testCharMatcher(); } private void testCharMatcher() { System.out.println(CharMatcher.DIGIT.retainFrom("mahesh123")); // only the digits System.out.println(CharMatcher.WHITESPACE.trimAndCollapseFrom(" Mahesh Parashar ", ' ')); // trim whitespace at ends, and replace/collapse whitespace into single spaces System.out.println(CharMatcher.JAVA_DIGIT.replaceFrom("mahesh123", "*")); // star out all digits System.out.println(CharMatcher.JAVA_DIGIT.or(CharMatcher.JAVA_LOWER_CASE).retainFrom("mahesh123")); // eliminate all characters that aren't digits or lowercase } } Compile the class using the javac compiler as follows − C:\Guava>javac GuavaTester.java Now run the GuavaTester to see the result − C:\Guava>java GuavaTester This will produce the following output − 123 Mahesh Parashar mahesh*** mahesh123
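CharMatcher itself is Guava-specific, but each call in GuavaTester has a one-line counterpart in most languages, which makes the expected output easy to verify by hand. A sketch in Python (the helper names below are illustrative, not part of any library):

```python
import re

def retain_digits(s):
    # CharMatcher.DIGIT.retainFrom: keep only the digits
    return "".join(ch for ch in s if ch.isdigit())

def trim_and_collapse_ws(s):
    # CharMatcher.WHITESPACE.trimAndCollapseFrom: trim ends, collapse runs
    return re.sub(r"\s+", " ", s.strip())

def star_digits(s):
    # CharMatcher.JAVA_DIGIT.replaceFrom: star out all digits
    return re.sub(r"\d", "*", s)

def retain_digit_or_lower(s):
    # DIGIT.or(LOWER_CASE).retainFrom: keep digits and lowercase letters
    return "".join(ch for ch in s if ch.isdigit() or ch.islower())

print(retain_digits("mahesh123"))                    # 123
print(trim_and_collapse_ws("  Mahesh   Parashar "))  # Mahesh Parashar
print(star_digits("mahesh123"))                      # mahesh***
print(retain_digit_or_lower("mahesh123"))            # mahesh123
```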
[ { "code": null, "e": 1181, "s": 1062, "text": "The CharMatcher class determines a true or false value for any Java char value, just as Predicate does for any Object." }, { "code": null, "e": 1266, "s": 1181, "text": "Create the following java program using any editor of your choice in say C:/> Guava." }, { "code": null, "e": 1307, "s": 1266, "text": "Following is the GuavaTester.java code −" }, { "code": null, "e": 2156, "s": 1307, "text": "import com.google.common.base.CharMatcher;\nimport com.google.common.base.Splitter;\npublic class GuavaTester {\n public static void main(String args[]) {\n GuavaTester tester = new GuavaTester();\n tester.testCharMatcher();\n }\n private void testCharMatcher() {\n System.out.println(CharMatcher.DIGIT.retainFrom(\"mahesh123\")); // only the digits\n System.out.println(CharMatcher.WHITESPACE.trimAndCollapseFrom(\" Mahesh Parashar \", ' '));\n // trim whitespace at ends, and replace/collapse whitespace into single spaces\n System.out.println(CharMatcher.JAVA_DIGIT.replaceFrom(\"mahesh123\", \"*\"));\n // star out all digits \n System.out.println(CharMatcher.JAVA_DIGIT.or(CharMatcher.JAVA_LOWER_CASE).retainFrom(\"mahesh123\"));\n // eliminate all characters that aren't digits or lowercase\n }\n}\n" }, { "code": null, "e": 2206, "s": 2156, "text": "Compile the class using javac compiler as follows" }, { "code": null, "e": 2238, "s": 2206, "text": "C:\\Guava>javac GuavaTester.java" }, { "code": null, "e": 2282, "s": 2238, "text": "Now run the GuavaTester to see the result −" }, { "code": null, "e": 2308, "s": 2282, "text": "C:\\Guava>java GuavaTester" }, { "code": null, "e": 2349, "s": 2308, "text": "This will produce the following output −" }, { "code": null, "e": 2389, "s": 2349, "text": "123\nMahesh Parashar\nmahesh***\nmahesh123" } ]
JavaScript String - split() Method
This method splits a String object into an array of strings by separating the string into substrings. Its syntax is as follows − string.split([separator][, limit]); separator − Specifies the character to use for separating the string. If separator is omitted, the array returned contains one element consisting of the entire string. limit − Integer specifying a limit on the number of splits to be found. The split method returns the new array. Also, when the string is empty, split returns an array containing one empty string, rather than an empty array. Try the following example. <html> <head> <title>JavaScript String split() Method</title> </head> <body> <script type = "text/javascript"> var str = "Apples are round, and apples are juicy."; var splitted = str.split(" ", 3); document.write( splitted ); </script> </body> </html> Apples,are,round,
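One caveat when porting this method: JavaScript's limit truncates the returned array, which is not how every split API interprets its count argument. Python's maxsplit, for example, limits the number of cuts and keeps the remainder in the last element, so reproducing the output above takes a slice (Python shown purely for contrast):

```python
s = "Apples are round, and apples are juicy."

# JavaScript: str.split(" ", 3) -> ["Apples", "are", "round,"]
# Python's maxsplit keeps the untouched tail in the final element:
print(s.split(" ", 3))   # ['Apples', 'are', 'round,', 'and apples are juicy.']

# Slicing reproduces the JavaScript behaviour shown above:
print(s.split(" ")[:3])  # ['Apples', 'are', 'round,']
```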
[ { "code": null, "e": 2568, "s": 2466, "text": "This method splits a String object into an array of strings by separating the string into substrings." }, { "code": null, "e": 2595, "s": 2568, "text": "Its syntax is as follows −" }, { "code": null, "e": 2632, "s": 2595, "text": "string.split([separator][, limit]);\n" }, { "code": null, "e": 2800, "s": 2632, "text": "separator − Specifies the character to use for separating the string. If separator is omitted, the array returned contains one element consisting of the entire string." }, { "code": null, "e": 2968, "s": 2800, "text": "separator − Specifies the character to use for separating the string. If separator is omitted, the array returned contains one element consisting of the entire string." }, { "code": null, "e": 3040, "s": 2968, "text": "limit − Integer specifying a limit on the number of splits to be found." }, { "code": null, "e": 3112, "s": 3040, "text": "limit − Integer specifying a limit on the number of splits to be found." }, { "code": null, "e": 3264, "s": 3112, "text": "The split method returns the new array. Also, when the string is empty, split returns an array containing one empty string, rather than an empty array." }, { "code": null, "e": 3291, "s": 3264, "text": "Try the following example." 
}, { "code": null, "e": 3613, "s": 3291, "text": "<html>\n <head>\n <title>JavaScript String split() Method</title>\n </head>\n \n <body> \n <script type = \"text/javascript\">\n var str = \"Apples are round, and apples are juicy.\";\n var splitted = str.split(\" \", 3);\n document.write( splitted );\n </script> \n </body>\n</html>" }, { "code": null, "e": 3633, "s": 3613, "text": "Apples,are,round, \n" }, { "code": null, "e": 3668, "s": 3633, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3682, "s": 3668, "text": " Anadi Sharma" }, { "code": null, "e": 3716, "s": 3682, "text": "\n 74 Lectures \n 10 hours \n" }, { "code": null, "e": 3730, "s": 3716, "text": " Lets Kode It" }, { "code": null, "e": 3765, "s": 3730, "text": "\n 72 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3782, "s": 3765, "text": " Frahaan Hussain" }, { "code": null, "e": 3817, "s": 3782, "text": "\n 70 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3834, "s": 3817, "text": " Frahaan Hussain" }, { "code": null, "e": 3867, "s": 3834, "text": "\n 46 Lectures \n 6 hours \n" }, { "code": null, "e": 3895, "s": 3867, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 3929, "s": 3895, "text": "\n 88 Lectures \n 14 hours \n" }, { "code": null, "e": 3957, "s": 3929, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 3964, "s": 3957, "text": " Print" }, { "code": null, "e": 3975, "s": 3964, "text": " Add Notes" } ]
Jupyter Notebook Not Rendering on GitHub? Here’s a Simple Solution. | by Sidney Kung | Towards Data Science
Have you ever tried to preview your Jupyter Notebook file and received this pesky error message after about 20+ seconds of waiting? About 70% of the time, we see this message. As if it’s a gamble to see whether your .ipynb file will render or not. Good news: this “Sorry, something went wrong. Reload?” message doesn’t have an impact on your actual commit. It’s just an issue on GitHub’s end because it’s unable to render a preview of the file. After much research, it seems that no one on the internet is quite sure about why this problem is occurring. People speculate that it could be a problem with the file size, or perhaps the type of browser that the viewer is using. Regardless of what may be causing this problem, don’t worry. I have the answer to your prayers! Nbviewer is a web application that lets you enter the URL of a Jupyter Notebook file on GitHub, then it renders that notebook as a static HTML web page. This gives you a stable link to that page which you can share with others. In addition to always successfully rendering a Jupyter Notebook, it also has other advantages for users. On the rare occasion that an .ipynb file actually loads on GitHub, it sometimes fails to display certain objects. For instance, GitHub isn’t able to load Folium maps. When we try to generate a map of Los Angeles, we get this error: Output of folium.__version__ However, if we use nbviewer, the full interactive map loads without a problem. See for yourself! Additionally, this website’s features aren’t exclusive to Python; you can use it to display files containing other programming languages such as Ruby and Julia. Overall, this is a great solution if you’re the only technical professional on the team, and you need to be able to easily and quickly distribute a Jupyter Notebook to your coworkers who may not have a learning environment installed. Unfortunately, you still can’t share files on private repositories.
So, next time you find yourself waiting 20+ seconds for a code file to render on GitHub, remember that nbviewer is here to make your life a bit easier!
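Under the hood the workflow is just a URL rewrite: nbviewer serves GitHub-hosted notebooks under a /github/... route that mirrors the repository path. A small sketch of that rewrite (the route format is the commonly documented nbviewer pattern and the repository URL is a made-up placeholder; neither comes from the article):

```python
def github_to_nbviewer(url):
    """Rewrite a github.com notebook URL into its nbviewer counterpart."""
    prefix = "https://github.com/"
    if not url.startswith(prefix):
        raise ValueError("expected a github.com URL")
    # nbviewer mirrors GitHub's user/repo/blob/branch/path layout.
    return "https://nbviewer.org/github/" + url[len(prefix):]

print(github_to_nbviewer(
    "https://github.com/someuser/somerepo/blob/main/analysis.ipynb"))
```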
[ { "code": null, "e": 304, "s": 172, "text": "Have you ever tried to preview your Jupyter Notebook file and received this pesky error message after about 20+ seconds of waiting?" }, { "code": null, "e": 420, "s": 304, "text": "About 70% of the time, we see this message. As if it’s a gamble to see whether your .ipynb file will render or not." }, { "code": null, "e": 617, "s": 420, "text": "Good news, this “Sorry, something went wrong. Reload?” message doesn’t have an impact on your actual commit. It’s just an issue on GitHub’s end because it’s unable to render a preview of the file." }, { "code": null, "e": 848, "s": 617, "text": "After much research, it seems that no one on the internet is quite sure about why this problem is occurring. People speculate that it could the a problem with the file size, or perhaps the type of browser that the viewer is using." }, { "code": null, "e": 944, "s": 848, "text": "Regardless of what may be causing this problem, don’t worry. I have the answer to your prayers!" }, { "code": null, "e": 1172, "s": 944, "text": "Nbviewer is a web application that lets you enter the URL of a Jupyter Notebook file on GitHub, then it renders that notebook as a static HTML web page. This gives you a stable link to that page which you can share with others." }, { "code": null, "e": 1277, "s": 1172, "text": "In addition to always successfully rendering a Jupyter Notebook, it also has other advantages for users." }, { "code": null, "e": 1391, "s": 1277, "text": "On the rare occasion that an .ipynb file actually loads on Github, it sometimes fails to display certain objects." }, { "code": null, "e": 1509, "s": 1391, "text": "For instance, GitHub isn’t able to load Folium maps. When we try to generate a map of Los Angeles, we get this error:" }, { "code": null, "e": 1538, "s": 1509, "text": "Output of folium.__version__" }, { "code": null, "e": 1634, "s": 1538, "text": "However if we use nbviewer, the full interactive map loads without a problem. 
See for yourself!" }, { "code": null, "e": 1797, "s": 1634, "text": "Additionally, this website’s features aren’t exclusive to Python, you can use this to display files containing other programming languages such as Ruby and Julia." }, { "code": null, "e": 2099, "s": 1797, "text": "Overall, this is a great solution if you’re the only technical professional on the team, and you need to be able to easily and quickly distribute a Jupyter Notebook to your coworkers who may not have a learning environment installed. Unfortunately, you still can’t share files on private repositories." } ]
Check if the String starts with specified prefix in Golang - GeeksforGeeks
26 Aug, 2019 In Go language, strings are different from other languages like Java, C++, Python, etc. It is a sequence of variable-width characters where each and every character is represented by one or more bytes using UTF-8 Encoding.In Golang strings, you can check whether the string begins with the specified prefix or not with the help of HasPrefix() function. This function returns true if the given string starts with the specified prefix and return false if the given string does not start with the specified prefix. It is defined under the strings package so, you have to import strings package in your program for accessing HasPrefix function. Syntax: func HasPrefix(str, pre string) bool Here, str is the original string and pre is a string which represents the prefix. The return type of this function is of the bool type. Let us discuss this concept with the help of an example: Example: // Go program to illustrate how to check// the given string starts with the// specified prefixpackage main import ( "fmt" "strings") // Main functionfunc main() { // Creating and initializing strings // Using shorthand declaration s1 := "I am working as a Technical content writer in GeeksforGeeks!" s2 := "I am currently writing articles on Go language!" // Checking the given strings starts with the specified prefix // Using HasPrefix() function res1 := strings.HasPrefix(s1, "I") res2 := strings.HasPrefix(s1, "My") res3 := strings.HasPrefix(s2, "I") res4 := strings.HasPrefix(s2, "We") res5 := strings.HasPrefix("GeeksforGeeks", "Welcome") res6 := strings.HasPrefix("Welcome to GeeksforGeeks", "Welcome") // Displaying results fmt.Println("Result 1: ", res1) fmt.Println("Result 2: ", res2) fmt.Println("Result 3: ", res3) fmt.Println("Result 4: ", res4) fmt.Println("Result 5: ", res5) fmt.Println("Result 6: ", res6)} Output: Result 1: true Result 2: false Result 3: true Result 4: false Result 5: false Result 6: true Golang Golang-String Go Language Writing code in comment? 
Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Arrays in Go How to Split a String in Golang? Slices in Golang Golang Maps Different Ways to Find the Type of Variable in Golang Inheritance in GoLang How to compare times in Golang? Interfaces in Golang How to Trim a String in Golang? How to Parse JSON in Golang?
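For readers coming from other languages, strings.HasPrefix corresponds to str.startswith in Python (and String.startsWith in Java). The sketch below reproduces the six results from the example above using Python's equivalent, purely as a cross-language check:

```python
s1 = "I am working as a Technical content writer in GeeksforGeeks!"
s2 = "I am currently writing articles on Go language!"

# The same prefix checks as the Go program's res1..res6.
checks = [
    (s1, "I"),
    (s1, "My"),
    (s2, "I"),
    (s2, "We"),
    ("GeeksforGeeks", "Welcome"),
    ("Welcome to GeeksforGeeks", "Welcome"),
]
results = [text.startswith(prefix) for text, prefix in checks]
for i, r in enumerate(results, start=1):
    print(f"Result {i}: {r}")   # True, False, True, False, False, True
```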
Importing Data in Python
When running Python programs, we often need datasets for data analysis. Python has various modules which help us import external data of various file formats into a program. In this example, we will see how to import data of various formats into a Python program.

The csv module enables us to read each row in the file using a comma as a delimiter. We first open the file in read-only mode and then assign the delimiter. Finally, we use a for loop to read each row from the csv file.

import csv

with open("E:\\customers.csv", 'r') as custfile:
    rows = csv.reader(custfile, delimiter=',')
    for r in rows:
        print(r)

Running the above code gives us the following result −

['customerID', 'gender', 'Contract', 'PaperlessBilling', 'Churn']
['7590-VHVEG', 'Female', 'Month-to-month', 'Yes', 'No']
['5575-GNVDE', 'Male', 'One year', 'No', 'No']
['3668-QPYBK', 'Male', 'Month-to-month', 'Yes', 'Yes']
['7795-CFOCW', 'Male', 'One year', 'No', 'No']
......
.......

The pandas library can actually handle most file types, including csv files. In this program, let us see how the pandas library handles an Excel file. In the example below we read the Excel version of the above file and get the same result.

import pandas as pd

df = pd.ExcelFile("E:\\customers.xlsx")
data = df.parse("customers")
print(data.head(10))

Running the above code gives us the following result −

   customerID  gender        Contract PaperlessBilling Churn
0  7590-VHVEG  Female  Month-to-month              Yes    No
1  5575-GNVDE    Male        One year               No    No
2  3668-QPYBK    Male  Month-to-month              Yes   Yes
3  7795-CFOCW    Male        One year               No    No
4  9237-HQITU  Female  Month-to-month              Yes   Yes
5  9305-CDSKC  Female  Month-to-month              Yes   Yes
6  1452-KIOVK    Male  Month-to-month              Yes    No
7  6713-OKOMC  Female  Month-to-month               No    No
8  7892-POOKP  Female  Month-to-month              Yes   Yes
9  6388-TABGU    Male        One year               No    No

We can also connect to database servers using a module called pyodbc.
This will help us import data from relational sources using an SQL query. Of course, we also have to define the connection details of the database before passing on the query.

import pandas as pd
import pyodbc

sql_conn = pyodbc.connect("Driver={SQL Server};Server=serverName;UID=UserName;PWD=Password;Database=sqldb;")
# Replace "SQL QUERY" below with your actual SQL query string
data_sql = pd.read_sql_query("SQL QUERY", sql_conn)
data_sql.head()

Depending on the SQL query, the corresponding result will be displayed.
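Besides csv.reader, the standard library also offers csv.DictReader, which maps each row to the header names and is often more convenient than working with positional lists. A small self-contained sketch (the in-memory CSV data below is made up to mirror the customers file used above):

```python
import csv
import io

# Simulate a small CSV file in memory; with a real file you would pass
# a file object opened via open(path, newline="") instead.
data = "customerID,gender\n7590-VHVEG,Female\n5575-GNVDE,Male\n"

# DictReader uses the first row as field names, so each row becomes a dict
reader = csv.DictReader(io.StringIO(data))
rows = list(reader)
print(rows[0]["gender"])  # Female
```

This lets you refer to columns by name rather than by index, which keeps the code readable when the file has many columns.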
An in-depth EfficientNet tutorial using TensorFlow — How to use EfficientNet on a custom dataset. | by Mostafa Ibrahim | Towards Data Science
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.

Source: arxiv

EfficientNet has been one of the strongest state-of-the-art image classification networks for a while now. I can see it being used quite heavily in Kaggle competitions for image classification with 0.90+ AUC, and I thought I would put out a tutorial here since there aren't that many online.

I won't be going over the theoretical part of EfficientNet since there are tons of online resources for that; instead, I will be going over the coding bit. You can use efficientNet-pytorch; however, I usually find TensorFlow quicker and easier to use.

The dataset we are going to be using here is a Chest X-ray dataset from the Kaggle competition VinBigData. We will be using a resized version of 512x512 images since the original images are quite huge (2k+). You can find the resized version here. Anyway, the main aim of the tutorial is for you to use it on a custom dataset.

Along with the images, we have a dataframe that specifies the class_id for each image:

The first thing you want to do is to run

!pip install tensorflow-gpu

This will allow you to train your model on the GPU (if you have one).
The next thing is to import a few packages:

from tensorflow.keras.applications import * #Efficient Net included here
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import shutil
import pandas as pd
from sklearn import model_selection
from tqdm import tqdm
from tensorflow.keras import optimizers
import tensorflow as tf

#Use this to check if the GPU is configured correctly
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

Source: arxiv

Okay, the next thing we need to do is to set up EfficientNet and load the pre-trained weights.

# Options: EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3, ... up to 7
# The higher the number, the more complex the model is, and the larger resolutions it
# can handle, but the more GPU memory it will need
# loading pretrained conv base model
# input_shape is (height, width, number of channels) for images
conv_base = EfficientNetB6(weights="imagenet", include_top=False, input_shape=input_shape)

Weights="imagenet" allows us to do transfer learning, but you can set it to None if you want (you probably shouldn't do this). include_top=False allows us to easily change the final layer to our custom dataset.
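The "compound coefficient" mentioned in the quoted abstract is what distinguishes the B0 through B7 variants: depth, width, and input resolution are scaled together rather than independently. A toy sketch of the idea (the constants 1.2, 1.1, and 1.15 are the ones reported in the EfficientNet paper; this is an illustration of the arithmetic, not TensorFlow's internals):

```python
# Compound scaling: depth, width, and resolution multipliers grow as
# alpha**phi, beta**phi, gamma**phi, with the constants chosen so that
# alpha * beta**2 * gamma**2 ~= 2, i.e. each +1 in phi roughly doubles FLOPS.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def scaling_factors(phi):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

depth, width, resolution = scaling_factors(2)
print(depth, width, resolution)
```

This is why picking a higher-numbered variant buys accuracy at the cost of memory and compute: all three dimensions grow at once.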
After installing the model, we want to do a small bit of configuration to make it suitable for our custom dataset:

model = models.Sequential()
model.add(conv_base)
model.add(layers.GlobalMaxPooling2D(name="gap"))

# avoid overfitting (the Dropout layer's rate argument is named `rate`)
model.add(layers.Dropout(0.2, name="dropout_out"))

# Set NUMBER_OF_CLASSES to the number of your final predictions.
model.add(layers.Dense(NUMBER_OF_CLASSES, activation="softmax", name="fc_out"))
conv_base.trainable = False

The model is prepared. Now we need to prepare the dataset. We are going to be using flow_from_directory along with Keras's ImageDataGenerator. This method expects training and validation directories. In each directory, there should be a separate sub-directory for each class with the corresponding images under it.

To start with, let's create a directory for each class under training & validation.

TRAIN_IMAGES_PATH = './vinbigdata/images/train' #12000
VAL_IMAGES_PATH = './vinbigdata/images/val' #3000
External_DIR = '../input/vinbigdata-512-image-dataset/vinbigdata/train' # 15000
os.makedirs(TRAIN_IMAGES_PATH, exist_ok = True)
os.makedirs(VAL_IMAGES_PATH, exist_ok = True)
classes = ['Aortic enlargement', 'No Finding']

# Create directories for each class.
for class_id in range(len(classes)):
    os.makedirs(os.path.join(TRAIN_IMAGES_PATH, str(class_id)), exist_ok = True)
    os.makedirs(os.path.join(VAL_IMAGES_PATH, str(class_id)), exist_ok = True)

The next thing to do is to copy each image to its correct directory:

Input_dir = '/kaggle/input/vinbigdata-512-image-dataset/vinbigdata/train'

def preproccess_data(df, images_path):
    for _, row in tqdm(df.iterrows(), total=len(df)):
        class_id = row['class_id']
        shutil.copy(os.path.join(Input_dir, f"{row['image_id']}.png"),
                    os.path.join(images_path, str(class_id)))

df = pd.read_csv('../input/vinbigdata-512-image-dataset/vinbigdata/train.csv')
df.head()

#Split the dataset into 80% training and 20% validation
df_train, df_valid =
model_selection.train_test_split(df, test_size=0.2, random_state=42, shuffle=True)

#run the function on each of them
preproccess_data(df_train, TRAIN_IMAGES_PATH)
preproccess_data(df_valid, VAL_IMAGES_PATH)

Now, you can check the dataset directory; all of the images should be copied to their correct sub-directories. The next step is to pass the dataset to the generator and then start training:

# I love the ImageDataGenerator class, it allows us to specify whatever augmentations we want so easily...
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest",
)

# Note that the validation data should not be augmented!
# and a very important step is to normalise the images through rescaling
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    # This is the target directory
    TRAIN_IMAGES_PATH,
    # All images will be resized to target height and width.
    target_size=(height, width),
    batch_size=batch_size,
    # Since we use categorical_crossentropy loss, we need categorical labels
    class_mode="categorical",
)

validation_generator = test_datagen.flow_from_directory(
    VAL_IMAGES_PATH,
    target_size=(height, width),
    batch_size=batch_size,
    class_mode="categorical",
)

model.compile(
    loss="categorical_crossentropy",
    optimizer=optimizers.RMSprop(lr=2e-5),
    metrics=["acc"],
)

If all goes according to plan, you should get a similar message to this:

Found X images belonging to x classes.
Found Y images belonging to x classes.
history = model.fit_generator(
    train_generator,
    steps_per_epoch=NUMBER_OF_TRAINING_IMAGES // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=NUMBER_OF_VALIDATION_IMAGES // batch_size,
    verbose=1,
    use_multiprocessing=True,
    workers=4,
)

y_pred = model.predict(X_test)
score = model.evaluate(X_test, y_test, verbose=1)

# Import the modules from `sklearn.metrics`
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# model.predict returns class probabilities, so collapse them to class labels first
y_pred = y_pred.argmax(axis=1)

# Confusion matrix
confusion_matrix(y_test, y_pred)
precision_score(y_test, y_pred, average="macro")
recall_score(y_test, y_pred, average="macro")
f1_score(y_test, y_pred, average="macro")

The next part is to further evaluate the model. There are a lot of resources for doing this, and since you will probably be interested in exploring tons of different metrics, this should be quite easy using Keras. A few possible improvements on the tutorial here are to use cross-validation by creating several folds and then ensembling the final predictions. Furthermore, you can use more advanced data augmentation techniques such as Mixup, Cutup, and Jitter.
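Of the augmentation techniques mentioned above, Mixup is easy to describe concretely: it blends pairs of training examples and their one-hot labels with the same coefficient. A toy sketch of the core arithmetic (the flat lists standing in for images are made up, and lambda is fixed here for clarity; in practice it is drawn from a Beta distribution):

```python
# Mixup: x = lam * x1 + (1 - lam) * x2, and the one-hot labels are
# blended with the same coefficient lam.
def mixup(x1, y1, x2, y2, lam):
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Two tiny "images" from opposite classes, blended 75/25
x, y = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1], lam=0.75)
print(x, y)  # [0.75, 0.25] [0.75, 0.25]
```

Because the labels are softened along with the inputs, the network is discouraged from making over-confident predictions, which often helps generalization on small medical-imaging datasets like this one.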
How to retrieve auto-incremented value generated by PreparedStatement using JDBC?
While creating a table, in certain scenarios we need the values of a column, such as ID, to be generated/incremented automatically. Various databases support this feature in different ways.

In a MySQL database, you can declare a column auto-increment using the following syntax.

CREATE TABLE table_name(
   ID INT PRIMARY KEY AUTO_INCREMENT,
   column_name1 data_type1,
   column_name2 data_type2,
   column_name3 data_type3,
   column_name4 data_type4,
   ............ ...........
);

While inserting records into such a table, there is no need to insert values under the auto-incremented column. These will be generated automatically.

For example, suppose a table has a column named ID of data type INT which is auto-incremented, and the table already contains 6 records. When you insert the next record using the INSERT statement, the ID value of the new record will be 7, and the ID value of the record after it will be 8. (You can specify the initial value and interval for these auto-incremented columns.)

If you insert records into a table which contains an auto-incremented column using a PreparedStatement object, you can retrieve the values of that particular column generated by the current PreparedStatement object using the getGeneratedKeys() method.

Let us create a table with name Sales in a MySQL database, with one of the columns auto-incremented, using a CREATE statement as shown below −

CREATE TABLE Sales(
   ID INT PRIMARY KEY AUTO_INCREMENT,
   ProductName VARCHAR (20),
   CustomerName VARCHAR (20),
   DispatchDate date,
   DeliveryTime time,
   Price INT,
   Location VARCHAR(20)
);

Now, to insert records into this table using a PreparedStatement object and to retrieve the auto-incremented values generated by it −

Register the Driver class of the desired database using the registerDriver() method of the DriverManager class or the forName() method of the class named Class.
DriverManager.registerDriver(new com.mysql.jdbc.Driver());

Create a Connection object by passing the URL of the database, and the user name and password of a user in the database (in string format), as parameters to the getConnection() method of the DriverManager class.

Connection mysqlCon = DriverManager.getConnection(mysqlUrl, "root", "password");

Create a PreparedStatement object using the prepareStatement() method of the Connection interface. To this method, pass the INSERT statement with bind variables in string format as one parameter and Statement.RETURN_GENERATED_KEYS as the other parameter −

//Query to Insert values to the sales table
String insertQuery = "INSERT INTO Sales (ProductName, CustomerName, DispatchDate, DeliveryTime, Price, Location) VALUES (?, ?, ?, ?, ?, ?)";
//Creating a PreparedStatement object
PreparedStatement pstmt = con.prepareStatement(insertQuery, Statement.RETURN_GENERATED_KEYS);

Set the values of each record to the bind variables using the setXXX() methods and add it to the batch.

pstmt.setString(1, "Key-Board");
pstmt.setString(2, "Raja");
pstmt.setDate(3, new Date(1567315800000L));
pstmt.setTime(4, new Time(1567315800000L));
pstmt.setInt(5, 7000);
pstmt.setString(6, "Hyderabad");
pstmt.addBatch();
pstmt.setString(1, "Earphones");
pstmt.setString(2, "Roja");
pstmt.setDate(3, new Date(1556688600000L));
pstmt.setTime(4, new Time(1556688600000L));
pstmt.setInt(5, 2000);
pstmt.setString(6, "Vishakhapatnam");
pstmt.addBatch();
...........
...........

After adding the values of all the records to the batch, execute the batch using the executeBatch() method.

pstmt.executeBatch();

Finally, get the auto-incremented keys generated by this PreparedStatement object using the getGeneratedKeys() method.
ResultSet rs = pstmt.getGeneratedKeys();
while (rs.next()) {
   System.out.println(rs.getString(1));
}

The following JDBC program inserts 5 records into the Sales table (created above) using a PreparedStatement, then retrieves and displays the auto-incremented values generated by it.

import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Time;
public class RetrievingData_AutoIncrement_Pstmt {
   public static void main(String args[]) throws SQLException {
      //Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      //Getting the connection
      String mysqlUrl = "jdbc:mysql://localhost/sample_database";
      Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
      System.out.println("Connection established......");
      //Query to Insert values to the sales table
      String insertQuery = "INSERT INTO Sales (ProductName, CustomerName, DispatchDate, DeliveryTime, Price, Location) VALUES (?, ?, ?, ?, ?, ?)";
      //Creating a PreparedStatement object
      PreparedStatement pstmt = con.prepareStatement(insertQuery, Statement.RETURN_GENERATED_KEYS);
      pstmt.setString(1, "Key-Board");
      pstmt.setString(2, "Raja");
      pstmt.setDate(3, new Date(1567315800000L));
      pstmt.setTime(4, new Time(1567315800000L));
      pstmt.setInt(5, 7000);
      pstmt.setString(6, "Hyderabad");
      pstmt.addBatch();
      pstmt.setString(1, "Earphones");
      pstmt.setString(2, "Roja");
      pstmt.setDate(3, new Date(1556688600000L));
      pstmt.setTime(4, new Time(1556688600000L));
      pstmt.setInt(5, 2000);
      pstmt.setString(6, "Vishakhapatnam");
      pstmt.addBatch();
      pstmt.setString(1, "Mouse");
      pstmt.setString(2, "Puja");
      pstmt.setDate(3, new Date(1551418199000L));
      pstmt.setTime(4, new Time(1551418199000L));
      pstmt.setInt(5, 3000);
      pstmt.setString(6, "Vijayawada");
      pstmt.addBatch();
      pstmt.setString(1, "Mobile");
      pstmt.setString(2, "Vanaja");
      pstmt.setDate(3, new Date(1551415252000L));
pstmt.setTime(4, new Time(1551415252000L)); pstmt.setInt(5, 9000); pstmt.setString(6, "Chennai"); pstmt.addBatch(); pstmt.setString(1, "Headset"); pstmt.setString(2, "Jalaja"); pstmt.setDate(3, new Date(1554529139000L)); pstmt.setTime(4, new Time(1554529139000L)); pstmt.setInt(5, 6000); pstmt.setString(6, "Goa"); pstmt.addBatch(); System.out.println("Records inserted......"); //Executing the batch pstmt.executeBatch(); //Auto-incremented values generated by the current PreparedStatement object ResultSet res = pstmt.getGeneratedKeys(); System.out.println("Auto-incremented values of the column ID generated by the current PreparedStatement object: "); while (res.next()) { System.out.println(res.getString(1)); } } } Connection established...... Records inserted...... Auto-incremented values of the column ID generated by the current PreparedStatement object: 1 2 3 4 5
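As an aside, the Date and Time constructors in the program above take milliseconds since the Unix epoch. The sketch below (in Python, purely illustrative and not part of the JDBC program) decodes the first literal used above to show what calendar date it encodes:

```python
from datetime import datetime, timezone

# java.sql.Date and java.sql.Time take milliseconds since the Unix epoch;
# Python's fromtimestamp takes seconds, so divide by 1000.
millis = 1567315800000  # the value used for the "Key-Board" record above
dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(dt.isoformat())  # -> 2019-09-01T05:30:00+00:00
```

So the sample rows carry real, decodable timestamps rather than arbitrary numbers.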
Polygon: Real-Time Stocks, Forex, and Crypto Data. | by Bernard Brenyah | Towards Data Science
You may have heard the quote "Data Is The New Oil" one too many times. Regardless of one's opinion on this assertion, it is widely recognised that data is crucial to the innovative processes of every industry. The financial industry in particular is one where having access to quality, up-to-date data is crucial, given its highly dynamic nature. Recently, I had been working on a trading bot which requires real-time data (stocks, forex, and cryptocurrency data), and I settled on a really good platform called Polygon, which delivered all that I needed in a nice, affordable and well-documented service.

The project I was working on required real-time cryptocurrency data over some period of time. After googling and asking around for various platforms, I ended up with Polygon. I found the service competitively priced and stable, with excellent documentation. As a result of my positive experience with the platform, I decided to share my experience and code via this post for anyone who might be in the same boat.

Polygon offers streaming clients via WebSockets or RESTful APIs for accessing real-time data in most of the popular programming languages, from Python to Go. They even have a dedicated Python package for more straightforward access. Since the project required Python, I opted for their Python package.

Enough talking, let us get right to business and start playing with some real-time cryptocurrency data after installing this Python package:

pip install polygon-api-client

NB: This post will focus on cryptocurrencies for brevity's sake, but the code will include comments for readers who are interested in real-time stocks/equities and forex data. An API key is needed for access, so head over to https://polygon.io/signup to get a key.

The goal of the project was to gather some real-time data on some selected cryptocurrencies over a specified period. In the relatively simple script below, I demonstrate how one can interact with live streaming data using the Python client of Polygon. The script is designed with the other types of assets supported by the platform in mind, so it should be easy to adjust it for various use cases. The extensive and user-friendly documentation provided by the platform will be more than what one would need for every custom scenario.

In this short post, I have introduced Polygon as a viable and excellent platform for projects or products that require various classes of real-time financial data using their Python client. I recommend readers who are interested in such services to visit their documentation for both WebSockets and RESTful APIs and explore their vast offerings for cryptocurrency/stock/forex data. Until the next post, happy coding!

As always, I look forward to feedback (good or bad)! The code for this post, like with all other posts, is available on this GitHub repository.
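The streaming script referenced above is embedded in the original post and not reproduced here. As a rough, hedged sketch of the general pattern, the handler below parses one JSON frame of the kind such WebSocket streams deliver and filters it down to trade events. The "XT" event type, the pair/price field names, and the overall message shape are illustrative assumptions, not Polygon's documented schema; check their WebSocket docs for the real fields.

```python
import json

# Illustrative sample frame: a JSON array of event objects, with field
# names assumed for demonstration purposes only.
sample_message = (
    '[{"ev": "XT", "pair": "BTC-USD", "p": 8000.5, "s": 0.02, "t": 1567315800000},'
    ' {"ev": "status", "message": "authenticated"}]'
)

def handle_message(raw: str) -> list:
    """Parse one raw WebSocket frame and keep only trade events."""
    events = json.loads(raw)
    return [e for e in events if e.get("ev") == "XT"]

trades = handle_message(sample_message)
print(trades[0]["pair"], trades[0]["p"])  # -> BTC-USD 8000.5
```

The same parse-then-filter structure works for stock and forex streams; only the event codes and field names change.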
How to handle JSON in Python? In this article, we will learn and... | by Tanu N Prabhu | Towards Data Science
JSON is the abbreviation for JavaScript Object Notation: one of the easiest data formats for human beings like you and me to read and write. JSON is often described as a text format that is completely language independent. It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition (December 1999).

Quick heads up, the entire code of this tutorial can be found in my GitHub repository given below:

github.com

JSON is built on two structures:

A collection of name/value pairs. In many programming languages, we often refer to this as an object, record, struct, dictionary, hash table, etc.

An ordered list of values. In many programming languages, we refer to this as an array, vector, list, or sequence.

An object is an unordered set of name/value pairs. An object begins with { (a left brace) and ends with } (a right brace). Each name is followed by : (a colon), and the name/value pairs are separated by , (a comma).

For example:

example = '{"name": "Jake", "programming languages": ["Python", "Kotlin"]}'

For extra details, please refer to the official JSON docs given below:

www.json.org

Let us stick to a single dataset throughout this article for consistency. I recommend bookmarking, forking, or starring this repository because it has some mind-blowing JSON format datasets.

github.com

From the above list of JSON datasets, I chose the smallest one.
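As a quick aside before looking at the dataset, the example object shown earlier can be parsed (with the json module introduced below) to confirm the two structures in action: the object becomes a set of name/value pairs, and the JSON array becomes an ordered list.

```python
import json

example = '{"name": "Jake", "programming languages": ["Python", "Kotlin"]}'
parsed = json.loads(example)

# The object maps to a dict of name/value pairs...
print(parsed["name"])                   # -> Jake
# ...and the JSON array maps to an ordered Python list.
print(parsed["programming languages"])  # -> ['Python', 'Kotlin']
```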
The details of the dataset, up to an extent, are as given below:

PS: To see how I got this, please refer to the bonus example below.

Name: Neighbourhoods in UK (Leicestershire)
Link: https://data.police.uk/api/leicestershire/neighbourhoods
RangeIndex: 6 entries, 0 to 5
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   id      6 non-null      object
 1   name    6 non-null      object
dtypes: object(2)
memory usage: 224.0+ KB

This is how JSON format data looks. Well, I know it looks hideous, but datasets are not always supposed to look neat; it is your job to make them look neat. I have only taken a few instances of the dataset because of its size.

[{"id":"NC04","name":"City Centre"},{"id":"NC66","name":"Cultural Quarter"},{"id":"NC67","name":"Riverside"},{"id":"NC68","name":"Clarendon Park"},{"id":"NE09","name":"Belgrave South"},{"id":"NE10","name":"Belgrave North"}]

Now let's store the above JSON dataset in a variable named json_data. You will understand the purpose of this later in the article.

json_data = '[{"id":"NC04","name":"City Centre"},{"id":"NC66","name":"Cultural Quarter"},{"id":"NC67","name":"Riverside"},{"id":"NC68","name":"Clarendon Park"},{"id":"NE09","name":"Belgrave South"},{"id":"NE10","name":"Belgrave North"}]'

We actually don't need to install any external modules. json is one of those built-in libraries, so you don't need to type:

pip install json

All you have to do is say:

import json

Note: Remember, json is a Python module, so to use it you need to import it first.

Parsing a JSON string using loads(). The loads() method takes a JSON string as input and returns a dictionary or a list.

Remember, if there is a single entry in the dataset, then loads() will return a dictionary; otherwise, for multiple entries as in the above dataset, loads() will return a list.
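A minimal sketch of this dict-versus-list behavior (the single-entry string here is made up for illustration):

```python
import json

# A single JSON object parses to a dict...
single = json.loads('{"id": "NC04", "name": "City Centre"}')
print(type(single))    # -> <class 'dict'>

# ...while a JSON array of objects (multiple entries) parses to a list.
multiple = json.loads('[{"id": "NC04"}, {"id": "NC66"}]')
print(type(multiple))  # -> <class 'list'>
```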
To be specific, a dataset that has only a single row is called a single entry, whereas a dataset with two or more rows is called multiple entries. The example given below makes things clearer.

The syntax of loads() is pretty trivial. We pass a single parameter most of the time, which is the JSON data itself.

variable = json.loads(json_data)

Let's use the above stored json_data as our base dataset and then pass it to loads().

# Importing json module
import json

# Passing the json data to the loads method
neighbourhood = json.loads(json_data)

# Printing the result - list
print(neighbourhood)
print("-----------------------------------------------------")

# Checking the type of the variable
print(type(neighbourhood))
print("-----------------------------------------------------")

# Accessing the first element in the list - using its index
print(neighbourhood[0])

Like I mentioned earlier, loads() might return a list or a dictionary depending on the JSON format data. In this case, it is a list, so the above output would be:

[{'id': 'NC04', 'name': 'City Centre'}, {'id': 'NC66', 'name': 'Cultural Quarter'}, {'id': 'NC67', 'name': 'Riverside'}, {'id': 'NC68', 'name': 'Clarendon Park'}, {'id': 'NE09', 'name': 'Belgrave South'}, {'id': 'NE10', 'name': 'Belgrave North'}]
-----------------------------------------------------
<class 'list'>
-----------------------------------------------------
{'id': 'NC04', 'name': 'City Centre'}

Parsing a Python object using dumps(). The dumps() method takes a Python object as input and returns a JSON string.

The syntax of dumps() is pretty trivial. We pass a single parameter most of the time, which is the Python object (here, a list) itself.

variable = json.dumps(neighbourhood)

Let's use the above stored neighbourhood as our base dataset, which is a Python list, and then pass it to dumps().
# Importing json module
import json

# Passing the python data to the dumps method
neighbourhood_list = json.dumps(neighbourhood)

# Checking the type of the variable
print(type(neighbourhood_list))
print("-----------------------------------------------------")

# Printing the result - string
print(neighbourhood_list)

The dumps() method takes the Python object as input and then returns a JSON string as output, as shown below:

<class 'str'>
-----------------------------------------------------
[{"id": "NC04", "name": "City Centre"}, {"id": "NC66", "name": "Cultural Quarter"}, {"id": "NC67", "name": "Riverside"}, {"id": "NC68", "name": "Clarendon Park"}, {"id": "NE09", "name": "Belgrave South"}, {"id": "NE10", "name": "Belgrave North"}]

Now, for this conversion, let us use the most commonly used Python objects:

Dictionary
List
String
Integer

Note: Remember, for the conversion of Python objects into a JSON string you need to use dumps().

First, let us create four variables of different types which can hold the above Python data:

dictt = {"name": "John Depp", "age": 48}
listt = ["John", "Depp", 48]
strr = "John Depp"
intt = 48

Now let's feed these values to dumps() and then see the conversion into a JSON string.

# Python Object 1 - Dictionary
print(json.dumps(dictt))
# Checking the type
print(type(json.dumps(dictt)))
print("-----------------------------------------------------")

# Python Object 2 - List
print(json.dumps(listt))
# Checking the type
print(type(json.dumps(listt)))
print("-----------------------------------------------------")

# Python Object 3 - String
print(json.dumps(strr))
# Checking the type
print(type(json.dumps(strr)))
print("-----------------------------------------------------")

# Python Object 4 - Integer
print(json.dumps(intt))
# Checking the type
print(type(json.dumps(intt)))
print("-----------------------------------------------------")

Now, no matter what, the type is always going to be str.
{"name": "John Depp", "age": 48}
<class 'str'>
-----------------------------------------------------
["John", "Depp", 48]
<class 'str'>
-----------------------------------------------------
"John Depp"
<class 'str'>
-----------------------------------------------------
48
<class 'str'>
-----------------------------------------------------

Similarly, try to do the same with float, tuple, and True or False objects, and let me know the results in the comment section below.

In this bonus example, you will get to know the conversion of JSON data into a data frame. For this, consider the scenario below:

Think that you are doing a data science project, for whomever it is. The details of the project are not so important in this case. The first step is to gather the data. Most of the time, on Kaggle, data.gov, etc., the data would already be in a CSV format, but in this case, your boss or professor gives you data which is in a JSON format (this happened to me). Now how do you tackle this situation?

The solution is pretty simple; it's not rocket science. You need to use the loads() and pandas.DataFrame() methods. For this example, I will use the Neighbourhoods in UK (Leicestershire) dataset from above.

# Import both the json and pandas libraries
import json
import pandas as pd

# Use the loads() method to load the JSON data
df = json.loads(json_data)

# Pass the parsed JSON data into a pandas DataFrame
df = pd.DataFrame(df)
print(df)
print("-----------------------------------------------------")

# Checking the type of the data
print(type(df))

After executing the above code, the result is a neat-looking DataFrame. The type of the data now will be as shown below:

<class 'pandas.core.frame.DataFrame'>

This is how you can convert not-so-neat-looking JSON data into a beautiful-looking DataFrame.
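Picking up the earlier suggestion to try dumps() with float, tuple, and boolean objects, here is one possible sketch. Note that a tuple has no JSON equivalent and becomes a JSON array, and Python's True/False map to JSON's lowercase true/false:

```python
import json

# Floats serialize as JSON numbers.
print(json.dumps(3.14))          # -> 3.14
# Tuples have no JSON counterpart, so they become JSON arrays.
print(json.dumps(("John", 48)))  # -> ["John", 48]
# Python booleans map to JSON's lowercase true/false.
print(json.dumps(True))          # -> true
print(json.dumps(False))         # -> false
# None maps to JSON null.
print(json.dumps(None))          # -> null
```

As with the dictionary, list, string, and integer examples, the return type is always str.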
To get more information about the dataset, use df.info(); this is how I got the "Details of the dataset" shown earlier.

Congratulations, guys, you have successfully completed reading/implementing this article, "How to handle JSON in Python?". This tutorial covers all the concepts, along with specific examples, to help you understand the handling of JSON data in Python. If you have any comments or suggestions, please use the comment section wisely.

I hope you have learned something new today. Stay tuned for more updates. Until then, see you next time. Have a good day and stay safe!
For this understand the scenario below:" }, { "code": null, "e": 8497, "s": 8100, "text": "Now think that you are doing a data science project for whoever it is. The details of the project are not so important in this case. So the first step is to gather the data, now most of the time on Kaggle, data.gov or etc, the data would already in a CSV format, but in this case, your boss or professor gives you data which is in a JSON format (happened to me). Now how to tackle this situation?" }, { "code": null, "e": 8711, "s": 8497, "text": "The solution is pretty simple, it’s not rocket science, rather you need to use loads() and the pandas.DataFrame() methods. For this example, I will use the Neighbourhoods in UK (Leicestershire) dataset from above." }, { "code": null, "e": 9046, "s": 8711, "text": "# Import both JSON and Pandas libraryimport jsonimport pandas as pd# Use the `loads()` method to load the JSON datadf = json.loads(json_data)# Pass the generated JSON data into a pandas dataframedf = pd.DataFrame(df)print(df)print(\"-----------------------------------------------------\")# Checking the type of the data print(type(df))" }, { "code": null, "e": 9122, "s": 9046, "text": "After executing the above code the result would be a neat looking dataframe" }, { "code": null, "e": 9167, "s": 9122, "text": "The type of data now will be as shown below:" }, { "code": null, "e": 9205, "s": 9167, "text": "<class 'pandas.core.frame.DataFrame'>" }, { "code": null, "e": 9401, "s": 9205, "text": "This is how you can convert a not so looking neat JSON data into a beautiful-looking DataFrame. To get more information about the dataset use df.info(), this is how I got “Details of the dataset”" } ]
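The string-based loads()/dumps() pair discussed above also has file-based counterparts in the standard library: json.load() reads from an open file object and json.dump() writes to one. A minimal sketch (the file name and the two-row subset of the neighbourhood data below are made up for illustration):

```python
import json
import os
import tempfile

# Hypothetical subset of the neighbourhood dataset used above
data = [{"id": "NC04", "name": "City Centre"},
        {"id": "NC66", "name": "Cultural Quarter"}]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "neighbourhoods.json")

    # dump() serializes a Python object straight into an open file
    with open(path, "w") as f:
        json.dump(data, f)

    # load() parses JSON from an open file back into a Python object
    with open(path) as f:
        restored = json.load(f)

print(restored == data)  # -> True
```

In short: loads()/dumps() work on strings, load()/dump() work on file objects; everything else behaves the same.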
How to read a CSV file with Pandas without a header? - GeeksforGeeks
03 Mar, 2021

Prerequisites: Pandas

A header of a CSV file is the row of values naming each of the columns; it acts as a row of labels for the data. This article discusses how we can read a CSV file without a header using pandas. To do this, the header attribute should be set to None while reading the file.

Syntax: read_csv("file name", header=None)

Import the module
Read the file
Set header to None
Display the data

Let us first see how the data is displayed with headers, to make the difference crystal clear.

Data files used: file.csv, sample.csv

Example 1:

Python3

# importing python package
import pandas as pd

# read the contents of csv file
dataset = pd.read_csv("file.csv")

# display the modified result
display(dataset)

Output:

Now let us see the implementation without headers.

Example 2:

Python3

# importing python package
import pandas as pd

# read the contents of csv file
dataset = pd.read_csv("file.csv", header=None)

# display the modified result
display(dataset)

Output:

Example 3:

Python3

# importing python package
import pandas as pd

# read the content of csv file
dataset = pd.read_csv("sample1.csv", header=None)

# display modified csv file
display(dataset)

Output:
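When reading without a header, pandas labels the columns with integers 0, 1, and so on. If you want meaningful labels instead, read_csv also accepts a names= list alongside header=None. A small self-contained sketch (the inline CSV data and the column names "id"/"name" are made up for illustration):

```python
import io

import pandas as pd

# Two header-less rows of sample data, read from an in-memory buffer
csv_text = "NC04,City Centre\nNC66,Cultural Quarter\n"

# With header=None, columns are labelled with integers 0, 1, ...
df = pd.read_csv(io.StringIO(csv_text), header=None)
print(list(df.columns))  # -> [0, 1]

# names= supplies column labels for a header-less file
df2 = pd.read_csv(io.StringIO(csv_text), header=None, names=["id", "name"])
print(list(df2.columns))  # -> ['id', 'name']
```

This way the first data row is still treated as data, not as a header, while the resulting DataFrame gets readable column names.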
[ { "code": null, "e": 24633, "s": 24605, "text": "\n03 Mar, 2021" }, { "code": null, "e": 24655, "s": 24633, "text": "Prerequisites: Pandas" }, { "code": null, "e": 24926, "s": 24655, "text": "A header of the CSV file is an array of values assigned to each of the columns. It acts as a row header for the data. This article discusses how we can read a csv file without header using pandas. To do this header attribute should be set to None while reading the file." }, { "code": null, "e": 24934, "s": 24926, "text": "Syntax:" }, { "code": null, "e": 24969, "s": 24934, "text": "read_csv(“file name”, header=None)" }, { "code": null, "e": 24983, "s": 24969, "text": "Import module" }, { "code": null, "e": 24993, "s": 24983, "text": "Read file" }, { "code": null, "e": 25012, "s": 24993, "text": "Set header to None" }, { "code": null, "e": 25026, "s": 25012, "text": "Display data " }, { "code": null, "e": 25113, "s": 25026, "text": "Let us first see how data is displayed with headers, to make difference crystal clear." }, { "code": null, "e": 25129, "s": 25113, "text": "Data file used:" }, { "code": null, "e": 25138, "s": 25129, "text": "file.csv" }, { "code": null, "e": 25149, "s": 25138, "text": "sample.csv" }, { "code": null, "e": 25160, "s": 25149, "text": "Example1: " }, { "code": null, "e": 25168, "s": 25160, "text": "Python3" }, { "code": "# importing python packageimport pandas as pd # read the contents of csv filedataset = pd.read_csv(\"file.csv\") # display the modified resultdisplay(dataset)", "e": 25327, "s": 25168, "text": null }, { "code": null, "e": 25335, "s": 25327, "text": "Output:" }, { "code": null, "e": 25386, "s": 25335, "text": "Now let us see the implementation without headers." 
}, { "code": null, "e": 25397, "s": 25386, "text": "Example 2:" }, { "code": null, "e": 25405, "s": 25397, "text": "Python3" }, { "code": "# importing python packageimport pandas as pd # read the contents of csv filedataset = pd.read_csv(\"file.csv\", header=None) # display the modified resultdisplay(dataset)", "e": 25577, "s": 25405, "text": null }, { "code": null, "e": 25585, "s": 25577, "text": "Output:" }, { "code": null, "e": 25596, "s": 25585, "text": "Example 3:" }, { "code": null, "e": 25604, "s": 25596, "text": "Python3" }, { "code": "# importing python packageimport pandas as pd # read the content of csv filedataset = pd.read_csv(\"sample1.csv\", header=None) # display modified csv filedisplay(dataset)", "e": 25776, "s": 25604, "text": null }, { "code": null, "e": 25784, "s": 25776, "text": "Output:" }, { "code": null, "e": 25791, "s": 25784, "text": "Picked" }, { "code": null, "e": 25808, "s": 25791, "text": "Python pandas-io" }, { "code": null, "e": 25822, "s": 25808, "text": "Python-pandas" }, { "code": null, "e": 25846, "s": 25822, "text": "Technical Scripter 2020" }, { "code": null, "e": 25853, "s": 25846, "text": "Python" }, { "code": null, "e": 25872, "s": 25853, "text": "Technical Scripter" }, { "code": null, "e": 25970, "s": 25872, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25979, "s": 25970, "text": "Comments" }, { "code": null, "e": 25992, "s": 25979, "text": "Old Comments" }, { "code": null, "e": 26010, "s": 25992, "text": "Python Dictionary" }, { "code": null, "e": 26045, "s": 26010, "text": "Read a file line by line in Python" }, { "code": null, "e": 26067, "s": 26045, "text": "Enumerate() in Python" }, { "code": null, "e": 26099, "s": 26067, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 26129, "s": 26099, "text": "Iterate over a list in Python" }, { "code": null, "e": 26171, "s": 26129, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 26197, "s": 26171, "text": "Python String | replace()" }, { "code": null, "e": 26240, "s": 26197, "text": "Python program to convert a list to string" }, { "code": null, "e": 26284, "s": 26240, "text": "Reading and Writing to text files in Python" } ]
Merge contents of two files into a third file using C
This is a C program to merge the contents of two files into a third file.

For example:

java.txt has the initial content "Java is a programing language."
kotlin.txt has the initial content "kotlin is a programing language."
ttpoint.txt is initially blank

After the files are merged, ttpoint.txt will have the final content "Java is a programing language.kotlin is a programing language."

Begin
   Declare an a[] array of the character datatype.
   Initialize a[] = "Java is a programing language.".
   Declare i of the integer datatype.
   Initialize i = 0.
   Declare f1 as a pointer to the FILE type.
   Open the file "java.txt" to perform a write operation using the f1 pointer.
   while (a[i] != '\0')
      call fputc(a[i], f1) to put the data of a[] into the f1 file object
      i++
   Close the f1 file pointer.
   Declare a b[] array of the character datatype.
   Initialize b[] = "kotlin is a programing language.".
   Declare j of the integer datatype.
   Initialize j = 0.
   Declare f2 as a pointer to the FILE type.
   Open the file "kotlin.txt" to perform a write operation using the f2 pointer.
   while (b[j] != '\0')
      call fputc(b[j], f2) to put the data of b[] into the f2 file object
      j++
   Close the f2 file pointer.
   Open the file "java.txt" to perform a read operation using the f1 pointer.
   Open the file "kotlin.txt" to perform a read operation using the f2 pointer.
   Declare f3 as a pointer to the FILE type.
   Open the file "ttpoint.txt" to perform a write operation using the f3 pointer.
   Declare a variable "c" of the int datatype (fgetc() returns an int so that EOF can be detected).
   if (f1 == NULL || f2 == NULL || f3 == NULL) then
      print "Could not open files"
      Exit.
   while ((c = fgetc(f1)) != EOF) do
      Put the data of the "c" variable into the f3 file pointer using the fputc() function.
   while ((c = fgetc(f2)) != EOF) do
      Put the data of the "c" variable into the f3 file pointer using the fputc() function.
   Call the fclose(f3) function to close the file pointer.
   Open the file "ttpoint.txt" using the f3 file pointer.
   Print "Merged java.txt and kotlin.txt into ttpoint.txt"
   while ((c = fgetc(f3)) != EOF)
      Call the putchar(c) function to print the content of the f3 file pointer.
   Close the f1 file pointer.
   Close the f2 file pointer.
   Close the f3 file pointer.
End.

#include <stdio.h>
#include <stdlib.h>
int main() {
   char a[] = "Java is a programing language.";
   int i = 0;
   FILE *f1; // Open two files to be merged
   f1 = fopen("java.txt", "w");
   while (a[i] != '\0') {
      fputc(a[i], f1);
      i++;
   }
   fclose(f1);
   char b[] = "kotlin is a programing language.";
   int j = 0;
   FILE *f2;
   f2 = fopen("kotlin.txt", "w");
   while (b[j] != '\0') {
      fputc(b[j], f2);
      j++;
   }
   fclose(f2);
   f1 = fopen("java.txt", "r");
   f2 = fopen("kotlin.txt", "r");
   FILE *f3 = fopen("ttpoint.txt", "w"); // Open file to store the result
   int c; // fgetc() returns an int so that EOF can be detected
   if (f1 == NULL || f2 == NULL || f3 == NULL) {
      puts("Could not open files");
      exit(0);
   }
   while ((c = fgetc(f1)) != EOF) // Copy contents of first file to ttpoint.txt
      fputc(c, f3);
   while ((c = fgetc(f2)) != EOF) // Copy contents of second file to ttpoint.txt
      fputc(c, f3);
   fclose(f3);
   f3 = fopen("ttpoint.txt", "r");
   printf("Merged java.txt and kotlin.txt into ttpoint.txt\n");
   while ((c = fgetc(f3)) != EOF)
      putchar(c);
   fclose(f1); // closing the file pointers
   fclose(f2);
   fclose(f3);
   return 0;
}

Output:
Merged java.txt and kotlin.txt into ttpoint.txt
Java is a programing language.kotlin is a programing language.
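For comparison, the same merge (write the two inputs, then append each in order to a third file) can be sketched in a few lines of Python. The filenames below mirror the C program and are illustrative only:

```python
# Recreate the two input files used by the C program above
with open("java.txt", "w") as f:
    f.write("Java is a programing language.")
with open("kotlin.txt", "w") as f:
    f.write("kotlin is a programing language.")

# Copy the contents of both inputs, in order, into the third file
with open("ttpoint.txt", "w") as out:
    for name in ("java.txt", "kotlin.txt"):
        with open(name) as src:
            out.write(src.read())

# Read the merged file back to confirm the result
with open("ttpoint.txt") as f:
    merged = f.read()
print(merged)  # -> Java is a programing language.kotlin is a programing language.
```

Reading whole files at once is fine for small inputs like these; for large files you would copy in chunks, much as the C version copies character by character.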
[ { "code": null, "e": 1138, "s": 1062, "text": "This is a c program to merge the contents of two files into the third file." }, { "code": null, "e": 1151, "s": 1138, "text": "For Example." }, { "code": null, "e": 1339, "s": 1151, "text": "java.txt is having initial content “Java is a programing language.”\nkotlin.txt is having initial content “ kotlin is a programing language.”\nttpoint.txt is having initial content as blank" }, { "code": null, "e": 1461, "s": 1339, "text": "files are merged\nttpoint.txt will have final content as “Java is a programing language. kotlin is a programing language.”" }, { "code": null, "e": 3310, "s": 1461, "text": "Begin\n Declare a[] array to the character datatype.\n Initialize a[] = \"Java is a programing language.\".\n Declare i of the integer datatype.\n Initialize i =0.\n Declare f1 as a pointer to the FILE type.\n Open a file “java.txt” to perform write operation using f1 pointer.\n while (a[i] != '\\0')\n call fputc(a[i], f1) to put all data of a[] into f1 file object i++\n Close the f1 file pointer.\n Declare a[] array to the character datatype.\n Initialize b[] = \" kotlin is a programing language.\".\n Declare i of the integer datatype.\n Initialize j =0.\n Declare f2 as a pointer to the FILE type.\n Open a file “kotlin.txt” to perform write operation using f2 pointer.\n while (b[j] != '\\0')\n call fputc(b[j], f1) to put all data of b[] into f2 file object j++\n Close the f2 file pointer.\n Open a file “java.txt” to perform read operation using f1 pointer.\n Open a file “ttpoint.txt” to perform write operation using f2 pointer.\n Declare f3 as a pointer to the FILE datatype.\n Open a file “ttpoint.txt” to perform write operation using f3 pointer.\n Declare a variable “c” to the character datatype.\n if (f1 == NULL || f2 == NULL || f3 == NULL) then\n print “couldn’t open the file.”\n Exit.\n While ((c = fgetc(f1)) != EOF) do\n Put all data of “c” variable into f3 file pointer using fputc() function.\n while ((c = 
fgetc(f2)) != EOF) do\n Put all data of “c” variable into f3 file pointer using fputc() function.\n Call fclose(f3) function to close the file pointer.\n Open the file ttpoint.txt using f3 file pointer.\n Print “Merged java.txt and python.txt into ttpoint.txt”\n while (!feof(f3))\n Call putchar(fgetc(f3)) function to print the content of f3 file pointer.\n Close the f1 file pointer.\n Close the f2 file pointer.\n Close the f3 file pointer.\nEnd." }, { "code": null, "e": 4465, "s": 3310, "text": "#include <stdio.h>\n#include <stdlib.h>\nint main() {\n char a[] = \"Java is a programing language.\";\n int i=0;\n FILE *f1; // Open two files to be merged\n f1 = fopen(\"java.txt\", \"w\");\n while (a[i] != '\\0') {\n fputc(a[i], f1);\n i++;\n }\n fclose(f1);\n char b[] = \"kotlin is a programing language.\";\n int j =0;\n FILE *f2;\n f2 = fopen(\"kotlin.txt\", \"w\");\n while (b[j] != '\\0') {\n fputc(b[j], f2);\n j++;\n }\n fclose(f2);\n f1 = fopen(\"java.txt\", \"r\");\n f2 = fopen(\"kotlin.txt\", \"r\");\n FILE *f3 = fopen(\"ttpoint.txt\", \"w\"); // Open file to store the result\n char c;\n if (f1 == NULL || f2 == NULL || f3 == NULL) {\n puts(\"Could not open files\");\n exit(0);\n }\n while ((c = fgetc(f1)) != EOF) // Copy contents of first file to ttpoint.txt\n fputc(c, f3);\n while ((c = fgetc(f2)) != EOF) // Copy contents of second file to ttpoint.txt\n fputc(c, f3);\n fclose(f3);\n f3 = fopen(\"ttpoint.txt\", \"r\");\n printf(\"Merged java.txt and kotlin.txt into ttpoint.txt\\n\");\n while (!feof(f3))\n putchar(fgetc(f3));\n fclose(f1); //closing the file pointer.\n fclose(f2);\n fclose(f3);\n return 0;\n}" }, { "code": null, "e": 4576, "s": 4465, "text": "Merged java.txt and kotlin.txt into ttpoint.txt\nJava is a programing language.kotlin is a programing language." } ]
Global keyword in Python - GeeksforGeeks
31 May, 2020

The global keyword allows a user to modify a variable outside of the current scope. It is used to create global variables from a non-global scope, i.e. inside a function. The global keyword is needed inside a function only when we want to do assignments or change a variable; it is not needed for printing and accessing.

Rules of the global keyword:

If a variable is assigned a value anywhere within the function's body, it is assumed to be local unless explicitly declared as global.
Variables that are only referenced inside a function are implicitly global.
We use the global keyword to modify a global variable inside a function.
There is no need to use the global keyword outside a function.

Use of the global keyword: to access a global variable inside a function there is no need to use the global keyword.

Example 1:

# Python program showing no need to
# use global keyword for accessing
# a global value

# global variable
a = 15
b = 10

# function to perform addition
def add():
    c = a + b
    print(c)

# calling a function
add()

Output:
25

If we need to assign a new value to a global variable then we can do that by declaring the variable as global.

Code 2: Without global keyword

# Python program showing how to modify
# a global value without using global
# keyword

a = 15

# function to change a global value
def change():
    # increment value of a by 5
    a = a + 5
    print(a)
change()

Output:
UnboundLocalError: local variable 'a' referenced before assignment

This output is an error because we are trying to assign a value to a variable in an outer scope. This can be done with the use of the global keyword.

Code 3: With global keyword

# Python program to modify a global
# value inside a function

x = 15
def change():
    # using a global keyword
    global x

    # increment value of x by 5
    x = x + 5
    print("Value of x inside a function :", x)
change()
print("Value of x outside a function :", x)

Output:
Value of x inside a function : 20
Value of x outside a function : 20

In the above example, we first declare x as global inside the function change(). The value of x is then incremented by 5, i.e. x = x + 5, and hence we get the output 20. As we can see, by changing the value inside the function change(), the change is also reflected in the global variable outside it.

Global variables across Python modules: the best way to share global variables across different modules within the same program is to create a special module (often named config or cfg). Import the config module in all modules of your application; the module then becomes available as a global name. There is only one instance of each module, so any changes made to the module object get reflected everywhere.

For example, sharing global variables across modules.

Code 1: Create a config.py file to store global variables:

x = 0
y = 0
z = "none"

Code 2: Create a modify.py file to modify global variables:

import config
config.x = 1
config.y = 2
config.z = "geeksforgeeks"

Here we have modified the values of x, y, and z. These variables were defined in the module config.py, hence we have to import the config module, and we can use config.variable_name to access these variables.

Code 3: Create a main.py file to print the modified global variables:

import config
import modify
print(config.x)
print(config.y)
print(config.z)

Output:
1
2
geeksforgeeks

Global in nested functions: in order to modify a global variable inside a nested function, we have to declare it with the global keyword inside the nested function.

# Python program showing a use of
# global in nested function

def add():
    x = 15

    def change():
        global x
        x = 20
    print("Before making change: ", x)
    print("Making change")
    change()
    print("After making change: ", x)

add()
print("value of x", x)

Output:
Before making change: 15
Making change
After making change: 15
value of x 20

In the above example, before and after calling change(), the variable x inside add() takes the value of the local variable, i.e. x = 15. Outside the add() function, x takes the value assigned in change(), i.e. x = 20, because the global keyword was used inside change() (a local scope) to create the global variable x.
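One point from the rules above that the examples do not show directly: a global statement can create a module-level name that did not exist before the function ran. A minimal sketch (the function and variable names are made up for illustration):

```python
def make_counter():
    # 'counter' does not exist at module level yet;
    # the assignment below creates it as a global name
    global counter
    counter = 0

make_counter()
print(counter)  # -> 0
print("counter" in globals())  # -> True
```

This is why the intro says global "is used to create global variables from a non-global scope": the name appears in the module's namespace only once the assignment inside the function executes.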
This can be done with the use of global variable.Code 3 : With global keyword # Python program to modify a global# value inside a function x = 15def change(): # using a global keyword global x # increment value of a by 5 x = x + 5 print("Value of x inside a function :", x)change()print("Value of x outside a function :", x) Output: Value of x inside a function : 20 Value of x outside a function : 20 In the above example, we first define x as global keyword inside the function change(). The value of x is then incremented by 5, ie. x=x+5 and hence we get the output as 20.As we can see by changing the value inside the function change(), the change is also reflected in the value outside the global variable. Global variables across python modules :The best way to share global variables across different modules within the same program is to create a special module (often named config or cfg). Import the config module in all modules of your application; the module then becomes available as a global name. There is only one instance of each module and so any changes made to the module object get reflected everywhere. For Example, sharing global variables across modulesCode 1: Create a config.py file to store global variables: x = 0y = 0z ="none" Code 2: Create a modify.py file to modify global variables: import configconfig.x = 1config.y = 2config.z ="geeksforgeeks" Here we have modified the value of x, y, and z. 
These variables were defined in the module config.py, hence we have to import config module and we can use config.variable_name to access these variables.Code 3: Create a main.py file to modify global variables: import configimport modifyprint(config.x)print(config.y)print(config.z) Output: 1 2 geeksforgeeks Global in Nested functionsIn order to use global inside a nested functions, we have to declare a variable with global keyword inside a nested function # Python program showing a use of# global in nested function def add(): x = 15 def change(): global x x = 20 print("Before making changing: ", x) print("Making change") change() print("After making change: ", x) add()print("value of x",x) Output: Before making changing: 15 Making change After making change: 15 value of x 20 In the above example Before and after making change(), the variable x takes the value of local variable i.e x = 15. Outside of the add() function, the variable x will take value defined in the change() function i.e x = 20. because we have used global keyword in x to create global variable inside the change() function (local scope). AkshanshShrivastava cnzhixiang Picked python-basics Python-datatype Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Read JSON file using Python Adding new column to existing DataFrame in Pandas Python map() function How to get column names in Pandas dataframe Read a file line by line in Python Enumerate() in Python How to Install PIP on Windows ? Iterate over a list in Python Different ways to create Pandas Dataframe Python String | replace()
[ { "code": null, "e": 41161, "s": 41133, "text": "\n31 May, 2020" }, { "code": null, "e": 41508, "s": 41161, "text": "Global keyword is a keyword that allows a user to modify a variable outside of the current scope. It is used to create global variables from a non-global scope i.e inside a function. Global keyword is used inside a function only when we want to do assignments or when we want to change a variable. Global is not needed for printing and accessing." }, { "code": null, "e": 41533, "s": 41508, "text": "Rules of global keyword:" }, { "code": null, "e": 41669, "s": 41533, "text": "If a variable is assigned a value anywhere within the function’s body, it’s assumed to be a local unless explicitly declared as global." }, { "code": null, "e": 41745, "s": 41669, "text": "Variables that are only referenced inside a function are implicitly global." }, { "code": null, "e": 41811, "s": 41745, "text": "We Use global keyword to use a global variable inside a function." }, { "code": null, "e": 41870, "s": 41811, "text": "There is no need to use global keyword outside a function." 
}, { "code": null, "e": 41988, "s": 41870, "text": "Use of global keyword:To access a global variable inside a function there is no need to use global keyword.Example 1:" }, { "code": "# Python program showing no need to# use global keyword for accessing# a global value # global variablea = 15b = 10 # function to perform additiondef add(): c = a + b print(c) # calling a functionadd()", "e": 42199, "s": 41988, "text": null }, { "code": null, "e": 42207, "s": 42199, "text": "Output:" }, { "code": null, "e": 42213, "s": 42207, "text": " \n25\n" }, { "code": null, "e": 42354, "s": 42213, "text": "If we need to assign a new value to a global variable then we can do that by declaring the variable as global.Code 2: Without global keyword" }, { "code": "# Python program showing to modify# a global value without using global# keyword a = 15 # function to change a global valuedef change(): # increment value of a by 5 a = a + 5 print(a) change()", "e": 42562, "s": 42354, "text": null }, { "code": null, "e": 42570, "s": 42562, "text": "Output:" }, { "code": null, "e": 42638, "s": 42570, "text": "UnboundLocalError: local variable 'a' referenced before assignment\n" }, { "code": null, "e": 42813, "s": 42638, "text": "This output is an error because we are trying to assign a value to a variable in an outer scope. 
This can be done with the use of global variable.Code 3 : With global keyword" }, { "code": "# Python program to modify a global# value inside a function x = 15def change(): # using a global keyword global x # increment value of a by 5 x = x + 5 print(\"Value of x inside a function :\", x)change()print(\"Value of x outside a function :\", x)", "e": 43081, "s": 42813, "text": null }, { "code": null, "e": 43089, "s": 43081, "text": "Output:" }, { "code": null, "e": 43159, "s": 43089, "text": "Value of x inside a function : 20\nValue of x outside a function : 20\n" }, { "code": null, "e": 43993, "s": 43159, "text": "In the above example, we first define x as global keyword inside the function change(). The value of x is then incremented by 5, ie. x=x+5 and hence we get the output as 20.As we can see by changing the value inside the function change(), the change is also reflected in the value outside the global variable. Global variables across python modules :The best way to share global variables across different modules within the same program is to create a special module (often named config or cfg). Import the config module in all modules of your application; the module then becomes available as a global name. There is only one instance of each module and so any changes made to the module object get reflected everywhere. For Example, sharing global variables across modulesCode 1: Create a config.py file to store global variables:" }, { "code": "x = 0y = 0z =\"none\"", "e": 44013, "s": 43993, "text": null }, { "code": null, "e": 44073, "s": 44013, "text": "Code 2: Create a modify.py file to modify global variables:" }, { "code": "import configconfig.x = 1config.y = 2config.z =\"geeksforgeeks\"", "e": 44136, "s": 44073, "text": null }, { "code": null, "e": 44396, "s": 44136, "text": "Here we have modified the value of x, y, and z. 
These variables were defined in the module config.py, hence we have to import config module and we can use config.variable_name to access these variables.Code 3: Create a main.py file to modify global variables:" }, { "code": "import configimport modifyprint(config.x)print(config.y)print(config.z)", "e": 44468, "s": 44396, "text": null }, { "code": null, "e": 44476, "s": 44468, "text": "Output:" }, { "code": null, "e": 44495, "s": 44476, "text": "1\n2\ngeeksforgeeks\n" }, { "code": null, "e": 44646, "s": 44495, "text": "Global in Nested functionsIn order to use global inside a nested functions, we have to declare a variable with global keyword inside a nested function" }, { "code": "# Python program showing a use of# global in nested function def add(): x = 15 def change(): global x x = 20 print(\"Before making changing: \", x) print(\"Making change\") change() print(\"After making change: \", x) add()print(\"value of x\",x)", "e": 44934, "s": 44646, "text": null }, { "code": null, "e": 44942, "s": 44934, "text": "Output:" }, { "code": null, "e": 45024, "s": 44942, "text": "Before making changing: 15\nMaking change\nAfter making change: 15\nvalue of x 20\n" }, { "code": null, "e": 45358, "s": 45024, "text": "In the above example Before and after making change(), the variable x takes the value of local variable i.e x = 15. Outside of the add() function, the variable x will take value defined in the change() function i.e x = 20. because we have used global keyword in x to create global variable inside the change() function (local scope)." 
}, { "code": null, "e": 45378, "s": 45358, "text": "AkshanshShrivastava" }, { "code": null, "e": 45389, "s": 45378, "text": "cnzhixiang" }, { "code": null, "e": 45396, "s": 45389, "text": "Picked" }, { "code": null, "e": 45410, "s": 45396, "text": "python-basics" }, { "code": null, "e": 45426, "s": 45410, "text": "Python-datatype" }, { "code": null, "e": 45433, "s": 45426, "text": "Python" }, { "code": null, "e": 45531, "s": 45433, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 45540, "s": 45531, "text": "Comments" }, { "code": null, "e": 45553, "s": 45540, "text": "Old Comments" }, { "code": null, "e": 45581, "s": 45553, "text": "Read JSON file using Python" }, { "code": null, "e": 45631, "s": 45581, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 45653, "s": 45631, "text": "Python map() function" }, { "code": null, "e": 45697, "s": 45653, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 45732, "s": 45697, "text": "Read a file line by line in Python" }, { "code": null, "e": 45754, "s": 45732, "text": "Enumerate() in Python" }, { "code": null, "e": 45786, "s": 45754, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 45816, "s": 45786, "text": "Iterate over a list in Python" }, { "code": null, "e": 45858, "s": 45816, "text": "Different ways to create Pandas Dataframe" } ]