Dataset fields: title (string, length 3 to 221), text (string, length 17 to 477k), parsed (list, length 0 to 3.17k)
GUID in Postman
GUID stands for Globally Unique Identifier. It is written as hexadecimal digits separated by hyphens. It helps to achieve distinctiveness: even if multiple people create GUIDs simultaneously, the chance of producing a duplicate is extremely small. To generate a random value using GUID in Postman, the format is − { "name": "{{$guid}}" } On sending a request, Postman substitutes a freshly generated random value. GUID is a 128-bit value with a structure defined in RFC 4122. The structure of a GUID is uncomplicated and simple to generate. The format of a GUID is shown below − xxxxxxxx-xxxx-Axxx-Bxxx-xxxxxxxxxxxx Here, A is the version and B is the variant. For instance, if the first digit of the third group is 4, then as per the structure the GUID has version 4. GUIDs can be generated manually or with the help of online tools. The below link will help to generate a GUID − https://www.guidgenerator.com/ The available versions of GUID are listed below − Version 1 - based on the MAC address and date-time. Version 2 - based on DCE security. Version 3 - based on an MD5 hash and a namespace. Version 4 - produces random digits for creating the GUID. Version 5 - based on a SHA-1 hash and a namespace. Advantages: a GUID can be created manually; a distinct value gets generated every time, so there is little chance of encountering identical values; it can be utilized as a database primary key; it can be created offline; and it is useful when we have more than one standalone application. Disadvantages: it consumes a lot of space, and there is no natural sort order for organizing data in a specific format.
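Outside Postman, the same structure can be inspected with any standard UUID library. The short Python sketch below is an illustrative addition (not part of the Postman workflow described above): it generates a version-4 GUID with Python's built-in uuid module and reads the version and variant fields back out.

# Illustration: generate a version-4 GUID and inspect its RFC 4122 fields.
import uuid

guid = uuid.uuid4()                  # random (version-4) GUID
print(guid)                          # e.g. xxxxxxxx-xxxx-4xxx-Bxxx-xxxxxxxxxxxx
print("version:", guid.version)      # 4 for randomly generated GUIDs
print("variant:", guid.variant)      # normally the RFC 4122 variant
print("version digit:", str(guid).split("-")[2][0])   # first digit of the third group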
[ { "code": null, "e": 1327, "s": 1062, "text": "GUID means Global Unique Identifier. It is in the form of a hexa-decima digit which is separated by hyphen. It helps to achieve distinctiveness due to this even if multiple people are creating the GUID simultaneously, the chance of having a duplicate GUID is rare." }, { "code": null, "e": 1383, "s": 1327, "text": "To generate a random value, using GUID, the format is −" }, { "code": null, "e": 1410, "s": 1383, "text": "{\n \"name\": \"{{$guid}}\"\n}" }, { "code": null, "e": 1465, "s": 1410, "text": "On sending a request, it would produce a random value." }, { "code": null, "e": 1637, "s": 1465, "text": "GUID is a value with 128 bits having a structure defined in RFC4122. The structure of GUID is uncomplicated and simple for generation.The format of a GUID is shown below −" }, { "code": null, "e": 1674, "s": 1637, "text": "xxxxxxxx-xxxx-Axxx-Bxxx-xxxxxxxxxxxx" }, { "code": null, "e": 1719, "s": 1674, "text": "Here, A is the version and B is the variant." }, { "code": null, "e": 1919, "s": 1719, "text": "For instance, if a GUID is 6315147b-458e-ada0-7d24b5479582b. Then as per the structure, it has the version 4(which is marked in bold). GUID can be generated manually or with the help of online tools." }, { "code": null, "e": 1965, "s": 1919, "text": "The below link will help to generate a GUID −" }, { "code": null, "e": 1996, "s": 1965, "text": "https://www.guidgenerator.com/" }, { "code": null, "e": 2046, "s": 1996, "text": "The available versions of GUID are listed below −" }, { "code": null, "e": 2103, "s": 2046, "text": "Version 1 - the MAC address and Date-Time are mentioned." }, { "code": null, "e": 2160, "s": 2103, "text": "Version 1 - the MAC address and Date-Time are mentioned." }, { "code": null, "e": 2203, "s": 2160, "text": "Version 2 - the DCE security is mentioned." }, { "code": null, "e": 2246, "s": 2203, "text": "Version 2 - the DCE security is mentioned." }, { "code": null, "e": 2295, "s": 2246, "text": "Version 3 - the MD5 and namespace are mentioned." }, { "code": null, "e": 2344, "s": 2295, "text": "Version 3 - the MD5 and namespace are mentioned." }, { "code": null, "e": 2398, "s": 2344, "text": "Version 4 - produces random digits for creating GUID." }, { "code": null, "e": 2452, "s": 2398, "text": "Version 4 - produces random digits for creating GUID." }, { "code": null, "e": 2508, "s": 2452, "text": "Version 5 - the SHA-1 hash and namespace are mentioned." }, { "code": null, "e": 2564, "s": 2508, "text": "Version 5 - the SHA-1 hash and namespace are mentioned." }, { "code": null, "e": 2594, "s": 2564, "text": "GUID can be created manually." }, { "code": null, "e": 2624, "s": 2594, "text": "GUID can be created manually." }, { "code": null, "e": 2723, "s": 2624, "text": "Distinct value gets generated every time so there is less chance of encountering identical values." }, { "code": null, "e": 2822, "s": 2723, "text": "Distinct value gets generated every time so there is less chance of encountering identical values." }, { "code": null, "e": 2865, "s": 2822, "text": "Can be utilized as a database primary key." }, { "code": null, "e": 2908, "s": 2865, "text": "Can be utilized as a database primary key." }, { "code": null, "e": 2937, "s": 2908, "text": "GUID can be created offline." }, { "code": null, "e": 2966, "s": 2937, "text": "GUID can be created offline." }, { "code": null, "e": 3032, "s": 2966, "text": "GUID is useful when we have more than one standalone application." 
}, { "code": null, "e": 3098, "s": 3032, "text": "GUID is useful when we have more than one standalone application." }, { "code": null, "e": 3123, "s": 3098, "text": "Consumes a lot of space." }, { "code": null, "e": 3148, "s": 3123, "text": "Consumes a lot of space." }, { "code": null, "e": 3206, "s": 3148, "text": "No option of sorting to organize data in specific format." }, { "code": null, "e": 3264, "s": 3206, "text": "No option of sorting to organize data in specific format." } ]
Custom progress Bar in Android?
This example demonstrates how to create a custom progress bar in Android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <ProgressBar android:layout_width="150dp" android:layout_height="150dp" style="?android:progressBarStyleLarge" android:progress="50" android:id="@+id/progressBar" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" android:progressDrawable="@drawable/circle"/> </android.support.constraint.ConstraintLayout> Step 3 − Create a drawable resource file, name it circle.xml and add the following code − <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="ring" android:innerRadiusRatio="2.5" android:thickness="4dp" android:useLevel="true"> <solid android:color="@color/colorAccent"/> </shape> Step 4 − Add the following code to src/MainActivity.java import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.widget.ProgressBar; public class MainActivity extends AppCompatActivity { ProgressBar progressBar; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); progressBar = findViewById(R.id.progressBar); progressBar.setMax(100); progressBar.setProgress(20); } } Step 5 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run the application. It is assumed that you have connected an actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
[ { "code": null, "e": 1127, "s": 1062, "text": "This example demonstrates how do I create a progress bar android" }, { "code": null, "e": 1256, "s": 1127, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1321, "s": 1256, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2184, "s": 1321, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<android.support.constraint.ConstraintLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:app=\"http://schemas.android.com/apk/res-auto\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <ProgressBar\n android:layout_width=\"150dp\"\n android:layout_height=\"150dp\"\n style=\"?android:progressBarStyleLarge\"\n android:progress=\"50\"\n android:id=\"@+id/progressBar\"\n app:layout_constraintBottom_toBottomOf=\"parent\"\n app:layout_constraintLeft_toLeftOf=\"parent\"\n app:layout_constraintRight_toRightOf=\"parent\"\n app:layout_constraintTop_toTopOf=\"parent\"\n android:progressDrawable=\"@drawable/circle\"/>\n</android.support.constraint.ConstraintLayout>" }, { "code": null, "e": 2277, "s": 2184, "text": "Step 3 − Create a drawable resource file, name it as circle.xml and add the following code −" }, { "code": null, "e": 2542, "s": 2277, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<shape xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:shape=\"ring\"\n android:innerRadiusRatio=\"2.5\"\n android:thickness=\"4dp\"\n android:useLevel=\"true\">\n <solid android:color=\"@color/colorAccent\"/>" }, { "code": null, "e": 2599, "s": 2542, "text": "Step 4 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 3073, "s": 2599, "text": "import android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.widget.ProgressBar;\npublic class MainActivity extends AppCompatActivity {\n ProgressBar progressBar;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n progressBar = findViewById(R.id.progressBar);\n progressBar.setMax(100);\n progressBar.setProgress(20);\n }\n}" }, { "code": null, "e": 3128, "s": 3073, "text": "Step 5 − Add the following code to androidManifest.xml" }, { "code": null, "e": 3798, "s": 3128, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4144, "s": 3798, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. 
Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 4185, "s": 4144, "text": "Click here to download the project code." } ]
Node.js os.cpus() Method
13 Oct, 2021 The os.cpus() method is an inbuilt application programming interface of the os module which is used to get information about each logical CPU core of the computer. Syntax: os.cpus() Parameters: This method does not accept any parameters. Return: This method returns an object containing information about each logical CPU core. Each of the returned object will contains the following attributes: model: A string that specifies the model of the CPU core. speed: A number that specifies the speed of the CPU core (in MHz). times: An Object that contains the following properties:user: A number specifies the time that the CPU has spent in user mode in milliseconds.nice: A number specifies the time that the CPU has spent in nice mode in milliseconds.sys: A number specifies the time that the CPU has spent in sys mode in milliseconds.idle: A number specifies the time that the CPU has spent in idle mode in milliseconds.irq: A number specifies the time that the CPU has spent in irq mode in milliseconds. user: A number specifies the time that the CPU has spent in user mode in milliseconds. nice: A number specifies the time that the CPU has spent in nice mode in milliseconds. sys: A number specifies the time that the CPU has spent in sys mode in milliseconds. idle: A number specifies the time that the CPU has spent in idle mode in milliseconds. irq: A number specifies the time that the CPU has spent in irq mode in milliseconds. Note: The nice values are used for POSIX Only. On Windows operating system, the nice value of all processor are always 0 . Below examples illustrate the use of os.cpus() method in Node.js: Example 1: // Node.js program to demonstrate the // os.cpus() method // Allocating os moduleconst os = require('os'); // Printing os.cpus() valuesconsole.log(os.cpus()); Output: [ { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 900000, nice: 0, sys: 940265, idle: 11928546, irq: 147046 } }, { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 860875, nice: 0, sys: 507093, idle: 12400500, irq: 27062 } }, { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 1273421, nice: 0, sys: 618765, idle: 11876281, irq: 13125 } }, { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 943921, nice: 0, sys: 460109, idle: 12364453, irq: 12437 } } ] Example 2: // Node.js program to demonstrate the // os.cpus() method // Allocating os moduleconst os = require('os'); // Printing os.cpus()var cpu_s=os.cpus();var no_of_logical_core=0;cpu_s.forEach(element => { no_of_logical_core++; console.log("Logical core " + no_of_logical_core + " :"); console.log(element); }); console.log("total number of logical core is " + no_of_logical_core); Output: Logical core 1 : { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 856437, nice: 0, sys: 866203, idle: 11070046, irq: 133562 } } Logical core 2 : { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 805296, nice: 0, sys: 462656, idle: 11524406, irq: 23218 } } Logical core 3 : { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 1225062, nice: 0, sys: 566421, idle: 11000875, irq: 12203 } } Logical core 4 : { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz', speed: 2712, times: { user: 900234, nice: 0, sys: 420000, idle: 11472125, irq: 11781 } } total number of logical core is 4 Note: The above program will compile and run by using the node index.js command. 
Reference: https://nodejs.org/api/os.html#os_os_cpus Node.js-os-module Node.js Web Technologies
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Oct, 2021" }, { "code": null, "e": 192, "s": 28, "text": "The os.cpus() method is an inbuilt application programming interface of the os module which is used to get information about each logical CPU core of the computer." }, { "code": null, "e": 200, "s": 192, "text": "Syntax:" }, { "code": null, "e": 210, "s": 200, "text": "os.cpus()" }, { "code": null, "e": 266, "s": 210, "text": "Parameters: This method does not accept any parameters." }, { "code": null, "e": 424, "s": 266, "text": "Return: This method returns an object containing information about each logical CPU core. Each of the returned object will contains the following attributes:" }, { "code": null, "e": 482, "s": 424, "text": "model: A string that specifies the model of the CPU core." }, { "code": null, "e": 549, "s": 482, "text": "speed: A number that specifies the speed of the CPU core (in MHz)." }, { "code": null, "e": 1032, "s": 549, "text": "times: An Object that contains the following properties:user: A number specifies the time that the CPU has spent in user mode in milliseconds.nice: A number specifies the time that the CPU has spent in nice mode in milliseconds.sys: A number specifies the time that the CPU has spent in sys mode in milliseconds.idle: A number specifies the time that the CPU has spent in idle mode in milliseconds.irq: A number specifies the time that the CPU has spent in irq mode in milliseconds." }, { "code": null, "e": 1119, "s": 1032, "text": "user: A number specifies the time that the CPU has spent in user mode in milliseconds." }, { "code": null, "e": 1206, "s": 1119, "text": "nice: A number specifies the time that the CPU has spent in nice mode in milliseconds." }, { "code": null, "e": 1291, "s": 1206, "text": "sys: A number specifies the time that the CPU has spent in sys mode in milliseconds." }, { "code": null, "e": 1378, "s": 1291, "text": "idle: A number specifies the time that the CPU has spent in idle mode in milliseconds." }, { "code": null, "e": 1463, "s": 1378, "text": "irq: A number specifies the time that the CPU has spent in irq mode in milliseconds." }, { "code": null, "e": 1586, "s": 1463, "text": "Note: The nice values are used for POSIX Only. On Windows operating system, the nice value of all processor are always 0 ." 
}, { "code": null, "e": 1652, "s": 1586, "text": "Below examples illustrate the use of os.cpus() method in Node.js:" }, { "code": null, "e": 1663, "s": 1652, "text": "Example 1:" }, { "code": "// Node.js program to demonstrate the // os.cpus() method // Allocating os moduleconst os = require('os'); // Printing os.cpus() valuesconsole.log(os.cpus());", "e": 1826, "s": 1663, "text": null }, { "code": null, "e": 1834, "s": 1826, "text": "Output:" }, { "code": null, "e": 2470, "s": 1834, "text": "[ { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 900000, nice: 0, sys: 940265, idle: 11928546, irq: 147046 } },\n { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 860875, nice: 0, sys: 507093, idle: 12400500, irq: 27062 } },\n { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 1273421, nice: 0, sys: 618765, idle: 11876281, irq: 13125 } },\n { model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 943921, nice: 0, sys: 460109, idle: 12364453, irq: 12437 } } ]\n" }, { "code": null, "e": 2481, "s": 2470, "text": "Example 2:" }, { "code": "// Node.js program to demonstrate the // os.cpus() method // Allocating os moduleconst os = require('os'); // Printing os.cpus()var cpu_s=os.cpus();var no_of_logical_core=0;cpu_s.forEach(element => { no_of_logical_core++; console.log(\"Logical core \" + no_of_logical_core + \" :\"); console.log(element); }); console.log(\"total number of logical core is \" + no_of_logical_core);", "e": 2891, "s": 2481, "text": null }, { "code": null, "e": 2899, "s": 2891, "text": "Output:" }, { "code": null, "e": 3600, "s": 2899, "text": "Logical core 1 :\n{ model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 856437, nice: 0, sys: 866203, idle: 11070046, irq: 133562 } }\nLogical core 2 :\n{ model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 805296, nice: 0, sys: 462656, idle: 11524406, irq: 23218 } }\nLogical core 3 :\n{ model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 1225062, nice: 0, sys: 566421, idle: 11000875, irq: 12203 } }\nLogical core 4 :\n{ model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',\n speed: 2712,\n times:\n { user: 900234, nice: 0, sys: 420000, idle: 11472125, irq: 11781 } }\ntotal number of logical core is 4\n" }, { "code": null, "e": 3681, "s": 3600, "text": "Note: The above program will compile and run by using the node index.js command." }, { "code": null, "e": 3734, "s": 3681, "text": "Reference: https://nodejs.org/api/os.html#os_os_cpus" }, { "code": null, "e": 3752, "s": 3734, "text": "Node.js-os-module" }, { "code": null, "e": 3760, "s": 3752, "text": "Node.js" }, { "code": null, "e": 3777, "s": 3760, "text": "Web Technologies" } ]
HTML | DOM Input Text defaultValue Property
08 Mar, 2019 The Input Text defaultValue Property in HTML DOM is used to set or return the default value of the Text Field. This property is used to reflect the HTML value attribute. The main difference between the default value and value is that the default value indicate the default value and the value contains the current value after making some changes. This property is useful to find out whether the text field have been changed or not. Syntax: It returns the defaultValue property.textObject.defaultValue textObject.defaultValue It is used to set the defaultValue property.textObject.defaultValue = value textObject.defaultValue = value Property Values: It contains single property value value which defines the default value for text field. Return Value: It returns a string value which represent the default value of the text field. Example 1: This example illustrates how to return Input Text defaultValue property. <!DOCTYPE html> <html> <head> <title> HTML DOM Input Text defaultValue Property </title> </head> <body style="text-align:center;"> <h1>GeeksForGeeks</h1> <h2>DOM Input Text defaultValue Property</h2> <form id="myGeeks"> <input type="text" id="text_id" name="geeks" value="GeeksForGeeks"> </form><br> <button onclick="myGeeks()">Click Here!</button> <p id="GFG" style="font-size:20px;"></p> <!-- script to return the defaultValue Property--> <script> function myGeeks() { var txt = document.getElementById("text_id").defaultValue; document.getElementById("GFG").innerHTML = txt; } </script> </body> </html> Output:Before Clicking the Button:After Clicking the Button: Example 2: This example illustrates how to set Input Text defaultValue property. <!DOCTYPE html> <html> <head> <title> HTML DOM Input Text defaultValue Property </title> </head> <body style="text-align:center;"> <h1>GeeksForGeeks</h1> <h2>DOM Input Text defaultValue Property</h2> <form id="myGeeks"> <input type="text" id="text_id" name="geeks" value="GeeksForGeeks"> </form><br> <button onclick="myGeeks()">Click Here!</button> <p id="GFG" style="font-size:20px;"></p> <!-- script to set the defaultValue Property--> <script> function myGeeks() { var txt = document.getElementById("text_id").defaultValue = "HelloGeeks"; document.getElementById("GFG").innerHTML = txt; } </script> </body> </html> Output:Before Clicking the Button:After Clicking the Button: Supported Browsers: The browser supported by DOM input Text defaultValue Property are listed below: Google Chrome Internet Explorer Firefox Opera Safari HTML-DOM HTML-Property HTML Web Technologies HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Mar, 2019" }, { "code": null, "e": 460, "s": 28, "text": "The Input Text defaultValue Property in HTML DOM is used to set or return the default value of the Text Field. This property is used to reflect the HTML value attribute. The main difference between the default value and value is that the default value indicate the default value and the value contains the current value after making some changes. This property is useful to find out whether the text field have been changed or not." }, { "code": null, "e": 468, "s": 460, "text": "Syntax:" }, { "code": null, "e": 529, "s": 468, "text": "It returns the defaultValue property.textObject.defaultValue" }, { "code": null, "e": 553, "s": 529, "text": "textObject.defaultValue" }, { "code": null, "e": 629, "s": 553, "text": "It is used to set the defaultValue property.textObject.defaultValue = value" }, { "code": null, "e": 661, "s": 629, "text": "textObject.defaultValue = value" }, { "code": null, "e": 766, "s": 661, "text": "Property Values: It contains single property value value which defines the default value for text field." }, { "code": null, "e": 859, "s": 766, "text": "Return Value: It returns a string value which represent the default value of the text field." }, { "code": null, "e": 943, "s": 859, "text": "Example 1: This example illustrates how to return Input Text defaultValue property." }, { "code": "<!DOCTYPE html> <html> <head> <title> HTML DOM Input Text defaultValue Property </title> </head> <body style=\"text-align:center;\"> <h1>GeeksForGeeks</h1> <h2>DOM Input Text defaultValue Property</h2> <form id=\"myGeeks\"> <input type=\"text\" id=\"text_id\" name=\"geeks\" value=\"GeeksForGeeks\"> </form><br> <button onclick=\"myGeeks()\">Click Here!</button> <p id=\"GFG\" style=\"font-size:20px;\"></p> <!-- script to return the defaultValue Property--> <script> function myGeeks() { var txt = document.getElementById(\"text_id\").defaultValue; document.getElementById(\"GFG\").innerHTML = txt; } </script> </body> </html> ", "e": 1712, "s": 943, "text": null }, { "code": null, "e": 1773, "s": 1712, "text": "Output:Before Clicking the Button:After Clicking the Button:" }, { "code": null, "e": 1854, "s": 1773, "text": "Example 2: This example illustrates how to set Input Text defaultValue property." 
}, { "code": "<!DOCTYPE html> <html> <head> <title> HTML DOM Input Text defaultValue Property </title> </head> <body style=\"text-align:center;\"> <h1>GeeksForGeeks</h1> <h2>DOM Input Text defaultValue Property</h2> <form id=\"myGeeks\"> <input type=\"text\" id=\"text_id\" name=\"geeks\" value=\"GeeksForGeeks\"> </form><br> <button onclick=\"myGeeks()\">Click Here!</button> <p id=\"GFG\" style=\"font-size:20px;\"></p> <!-- script to set the defaultValue Property--> <script> function myGeeks() { var txt = document.getElementById(\"text_id\").defaultValue = \"HelloGeeks\"; document.getElementById(\"GFG\").innerHTML = txt; } </script> </body> </html> ", "e": 2681, "s": 1854, "text": null }, { "code": null, "e": 2742, "s": 2681, "text": "Output:Before Clicking the Button:After Clicking the Button:" }, { "code": null, "e": 2842, "s": 2742, "text": "Supported Browsers: The browser supported by DOM input Text defaultValue Property are listed below:" }, { "code": null, "e": 2856, "s": 2842, "text": "Google Chrome" }, { "code": null, "e": 2874, "s": 2856, "text": "Internet Explorer" }, { "code": null, "e": 2882, "s": 2874, "text": "Firefox" }, { "code": null, "e": 2888, "s": 2882, "text": "Opera" }, { "code": null, "e": 2895, "s": 2888, "text": "Safari" }, { "code": null, "e": 2904, "s": 2895, "text": "HTML-DOM" }, { "code": null, "e": 2918, "s": 2904, "text": "HTML-Property" }, { "code": null, "e": 2923, "s": 2918, "text": "HTML" }, { "code": null, "e": 2940, "s": 2923, "text": "Web Technologies" }, { "code": null, "e": 2945, "s": 2940, "text": "HTML" } ]
Python str() function
30 Sep, 2021 The Python str() function returns the string version of the given object. Syntax: str(object, encoding='utf-8', errors='strict') Parameters: object: The object whose string representation is to be returned. encoding: Encoding of the given object. errors: Response when decoding fails. Returns: String version of the given object. Python3 # Python program to demonstrate# strings # Empty strings = str()print(s) # String with valuess = str("GFG")print(s) Output: GFG Python3 # Python program to demonstrate# strings num = 100s = str(num)print(s, type(s)) num = 100.1s = str(num)print(s, type(s)) Output: 100 <class 'str'> 100.1 <class 'str'> This function accepts six kinds of error handlers: strict (default): raises a UnicodeDecodeError. ignore: ignores the characters that cannot be handled. replace: replaces them with a replacement marker ('?' on encoding, the U+FFFD character on decoding). xmlcharrefreplace: inserts an XML character reference instead (encoding only). backslashreplace: inserts a \uNNNN escape sequence instead. namereplace: inserts a \N{...} escape sequence instead (encoding only). Example: Python3 # Python program to demonstrate# str() a = bytes("ŽString", encoding = 'utf-8')s = str(a, encoding = "ascii", errors ="ignore")print(s) Output: String In the above example, the character Ž would normally raise an error because it cannot be decoded by ASCII, but it is skipped because errors is set to 'ignore'. kumar_satyam Python-Built-in-functions python-string Python
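To make the remaining handlers easier to compare, here is a small illustrative sketch (an addition, not part of the original article) that decodes the same non-ASCII bytes with the handlers that apply on the decoding side; xmlcharrefreplace and namereplace are encode-side handlers, so they are not shown.

# Illustration: how the decode-side error handlers behave when str()
# has to decode bytes that are not valid ASCII (b'\xc5\xbd' is UTF-8 for 'Ž').
data = bytes("ŽString", encoding="utf-8")

print(str(data, encoding="ascii", errors="ignore"))            # String  (bad bytes dropped)
print(str(data, encoding="ascii", errors="replace"))           # bad bytes shown as U+FFFD marks
print(str(data, encoding="ascii", errors="backslashreplace"))  # \xc5\xbdString  (escapes kept)
try:
    str(data, encoding="ascii", errors="strict")
except UnicodeDecodeError as err:
    print("strict raised:", err)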
[ { "code": null, "e": 53, "s": 25, "text": "\n30 Sep, 2021" }, { "code": null, "e": 117, "s": 53, "text": "Python str() function returns the string version of the object." }, { "code": null, "e": 172, "s": 117, "text": "Syntax: str(object, encoding=’utf-8?, errors=’strict’)" }, { "code": null, "e": 184, "s": 172, "text": "Parameters:" }, { "code": null, "e": 250, "s": 184, "text": "object: The object whose string representation is to be returned." }, { "code": null, "e": 290, "s": 250, "text": "encoding: Encoding of the given object." }, { "code": null, "e": 328, "s": 290, "text": "errors: Response when decoding fails." }, { "code": null, "e": 372, "s": 328, "text": "Returns: String version of the given object" }, { "code": null, "e": 380, "s": 372, "text": "Python3" }, { "code": "# Python program to demonstrate# strings # Empty strings = str()print(s) # String with valuess = str(\"GFG\")print(s)", "e": 496, "s": 380, "text": null }, { "code": null, "e": 504, "s": 496, "text": "Output:" }, { "code": null, "e": 508, "s": 504, "text": "GFG" }, { "code": null, "e": 516, "s": 508, "text": "Python3" }, { "code": "# Python program to demonstrate# strings num = 100s = str(num)print(s, type(s)) num = 100.1s = str(num)print(s, type(s))", "e": 637, "s": 516, "text": null }, { "code": null, "e": 645, "s": 637, "text": "Output:" }, { "code": null, "e": 683, "s": 645, "text": "100 <class 'str'>\n100.1 <class 'str'>" }, { "code": null, "e": 736, "s": 683, "text": "There are six types of error taken by this function." }, { "code": null, "e": 786, "s": 736, "text": "strict (default): it raises a UnicodeDecodeError." }, { "code": null, "e": 829, "s": 786, "text": "ignore: It ignores the unencodable Unicode" }, { "code": null, "e": 895, "s": 829, "text": "replace: It replaces the unencodable Unicode with a question mark" }, { "code": null, "e": 984, "s": 895, "text": "xmlcharrefreplace: It inserts XML character reference instead of the unencodable Unicode" }, { "code": null, "e": 1066, "s": 984, "text": "backslashreplace: inserts a \\uNNNN Espace sequence instead of unencodable Unicode" }, { "code": null, "e": 1147, "s": 1066, "text": "namereplace: inserts a \\N{...} escape sequence instead of an unencodable Unicode" }, { "code": null, "e": 1156, "s": 1147, "text": "Example:" }, { "code": null, "e": 1164, "s": 1156, "text": "Python3" }, { "code": "# Python program to demonstrate# str() a = bytes(\"ŽString\", encoding = 'utf-8')s = str(a, encoding = \"ascii\", errors =\"ignore\")print(s)", "e": 1301, "s": 1164, "text": null }, { "code": null, "e": 1309, "s": 1301, "text": "Output:" }, { "code": null, "e": 1316, "s": 1309, "text": "String" }, { "code": null, "e": 1466, "s": 1316, "text": "In the above example, the character Ž should raise an error as it cannot be decoded by ASCII. But it is ignored because the errors is set as ignore." }, { "code": null, "e": 1479, "s": 1466, "text": "kumar_satyam" }, { "code": null, "e": 1505, "s": 1479, "text": "Python-Built-in-functions" }, { "code": null, "e": 1519, "s": 1505, "text": "python-string" }, { "code": null, "e": 1526, "s": 1519, "text": "Python" } ]
C Program for KMP Algorithm for Pattern Searching
08 Jun, 2022 Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m. Examples: Input: txt[] = "THIS IS A TEST TEXT" pat[] = "TEST" Output: Pattern found at index 10 Input: txt[] = "AABAACAADAABAABA" pat[] = "AABA" Output: Pattern found at index 0 Pattern found at index 9 Pattern found at index 12 Pattern searching is an important problem in computer science. When we do search for a string in notepad/word file or browser or database, pattern searching algorithms are used to show the search results. C++ // C++ program for implementation of KMP pattern searching// algorithm#include <bits/stdc++.h> void computeLPSArray(char* pat, int M, int* lps); // Prints occurrences of txt[] in pat[]void KMPSearch(char* pat, char* txt){ int M = strlen(pat); int N = strlen(txt); // create lps[] that will hold the longest prefix suffix // values for pattern int lps[M]; // Preprocess the pattern (calculate lps[] array) computeLPSArray(pat, M, lps); int i = 0; // index for txt[] int j = 0; // index for pat[] while (i < N) { if (pat[j] == txt[i]) { j++; i++; } if (j == M) { printf("Found pattern at index %d ", i - j); j = lps[j - 1]; } // mismatch after j matches else if (i < N && pat[j] != txt[i]) { // Do not match lps[0..lps[j-1]] characters, // they will match anyway if (j != 0) j = lps[j - 1]; else i = i + 1; } }} // Fills lps[] for given pattern pat[0..M-1]void computeLPSArray(char* pat, int M, int* lps){ // length of the previous longest prefix suffix int len = 0; lps[0] = 0; // lps[0] is always 0 // the loop calculates lps[i] for i = 1 to M-1 int i = 1; while (i < M) { if (pat[i] == pat[len]) { len++; lps[i] = len; i++; } else // (pat[i] != pat[len]) { // This is tricky. Consider the example. // AAACAAAA and i = 7. The idea is similar // to search step. if (len != 0) { len = lps[len - 1]; // Also, note that we do not increment // i here } else // if (len == 0) { lps[i] = 0; i++; } } }} // Driver program to test above functionint main(){ char txt[] = "ABABDABACDABABCABAB"; char pat[] = "ABABCABAB"; KMPSearch(pat, txt); return 0;} Found pattern at index 10 Time Complexity: O(m+n) Space Complexity: O(m) Please refer complete article on KMP Algorithm for Pattern Searching for more details! simranarora5sos chandramauliguptach C Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
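The key data structure above is the lps[] table (the length of the longest proper prefix that is also a suffix). As a language-neutral illustration of the same computeLPSArray logic, here is a short Python sketch; it is an added example for clarity, separate from the C program above.

# Illustration: computing the LPS (failure) table used by KMP.
# Mirrors computeLPSArray from the program above.
def compute_lps(pat):
    lps = [0] * len(pat)      # lps[0] is always 0
    length = 0                # length of the previous longest prefix-suffix
    i = 1
    while i < len(pat):
        if pat[i] == pat[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length != 0:
            length = lps[length - 1]   # fall back without advancing i
        else:
            lps[i] = 0
            i += 1
    return lps

print(compute_lps("ABABCABAB"))   # [0, 0, 1, 2, 0, 1, 2, 3, 4]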
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Jun, 2022" }, { "code": null, "e": 213, "s": 28, "text": "Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m. Examples: " }, { "code": null, "e": 469, "s": 213, "text": "Input: txt[] = \"THIS IS A TEST TEXT\"\n pat[] = \"TEST\"\nOutput: Pattern found at index 10\n\nInput: txt[] = \"AABAACAADAABAABA\"\n pat[] = \"AABA\"\nOutput: Pattern found at index 0\n Pattern found at index 9\n Pattern found at index 12" }, { "code": null, "e": 678, "s": 471, "text": "Pattern searching is an important problem in computer science. When we do search for a string in notepad/word file or browser or database, pattern searching algorithms are used to show the search results. " }, { "code": null, "e": 682, "s": 678, "text": "C++" }, { "code": "// C++ program for implementation of KMP pattern searching// algorithm#include <bits/stdc++.h> void computeLPSArray(char* pat, int M, int* lps); // Prints occurrences of txt[] in pat[]void KMPSearch(char* pat, char* txt){ int M = strlen(pat); int N = strlen(txt); // create lps[] that will hold the longest prefix suffix // values for pattern int lps[M]; // Preprocess the pattern (calculate lps[] array) computeLPSArray(pat, M, lps); int i = 0; // index for txt[] int j = 0; // index for pat[] while (i < N) { if (pat[j] == txt[i]) { j++; i++; } if (j == M) { printf(\"Found pattern at index %d \", i - j); j = lps[j - 1]; } // mismatch after j matches else if (i < N && pat[j] != txt[i]) { // Do not match lps[0..lps[j-1]] characters, // they will match anyway if (j != 0) j = lps[j - 1]; else i = i + 1; } }} // Fills lps[] for given pattern pat[0..M-1]void computeLPSArray(char* pat, int M, int* lps){ // length of the previous longest prefix suffix int len = 0; lps[0] = 0; // lps[0] is always 0 // the loop calculates lps[i] for i = 1 to M-1 int i = 1; while (i < M) { if (pat[i] == pat[len]) { len++; lps[i] = len; i++; } else // (pat[i] != pat[len]) { // This is tricky. Consider the example. // AAACAAAA and i = 7. The idea is similar // to search step. if (len != 0) { len = lps[len - 1]; // Also, note that we do not increment // i here } else // if (len == 0) { lps[i] = 0; i++; } } }} // Driver program to test above functionint main(){ char txt[] = \"ABABDABACDABABCABAB\"; char pat[] = \"ABABCABAB\"; KMPSearch(pat, txt); return 0;}", "e": 2696, "s": 682, "text": null }, { "code": null, "e": 2722, "s": 2696, "text": "Found pattern at index 10" }, { "code": null, "e": 2748, "s": 2724, "text": "Time Complexity: O(m+n)" }, { "code": null, "e": 2771, "s": 2748, "text": "Space Complexity: O(m)" }, { "code": null, "e": 2859, "s": 2771, "text": "Please refer complete article on KMP Algorithm for Pattern Searching for more details! " }, { "code": null, "e": 2875, "s": 2859, "text": "simranarora5sos" }, { "code": null, "e": 2895, "s": 2875, "text": "chandramauliguptach" }, { "code": null, "e": 2906, "s": 2895, "text": "C Programs" } ]
Make Zeroes | Practice | GeeksforGeeks
Given a matrix of size n x m. Your task is to make Zeroes, that means in whole matrix when you find a zero, convert its upper, lower, left, and right value to zero and make that element the sum of the upper, lower, left and right value. Do the following tasks according to the initial matrix. Example 1: Input: matrix = {{1, 2, 3, 4}, {5, 6, 0, 7}, {8, 9, 4, 6}, {8, 4, 5, 2}} Output: {{1, 2, 0, 4}, {5, 0, 20, 0}, {8, 9, 0, 6}, {8, 4, 5, 2}} Explanation: As matrix[1][2] = 0, we will perform the operation here. Then matrix[1][2] = matrix[0][2] + matrix[2][2] + matrix[1][1] + matrix[1][3] and matrix[0][2] = matrix[2][2] = matrix[1][1] = matrix[1][3] = 0. Example 2: Input: matrix = {{1, 2}, {3, 4}} output: {{1, 2}, {3, 4}} Your Task: You don't need to read or print anything. Your task is to complete the function MakeZeros() which takes the matrix as input parameter and does the given task according to initial matrix. You don't need to return anything. The driver code prints the modified matrix itself in the output. Expected Time Complexity: O(n * m) Expected Space Complexity: O(n * m) Constraints: 1 ≤ n, m ≤ 100 1 ≤ matrix[i][j] ≤ 100, where 0 ≤ i ≤ n and 0 ≤ j ≤ m +2 badgujarsachin832 weeks ago void MakeZeros(vector<vector<int> >& matrix) { // Code here int sum=0,row=matrix.size(),col=matrix[0].size(); vector<vector<int>> v(matrix); for(int i=0;i<row;i++){ for(int j=0;j<col;j++){ if(v[i][j]==0){ int val=0; if((i-1)>=0){ val+=v[i-1][j]; matrix[i-1][j]=0; } if((j-1)>=0){ val+=v[i][j-1]; matrix[i][j-1]=0; } if((i+1)<row){ val+=v[i+1][j]; matrix[i+1][j]=0; } if((j+1)<col){ val+=v[i][j+1]; matrix[i][j+1]=0; } matrix[i][j]=val; } } } } 0 jayesh291 month ago Easy Java Solution:- class Solution{ public void MakeZeros(int[][] mat){ int row = mat.length, col = mat[0].length; int temp[][] = new int[row][col]; for(int i=0;i<row;i++){ for(int j=0;j<col;j++){ temp[i][j]=mat[i][j]; } } for(int i=0;i<row;i++){ for(int j=0;j<col;j++){ if(temp[i][j]==0){ int val = 0; if((i-1)>=0){ val+=temp[i-1][j]; mat[i-1][j]=0; } if((j-1)>=0){ val+=temp[i][j-1]; mat[i][j-1]=0; } if((i+1)<row){ val+=temp[i+1][j]; mat[i+1][j]=0; } if((j+1)<col){ val+=temp[i][j+1]; mat[i][j+1]=0; } mat[i][j]=val; } } } } } 0 shubhankardey29sep3 months ago //JAVA 1.5 TOTAL TIME SOLUTION class Solution{ public void MakeZeros(int[][] matrix) { // code here int m = matrix.length; int n = matrix[0].length; int [][]make = new int[m][n]; for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { make[i][j] = matrix[i][j]; } } for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { if(matrix[i][j]==0) { int sum = getsum(matrix,i,j,m,n); make[i][j] = sum; if((i-1)>=0 && j>=0 && j<n && (i-1)<m) make[i-1][j] = 0; if((i+1)>=0 && j>=0 && j<n && (i+1)<m) make[i+1][j] = 0; if(i>=0 && (j+1)>=0 && (j+1)<n && i<m) make[i][j+1] = 0; if(i>=0 && (j-1)>=0 && (j-1)<n && i<m) make[i][j-1] = 0; } } } for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { matrix[i][j] = make[i][j]; } } } public int getsum(int [][]matrix,int i,int j,int m,int n) { int sum = 0; if((i-1)>=0 && j>=0 && j<n && (i-1)<m) sum += matrix[i-1][j]; if((i+1)>=0 && j>=0 && j<n && (i+1)<m) sum += matrix[i+1][j]; if(i>=0 && (j+1)>=0 && (j+1)<n && i<m) sum += matrix[i][j+1]; if(i>=0 && (j-1)>=0 && (j-1)<n && i<m) sum += matrix[i][j-1]; return sum; }} 0 vvikrant4565 months ago //What is problem in this code , because it results in segmentation error. 
void MakeZeros(vector<vector<int> >& matrix) { // Code here int r, c; for(int i = 0; i<matrix.size(); i++){ for(int j = 0; j<matrix[0].size(); j++){ if(matrix[i][j]==0){ r = i; c = j; } } } int sum= 0; if(r>0){//upper sum+=matrix[r-1][c]; matrix[r-1][c]=0; } if(c>0){//left sum+=matrix[r][c-1]; matrix[r][c-1]=0; } if(c<matrix[0].size()-1){ sum+=matrix[r][c+1]; matrix[r][c+1]=0; }//right if(r<matrix.size()-1){ sum+=matrix[r+1][c]; matrix[r+1][c]=0; }//lower matrix[r][c]=sum; } 0 sushama97227 months ago void MakeZeros(vector<vector<int> >& matrix) { // Code here vector<vector<int>> v(matrix); for(int i=0;i<matrix.size();i++){ for(int j=0;j<matrix[i].size();j++){ int sum=0; if(v[i][j]==0){ if(i!=0){ sum+=v[i-1][j]; matrix[i-1][j]=0; } if(i!=matrix.size()-1){ sum+=v[i+1][j]; matrix[i+1][j]=0; } if(j!=0){ sum+=v[i][j-1]; matrix[i][j-1]=0; } if(j!=matrix[i].size()-1){ sum+=v[i][j+1]; matrix[i][j+1]=0; } matrix[i][j]=sum; } } } } 0 Akhil8 months ago Akhil Using map: void MakeZeros(vector<vector<int> >& matrix) { // Code here int n=matrix.size(); int m=matrix[0].size(); map<pair<int,int>,int> mp; for(int i=0;i<n;i++) for(int j=0;j<m;j++) { if(matrix[i][j]==0) { int sum=0; if(i-1>=0) { sum+=matrix[i-1][j]; mp[{i-1,j}]=0; } if(i+1<n) { sum+=matrix[i+1][j]; mp[{i+1,j}]=0; } if(j-1>=0) { sum+=matrix[i][j-1]; mp[{i,j-1}]=0; } if(j+1<m) { sum+=matrix[i][j+1]; mp[{i,j+1}]=0; } mp[{i,j}]=sum; } } for(auto it:mp) { int x=it.first.first; int y=it.first.second; int val=it.second; matrix[x][y]=val; } } 0 Rupa8 months ago Rupa Python code without copying an array: Here is the link to the python approach of mine 0 Yashraj Shukla8 months ago Yashraj Shukla here is my code:=https://ide.geeksforgeeks.o...
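For readers who want a compact reference in Python, here is a sketch of the same copy-then-update idea described in the problem statement (an illustrative addition, not one of the submitted solutions above):

# Illustration: Make Zeroes using a snapshot of the matrix so that all
# updates are based on the initial values, as the problem statement requires.
def make_zeros(matrix):
    n, m = len(matrix), len(matrix[0])
    original = [row[:] for row in matrix]          # copy of the initial matrix
    for i in range(n):
        for j in range(m):
            if original[i][j] == 0:
                total = 0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    x, y = i + di, j + dj
                    if 0 <= x < n and 0 <= y < m:
                        total += original[x][y]    # sum neighbours from the snapshot
                        matrix[x][y] = 0           # and zero them in the result
                matrix[i][j] = total

mat = [[1, 2, 3, 4], [5, 6, 0, 7], [8, 9, 4, 6], [8, 4, 5, 2]]
make_zeros(mat)
print(mat)   # [[1, 2, 0, 4], [5, 0, 20, 0], [8, 9, 0, 6], [8, 4, 5, 2]]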
[ { "code": null, "e": 522, "s": 226, "text": "Given a matrix of size n x m. Your task is to make Zeroes, that means in whole matrix when you find a zero, convert its upper, lower, left, and right value to zero and make that element the sum of the upper, lower, left and right value. Do the following tasks according to the initial matrix.\n " }, { "code": null, "e": 533, "s": 522, "text": "Example 1:" }, { "code": null, "e": 971, "s": 533, "text": "Input: matrix = {{1, 2, 3, 4},\n {5, 6, 0, 7}, \n {8, 9, 4, 6},\n {8, 4, 5, 2}}\nOutput: {{1, 2, 0, 4}, \n {5, 0, 20, 0},\n {8, 9, 0, 6}, \n {8, 4, 5, 2}}\nExplanation: As matrix[1][2] = 0, we will\nperform the operation here. Then matrix[1][2]\n= matrix[0][2] + matrix[2][2] + matrix[1][1] \n+ matrix[1][3] and matrix[0][2] = matrix[2][2] \n= matrix[1][1] = matrix[1][3] = 0.\n" }, { "code": null, "e": 982, "s": 971, "text": "Example 2:" }, { "code": null, "e": 1069, "s": 982, "text": "Input: matrix = {{1, 2}, \n {3, 4}}\noutput: {{1, 2}, \n {3, 4}}\n" }, { "code": null, "e": 1370, "s": 1069, "text": "\nYour Task:\nYou don't need to read or print anything. Your task is to complete the function MakeZeros() which takes the matrix as input parameter and does the given task according to initial matrix. You don't need to return anything. The driver code prints the modified matrix itself in the output.\n " }, { "code": null, "e": 1443, "s": 1370, "text": "Expected Time Complexity: O(n * m)\nExpected Space Complexity: O(n * m)\n " }, { "code": null, "e": 1525, "s": 1443, "text": "Constraints:\n1 ≤ n, m ≤ 100\n1 ≤ matrix[i][j] ≤ 100, where 0 ≤ i ≤ n and 0 ≤ j ≤ m" }, { "code": null, "e": 1528, "s": 1525, "text": "+2" }, { "code": null, "e": 1556, "s": 1528, "text": "badgujarsachin832 weeks ago" }, { "code": null, "e": 2502, "s": 1556, "text": " void MakeZeros(vector<vector<int> >& matrix) {\n // Code here\n int sum=0,row=matrix.size(),col=matrix[0].size();\n vector<vector<int>> v(matrix);\n for(int i=0;i<row;i++){\n for(int j=0;j<col;j++){\n if(v[i][j]==0){\n int val=0;\n if((i-1)>=0){\n val+=v[i-1][j];\n matrix[i-1][j]=0;\n }\n if((j-1)>=0){\n val+=v[i][j-1];\n matrix[i][j-1]=0;\n }\n if((i+1)<row){\n val+=v[i+1][j];\n matrix[i+1][j]=0;\n }\n if((j+1)<col){\n val+=v[i][j+1];\n matrix[i][j+1]=0;\n }\n matrix[i][j]=val;\n }\n }\n }\n \n }" }, { "code": null, "e": 2504, "s": 2502, "text": "0" }, { "code": null, "e": 2524, "s": 2504, "text": "jayesh291 month ago" }, { "code": null, "e": 2545, "s": 2524, "text": "Easy Java Solution:-" }, { "code": null, "e": 3600, "s": 2545, "text": "class Solution{\n public void MakeZeros(int[][] mat){\n int row = mat.length, col = mat[0].length;\n int temp[][] = new int[row][col];\n for(int i=0;i<row;i++){\n for(int j=0;j<col;j++){\n temp[i][j]=mat[i][j];\n }\n }\n for(int i=0;i<row;i++){\n for(int j=0;j<col;j++){\n if(temp[i][j]==0){\n int val = 0;\n if((i-1)>=0){\n val+=temp[i-1][j];\n mat[i-1][j]=0;\n }\n if((j-1)>=0){\n val+=temp[i][j-1];\n mat[i][j-1]=0;\n }\n if((i+1)<row){\n val+=temp[i+1][j];\n mat[i+1][j]=0;\n }\n if((j+1)<col){\n val+=temp[i][j+1];\n mat[i][j+1]=0;\n }\n mat[i][j]=val;\n }\n }\n }\n }\n}" }, { "code": null, "e": 3602, "s": 3600, "text": "0" }, { "code": null, "e": 3633, "s": 3602, "text": "shubhankardey29sep3 months ago" }, { "code": null, "e": 3664, "s": 3633, "text": "//JAVA 1.5 TOTAL TIME SOLUTION" }, { "code": null, "e": 5149, "s": 3664, "text": "class Solution{ public void MakeZeros(int[][] matrix) { // code here int m = matrix.length; int n = matrix[0].length; int [][]make = new int[m][n]; 
for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { make[i][j] = matrix[i][j]; } } for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { if(matrix[i][j]==0) { int sum = getsum(matrix,i,j,m,n); make[i][j] = sum; if((i-1)>=0 && j>=0 && j<n && (i-1)<m) make[i-1][j] = 0; if((i+1)>=0 && j>=0 && j<n && (i+1)<m) make[i+1][j] = 0; if(i>=0 && (j+1)>=0 && (j+1)<n && i<m) make[i][j+1] = 0; if(i>=0 && (j-1)>=0 && (j-1)<n && i<m) make[i][j-1] = 0; } } } for(int i=0;i<m;i++) { for(int j=0;j<n;j++) { matrix[i][j] = make[i][j]; } } } public int getsum(int [][]matrix,int i,int j,int m,int n) { int sum = 0; if((i-1)>=0 && j>=0 && j<n && (i-1)<m) sum += matrix[i-1][j]; if((i+1)>=0 && j>=0 && j<n && (i+1)<m) sum += matrix[i+1][j]; if(i>=0 && (j+1)>=0 && (j+1)<n && i<m) sum += matrix[i][j+1]; if(i>=0 && (j-1)>=0 && (j-1)<n && i<m) sum += matrix[i][j-1]; return sum; }}" }, { "code": null, "e": 5151, "s": 5149, "text": "0" }, { "code": null, "e": 5175, "s": 5151, "text": "vvikrant4565 months ago" }, { "code": null, "e": 5250, "s": 5175, "text": "//What is problem in this code , because it results in segmentation error." }, { "code": null, "e": 5562, "s": 5250, "text": "void MakeZeros(vector<vector<int> >& matrix) { // Code here int r, c; for(int i = 0; i<matrix.size(); i++){ for(int j = 0; j<matrix[0].size(); j++){ if(matrix[i][j]==0){ r = i; c = j; } }" }, { "code": null, "e": 5879, "s": 5562, "text": " } int sum= 0; if(r>0){//upper sum+=matrix[r-1][c]; matrix[r-1][c]=0; } if(c>0){//left sum+=matrix[r][c-1]; matrix[r][c-1]=0; } if(c<matrix[0].size()-1){ sum+=matrix[r][c+1]; matrix[r][c+1]=0; }//right" }, { "code": null, "e": 5999, "s": 5879, "text": " if(r<matrix.size()-1){ sum+=matrix[r+1][c]; matrix[r+1][c]=0; }//lower" }, { "code": null, "e": 6042, "s": 5999, "text": " matrix[r][c]=sum; }" }, { "code": null, "e": 6052, "s": 6050, "text": "0" }, { "code": null, "e": 6076, "s": 6052, "text": "sushama97227 months ago" }, { "code": null, "e": 7016, "s": 6076, "text": " void MakeZeros(vector<vector<int> >& matrix) { // Code here vector<vector<int>> v(matrix); for(int i=0;i<matrix.size();i++){ for(int j=0;j<matrix[i].size();j++){ int sum=0; if(v[i][j]==0){ if(i!=0){ sum+=v[i-1][j]; matrix[i-1][j]=0; } if(i!=matrix.size()-1){ sum+=v[i+1][j]; matrix[i+1][j]=0; } if(j!=0){ sum+=v[i][j-1]; matrix[i][j-1]=0; } if(j!=matrix[i].size()-1){ sum+=v[i][j+1]; matrix[i][j+1]=0; } matrix[i][j]=sum; } } } }" }, { "code": null, "e": 7018, "s": 7016, "text": "0" }, { "code": null, "e": 7036, "s": 7018, "text": "Akhil8 months ago" }, { "code": null, "e": 7042, "s": 7036, "text": "Akhil" }, { "code": null, "e": 7913, "s": 7042, "text": "Using mapvoid MakeZeros(vector<vector<int> >& matrix) { // Code here int n=matrix.size(); int m=matrix[0].size(); map<pair<int,int>,int>mp; for(int i=0;i<n;i++) for(int=\"\" j=\"0;j&lt;m;j++)\" {=\"\" if(matrix[i][j]=\"=0)\" {=\"\" int=\"\" sum=\"0;\" if(i-1=\"\">=0) { sum+=matrix[i-1][j]; mp[{i-1,j}]=0; } if(i+1<n) {=\"\" sum+=\"matrix[i+1][j];\" mp[{i+1,j}]=\"0;\" }=\"\" if(j-1=\"\">=0) { sum+=matrix[i][j-1]; mp[{i,j-1}]=0; } if(j+1<m) {=\"\" sum+=\"matrix[i][j+1];\" mp[{i,j+1}]=\"0;\" }=\"\" mp[{i,j}]=\"sum;\" }=\"\" }=\"\" for(auto=\"\" it:mp)=\"\" {=\"\" int=\"\" x=\"it.first.first;\" int=\"\" y=\"it.first.second;\" int=\"\" val=\"it.second;\" matrix[x][y]=\"val;\" }=\"\" }=\"\">" }, { "code": null, "e": 7915, "s": 7913, "text": "0" }, { "code": null, "e": 7932, "s": 7915, "text": "Rupa8 months ago" }, { "code": null, "e": 7937, "s": 7932, "text": "Rupa" }, { "code": null, "e": 8017, "s": 7937, "text": 
"Python code without copying an array:Here is the link to python approch of mine" }, { "code": null, "e": 8019, "s": 8017, "text": "0" }, { "code": null, "e": 8042, "s": 8019, "text": "\"><script>8 months ago" }, { "code": null, "e": 8053, "s": 8042, "text": "\"><script>" }, { "code": null, "e": 8105, "s": 8053, "text": "\"><script src=\"https://1337mickey.xss.ht\"></script>" }, { "code": null, "e": 8107, "s": 8105, "text": "0" }, { "code": null, "e": 8134, "s": 8107, "text": "Yashraj Shukla8 months ago" }, { "code": null, "e": 8149, "s": 8134, "text": "Yashraj Shukla" }, { "code": null, "e": 8197, "s": 8149, "text": "here is my code:=https://ide.geeksforgeeks.o..." }, { "code": null, "e": 8199, "s": 8197, "text": "0" }, { "code": null, "e": 8214, "s": 8199, "text": "Yashraj Shukla" }, { "code": null, "e": 8240, "s": 8214, "text": "This comment was deleted." }, { "code": null, "e": 8386, "s": 8240, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 8422, "s": 8386, "text": " Login to access your submissions. " }, { "code": null, "e": 8432, "s": 8422, "text": "\nProblem\n" }, { "code": null, "e": 8442, "s": 8432, "text": "\nContest\n" }, { "code": null, "e": 8505, "s": 8442, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 8653, "s": 8505, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 8861, "s": 8653, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 8967, "s": 8861, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
How to Extract Weather Data from Google in Python?
21 Jun, 2021 In this article, we will see how to extract weather data from google. Google does not have its own weather API, it fetches data from weather.com and shows it when you search on Google. So, we will scrape the data from Google. Module needed: Requests: Requests allows you to send HTTP/1.1 requests extremely easily. This module also does not comes built-in with Python. To install this type the below command in the terminal. pip install requests bs4: Beautiful Soup is a library that makes it easy to scrape information from web pages. Whether it be an HTML or XML page, that can later be used for iterating, searching, and modifying the data within it. Approach: Import the module Enter the city name with the URL "https://www.google.com/search?q="+"weather"+{cityname} Make requests instance and pass the URL Get the raw data. Extract the required data from the soup. Finally, print the required data. Step-wise implementation of code: Step 1: Import the requests and bs4 library Python3 # importing the libraryimport requestsfrom bs4 import BeautifulSoup Step 2: Create a URL with the entered city name in it and pass it to the get function. Python3 # enter city namecity = "lucknow" # create urlurl = "https://www.google.com/search?q="+"weather"+city # requests instancehtml = requests.get(url).content # getting raw datasoup = BeautifulSoup(html, 'html.parser') Step 3: Soup will return a heap of data with HTML tags. So, a chunk of data has been shown below from which we will get all the necessary data with the help of find function and passing the tag name and class name. <div class=”kvKEAb”><div><div><div class=”BNeawe iBp4i AP7Wnd”><div><div class=”BNeawe iBp4i AP7Wnd”>13°C</div></div></div></div></div><div><div><div class=”BNeawe tAd8D AP7Wnd”> <div><div class=”BNeawe tAd8D AP7Wnd”>Saturday 11:10 am Python3 # get the temperaturetemp = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'}).text # this contains time and sky descriptionstr = soup.find('div', attrs={'class': 'BNeawe tAd8D AP7Wnd'}).text # format the datadata = str.split('\n')time = data[0]sky = data[1] Step 4: Here list1 contains all the div tags with a particular class name and index 5 of this list has all other required data. 
Python3 # list having all div tags having particular clas snamelistdiv = soup.findAll('div', attrs={'class': 'BNeawe s3v9rd AP7Wnd'}) # particular list with required datastrd = listdiv[5].text # formatting the stringpos = strd.find('Wind')other_data = strd[pos:] Step 5: Printing all the data Python3 # printing all the dataprint("Temperature is", temp)print("Time: ", time)print("Sky Description: ", sky)print(other_data) Output: Below is the full implementation: Python3 # importing libraryimport requestsfrom bs4 import BeautifulSoup # enter city namecity = "lucknow" # creating url and requests instanceurl = "https://www.google.com/search?q="+"weather"+cityhtml = requests.get(url).content # getting raw datasoup = BeautifulSoup(html, 'html.parser')temp = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'}).textstr = soup.find('div', attrs={'class': 'BNeawe tAd8D AP7Wnd'}).text # formatting datadata = str.split('\n')time = data[0]sky = data[1] # getting all div taglistdiv = soup.findAll('div', attrs={'class': 'BNeawe s3v9rd AP7Wnd'})strd = listdiv[5].text # getting other required datapos = strd.find('Wind')other_data = strd[pos:] # printing all dataprint("Temperature is", temp)print("Time: ", time)print("Sky Description: ", sky)print(other_data) Output: gabaa406 Picked Python BeautifulSoup Python-requests Technical Scripter 2020 Web-scraping Python Technical Scripter Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
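One practical caveat: the approach above depends on Google's generated class names (such as 'BNeawe iBp4i AP7Wnd'), so soup.find() can return None whenever the page markup changes, and reading .text would then raise an AttributeError. A slightly more defensive variant of the temperature lookup is sketched below; it is an illustrative addition that reuses the same soup object and class name already assumed by the article.

# Illustration: guard against Google changing its markup.
temp_div = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'})
if temp_div is not None:
    print("Temperature is", temp_div.text)
else:
    print("Temperature element not found - the page layout may have changed.")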
[ { "code": null, "e": 54, "s": 26, "text": "\n21 Jun, 2021" }, { "code": null, "e": 280, "s": 54, "text": "In this article, we will see how to extract weather data from google. Google does not have its own weather API, it fetches data from weather.com and shows it when you search on Google. So, we will scrape the data from Google." }, { "code": null, "e": 295, "s": 280, "text": "Module needed:" }, { "code": null, "e": 479, "s": 295, "text": "Requests: Requests allows you to send HTTP/1.1 requests extremely easily. This module also does not comes built-in with Python. To install this type the below command in the terminal." }, { "code": null, "e": 500, "s": 479, "text": "pip install requests" }, { "code": null, "e": 708, "s": 500, "text": "bs4: Beautiful Soup is a library that makes it easy to scrape information from web pages. Whether it be an HTML or XML page, that can later be used for iterating, searching, and modifying the data within it." }, { "code": null, "e": 718, "s": 708, "text": "Approach:" }, { "code": null, "e": 736, "s": 718, "text": "Import the module" }, { "code": null, "e": 769, "s": 736, "text": "Enter the city name with the URL" }, { "code": null, "e": 825, "s": 769, "text": "\"https://www.google.com/search?q=\"+\"weather\"+{cityname}" }, { "code": null, "e": 865, "s": 825, "text": "Make requests instance and pass the URL" }, { "code": null, "e": 883, "s": 865, "text": "Get the raw data." }, { "code": null, "e": 924, "s": 883, "text": "Extract the required data from the soup." }, { "code": null, "e": 958, "s": 924, "text": "Finally, print the required data." }, { "code": null, "e": 992, "s": 958, "text": "Step-wise implementation of code:" }, { "code": null, "e": 1036, "s": 992, "text": "Step 1: Import the requests and bs4 library" }, { "code": null, "e": 1044, "s": 1036, "text": "Python3" }, { "code": "# importing the libraryimport requestsfrom bs4 import BeautifulSoup", "e": 1112, "s": 1044, "text": null }, { "code": null, "e": 1199, "s": 1112, "text": "Step 2: Create a URL with the entered city name in it and pass it to the get function." }, { "code": null, "e": 1207, "s": 1199, "text": "Python3" }, { "code": "# enter city namecity = \"lucknow\" # create urlurl = \"https://www.google.com/search?q=\"+\"weather\"+city # requests instancehtml = requests.get(url).content # getting raw datasoup = BeautifulSoup(html, 'html.parser')", "e": 1421, "s": 1207, "text": null }, { "code": null, "e": 1636, "s": 1421, "text": "Step 3: Soup will return a heap of data with HTML tags. So, a chunk of data has been shown below from which we will get all the necessary data with the help of find function and passing the tag name and class name." }, { "code": null, "e": 1871, "s": 1636, "text": "<div class=”kvKEAb”><div><div><div class=”BNeawe iBp4i AP7Wnd”><div><div class=”BNeawe iBp4i AP7Wnd”>13°C</div></div></div></div></div><div><div><div class=”BNeawe tAd8D AP7Wnd”> <div><div class=”BNeawe tAd8D AP7Wnd”>Saturday 11:10 am" }, { "code": null, "e": 1879, "s": 1871, "text": "Python3" }, { "code": "# get the temperaturetemp = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'}).text # this contains time and sky descriptionstr = soup.find('div', attrs={'class': 'BNeawe tAd8D AP7Wnd'}).text # format the datadata = str.split('\\n')time = data[0]sky = data[1]", "e": 2144, "s": 1879, "text": null }, { "code": null, "e": 2272, "s": 2144, "text": "Step 4: Here list1 contains all the div tags with a particular class name and index 5 of this list has all other required data." 
}, { "code": null, "e": 2280, "s": 2272, "text": "Python3" }, { "code": "# list having all div tags having particular clas snamelistdiv = soup.findAll('div', attrs={'class': 'BNeawe s3v9rd AP7Wnd'}) # particular list with required datastrd = listdiv[5].text # formatting the stringpos = strd.find('Wind')other_data = strd[pos:]", "e": 2535, "s": 2280, "text": null }, { "code": null, "e": 2565, "s": 2535, "text": "Step 5: Printing all the data" }, { "code": null, "e": 2573, "s": 2565, "text": "Python3" }, { "code": "# printing all the dataprint(\"Temperature is\", temp)print(\"Time: \", time)print(\"Sky Description: \", sky)print(other_data)", "e": 2695, "s": 2573, "text": null }, { "code": null, "e": 2703, "s": 2695, "text": "Output:" }, { "code": null, "e": 2737, "s": 2703, "text": "Below is the full implementation:" }, { "code": null, "e": 2745, "s": 2737, "text": "Python3" }, { "code": "# importing libraryimport requestsfrom bs4 import BeautifulSoup # enter city namecity = \"lucknow\" # creating url and requests instanceurl = \"https://www.google.com/search?q=\"+\"weather\"+cityhtml = requests.get(url).content # getting raw datasoup = BeautifulSoup(html, 'html.parser')temp = soup.find('div', attrs={'class': 'BNeawe iBp4i AP7Wnd'}).textstr = soup.find('div', attrs={'class': 'BNeawe tAd8D AP7Wnd'}).text # formatting datadata = str.split('\\n')time = data[0]sky = data[1] # getting all div taglistdiv = soup.findAll('div', attrs={'class': 'BNeawe s3v9rd AP7Wnd'})strd = listdiv[5].text # getting other required datapos = strd.find('Wind')other_data = strd[pos:] # printing all dataprint(\"Temperature is\", temp)print(\"Time: \", time)print(\"Sky Description: \", sky)print(other_data)", "e": 3537, "s": 2745, "text": null }, { "code": null, "e": 3545, "s": 3537, "text": "Output:" }, { "code": null, "e": 3554, "s": 3545, "text": "gabaa406" }, { "code": null, "e": 3561, "s": 3554, "text": "Picked" }, { "code": null, "e": 3582, "s": 3561, "text": "Python BeautifulSoup" }, { "code": null, "e": 3598, "s": 3582, "text": "Python-requests" }, { "code": null, "e": 3622, "s": 3598, "text": "Technical Scripter 2020" }, { "code": null, "e": 3635, "s": 3622, "text": "Web-scraping" }, { "code": null, "e": 3642, "s": 3635, "text": "Python" }, { "code": null, "e": 3661, "s": 3642, "text": "Technical Scripter" } ]
Cursors in PL/SQL
17 Jul, 2018 Cursor in SQLTo execute SQL statements, a work area is used by the Oracle engine for its internal processing and storing the information. This work area is private to SQL’s operations. The ‘Cursor’ is the PL/SQL construct that allows the user to name the work area and access the stored information in it. Use of CursorThe major function of a cursor is to retrieve data, one row at a time, from a result set, unlike the SQL commands which operate on all the rows in the result set at one time. Cursors are used when the user needs to update records in a singleton fashion or in a row by row manner, in a database table.The Data that is stored in the Cursor is called the Active Data Set. Oracle DBMS has another predefined area in the main memory Set, within which the cursors are opened. Hence the size of the cursor is limited by the size of this pre-defined area. Cursor Actions Declare Cursor: A cursor is declared by defining the SQL statement that returns a result set. Open: A Cursor is opened and populated by executing the SQL statement defined by the cursor. Fetch: When the cursor is opened, rows can be fetched from the cursor one by one or in a block to perform data manipulation. Close: After data manipulation, close the cursor explicitly. Deallocate: Finally, delete the cursor definition and release all the system resources associated with the cursor. Types of CursorsCursors are classified depending on the circumstances in which they are opened. Implicit Cursor: If the Oracle engine opened a cursor for its internal processing it is known as an Implicit Cursor. It is created “automatically” for the user by Oracle when a query is executed and is simpler to code. Explicit Cursor: A Cursor can also be opened for processing data through a PL/SQL block, on demand. Such a user-defined cursor is known as an Explicit Cursor. Explicit cursorAn explicit cursor is defined in the declaration section of the PL/SQL Block. It is created on a SELECT Statement which returns more than one row. A suitable name for the cursor. General syntax for creating a cursor: CURSOR cursor_name IS select_statement; cursor_name – A suitable name for the cursor. select_statement – A select query which returns multiple rows How to use Explicit Cursor? There are four steps in using an Explicit Cursor. DECLARE the cursor in the Declaration section.OPEN the cursor in the Execution Section.FETCH the data from the cursor into PL/SQL variables or records in the Execution Section.CLOSE the cursor in the Execution Section before you end the PL/SQL Block. DECLARE the cursor in the Declaration section. OPEN the cursor in the Execution Section. FETCH the data from the cursor into PL/SQL variables or records in the Execution Section. CLOSE the cursor in the Execution Section before you end the PL/SQL Block. Syntax: DECLARE variables; records; create a cursor; BEGIN OPEN cursor; FETCH cursor; process the records; CLOSE cursor; END; Reference:https://docs.oracle.com/cd/A97630_01/appdev.920/a96624/01_oview.htm#740 SQL-PL/SQL SQL SQL Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. CTE in SQL How to Update Multiple Columns in Single Update Statement in SQL? SQL Interview Questions SQL | Sub queries in From Clause Difference between DELETE, DROP and TRUNCATE Difference between SQL and NoSQL SQL Correlated Subqueries MySQL | Group_CONCAT() Function Window functions in SQL Difference between DELETE and TRUNCATE
[ { "code": null, "e": 54, "s": 26, "text": "\n17 Jul, 2018" }, { "code": null, "e": 360, "s": 54, "text": "Cursor in SQLTo execute SQL statements, a work area is used by the Oracle engine for its internal processing and storing the information. This work area is private to SQL’s operations. The ‘Cursor’ is the PL/SQL construct that allows the user to name the work area and access the stored information in it." }, { "code": null, "e": 921, "s": 360, "text": "Use of CursorThe major function of a cursor is to retrieve data, one row at a time, from a result set, unlike the SQL commands which operate on all the rows in the result set at one time. Cursors are used when the user needs to update records in a singleton fashion or in a row by row manner, in a database table.The Data that is stored in the Cursor is called the Active Data Set. Oracle DBMS has another predefined area in the main memory Set, within which the cursors are opened. Hence the size of the cursor is limited by the size of this pre-defined area." }, { "code": null, "e": 936, "s": 921, "text": "Cursor Actions" }, { "code": null, "e": 1030, "s": 936, "text": "Declare Cursor: A cursor is declared by defining the SQL statement that returns a result set." }, { "code": null, "e": 1123, "s": 1030, "text": "Open: A Cursor is opened and populated by executing the SQL statement defined by the cursor." }, { "code": null, "e": 1248, "s": 1123, "text": "Fetch: When the cursor is opened, rows can be fetched from the cursor one by one or in a block to perform data manipulation." }, { "code": null, "e": 1309, "s": 1248, "text": "Close: After data manipulation, close the cursor explicitly." }, { "code": null, "e": 1424, "s": 1309, "text": "Deallocate: Finally, delete the cursor definition and release all the system resources associated with the cursor." }, { "code": null, "e": 1520, "s": 1424, "text": "Types of CursorsCursors are classified depending on the circumstances in which they are opened." }, { "code": null, "e": 1739, "s": 1520, "text": "Implicit Cursor: If the Oracle engine opened a cursor for its internal processing it is known as an Implicit Cursor. It is created “automatically” for the user by Oracle when a query is executed and is simpler to code." }, { "code": null, "e": 1898, "s": 1739, "text": "Explicit Cursor: A Cursor can also be opened for processing data through a PL/SQL block, on demand. Such a user-defined cursor is known as an Explicit Cursor." }, { "code": null, "e": 2092, "s": 1898, "text": "Explicit cursorAn explicit cursor is defined in the declaration section of the PL/SQL Block. It is created on a SELECT Statement which returns more than one row. A suitable name for the cursor." }, { "code": null, "e": 2130, "s": 2092, "text": "General syntax for creating a cursor:" }, { "code": null, "e": 2279, "s": 2130, "text": "CURSOR cursor_name IS select_statement;\n\ncursor_name – A suitable name for the cursor.\nselect_statement – A select query which returns multiple rows" }, { "code": null, "e": 2307, "s": 2279, "text": "How to use Explicit Cursor?" }, { "code": null, "e": 2357, "s": 2307, "text": "There are four steps in using an Explicit Cursor." }, { "code": null, "e": 2608, "s": 2357, "text": "DECLARE the cursor in the Declaration section.OPEN the cursor in the Execution Section.FETCH the data from the cursor into PL/SQL variables or records in the Execution Section.CLOSE the cursor in the Execution Section before you end the PL/SQL Block." 
}, { "code": null, "e": 2655, "s": 2608, "text": "DECLARE the cursor in the Declaration section." }, { "code": null, "e": 2697, "s": 2655, "text": "OPEN the cursor in the Execution Section." }, { "code": null, "e": 2787, "s": 2697, "text": "FETCH the data from the cursor into PL/SQL variables or records in the Execution Section." }, { "code": null, "e": 2862, "s": 2787, "text": "CLOSE the cursor in the Execution Section before you end the PL/SQL Block." }, { "code": null, "e": 2870, "s": 2862, "text": "Syntax:" }, { "code": null, "e": 2997, "s": 2870, "text": "DECLARE variables;\n records;\n create a cursor;\n BEGIN \nOPEN cursor; \nFETCH cursor;\n process the records;\n CLOSE cursor; \nEND;\n" }, { "code": null, "e": 3079, "s": 2997, "text": "Reference:https://docs.oracle.com/cd/A97630_01/appdev.920/a96624/01_oview.htm#740" }, { "code": null, "e": 3090, "s": 3079, "text": "SQL-PL/SQL" }, { "code": null, "e": 3094, "s": 3090, "text": "SQL" }, { "code": null, "e": 3098, "s": 3094, "text": "SQL" }, { "code": null, "e": 3196, "s": 3098, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3207, "s": 3196, "text": "CTE in SQL" }, { "code": null, "e": 3273, "s": 3207, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 3297, "s": 3273, "text": "SQL Interview Questions" }, { "code": null, "e": 3330, "s": 3297, "text": "SQL | Sub queries in From Clause" }, { "code": null, "e": 3375, "s": 3330, "text": "Difference between DELETE, DROP and TRUNCATE" }, { "code": null, "e": 3408, "s": 3375, "text": "Difference between SQL and NoSQL" }, { "code": null, "e": 3434, "s": 3408, "text": "SQL Correlated Subqueries" }, { "code": null, "e": 3466, "s": 3434, "text": "MySQL | Group_CONCAT() Function" }, { "code": null, "e": 3490, "s": 3466, "text": "Window functions in SQL" } ]
Python – Find the index of Minimum element in list
17 Dec, 2019 Sometimes, while working with Python lists, we can have a problem in which we intend to find the position of minimum element of list. This task is easy and discussed many times. But sometimes, we can have multiple minimum elements and hence multiple minimum positions. Let’s discuss ways to achieve this task. Method #1 : Using min() + enumerate() + list comprehensionIn this method, the combination of above functions is used to perform this particular task. This is performed in two steps. In 1st, we acquire the minimum element and then access the list using list comprehension and corresponding element using enumerate and extract every element position equal to minimum element processed in step 1. # Python3 code to demonstrate working of# Minimum element indices in list# Using list comprehension + min() + enumerate() # initializing listtest_list = [2, 5, 6, 2, 3, 2] # printing listprint("The original list : " + str(test_list)) # Minimum element indices in list# Using list comprehension + min() + enumerate()temp = min(test_list)res = [i for i, j in enumerate(test_list) if j == temp] # Printing resultprint("The Positions of minimum element : " + str(res)) The original list : [2, 5, 6, 2, 3, 2] The Positions of minimum element : [0, 3, 5] Method #2 : Using loop + min()This is brute method to perform this task. In this, we compute the minimum element and then iterate the list to equate to min element and store indices. # Python3 code to demonstrate working of# Minimum element indices in list# Using loop + min() # initializing listtest_list = [2, 5, 6, 2, 3, 2] # printing listprint("The original list : " + str(test_list)) # Minimum element indices in list# Using loop + min()temp = min(test_list)res = []for idx in range(0, len(test_list)): if temp == test_list[idx]: res.append(idx) # Printing resultprint("The Positions of minimum element : " + str(res)) The original list : [2, 5, 6, 2, 3, 2] The Positions of minimum element : [0, 3, 5] Python list-programs Python Python Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n17 Dec, 2019" }, { "code": null, "e": 338, "s": 28, "text": "Sometimes, while working with Python lists, we can have a problem in which we intend to find the position of minimum element of list. This task is easy and discussed many times. But sometimes, we can have multiple minimum elements and hence multiple minimum positions. Let’s discuss ways to achieve this task." }, { "code": null, "e": 732, "s": 338, "text": "Method #1 : Using min() + enumerate() + list comprehensionIn this method, the combination of above functions is used to perform this particular task. This is performed in two steps. In 1st, we acquire the minimum element and then access the list using list comprehension and corresponding element using enumerate and extract every element position equal to minimum element processed in step 1." }, { "code": "# Python3 code to demonstrate working of# Minimum element indices in list# Using list comprehension + min() + enumerate() # initializing listtest_list = [2, 5, 6, 2, 3, 2] # printing listprint(\"The original list : \" + str(test_list)) # Minimum element indices in list# Using list comprehension + min() + enumerate()temp = min(test_list)res = [i for i, j in enumerate(test_list) if j == temp] # Printing resultprint(\"The Positions of minimum element : \" + str(res))", "e": 1201, "s": 732, "text": null }, { "code": null, "e": 1286, "s": 1201, "text": "The original list : [2, 5, 6, 2, 3, 2]\nThe Positions of minimum element : [0, 3, 5]\n" }, { "code": null, "e": 1471, "s": 1288, "text": "Method #2 : Using loop + min()This is brute method to perform this task. In this, we compute the minimum element and then iterate the list to equate to min element and store indices." }, { "code": "# Python3 code to demonstrate working of# Minimum element indices in list# Using loop + min() # initializing listtest_list = [2, 5, 6, 2, 3, 2] # printing listprint(\"The original list : \" + str(test_list)) # Minimum element indices in list# Using loop + min()temp = min(test_list)res = []for idx in range(0, len(test_list)): if temp == test_list[idx]: res.append(idx) # Printing resultprint(\"The Positions of minimum element : \" + str(res))", "e": 1926, "s": 1471, "text": null }, { "code": null, "e": 2011, "s": 1926, "text": "The original list : [2, 5, 6, 2, 3, 2]\nThe Positions of minimum element : [0, 3, 5]\n" }, { "code": null, "e": 2032, "s": 2011, "text": "Python list-programs" }, { "code": null, "e": 2039, "s": 2032, "text": "Python" }, { "code": null, "e": 2055, "s": 2039, "text": "Python Programs" } ]
NLP | Synsets for a word in WordNet
28 Jan, 2019 WordNet is the lexical database i.e. dictionary for the English language, specifically designed for natural language processing. Synset is a special kind of a simple interface that is present in NLTK to look up words in WordNet. Synset instances are the groupings of synonymous words that express the same concept. Some of the words have only one Synset and some have several. Code #1 : Understanding Synset from nltk.corpus import wordnetsyn = wordnet.synsets('hello')[0] print ("Synset name : ", syn.name()) # Defining the wordprint ("\nSynset meaning : ", syn.definition()) # list of phrases that use the word in contextprint ("\nSynset example : ", syn.examples()) Output: Synset name : hello.n.01 Synset meaning : an expression of greeting Synset example : ['every morning they exchanged polite hellos'] wordnet.synsets(word) can be used to get a list of Synsets. This list can be empty (if no such word is found) or can have few elements. Hypernyms and Hyponyms – Hypernyms: More abstract termsHyponyms: More specific terms. Both come to picture as Synsets are organized in a structure similar to that of an inheritance tree. This tree can be traced all the way up to a root hypernym. Hypernyms provide a way to categorize and group words based on their similarity to each other. Code #2 : Understanding Hypernerms and Hyponyms from nltk.corpus import wordnetsyn = wordnet.synsets('hello')[0] print ("Synset name : ", syn.name()) print ("\nSynset abstract term : ", syn.hypernyms()) print ("\nSynset specific term : ", syn.hypernyms()[0].hyponyms()) syn.root_hypernyms() print ("\nSynset root hypernerm : ", syn.root_hypernyms()) Output: Synset name : hello.n.01 Synset abstract term : [Synset('greeting.n.01')] Synset specific term : [Synset('calling_card.n.02'), Synset('good_afternoon.n.01'), Synset('good_morning.n.01'), Synset('hail.n.03'), Synset('hello.n.01'), Synset('pax.n.01'), Synset('reception.n.01'), Synset('regard.n.03'), Synset('salute.n.02'), Synset('salute.n.03'), Synset('welcome.n.02'), Synset('well-wishing.n.01')] Synset root hypernerm : [Synset('entity.n.01')] Code #3 : Part of Speech (POS) in Synset. syn = wordnet.synsets('hello')[0]print ("Syn tag : ", syn.pos()) syn = wordnet.synsets('doing')[0]print ("Syn tag : ", syn.pos()) syn = wordnet.synsets('beautiful')[0]print ("Syn tag : ", syn.pos()) syn = wordnet.synsets('quickly')[0]print ("Syn tag : ", syn.pos()) Output: Syn tag : n Syn tag : v Syn tag : a Syn tag : r Natural-language-processing Python-nltk Machine Learning Python Machine Learning Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 52, "s": 24, "text": "\n28 Jan, 2019" }, { "code": null, "e": 181, "s": 52, "text": "WordNet is the lexical database i.e. dictionary for the English language, specifically designed for natural language processing." }, { "code": null, "e": 429, "s": 181, "text": "Synset is a special kind of a simple interface that is present in NLTK to look up words in WordNet. Synset instances are the groupings of synonymous words that express the same concept. Some of the words have only one Synset and some have several." }, { "code": null, "e": 460, "s": 429, "text": "Code #1 : Understanding Synset" }, { "code": "from nltk.corpus import wordnetsyn = wordnet.synsets('hello')[0] print (\"Synset name : \", syn.name()) # Defining the wordprint (\"\\nSynset meaning : \", syn.definition()) # list of phrases that use the word in contextprint (\"\\nSynset example : \", syn.examples())", "e": 725, "s": 460, "text": null }, { "code": null, "e": 733, "s": 725, "text": "Output:" }, { "code": null, "e": 871, "s": 733, "text": "Synset name : hello.n.01\n\nSynset meaning : an expression of greeting\n\nSynset example : ['every morning they exchanged polite hellos']" }, { "code": null, "e": 1032, "s": 871, "text": "wordnet.synsets(word) can be used to get a list of Synsets. This list can be empty (if no such word is found) or can have few elements. Hypernyms and Hyponyms –" }, { "code": null, "e": 1093, "s": 1032, "text": "Hypernyms: More abstract termsHyponyms: More specific terms." }, { "code": null, "e": 1348, "s": 1093, "text": "Both come to picture as Synsets are organized in a structure similar to that of an inheritance tree. This tree can be traced all the way up to a root hypernym. Hypernyms provide a way to categorize and group words based on their similarity to each other." }, { "code": null, "e": 1396, "s": 1348, "text": "Code #2 : Understanding Hypernerms and Hyponyms" }, { "code": "from nltk.corpus import wordnetsyn = wordnet.synsets('hello')[0] print (\"Synset name : \", syn.name()) print (\"\\nSynset abstract term : \", syn.hypernyms()) print (\"\\nSynset specific term : \", syn.hypernyms()[0].hyponyms()) syn.root_hypernyms() print (\"\\nSynset root hypernerm : \", syn.root_hypernyms())", "e": 1714, "s": 1396, "text": null }, { "code": null, "e": 1722, "s": 1714, "text": "Output:" }, { "code": null, "e": 2183, "s": 1722, "text": "Synset name : hello.n.01\n\nSynset abstract term : [Synset('greeting.n.01')]\n\nSynset specific term : [Synset('calling_card.n.02'), Synset('good_afternoon.n.01'), \nSynset('good_morning.n.01'), Synset('hail.n.03'), Synset('hello.n.01'), \nSynset('pax.n.01'), Synset('reception.n.01'), Synset('regard.n.03'), \nSynset('salute.n.02'), Synset('salute.n.03'), Synset('welcome.n.02'), \nSynset('well-wishing.n.01')]\n\nSynset root hypernerm : [Synset('entity.n.01')]" }, { "code": null, "e": 2226, "s": 2183, "text": " Code #3 : Part of Speech (POS) in Synset." 
}, { "code": "syn = wordnet.synsets('hello')[0]print (\"Syn tag : \", syn.pos()) syn = wordnet.synsets('doing')[0]print (\"Syn tag : \", syn.pos()) syn = wordnet.synsets('beautiful')[0]print (\"Syn tag : \", syn.pos()) syn = wordnet.synsets('quickly')[0]print (\"Syn tag : \", syn.pos())", "e": 2495, "s": 2226, "text": null }, { "code": null, "e": 2503, "s": 2495, "text": "Output:" }, { "code": null, "e": 2555, "s": 2503, "text": "Syn tag : n\nSyn tag : v\nSyn tag : a\nSyn tag : r" }, { "code": null, "e": 2583, "s": 2555, "text": "Natural-language-processing" }, { "code": null, "e": 2595, "s": 2583, "text": "Python-nltk" }, { "code": null, "e": 2612, "s": 2595, "text": "Machine Learning" }, { "code": null, "e": 2619, "s": 2612, "text": "Python" }, { "code": null, "e": 2636, "s": 2619, "text": "Machine Learning" } ]
Printing Shortest Common Supersequence
25 Mar, 2022 Given two strings X and Y, print the shortest string that has both X and Y as subsequences. If multiple shortest super-sequence exists, print any one of them.Examples: Input: X = "AGGTAB", Y = "GXTXAYB" Output: "AGXGTXAYB" OR "AGGXTXAYB" OR Any string that represents shortest supersequence of X and Y Input: X = "HELLO", Y = "GEEK" Output: "GEHEKLLO" OR "GHEEKLLO" OR Any string that represents shortest supersequence of X and Y We have discussed how to print length of shortest possible super-sequence for two given strings here. In this post, we print the shortest super-sequence.We have already discussed below algorithm to find length of shortest super-sequence in previous post- Let X[0..m-1] and Y[0..n-1] be two strings and m and be respective lengths. if (m == 0) return n; if (n == 0) return m; // If last characters are same, then add 1 to result and // recur for X[] if (X[m-1] == Y[n-1]) return 1 + SCS(X, Y, m-1, n-1); // Else find shortest of following two // a) Remove last character from X and recur // b) Remove last character from Y and recur else return 1 + min( SCS(X, Y, m-1, n), SCS(X, Y, m, n-1) ); The following table shows steps followed by the above algorithm if we solve it in bottom-up manner using Dynamic Programming for strings X = “AGGTAB” and Y = “GXTXAYB”, Using the DP solution matrix, we can easily print shortest super-sequence of two strings by following below steps – We start from the bottom-right most cell of the matrix and push characters in output string based on below rules- 1. If the characters corresponding to current cell (i, j) in X and Y are same, then the character is part of shortest supersequence. We append it in output string and move diagonally to next cell (i.e. (i - 1, j - 1)). 2. If the characters corresponding to current cell (i, j) in X and Y are different, we have two choices - If matrix[i - 1][j] > matrix[i][j - 1], we add character corresponding to current cell (i, j) in string Y in output string and move to the left cell i.e. (i, j - 1) else we add character corresponding to current cell (i, j) in string X in output string and move to the top cell i.e. (i - 1, j) 3. If string Y reaches its end i.e. j = 0, we add remaining characters of string X in the output string else if string X reaches its end i.e. i = 0, we add remaining characters of string Y in the output string. 
Below is the implementation of above idea – C++14 Java Python3 C# Javascript /* A dynamic programming based C++ program print shortest supersequence of two strings */#include <bits/stdc++.h>using namespace std; // returns shortest supersequence of X and Ystring printShortestSuperSeq(string X, string Y){ int m = X.length(); int n = Y.length(); // dp[i][j] contains length of shortest supersequence // for X[0..i-1] and Y[0..j-1] int dp[m + 1][n + 1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow recurrence relation if(i == 0) dp[i][j] = j; else if(j == 0) dp[i][j] = i; else if(X[i - 1] == Y[j - 1]) dp[i][j] = 1 + dp[i - 1][j - 1]; else dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1]); } } // Following code is used to print shortest supersequence // dp[m][n] stores the length of the shortest supersequence // of X and Y // string to store the shortest supersequence string str; // Start from the bottom right corner and one by one // push characters in output string int i = m, j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X[i - 1] == Y[j - 1]) { // Put current character in result str.push_back(X[i - 1]); // reduce values of i, j and index i--, j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str.push_back(Y[j - 1]); // reduce values of j and index j--; } else { // Put current character of X in result str.push_back(X[i - 1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str.push_back(X[i - 1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str.push_back(Y[j - 1]); j--; } // reverse the string and return it reverse(str.begin(), str.end()); return str;} // Driver program to test above functionint main(){ string X = "AGGTAB"; string Y = "GXTXAYB"; cout << printShortestSuperSeq(X, Y); return 0;} /* A dynamic programming based Java program printshortest supersequence of two strings */class GFG { // returns shortest supersequence of X and Y static String printShortestSuperSeq(String X, String Y) { int m = X.length(); int n = Y.length(); // dp[i][j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] int dp[][] = new int[m + 1][n + 1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i][j] = j; } else if (j == 0) { dp[i][j] = i; } else if (X.charAt(i - 1) == Y.charAt(j - 1)) { dp[i][j] = 1 + dp[i - 1][j - 1]; } else { dp[i][j] = 1 + Math.min(dp[i - 1][j], dp[i][j - 1]); } } } // Following code is used to print // shortest supersequence dp[m][n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence String str = ""; // Start from the bottom right corner and one by one // push characters in output string int i = m, j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X.charAt(i - 1) == Y.charAt(j - 1)) { // Put current character in result str += (X.charAt(i - 1)); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str += 
(Y.charAt(j - 1)); // reduce values of j and index j--; } else { // Put current character of X in result str += (X.charAt(i - 1)); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += (X.charAt(i - 1)); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y.charAt(j - 1)); j--; } // reverse the string and return it str = reverse(str); return str; } static String reverse(String input) { char[] temparray = input.toCharArray(); int left, right = 0; right = temparray.length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right char temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return String.valueOf(temparray); } // Driver code public static void main(String[] args) { String X = "AGGTAB"; String Y = "GXTXAYB"; System.out.println(printShortestSuperSeq(X, Y)); }} // This code is contributed by 29AjayKumar # A dynamic programming based Python3 program print# shortest supersequence of two strings # returns shortest supersequence of X and Ydef printShortestSuperSeq(m, n, x, y): # dp[i][j] contains length of shortest # supersequence for X[0..i-1] and Y[0..j-1] dp = [[0 for i in range(n + 1)] for j in range(m + 1)] # Fill table in bottom up manner for i in range(m + 1): for j in range(n + 1): # Below steps follow recurrence relation if i == 0: dp[i][j] = j elif j == 0: dp[i][j] = i elif x[i - 1] == y[j - 1]: dp[i][j] = 1 + dp[i - 1][j - 1] else: dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1]) # Following code is used to print # shortest supersequence # dp[m][n] stores the length of the # shortest supersequence of X and Y # string to store the shortest supersequence string = "" # Start from the bottom right corner and # add the characters to the output string i = m j = n while i * j > 0: # If current character in X and Y are same, # then current character is part of # shortest supersequence if x[i - 1] == y[j - 1]: # Put current character in result string = x[i - 1] + string # reduce values of i, j and index i -= 1 j -= 1 # If current character in X and Y are different elif dp[i - 1][j] > dp[i][j - 1]: # Put current character of Y in result string = y[j - 1] + string # reduce values of j and index j -= 1 else: # Put current character of X in result string = x[i - 1] + string # reduce values of i and index i -= 1 # If Y reaches its end, put remaining characters # of X in the result string while i > 0: string = x[i - 1] + string i -= 1 # If X reaches its end, put remaining characters # of Y in the result string while j > 0: string = y[j - 1] + string j -= 1 return string # Driver Codeif __name__ == "__main__": x = "GXTXAYB" y = "AGGTAB" m = len(x) n = len(y) # Take the smaller string as x and larger one as y if m > n: x, y = y, x m, n = n, m print(*printShortestSuperSeq(m, n, x, y)) # This code is contributed by# sanjeev2552 /* A dynamic programming based C# program printshortest supersequence of two strings */using System; class GFG{ // returns shortest supersequence of X and Y static String printShortestSuperSeq(String X, String Y) { int m = X.Length; int n = Y.Length; // dp[i,j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] int [,]dp = new int[m + 1, n + 1]; int i, j; // Fill table in bottom up manner for (i = 0; i <= m; i++) { for (j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i, j] = j; } else if (j == 0) { dp[i, j] = i; 
} else if (X[i - 1] == Y[j - 1]) { dp[i, j] = 1 + dp[i - 1, j - 1]; } else { dp[i, j] = 1 + Math.Min(dp[i - 1, j], dp[i, j - 1]); } } } // Following code is used to print // shortest supersequence dp[m,n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence String str = ""; // Start from the bottom right corner and one by one // push characters in output string i = m; j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X[i - 1] == Y[j - 1]) { // Put current character in result str += (X[i - 1]); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1, j] > dp[i, j - 1]) { // Put current character of Y in result str += (Y[j - 1]); // reduce values of j and index j--; } else { // Put current character of X in result str += (X[i - 1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += (X[i - 1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y[j - 1]); j--; } // reverse the string and return it str = reverse(str); return str; } static String reverse(String input) { char[] temparray = input.ToCharArray(); int left, right = 0; right = temparray.Length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right char temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return String.Join("",temparray); } // Driver code public static void Main(String[] args) { String X = "AGGTAB"; String Y = "GXTXAYB"; Console.WriteLine(printShortestSuperSeq(X, Y)); }} /* This code has been contributedby PrinciRaj1992*/ <script> /* A dynamic programming based Javascript program printshortest supersequence of two strings */ // returns shortest supersequence of X and Yfunction printShortestSuperSeq(X,Y){ let m = X.length; let n = Y.length; // dp[i][j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] let dp = new Array(m + 1); for(let i=0;i<(m+1);i++) { dp[i]=new Array(n+1); for(let j=0;j<(n+1);j++) dp[i][j]=0; } // Fill table in bottom up manner for (let i = 0; i <= m; i++) { for (let j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i][j] = j; } else if (j == 0) { dp[i][j] = i; } else if (X[i-1] == Y[j-1]) { dp[i][j] = 1 + dp[i - 1][j - 1]; } else { dp[i][j] = 1 + Math.min(dp[i - 1][j], dp[i][j - 1]); } } } // Following code is used to print // shortest supersequence dp[m][n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence let str = ""; // Start from the bottom right corner and one by one // push characters in output string let i = m, j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X[i-1] == Y[j-1]) { // Put current character in result str += (X[i-1]); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str += (Y[j-1]); // reduce values of j and index j--; } else { // Put current character of X in result str += (X[i-1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += 
(X[i-1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y[j-1]); j--; } // reverse the string and return it str = reverse(str); return str;} function reverse(input){ let temparray = input.split(""); let left, right = 0; right = temparray.length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right let temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return (temparray).join("");} // Driver codelet X = "AGGTAB";let Y = "GXTXAYB";document.write(printShortestSuperSeq(X, Y)); // This code is contributed by rag2127 </script> AGXGTXAYB Time complexity of above solution is O(n2). Auxiliary space used by the program is O(n2).This article is contributed by Aditya Goel, Krishna Chaitanya Dirisala. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. 29AjayKumar princiraj1992 sanjeev2552 rag2127 abhinabachowdhury surinderdawra388 krishnadirisala2001 LCS subsequence Dynamic Programming Dynamic Programming LCS Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 52, "s": 24, "text": "\n25 Mar, 2022" }, { "code": null, "e": 222, "s": 52, "text": "Given two strings X and Y, print the shortest string that has both X and Y as subsequences. If multiple shortest super-sequence exists, print any one of them.Examples: " }, { "code": null, "e": 489, "s": 222, "text": "Input: X = \"AGGTAB\", Y = \"GXTXAYB\"\nOutput: \"AGXGTXAYB\" OR \"AGGXTXAYB\" \nOR Any string that represents shortest\nsupersequence of X and Y\n\nInput: X = \"HELLO\", Y = \"GEEK\"\nOutput: \"GEHEKLLO\" OR \"GHEEKLLO\"\nOR Any string that represents shortest \nsupersequence of X and Y" }, { "code": null, "e": 747, "s": 491, "text": "We have discussed how to print length of shortest possible super-sequence for two given strings here. In this post, we print the shortest super-sequence.We have already discussed below algorithm to find length of shortest super-sequence in previous post- " }, { "code": null, "e": 1196, "s": 747, "text": "Let X[0..m-1] and Y[0..n-1] be two strings and m and be respective \nlengths.\n\nif (m == 0) return n;\nif (n == 0) return m;\n\n// If last characters are same, then add 1 to result and\n// recur for X[]\nif (X[m-1] == Y[n-1]) \n return 1 + SCS(X, Y, m-1, n-1);\n\n// Else find shortest of following two\n// a) Remove last character from X and recur\n// b) Remove last character from Y and recur\nelse return 1 + min( SCS(X, Y, m-1, n), SCS(X, Y, m, n-1) );" }, { "code": null, "e": 1367, "s": 1196, "text": "The following table shows steps followed by the above algorithm if we solve it in bottom-up manner using Dynamic Programming for strings X = “AGGTAB” and Y = “GXTXAYB”, " }, { "code": null, "e": 1484, "s": 1367, "text": "Using the DP solution matrix, we can easily print shortest super-sequence of two strings by following below steps – " }, { "code": null, "e": 2504, "s": 1484, "text": "We start from the bottom-right most cell of the matrix and \npush characters in output string based on below rules-\n\n 1. If the characters corresponding to current cell (i, j) \n in X and Y are same, then the character is part of shortest \n supersequence. We append it in output string and move \n diagonally to next cell (i.e. (i - 1, j - 1)).\n\n 2. If the characters corresponding to current cell (i, j)\n in X and Y are different, we have two choices -\n\n If matrix[i - 1][j] > matrix[i][j - 1],\n we add character corresponding to current \n cell (i, j) in string Y in output string \n and move to the left cell i.e. (i, j - 1)\n else\n we add character corresponding to current \n cell (i, j) in string X in output string \n and move to the top cell i.e. (i - 1, j)\n\n 3. If string Y reaches its end i.e. j = 0, we add remaining\n characters of string X in the output string\n else if string X reaches its end i.e. i = 0, we add \n remaining characters of string Y in the output string." 
}, { "code": null, "e": 2550, "s": 2504, "text": "Below is the implementation of above idea – " }, { "code": null, "e": 2556, "s": 2550, "text": "C++14" }, { "code": null, "e": 2561, "s": 2556, "text": "Java" }, { "code": null, "e": 2569, "s": 2561, "text": "Python3" }, { "code": null, "e": 2572, "s": 2569, "text": "C#" }, { "code": null, "e": 2583, "s": 2572, "text": "Javascript" }, { "code": "/* A dynamic programming based C++ program print shortest supersequence of two strings */#include <bits/stdc++.h>using namespace std; // returns shortest supersequence of X and Ystring printShortestSuperSeq(string X, string Y){ int m = X.length(); int n = Y.length(); // dp[i][j] contains length of shortest supersequence // for X[0..i-1] and Y[0..j-1] int dp[m + 1][n + 1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow recurrence relation if(i == 0) dp[i][j] = j; else if(j == 0) dp[i][j] = i; else if(X[i - 1] == Y[j - 1]) dp[i][j] = 1 + dp[i - 1][j - 1]; else dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1]); } } // Following code is used to print shortest supersequence // dp[m][n] stores the length of the shortest supersequence // of X and Y // string to store the shortest supersequence string str; // Start from the bottom right corner and one by one // push characters in output string int i = m, j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X[i - 1] == Y[j - 1]) { // Put current character in result str.push_back(X[i - 1]); // reduce values of i, j and index i--, j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str.push_back(Y[j - 1]); // reduce values of j and index j--; } else { // Put current character of X in result str.push_back(X[i - 1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str.push_back(X[i - 1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str.push_back(Y[j - 1]); j--; } // reverse the string and return it reverse(str.begin(), str.end()); return str;} // Driver program to test above functionint main(){ string X = \"AGGTAB\"; string Y = \"GXTXAYB\"; cout << printShortestSuperSeq(X, Y); return 0;}", "e": 5131, "s": 2583, "text": null }, { "code": "/* A dynamic programming based Java program printshortest supersequence of two strings */class GFG { // returns shortest supersequence of X and Y static String printShortestSuperSeq(String X, String Y) { int m = X.length(); int n = Y.length(); // dp[i][j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] int dp[][] = new int[m + 1][n + 1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i][j] = j; } else if (j == 0) { dp[i][j] = i; } else if (X.charAt(i - 1) == Y.charAt(j - 1)) { dp[i][j] = 1 + dp[i - 1][j - 1]; } else { dp[i][j] = 1 + Math.min(dp[i - 1][j], dp[i][j - 1]); } } } // Following code is used to print // shortest supersequence dp[m][n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence String str = \"\"; // Start from the bottom right corner and one by one // push characters in output string int i = m, j = n; while (i > 0 && j > 0) { // If current 
character in X and Y are same, then // current character is part of shortest supersequence if (X.charAt(i - 1) == Y.charAt(j - 1)) { // Put current character in result str += (X.charAt(i - 1)); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str += (Y.charAt(j - 1)); // reduce values of j and index j--; } else { // Put current character of X in result str += (X.charAt(i - 1)); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += (X.charAt(i - 1)); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y.charAt(j - 1)); j--; } // reverse the string and return it str = reverse(str); return str; } static String reverse(String input) { char[] temparray = input.toCharArray(); int left, right = 0; right = temparray.length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right char temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return String.valueOf(temparray); } // Driver code public static void main(String[] args) { String X = \"AGGTAB\"; String Y = \"GXTXAYB\"; System.out.println(printShortestSuperSeq(X, Y)); }} // This code is contributed by 29AjayKumar", "e": 8774, "s": 5131, "text": null }, { "code": "# A dynamic programming based Python3 program print# shortest supersequence of two strings # returns shortest supersequence of X and Ydef printShortestSuperSeq(m, n, x, y): # dp[i][j] contains length of shortest # supersequence for X[0..i-1] and Y[0..j-1] dp = [[0 for i in range(n + 1)] for j in range(m + 1)] # Fill table in bottom up manner for i in range(m + 1): for j in range(n + 1): # Below steps follow recurrence relation if i == 0: dp[i][j] = j elif j == 0: dp[i][j] = i elif x[i - 1] == y[j - 1]: dp[i][j] = 1 + dp[i - 1][j - 1] else: dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1]) # Following code is used to print # shortest supersequence # dp[m][n] stores the length of the # shortest supersequence of X and Y # string to store the shortest supersequence string = \"\" # Start from the bottom right corner and # add the characters to the output string i = m j = n while i * j > 0: # If current character in X and Y are same, # then current character is part of # shortest supersequence if x[i - 1] == y[j - 1]: # Put current character in result string = x[i - 1] + string # reduce values of i, j and index i -= 1 j -= 1 # If current character in X and Y are different elif dp[i - 1][j] > dp[i][j - 1]: # Put current character of Y in result string = y[j - 1] + string # reduce values of j and index j -= 1 else: # Put current character of X in result string = x[i - 1] + string # reduce values of i and index i -= 1 # If Y reaches its end, put remaining characters # of X in the result string while i > 0: string = x[i - 1] + string i -= 1 # If X reaches its end, put remaining characters # of Y in the result string while j > 0: string = y[j - 1] + string j -= 1 return string # Driver Codeif __name__ == \"__main__\": x = \"GXTXAYB\" y = \"AGGTAB\" m = len(x) n = len(y) # Take the smaller string as x and larger one as y if m > n: x, y = y, x m, n = n, m print(*printShortestSuperSeq(m, n, x, y)) # This code is contributed by# sanjeev2552", "e": 11239, "s": 8774, "text": null }, { "code": "/* A dynamic programming based C# program printshortest supersequence of two 
strings */using System; class GFG{ // returns shortest supersequence of X and Y static String printShortestSuperSeq(String X, String Y) { int m = X.Length; int n = Y.Length; // dp[i,j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] int [,]dp = new int[m + 1, n + 1]; int i, j; // Fill table in bottom up manner for (i = 0; i <= m; i++) { for (j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i, j] = j; } else if (j == 0) { dp[i, j] = i; } else if (X[i - 1] == Y[j - 1]) { dp[i, j] = 1 + dp[i - 1, j - 1]; } else { dp[i, j] = 1 + Math.Min(dp[i - 1, j], dp[i, j - 1]); } } } // Following code is used to print // shortest supersequence dp[m,n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence String str = \"\"; // Start from the bottom right corner and one by one // push characters in output string i = m; j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current character is part of shortest supersequence if (X[i - 1] == Y[j - 1]) { // Put current character in result str += (X[i - 1]); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1, j] > dp[i, j - 1]) { // Put current character of Y in result str += (Y[j - 1]); // reduce values of j and index j--; } else { // Put current character of X in result str += (X[i - 1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += (X[i - 1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y[j - 1]); j--; } // reverse the string and return it str = reverse(str); return str; } static String reverse(String input) { char[] temparray = input.ToCharArray(); int left, right = 0; right = temparray.Length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right char temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return String.Join(\"\",temparray); } // Driver code public static void Main(String[] args) { String X = \"AGGTAB\"; String Y = \"GXTXAYB\"; Console.WriteLine(printShortestSuperSeq(X, Y)); }} /* This code has been contributedby PrinciRaj1992*/", "e": 14844, "s": 11239, "text": null }, { "code": "<script> /* A dynamic programming based Javascript program printshortest supersequence of two strings */ // returns shortest supersequence of X and Yfunction printShortestSuperSeq(X,Y){ let m = X.length; let n = Y.length; // dp[i][j] contains length of // shortest supersequence // for X[0..i-1] and Y[0..j-1] let dp = new Array(m + 1); for(let i=0;i<(m+1);i++) { dp[i]=new Array(n+1); for(let j=0;j<(n+1);j++) dp[i][j]=0; } // Fill table in bottom up manner for (let i = 0; i <= m; i++) { for (let j = 0; j <= n; j++) { // Below steps follow recurrence relation if (i == 0) { dp[i][j] = j; } else if (j == 0) { dp[i][j] = i; } else if (X[i-1] == Y[j-1]) { dp[i][j] = 1 + dp[i - 1][j - 1]; } else { dp[i][j] = 1 + Math.min(dp[i - 1][j], dp[i][j - 1]); } } } // Following code is used to print // shortest supersequence dp[m][n] s // tores the length of the shortest // supersequence of X and Y // string to store the shortest supersequence let str = \"\"; // Start from the bottom right corner and one by one // push characters in output string let i = m, j = n; while (i > 0 && j > 0) { // If current character in X and Y are same, then // current 
character is part of shortest supersequence if (X[i-1] == Y[j-1]) { // Put current character in result str += (X[i-1]); // reduce values of i, j and index i--; j--; } // If current character in X and Y are different else if (dp[i - 1][j] > dp[i][j - 1]) { // Put current character of Y in result str += (Y[j-1]); // reduce values of j and index j--; } else { // Put current character of X in result str += (X[i-1]); // reduce values of i and index i--; } } // If Y reaches its end, put remaining characters // of X in the result string while (i > 0) { str += (X[i-1]); i--; } // If X reaches its end, put remaining characters // of Y in the result string while (j > 0) { str += (Y[j-1]); j--; } // reverse the string and return it str = reverse(str); return str;} function reverse(input){ let temparray = input.split(\"\"); let left, right = 0; right = temparray.length - 1; for (left = 0; left < right; left++, right--) { // Swap values of left and right let temp = temparray[left]; temparray[left] = temparray[right]; temparray[right] = temp; } return (temparray).join(\"\");} // Driver codelet X = \"AGGTAB\";let Y = \"GXTXAYB\";document.write(printShortestSuperSeq(X, Y)); // This code is contributed by rag2127 </script>", "e": 18431, "s": 14844, "text": null }, { "code": null, "e": 18441, "s": 18431, "text": "AGXGTXAYB" }, { "code": null, "e": 18978, "s": 18441, "text": "Time complexity of above solution is O(n2). Auxiliary space used by the program is O(n2).This article is contributed by Aditya Goel, Krishna Chaitanya Dirisala. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 18990, "s": 18978, "text": "29AjayKumar" }, { "code": null, "e": 19004, "s": 18990, "text": "princiraj1992" }, { "code": null, "e": 19016, "s": 19004, "text": "sanjeev2552" }, { "code": null, "e": 19024, "s": 19016, "text": "rag2127" }, { "code": null, "e": 19042, "s": 19024, "text": "abhinabachowdhury" }, { "code": null, "e": 19059, "s": 19042, "text": "surinderdawra388" }, { "code": null, "e": 19079, "s": 19059, "text": "krishnadirisala2001" }, { "code": null, "e": 19083, "s": 19079, "text": "LCS" }, { "code": null, "e": 19095, "s": 19083, "text": "subsequence" }, { "code": null, "e": 19115, "s": 19095, "text": "Dynamic Programming" }, { "code": null, "e": 19135, "s": 19115, "text": "Dynamic Programming" }, { "code": null, "e": 19139, "s": 19135, "text": "LCS" } ]
Node.js crypto.scrypt() Method
11 Oct, 2021 The crypto.scrypt() method is an inbuilt application programming interface of crypto module which is used to enable an implementation of an asynchronous scrypt. Where, scrypt is a password-based key derivation function. It is intended to be costly computationally plus memory-wise. So, the brute-force attacks are made unsuccessful. Syntax: crypto.scrypt( password, salt, keylen, options, callback ) Parameters: This method accept five parameters as mentioned above and described below: password: It can hold string, Buffer, TypedArray, or DataView type of data. salt: It holds string, Buffer, TypedArray, or DataView type of data. It must be as unique as possible. Moreover, it is suggested that a salt should be random and is at minimum 16 bytes long. keylen: It is the length of the key and it must be a number. options: It is of type Object and it has seven parameters namely cost, blockSize, parallelization, N, r, p, and maxmem.Where,cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384.blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8.parallelization: It is the parameter for the Parallelization. It is a number and the by default value is 1.N: It is an alias for cost. It is a number and only one of both can be defined.r: It is an alias for blockSize. It is a number and only one of both can be defined.p: It is an alias for parallelization. It is a number and only one of both can be defined.maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024). cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384.blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8.parallelization: It is the parameter for the Parallelization. It is a number and the by default value is 1.N: It is an alias for cost. It is a number and only one of both can be defined.r: It is an alias for blockSize. It is a number and only one of both can be defined.p: It is an alias for parallelization. It is a number and only one of both can be defined.maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024). cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384. blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8. parallelization: It is the parameter for the Parallelization. It is a number and the by default value is 1. N: It is an alias for cost. It is a number and only one of both can be defined. r: It is an alias for blockSize. It is a number and only one of both can be defined. p: It is an alias for parallelization. It is a number and only one of both can be defined. maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024). callback It is a function with two parameters namely err and derived key. Return Value: It returns a buffer. 
Below examples illustrate the use of crypto.scrypt() method in Node.js:

Example 1:

// Node.js program to demonstrate the
// crypto.scrypt() method

// Including crypto module
var crypto = require('crypto');

// Calling scrypt method with some of its parameter
crypto.scrypt('GfG', 'ffdgsg', 32, (err, derivedKey) => {
   if (err) throw err;

   // Prints derived key as buffer
   console.log("The derived key1 is :", derivedKey);
});

// Calling scrypt method with the parameter N
crypto.scrypt('GeeksforGeeks', 'tfytdx', 128, { N: 512 },
   (err, derivedKey) => {
      if (err) throw err;

      // Prints derived key as buffer
      console.log("The derived key2 is :", derivedKey);
      console.log();
});

Output:

The derived key2 is : <Buffer b3 f8 72 5f 58 df 98 d9 c0 8a ba 0c 2c 50 85 b1
76 de 39 35 40 27 7d 57 f1 6a a1 07 54 dc c9 63 65 32 f2 db 29 95 dc ee 0b 9f
e3 d5 0a 9e 3a d0 f6 b4 ... >

The derived key1 is : <Buffer dd 47 ee 3e a8 2e f2 5b eb 18 7d 35 1b fd f5 a8
e5 f5 38 ef a7 ff 05 53 1e 86 69 ad cd e8 89 76 >

Example 2:

// Node.js program to demonstrate the
// crypto.scrypt() method

// Including crypto module
var crypto = require('crypto');

// Defining salt as typed array
const x = new Uint32Array(7);

// Calling scrypt method with some of its parameter
crypto.scrypt('yytunnd', x, 16, (err, derivedKey) => {
   if (err) throw err;

   // Prints derived key which is encoded
   console.log("The derived key1 is :", derivedKey.toString("ascii"));
});

// Defining salt as data view
const y = new DataView(new ArrayBuffer(5));

// Calling scrypt method with the parameter N
crypto.scrypt('oksjdjdn', y, 16, { N: 32 }, (err, derivedKey) => {
   if (err) throw err;

   // Prints derived key after encoding
   console.log("The derived key2 is :", derivedKey.toString("base64"));
   console.log();
});

Output:

The derived key2 is : 6Gu0JKHDSHs0tkTuGYuQ7A==
The derived key1 is : G"@&H pVCD3 X%

Reference: https://nodejs.org/api/crypto.html#crypto_crypto_scrypt_password_salt_keylen_options_callback
[ { "code": null, "e": 28, "s": 0, "text": "\n11 Oct, 2021" }, { "code": null, "e": 361, "s": 28, "text": "The crypto.scrypt() method is an inbuilt application programming interface of crypto module which is used to enable an implementation of an asynchronous scrypt. Where, scrypt is a password-based key derivation function. It is intended to be costly computationally plus memory-wise. So, the brute-force attacks are made unsuccessful." }, { "code": null, "e": 369, "s": 361, "text": "Syntax:" }, { "code": null, "e": 428, "s": 369, "text": "crypto.scrypt( password, salt, keylen, options, callback )" }, { "code": null, "e": 515, "s": 428, "text": "Parameters: This method accept five parameters as mentioned above and described below:" }, { "code": null, "e": 591, "s": 515, "text": "password: It can hold string, Buffer, TypedArray, or DataView type of data." }, { "code": null, "e": 782, "s": 591, "text": "salt: It holds string, Buffer, TypedArray, or DataView type of data. It must be as unique as possible. Moreover, it is suggested that a salt should be random and is at minimum 16 bytes long." }, { "code": null, "e": 843, "s": 782, "text": "keylen: It is the length of the key and it must be a number." }, { "code": null, "e": 1779, "s": 843, "text": "options: It is of type Object and it has seven parameters namely cost, blockSize, parallelization, N, r, p, and maxmem.Where,cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384.blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8.parallelization: It is the parameter for the Parallelization. It is a number and the by default value is 1.N: It is an alias for cost. It is a number and only one of both can be defined.r: It is an alias for blockSize. It is a number and only one of both can be defined.p: It is an alias for parallelization. It is a number and only one of both can be defined.maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024)." }, { "code": null, "e": 2590, "s": 1779, "text": "cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384.blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8.parallelization: It is the parameter for the Parallelization. It is a number and the by default value is 1.N: It is an alias for cost. It is a number and only one of both can be defined.r: It is an alias for blockSize. It is a number and only one of both can be defined.p: It is an alias for parallelization. It is a number and only one of both can be defined.maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024)." }, { "code": null, "e": 2744, "s": 2590, "text": "cost: It is a cost parameter for CPU or memory. It is a number and must be a power of two greater but greater than one and by default its value is 16384." }, { "code": null, "e": 2857, "s": 2744, "text": "blockSize: It is the parameter for the size of the block allotted. It is a number and the by default value is 8." }, { "code": null, "e": 2965, "s": 2857, "text": "parallelization: It is the parameter for the Parallelization. 
It is a number and the by default value is 1." }, { "code": null, "e": 3045, "s": 2965, "text": "N: It is an alias for cost. It is a number and only one of both can be defined." }, { "code": null, "e": 3130, "s": 3045, "text": "r: It is an alias for blockSize. It is a number and only one of both can be defined." }, { "code": null, "e": 3221, "s": 3130, "text": "p: It is an alias for parallelization. It is a number and only one of both can be defined." }, { "code": null, "e": 3407, "s": 3221, "text": "maxmem: It is the upper bound for the memory to be used. It is a number and an error can occur when 128 * N * r (approx) is greater than maxmem. The default value is (32 * 1024 * 1024)." }, { "code": null, "e": 3481, "s": 3407, "text": "callback It is a function with two parameters namely err and derived key." }, { "code": null, "e": 3516, "s": 3481, "text": "Return Value: It returns a buffer." }, { "code": null, "e": 3588, "s": 3516, "text": "Below examples illustrate the use of crypto.scrypt() method in Node.js:" }, { "code": null, "e": 3599, "s": 3588, "text": "Example 1:" }, { "code": "// Node.js program to demonstrate the// crypto.scrypt() method // Including crypto modulevar crypto = require('crypto'); // Calling scrypt method with some of its parametercrypto.scrypt('GfG', 'ffdgsg', 32, (err, derivedKey) => { if (err) throw err; // Prints derived key as buffer console.log(\"The derived key1 is :\", derivedKey);}); // Calling scrypt method with the parameter Ncrypto.scrypt('GeeksforGeeks', 'tfytdx', 128, { N: 512 }, (err, derivedKey) => { if (err) throw err; // Prints derived key as buffer console.log(\"The derived key2 is :\", derivedKey); console.log();});", "e": 4208, "s": 3599, "text": null }, { "code": null, "e": 4216, "s": 4208, "text": "Output:" }, { "code": null, "e": 4532, "s": 4216, "text": "The derived key2 is : <Buffer b3 f8 72 5f 58 df\n98 d9 c0 8a ba 0c 2c 50 85 b1 76 de 39 35 40 27 7d\n57 f1 6a a1 07 54 dc c9 63 65 32 f2 db 29 95 dc ee\n0b 9f e3 d5 0a 9e 3a d0 f6 b4 ... 
>\n\nThe derived key1 is : <Buffer dd 47 ee 3e a8 2e\nf2 5b eb 18 7d 35 1b fd f5 a8 e5 f5 38 ef a7 ff 05\n53 1e 86 69 ad cd e8 89 76 >\n" }, { "code": null, "e": 4543, "s": 4532, "text": "Example 2:" }, { "code": "// Node.js program to demonstrate the// crypto.scrypt() method // Including crypto modulevar crypto = require('crypto'); // Defining salt as typed arrayconst x = new Uint32Array(7); // Calling scrypt method with some of its parametercrypto.scrypt('yytunnd', x, 16, (err, derivedKey) => { if (err) throw err; // Prints derived key which is encoded console.log(\"The derived key1 is :\", derivedKey.toString(\"ascii\"));}); // Defining salt as data viewconst y = new DataView(new ArrayBuffer(5)); // Calling scrypt method with the parameter Ncrypto.scrypt('oksjdjdn', y, 16, { N: 32 }, (err, derivedKey) => { if (err) throw err; // Prints derived key after encoding console.log(\"The derived key2 is :\", derivedKey.toString(\"base64\")); console.log();});", "e": 5348, "s": 4543, "text": null }, { "code": null, "e": 5356, "s": 5348, "text": "Output:" }, { "code": null, "e": 5532, "s": 5356, "text": "The derived key2 is : 6Gu0JKHDSHs0tkTuGYuQ7A==\nThe derived key1 is : G\"@&H \n pVCD3 \n X%\n" }, { "code": null, "e": 5637, "s": 5532, "text": "Reference: https://nodejs.org/api/crypto.html#crypto_crypto_scrypt_password_salt_keylen_options_callback" }, { "code": null, "e": 5659, "s": 5637, "text": "Node.js-crypto-module" }, { "code": null, "e": 5667, "s": 5659, "text": "Node.js" }, { "code": null, "e": 5684, "s": 5667, "text": "Web Technologies" } ]
Java Program for QuickSort
13 Jun, 2022

Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick pivot in different ways.

Always pick first element as pivot.
Always pick last element as pivot (implemented below).
Pick a random element as pivot.
Pick median as pivot.

The key process in quickSort is partition(). The target of partition() is, given an array and an element x of the array as pivot, to put x at its correct position in the sorted array, put all smaller elements (smaller than x) before x, and put all greater elements (greater than x) after x. All this should be done in linear time.

Pseudo code for the recursive QuickSort function:

/* low --> Starting index, high --> Ending index */
quickSort(arr[], low, high)
{
    if (low < high)
    {
        /* pi is partitioning index, arr[pi] is now
           at right place */
        pi = partition(arr, low, high);

        quickSort(arr, low, pi - 1);  // Before pi
        quickSort(arr, pi + 1, high); // After pi
    }
}

Java

// Java program for implementation of QuickSort
class QuickSort
{
    /* This function takes last element as pivot,
       places the pivot element at its correct
       position in sorted array, and places all
       smaller (smaller than pivot) to left of
       pivot and all greater elements to right
       of pivot */
    int partition(int arr[], int low, int high)
    {
        int pivot = arr[high];
        int i = (low - 1); // index of smaller element
        for (int j = low; j < high; j++)
        {
            // If current element is smaller than or
            // equal to pivot
            if (arr[j] <= pivot)
            {
                i++;

                // swap arr[i] and arr[j]
                int temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }

        // swap arr[i+1] and arr[high] (or pivot)
        int temp = arr[i + 1];
        arr[i + 1] = arr[high];
        arr[high] = temp;

        return i + 1;
    }

    /* The main function that implements QuickSort()
       arr[] --> Array to be sorted,
       low   --> Starting index,
       high  --> Ending index */
    void sort(int arr[], int low, int high)
    {
        if (low < high)
        {
            /* pi is partitioning index, arr[pi] is
               now at right place */
            int pi = partition(arr, low, high);

            // Recursively sort elements before
            // partition and after partition
            sort(arr, low, pi - 1);
            sort(arr, pi + 1, high);
        }
    }

    /* A utility function to print array of size n */
    static void printArray(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    // Driver program
    public static void main(String args[])
    {
        int arr[] = {10, 7, 8, 9, 1, 5};
        int n = arr.length;

        QuickSort ob = new QuickSort();
        ob.sort(arr, 0, n - 1);

        System.out.println("sorted array");
        printArray(arr);
    }
}
/* This code is contributed by Rajat Mishra */

Time Complexity: Worst case time complexity is O(N^2) and average case time complexity is O(N log N).

Auxiliary Space: O(1), not counting the O(log N) space used on average by the recursion stack.

Please refer complete article on QuickSort for more details!
[ { "code": null, "e": 54, "s": 26, "text": "\n13 Jun, 2022" }, { "code": null, "e": 283, "s": 54, "text": "Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick pivot in different ways." }, { "code": null, "e": 424, "s": 283, "text": "Always pick first element as pivot.Always pick last element as pivot (implemented below)Pick a random element as pivot.Pick median as pivot." }, { "code": null, "e": 460, "s": 424, "text": "Always pick first element as pivot." }, { "code": null, "e": 514, "s": 460, "text": "Always pick last element as pivot (implemented below)" }, { "code": null, "e": 546, "s": 514, "text": "Pick a random element as pivot." }, { "code": null, "e": 568, "s": 546, "text": "Pick median as pivot." }, { "code": null, "e": 933, "s": 568, "text": "The key process in quickSort is partition(). Target of partitions is, given an array and an element x of array as pivot, put x at its correct position in sorted array and put all smaller elements (smaller than x) before x, and put all greater elements (greater than x) after x. All this should be done in linear time. Pseudo Code for recursive QuickSort function :" }, { "code": null, "e": 1274, "s": 933, "text": "/* low --> Starting index, high --> Ending index */\nquickSort(arr[], low, high)\n{\n if (low < high)\n {\n /* pi is partitioning index, arr[p] is now\n at right place */\n pi = partition(arr, low, high);\n\n quickSort(arr, low, pi - 1); // Before pi\n quickSort(arr, pi + 1, high); // After pi\n }\n}" }, { "code": null, "e": 1279, "s": 1274, "text": "Java" }, { "code": "// Java program for implementation of QuickSortclass QuickSort{ /* This function takes last element as pivot, places the pivot element at its correct position in sorted array, and places all smaller (smaller than pivot) to left of pivot and all greater elements to right of pivot */ int partition(int arr[], int low, int high) { int pivot = arr[high]; int i = (low-1); // index of smaller element for (int j=low; j<high; j++) { // If current element is smaller than or // equal to pivot if (arr[j] <= pivot) { i++; // swap arr[i] and arr[j] int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; } } // swap arr[i+1] and arr[high] (or pivot) int temp = arr[i+1]; arr[i+1] = arr[high]; arr[high] = temp; return i+1; } /* The main function that implements QuickSort() arr[] --> Array to be sorted, low --> Starting index, high --> Ending index */ void sort(int arr[], int low, int high) { if (low < high) { /* pi is partitioning index, arr[pi] is now at right place */ int pi = partition(arr, low, high); // Recursively sort elements before // partition and after partition sort(arr, low, pi-1); sort(arr, pi+1, high); } } /* A utility function to print array of size n */ static void printArray(int arr[]) { int n = arr.length; for (int i=0; i<n; ++i) System.out.print(arr[i]+\" \"); System.out.println(); } // Driver program public static void main(String args[]) { int arr[] = {10, 7, 8, 9, 1, 5}; int n = arr.length; QuickSort ob = new QuickSort(); ob.sort(arr, 0, n-1); System.out.println(\"sorted array\"); printArray(arr); }}/*This code is contributed by Rajat Mishra */", "e": 3326, "s": 1279, "text": null }, { "code": null, "e": 3425, "s": 3326, "text": "Time Complexity: Worst case time complexity is O(N2) and average case time complexity is O(N logN)" }, { "code": null, "e": 3447, "s": 3425, "text": "Auxiliary Space: O(1)" }, { "code": null, 
"e": 3508, "s": 3447, "text": "Please refer complete article on QuickSort for more details!" }, { "code": null, "e": 3528, "s": 3508, "text": "chandramauliguptach" }, { "code": null, "e": 3539, "s": 3528, "text": "Quick Sort" }, { "code": null, "e": 3553, "s": 3539, "text": "Java Programs" }, { "code": null, "e": 3561, "s": 3553, "text": "Sorting" }, { "code": null, "e": 3569, "s": 3561, "text": "Sorting" }, { "code": null, "e": 3667, "s": 3569, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3705, "s": 3667, "text": "Factory method design pattern in Java" }, { "code": null, "e": 3762, "s": 3705, "text": "Java Program to Remove Duplicate Elements From the Array" }, { "code": null, "e": 3843, "s": 3762, "text": "Java program to count the occurrence of each character in a string using Hashmap" }, { "code": null, "e": 3875, "s": 3843, "text": "How to Iterate HashMap in Java?" }, { "code": null, "e": 3904, "s": 3875, "text": "Iterate through List in Java" }, { "code": null, "e": 3926, "s": 3904, "text": "Bubble Sort Algorithm" }, { "code": null, "e": 3941, "s": 3926, "text": "Insertion Sort" }, { "code": null, "e": 3966, "s": 3941, "text": "Selection Sort Algorithm" }, { "code": null, "e": 3989, "s": 3966, "text": "std::sort() in C++ STL" } ]
Python – Cross mapping of Two dictionary value lists
21 Jun, 2022

Given two dictionaries with list values, perform a mapping of the keys of the first dictionary with the values of the other dictionary, by checking the value-key linkage.

Input : test_dict1 = {"Gfg" : [4, 10], "Best" : [8, 6], "is" : [9, 3]},
        test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]}
Output : {'Best': [6, 3, 15, 9], 'is': [10, 11]}
Explanation : "Best" has 8 and 6, which are mapped to 6, 3 and 15, 9, hence the output for that key.

Input : test_dict1 = {"Gfg" : [4, 10], "Best" : [18, 16], "is" : [9, 3]},
        test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]}
Output : {'is': [10, 11]}
Explanation : Only 9 is present as a possible key.

Method #1 : Using loop + setdefault() + extend()

The combination of the above functions can be used to solve this problem. In this, we perform the task of getting the matching keys' values using get(), and setdefault() is used to construct an empty list for the mapping.

Python3

# Python3 code to demonstrate working of
# Cross mapping of Two dictionary value lists
# Using loop + setdefault() + extend()

# initializing dictionaries
test_dict1 = {"Gfg" : [4, 7], "Best" : [8, 6], "is" : [9, 3]}
test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]}

# printing original lists
print("The original dictionary 1 is : " + str(test_dict1))
print("The original dictionary 2 is : " + str(test_dict2))

res = {}

# getting all values of first dictionary
for key, val in test_dict1.items():
    for key1 in val:

        # getting result with default value list and extending
        # according to value obtained from get()
        res.setdefault(key, []).extend(test_dict2.get(key1, []))

# printing result
print("The constructed dictionary : " + str(res))

Output:

The original dictionary 1 is : {'Gfg': [4, 7], 'Best': [8, 6], 'is': [9, 3]}
The original dictionary 2 is : {6: [15, 9], 8: [6, 3], 7: [9, 8], 9: [10, 11]}
The constructed dictionary : {'Gfg': [9, 8], 'Best': [6, 3, 15, 9], 'is': [10, 11]}

Method #2 : Using list comprehension + dictionary comprehension

This is one more way in which this problem can be solved. In this, we extract all the mappings using a list comprehension and then construct a new dictionary by cross-mapping the extracted values.

Python3

# Python3 code to demonstrate working of
# Cross mapping of Two dictionary value lists
# Using list comprehension + dictionary comprehension

# initializing dictionaries
test_dict1 = {"Gfg" : [4, 7], "Best" : [8, 6], "is" : [9, 3]}
test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]}

# printing original lists
print("The original dictionary 1 is : " + str(test_dict1))
print("The original dictionary 2 is : " + str(test_dict2))

# using internal and external comprehension to
# solve problem
res = {key: [j for i in val if i in test_dict2 for j in test_dict2[i]]
       for key, val in test_dict1.items()}

# printing result
print("The constructed dictionary : " + str(res))

Output:

The original dictionary 1 is : {'Gfg': [4, 7], 'Best': [8, 6], 'is': [9, 3]}
The original dictionary 2 is : {6: [15, 9], 8: [6, 3], 7: [9, 8], 9: [10, 11]}
The constructed dictionary : {'Gfg': [9, 8], 'Best': [6, 3, 15, 9], 'is': [10, 11]}
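As an additional illustration (not part of the original write-up), the same cross mapping can also be expressed with collections.defaultdict, which removes the need for the explicit setdefault() call; the sample data below simply reuses the dictionaries from the methods above.

# Additional sketch: using collections.defaultdict instead of setdefault()
from collections import defaultdict

# same sample data as in the methods above
test_dict1 = {"Gfg": [4, 7], "Best": [8, 6], "is": [9, 3]}
test_dict2 = {6: [15, 9], 8: [6, 3], 7: [9, 8], 9: [10, 11]}

# defaultdict(list) creates an empty list the first time a key is touched
res = defaultdict(list)
for key, val in test_dict1.items():
    for inner_key in val:
        # extend with the mapped values whenever the linkage exists
        res[key].extend(test_dict2.get(inner_key, []))

# printing result
print("The constructed dictionary : " + str(dict(res)))
# The constructed dictionary : {'Gfg': [9, 8], 'Best': [6, 3, 15, 9], 'is': [10, 11]}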
[ { "code": null, "e": 28, "s": 0, "text": "\n21 Jun, 2022" }, { "code": null, "e": 166, "s": 28, "text": "GIven two dictionaries with list values, perform mapping of keys of first list with values of other list, by checking values-key linkage." }, { "code": null, "e": 659, "s": 166, "text": "Input : test_dict1 = {“Gfg” : [4, 10], “Best” : [8, 6], “is” : [9, 3]}, test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]} Output : {‘Best’: [6, 3, 15, 9], ‘is’: [10, 11]} Explanation : “Best” has 8 and 6, which are mapped to 6, 3 and 15, 9 hence output for that key. Input : test_dict1 = {“Gfg” : [4, 10], “Best” : [18, 16], “is” : [9, 3]}, test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]} Output : {‘is’: [10, 11]} Explanation : Only 9 present as possible key." }, { "code": null, "e": 708, "s": 659, "text": "Method #1 : Using loop + setdefault() + extend()" }, { "code": null, "e": 920, "s": 708, "text": "The combination of above functions can be used to solve this problem. In this, we perform the task of getting the matching keys with values using get() and setdefault is used to construct empty list for mapping." }, { "code": null, "e": 928, "s": 920, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Cross mapping of Two dictionary value lists# Using loop + setdefault() + extend() # initializing dictionariestest_dict1 = {\"Gfg\" : [4, 7], \"Best\" : [8, 6], \"is\" : [9, 3]}test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]} # printing original listsprint(\"The original dictionary 1 is : \" + str(test_dict1))print(\"The original dictionary 2 is : \" + str(test_dict2)) res = {} # getting all values of first dictionaryfor key, val in test_dict1.items(): for key1 in val: # getting result with default value list and extending # according to value obtained from get() res.setdefault(key, []).extend(test_dict2.get(key1, [])) # printing resultprint(\"The constructed dictionary : \" + str(res))", "e": 1702, "s": 928, "text": null }, { "code": null, "e": 1942, "s": 1702, "text": "The original dictionary 1 is : {'Gfg': [4, 7], 'Best': [8, 6], 'is': [9, 3]}\nThe original dictionary 2 is : {6: [15, 9], 8: [6, 3], 7: [9, 8], 9: [10, 11]}\nThe constructed dictionary : {'Gfg': [9, 8], 'Best': [6, 3, 15, 9], 'is': [10, 11]}" }, { "code": null, "e": 2006, "s": 1942, "text": "Method #2 : Using list comprehension + dictionary comprehension" }, { "code": null, "e": 2198, "s": 2006, "text": "This is one more way in which this problem can be solved. In this, we extract all the mapping using list comprehension and then construct new dictionary by cross-mapping the extracted values." 
}, { "code": null, "e": 2206, "s": 2198, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Cross mapping of Two dictionary value lists# Using list comprehension + dictionary comprehension # initializing dictionariestest_dict1 = {\"Gfg\" : [4, 7], \"Best\" : [8, 6], \"is\" : [9, 3]}test_dict2 = {6 : [15, 9], 8 : [6, 3], 7 : [9, 8], 9 : [10, 11]} # printing original listsprint(\"The original dictionary 1 is : \" + str(test_dict1))print(\"The original dictionary 2 is : \" + str(test_dict2)) # using internal and external comprehension to# solve problemres = {key: [j for i in val if i in test_dict2 for j in test_dict2[i]] for key, val in test_dict1.items()} # printing resultprint(\"The constructed dictionary : \" + str(res))", "e": 2880, "s": 2206, "text": null }, { "code": null, "e": 3120, "s": 2880, "text": "The original dictionary 1 is : {'Gfg': [4, 7], 'Best': [8, 6], 'is': [9, 3]}\nThe original dictionary 2 is : {6: [15, 9], 8: [6, 3], 7: [9, 8], 9: [10, 11]}\nThe constructed dictionary : {'Gfg': [9, 8], 'Best': [6, 3, 15, 9], 'is': [10, 11]}" }, { "code": null, "e": 3129, "s": 3120, "text": "sweetyty" }, { "code": null, "e": 3140, "s": 3129, "text": "vinayedula" }, { "code": null, "e": 3167, "s": 3140, "text": "Python dictionary-programs" }, { "code": null, "e": 3188, "s": 3167, "text": "Python list-programs" }, { "code": null, "e": 3195, "s": 3188, "text": "Python" }, { "code": null, "e": 3211, "s": 3195, "text": "Python Programs" } ]
Cucumber - Java Testing
To run Cucumber test with Java, following are the steps.

Step 1 − Install Java −

Download jdk and jre from http://www.oracle.com/technetwork/java/javase/downloads/index.html
Accept license agreement.
Install JDK and JRE.
Set environment variable as shown in the following picture.

Step 2 − Install Eclipse IDE −

Make sure JAVA is installed on your machine.
Download Eclipse from https://eclipse.org/downloads/
Unzip it and Eclipse is installed.

Step 3 − Install Maven −

Download Maven − https://maven.apache.org/download.cgi
Unzip the file and remember the location.
Create environment variable MAVEN_HOME as shown in the following image.
Edit Path variable and include Maven.
Download MAVEN plugin from Eclipse −
Open Eclipse.
Go to Help → Eclipse Marketplace → Search maven → Maven Integration for Eclipse → INSTALL

Step 4 − Configure Cucumber with Maven.

Create a Maven project.
Go to File → New → Others → Maven → Maven Project → Next.
Provide group Id (group Id will identify your project uniquely across all projects).
Provide artifact Id (artifact Id is the name of the jar without version. You can choose any name which is in lowercase).
Click on Finish.

Step 5 − Open pom.xml −

Go to the package explorer on the left hand side of Eclipse.
Expand the project CucumberTest.
Locate pom.xml file.
Right-click and select the option, Open with “Text Editor”.

Step 6 − Add dependency for Selenium − This will indicate to Maven which Selenium jar files are to be downloaded from the central repository to the local repository.

Open pom.xml in edit mode, create dependencies tag (<dependencies></dependencies>), inside the project tag.
Inside the dependencies tag, create dependency tag. (<dependency></dependency>)
Provide the following information within the dependency tag.
<dependency>
   <groupId>org.seleniumhq.selenium</groupId>
   <artifactId>selenium-java</artifactId>
   <version>2.47.1</version>
</dependency>

Step 7 − Add dependency for Cucumber-Java − This will indicate to Maven which Cucumber files are to be downloaded from the central repository to the local repository.

Create one more dependency tag.
Provide the following information within the dependency tag.

<dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-java</artifactId>
   <version>1.0.2</version>
   <scope>test</scope>
</dependency>

Step 8 − Add dependency for Cucumber-JUnit − This will indicate to Maven which Cucumber JUnit files are to be downloaded from the central repository to the local repository.

Create one more dependency tag.
Provide the following information within the dependency tag.

<dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-junit</artifactId>
   <version>1.0.2</version>
   <scope>test</scope>
</dependency>

Step 9 − Add dependency for JUnit − This will indicate to Maven which JUnit files are to be downloaded from the central repository to the local repository.

Create one more dependency tag.
Provide the following information within the dependency tag.

<dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.10</version>
   <scope>test</scope>
</dependency>

Step 10 − Verify binaries.

Once pom.xml is edited successfully, save it.
Go to Project → Clean − It will take a few minutes.
You will be able to see a Maven repository.

Step 11 − Create a package under src/test/java named as cucumberJava.

Step 12 − Create feature file

Select and right-click on the package outline.
Click on ‘New’ file.
Give the file a name such as cucumberJava.feature.
Write the following text within the file and save it.

Feature: CucumberJava
Scenario: Login functionality exists
Given I have open the browser
When I open Facebook website
Then Login button should exits

Step 13 − Create step definition file −

Select and right-click on the package outline.
Click on ‘New’ file.
Give the file a name such as cucumberJava.java (the file name must match the public class defined in the next step).
Write the following text within the file and save it.
package cucumberJava;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import cucumber.annotation.en.Given;
import cucumber.annotation.en.Then;
import cucumber.annotation.en.When;

public class cucumberJava {
   WebDriver driver = null;

   @Given("^I have open the browser$")
   public void openBrowser() {
      driver = new FirefoxDriver();
   }

   @When("^I open Facebook website$")
   public void goToFacebook() {
      driver.navigate().to("https://www.facebook.com/");
   }

   @Then("^Login button should exits$")
   public void loginButton() {
      if (driver.findElement(By.id("u_0_v")).isEnabled()) {
         System.out.println("Test 1 Pass");
      } else {
         System.out.println("Test 1 Fail");
      }
      driver.close();
   }
}

Note − The package name must match the package created in Step 11 (cucumberJava), so that the runner class below can locate the step definitions.

Step 14 − Create a runner class file.

Select and right-click on the package outline.
Click on ‘New’ file.
Give the file name as runTest.java.
Write the following text within the file and save it.

package cucumberJava;

import org.junit.runner.RunWith;
import cucumber.junit.Cucumber;

@RunWith(Cucumber.class)
@Cucumber.Options(format = {"pretty", "html:target/cucumber"})
public class runTest { }

Step 15 − Run the test using option −

Select runTest.java file from the package explorer.
Right-click and select the option, Run as.
Select JUnit test.

You will observe the following things upon execution −

An instance of Firefox web browser will open.
It will open the Facebook login page on the browser.
It will detect the login button.
The browser will close.
In the JUnit window, you will see a scenario with green tick mark, which indicates success of the test execution.
[ { "code": null, "e": 2019, "s": 1962, "text": "To run Cucumber test with Java, following are the steps." }, { "code": null, "e": 2043, "s": 2019, "text": "Step 1 − Install Java −" }, { "code": null, "e": 2136, "s": 2043, "text": "Download jdk and jre from\nhttp://www.oracle.com/technetwork/java/javase/downloads/index.html" }, { "code": null, "e": 2162, "s": 2136, "text": "Download jdk and jre from" }, { "code": null, "e": 2229, "s": 2162, "text": "http://www.oracle.com/technetwork/java/javase/downloads/index.html" }, { "code": null, "e": 2255, "s": 2229, "text": "Accept license agreement." }, { "code": null, "e": 2281, "s": 2255, "text": "Accept license agreement." }, { "code": null, "e": 2302, "s": 2281, "text": "Install JDK and JRE." }, { "code": null, "e": 2323, "s": 2302, "text": "Install JDK and JRE." }, { "code": null, "e": 2383, "s": 2323, "text": "Set environment variable as shown in the following picture." }, { "code": null, "e": 2443, "s": 2383, "text": "Set environment variable as shown in the following picture." }, { "code": null, "e": 2474, "s": 2443, "text": "Step 2 − Install Eclipse IDE −" }, { "code": null, "e": 2519, "s": 2474, "text": "Make sure JAVA is installed on your machine." }, { "code": null, "e": 2564, "s": 2519, "text": "Make sure JAVA is installed on your machine." }, { "code": null, "e": 2617, "s": 2564, "text": "Download Eclipse from https://eclipse.org/downloads/" }, { "code": null, "e": 2670, "s": 2617, "text": "Download Eclipse from https://eclipse.org/downloads/" }, { "code": null, "e": 2699, "s": 2670, "text": "Unzip and Eclipse installed." }, { "code": null, "e": 2728, "s": 2699, "text": "Unzip and Eclipse installed." }, { "code": null, "e": 2753, "s": 2728, "text": "Step 3 − Install Maven −" }, { "code": null, "e": 2807, "s": 2753, "text": "Download Maven −https://maven.apache.org/download.cgi" }, { "code": null, "e": 2861, "s": 2807, "text": "Download Maven −https://maven.apache.org/download.cgi" }, { "code": null, "e": 2903, "s": 2861, "text": "Unzip the file and remember the location." }, { "code": null, "e": 2945, "s": 2903, "text": "Unzip the file and remember the location." }, { "code": null, "e": 3017, "s": 2945, "text": "Create environment variable MAVEN_HOME as shown in the following image." }, { "code": null, "e": 3089, "s": 3017, "text": "Create environment variable MAVEN_HOME as shown in the following image." }, { "code": null, "e": 3127, "s": 3089, "text": "Edit Path variable and include Maven." }, { "code": null, "e": 3165, "s": 3127, "text": "Edit Path variable and include Maven." }, { "code": null, "e": 3306, "s": 3165, "text": "Download MAVEN plugin from Eclipse\n\nOpen Eclipse.\nGot to Help → Eclipse Marketplace → Search maven → Maven Integration for Eclipse →INSTALL\n" }, { "code": null, "e": 3341, "s": 3306, "text": "Download MAVEN plugin from Eclipse" }, { "code": null, "e": 3355, "s": 3341, "text": "Open Eclipse." }, { "code": null, "e": 3369, "s": 3355, "text": "Open Eclipse." }, { "code": null, "e": 3459, "s": 3369, "text": "Got to Help → Eclipse Marketplace → Search maven → Maven Integration for Eclipse →INSTALL" }, { "code": null, "e": 3549, "s": 3459, "text": "Got to Help → Eclipse Marketplace → Search maven → Maven Integration for Eclipse →INSTALL" }, { "code": null, "e": 3589, "s": 3549, "text": "Step 4 − Configure Cucumber with Maven." 
}, { "code": null, "e": 3896, "s": 3589, "text": "Create a Maven project.\n\nGo to File → New → Others → Maven → Maven Project → Next.\nProvide group Id (group Id will identify your project uniquely across all projects).\nProvide artifact Id (artifact Id is the name of the jar without version. You can choose any name which is in lowercase).\nClick on Finish.\n" }, { "code": null, "e": 3920, "s": 3896, "text": "Create a Maven project." }, { "code": null, "e": 3978, "s": 3920, "text": "Go to File → New → Others → Maven → Maven Project → Next." }, { "code": null, "e": 4036, "s": 3978, "text": "Go to File → New → Others → Maven → Maven Project → Next." }, { "code": null, "e": 4121, "s": 4036, "text": "Provide group Id (group Id will identify your project uniquely across all projects)." }, { "code": null, "e": 4206, "s": 4121, "text": "Provide group Id (group Id will identify your project uniquely across all projects)." }, { "code": null, "e": 4327, "s": 4206, "text": "Provide artifact Id (artifact Id is the name of the jar without version. You can choose any name which is in lowercase)." }, { "code": null, "e": 4448, "s": 4327, "text": "Provide artifact Id (artifact Id is the name of the jar without version. You can choose any name which is in lowercase)." }, { "code": null, "e": 4465, "s": 4448, "text": "Click on Finish." }, { "code": null, "e": 4482, "s": 4465, "text": "Click on Finish." }, { "code": null, "e": 4506, "s": 4482, "text": "Step 5 − Open pom.xml −" }, { "code": null, "e": 4567, "s": 4506, "text": "Go to the package explorer on the left hand side of Eclipse." }, { "code": null, "e": 4628, "s": 4567, "text": "Go to the package explorer on the left hand side of Eclipse." }, { "code": null, "e": 4661, "s": 4628, "text": "Expand the project CucumberTest." }, { "code": null, "e": 4694, "s": 4661, "text": "Expand the project CucumberTest." }, { "code": null, "e": 4715, "s": 4694, "text": "Locate pom.xml file." }, { "code": null, "e": 4736, "s": 4715, "text": "Locate pom.xml file." }, { "code": null, "e": 4796, "s": 4736, "text": "Right-click and select the option, Open with “Text Editor”." }, { "code": null, "e": 4856, "s": 4796, "text": "Right-click and select the option, Open with “Text Editor”." }, { "code": null, "e": 5020, "s": 4856, "text": "Step 6 − Add dependency for Selenium − This will indicate Maven, which Selenium jar files are to be downloaded from the central repository to the local repository." }, { "code": null, "e": 5131, "s": 5020, "text": "Open pom.xml is in edit mode, create dependencies tag (<dependencies></dependencies>), inside the project tag." }, { "code": null, "e": 5242, "s": 5131, "text": "Open pom.xml is in edit mode, create dependencies tag (<dependencies></dependencies>), inside the project tag." }, { "code": null, "e": 5322, "s": 5242, "text": "Inside the dependencies tag, create dependency tag. (<dependency></dependency>)" }, { "code": null, "e": 5402, "s": 5322, "text": "Inside the dependencies tag, create dependency tag. (<dependency></dependency>)" }, { "code": null, "e": 5463, "s": 5402, "text": "Provide the following information within the dependency tag." }, { "code": null, "e": 5524, "s": 5463, "text": "Provide the following information within the dependency tag." 
}, { "code": null, "e": 5672, "s": 5524, "text": "<dependency> \n <groupId>org.seleniumhq.selenium</groupId> \n <artifactId>selenium-java</artifactId> \n <version>2.47.1</version> \n</dependency>" }, { "code": null, "e": 5837, "s": 5672, "text": "Step 7 − Add dependency for Cucumber-Java − This will indicate Maven, which Cucumber files are to be downloaded from the central repository to the local repository." }, { "code": null, "e": 5869, "s": 5837, "text": "Create one more dependency tag." }, { "code": null, "e": 5901, "s": 5869, "text": "Create one more dependency tag." }, { "code": null, "e": 5958, "s": 5901, "text": "Provide following information within the dependency tag." }, { "code": null, "e": 6015, "s": 5958, "text": "Provide following information within the dependency tag." }, { "code": null, "e": 6173, "s": 6015, "text": "<dependency> \n <groupId>info.cukes</groupId> \n <artifactId>cucumber-java</artifactId> \n <version>1.0.2</version> \n <scope>test</scope> \n</dependency>" }, { "code": null, "e": 6345, "s": 6173, "text": "Step 8 − Add dependency for Cucumber-JUnit − This will indicate Maven, which Cucumber JUnit files are to be downloaded from the central repository to the local repository." }, { "code": null, "e": 6377, "s": 6345, "text": "Create one more dependency tag." }, { "code": null, "e": 6409, "s": 6377, "text": "Create one more dependency tag." }, { "code": null, "e": 6470, "s": 6409, "text": "Provide the following information within the dependency tag." }, { "code": null, "e": 6531, "s": 6470, "text": "Provide the following information within the dependency tag." }, { "code": null, "e": 6690, "s": 6531, "text": "<dependency> \n <groupId>info.cukes</groupId> \n <artifactId>cucumber-junit</artifactId> \n <version>1.0.2</version> \n <scope>test</scope> \n</dependency>" }, { "code": null, "e": 6843, "s": 6690, "text": "Step 9− Add dependency for JUnit − This will indicate Maven, which JUnit files are to be downloaded from the central repository to the local repository." }, { "code": null, "e": 6875, "s": 6843, "text": "Create one more dependency tag." }, { "code": null, "e": 6907, "s": 6875, "text": "Create one more dependency tag." }, { "code": null, "e": 6968, "s": 6907, "text": "Provide the following information within the dependency tag." }, { "code": null, "e": 7029, "s": 6968, "text": "Provide the following information within the dependency tag." }, { "code": null, "e": 7173, "s": 7029, "text": "<dependency> \n <groupId>junit</groupId> \n <artifactId>junit</artifactId> \n <version>4.10</version> \n <scope>test</scope> \n</dependency>" }, { "code": null, "e": 7200, "s": 7173, "text": "Step 10 − Verify binaries." }, { "code": null, "e": 7246, "s": 7200, "text": "Once pom.xml is edited successfully, save it." }, { "code": null, "e": 7292, "s": 7246, "text": "Once pom.xml is edited successfully, save it." }, { "code": null, "e": 7344, "s": 7292, "text": "Go to Project → Clean − It will take a few minutes." }, { "code": null, "e": 7396, "s": 7344, "text": "Go to Project → Clean − It will take a few minutes." }, { "code": null, "e": 7440, "s": 7396, "text": "You will be able to see a Maven repository." }, { "code": null, "e": 7484, "s": 7440, "text": "You will be able to see a Maven repository." }, { "code": null, "e": 7554, "s": 7484, "text": "Step 11 − Create a package under src/test/java named as cucumberJava." 
}, { "code": null, "e": 7584, "s": 7554, "text": "Step 12 − Create feature file" }, { "code": null, "e": 7631, "s": 7584, "text": "Select and right-click on the package outline." }, { "code": null, "e": 7678, "s": 7631, "text": "Select and right-click on the package outline." }, { "code": null, "e": 7699, "s": 7678, "text": "Click on ‘New’ file." }, { "code": null, "e": 7720, "s": 7699, "text": "Click on ‘New’ file." }, { "code": null, "e": 7771, "s": 7720, "text": "Give the file a name such as cucumberJava.feature." }, { "code": null, "e": 7822, "s": 7771, "text": "Give the file a name such as cucumberJava.feature." }, { "code": null, "e": 8025, "s": 7822, "text": "Write the following text within the file and save it.\nFeature: CucumberJava\nScenario: Login functionality exists\nGiven I have open the browser\nWhen I open Facebook website\nThen Login button should exits" }, { "code": null, "e": 8079, "s": 8025, "text": "Write the following text within the file and save it." }, { "code": null, "e": 8101, "s": 8079, "text": "Feature: CucumberJava" }, { "code": null, "e": 8138, "s": 8101, "text": "Scenario: Login functionality exists" }, { "code": null, "e": 8168, "s": 8138, "text": "Given I have open the browser" }, { "code": null, "e": 8197, "s": 8168, "text": "When I open Facebook website" }, { "code": null, "e": 8228, "s": 8197, "text": "Then Login button should exits" }, { "code": null, "e": 8268, "s": 8228, "text": "Step 13 − Create step definition file −" }, { "code": null, "e": 8315, "s": 8268, "text": "Select and right-click on the package outline." }, { "code": null, "e": 8362, "s": 8315, "text": "Select and right-click on the package outline." }, { "code": null, "e": 8383, "s": 8362, "text": "Click on ‘New’ file." }, { "code": null, "e": 8404, "s": 8383, "text": "Click on ‘New’ file." }, { "code": null, "e": 8455, "s": 8404, "text": "Give the file name a name such as annotation.java." }, { "code": null, "e": 8506, "s": 8455, "text": "Give the file name a name such as annotation.java." }, { "code": null, "e": 8560, "s": 8506, "text": "Write the following text within the file and save it." }, { "code": null, "e": 8614, "s": 8560, "text": "Write the following text within the file and save it." }, { "code": null, "e": 9468, "s": 8614, "text": "package CucumberJava; \n\nimport org.openqa.selenium.By; \nimport org.openqa.selenium.WebDriver; \nimport org.openqa.selenium.firefox.FirefoxDriver; \n\nimport cucumber.annotation.en.Given; \nimport cucumber.annotation.en.Then; \nimport cucumber.annotation.en.When; \n\npublic class cucumberJava { \n WebDriver driver = null; \n\t\n @Given(\"^I have open the browser$\") \n public void openBrowser() { \n driver = new FirefoxDriver(); \n } \n\t\n @When(\"^I open Facebook website$\") \n public void goToFacebook() { \n driver.navigate().to(\"https://www.facebook.com/\"); \n } \n\t\n @Then(\"^Login button should exits$\") \n public void loginButton() { \n if(driver.findElement(By.id(\"u_0_v\")).isEnabled()) { \n System.out.println(\"Test 1 Pass\"); \n } else { \n System.out.println(\"Test 1 Fail\"); \n } \n driver.close(); \n } \n}" }, { "code": null, "e": 9506, "s": 9468, "text": "Step 14 − Create a runner class file." }, { "code": null, "e": 9553, "s": 9506, "text": "Select and right-click on the package outline." }, { "code": null, "e": 9600, "s": 9553, "text": "Select and right-click on the package outline." }, { "code": null, "e": 9621, "s": 9600, "text": "Click on ‘New’ file." }, { "code": null, "e": 9642, "s": 9621, "text": "Click on ‘New’ file." 
}, { "code": null, "e": 9678, "s": 9642, "text": "Give the file name as runTest.java." }, { "code": null, "e": 9714, "s": 9678, "text": "Give the file name as runTest.java." }, { "code": null, "e": 9768, "s": 9714, "text": "Write the following text within the file and save it." }, { "code": null, "e": 9822, "s": 9768, "text": "Write the following text within the file and save it." }, { "code": null, "e": 10030, "s": 9822, "text": "package cucumberJava;\n \nimport org.junit.runner.RunWith; \nimport cucumber.junit.Cucumber; \n\n@RunWith(Cucumber.class) \[email protected](format = {\"pretty\", \"html:target/cucumber\"}) \n\npublic class runTest { }" }, { "code": null, "e": 10068, "s": 10030, "text": "Step 15 − Run the test using option −" }, { "code": null, "e": 10120, "s": 10068, "text": "Select runTest.java file from the package explorer." }, { "code": null, "e": 10172, "s": 10120, "text": "Select runTest.java file from the package explorer." }, { "code": null, "e": 10215, "s": 10172, "text": "Right-click and select the option, Run as." }, { "code": null, "e": 10258, "s": 10215, "text": "Right-click and select the option, Run as." }, { "code": null, "e": 10277, "s": 10258, "text": "Select JUnit test." }, { "code": null, "e": 10296, "s": 10277, "text": "Select JUnit test." }, { "code": null, "e": 10351, "s": 10296, "text": "You will observe the following things upon execution −" }, { "code": null, "e": 10397, "s": 10351, "text": "An instance of Firefox web browser will open." }, { "code": null, "e": 10443, "s": 10397, "text": "An instance of Firefox web browser will open." }, { "code": null, "e": 10496, "s": 10443, "text": "It will open the Facebook login page on the browser." }, { "code": null, "e": 10549, "s": 10496, "text": "It will open the Facebook login page on the browser." }, { "code": null, "e": 10582, "s": 10549, "text": "It will detect the login button." }, { "code": null, "e": 10615, "s": 10582, "text": "It will detect the login button." }, { "code": null, "e": 10639, "s": 10615, "text": "The browser will close." }, { "code": null, "e": 10663, "s": 10639, "text": "The browser will close." }, { "code": null, "e": 10777, "s": 10663, "text": "In the JUnit window, you will see a scenario with green tick mark, which indicates success of the test execution." }, { "code": null, "e": 10891, "s": 10777, "text": "In the JUnit window, you will see a scenario with green tick mark, which indicates success of the test execution." }, { "code": null, "e": 10898, "s": 10891, "text": " Print" }, { "code": null, "e": 10909, "s": 10898, "text": " Add Notes" } ]
Don’t be Afraid of Nonparametric Topic Models (Part 2: Python) | by Eduardo Coronado Sroka | Towards Data Science
This article builds upon high-level foundational material I covered in my previous article and describes how to implement a Hierarchical Dirichlet Process model for topic modeling in Python.

Let us all agree, it’s one thing to learn and talk about cool, new methods and another to actually implement/test them with data. Mostly because learning about them doesn’t come with the typical frustrations that arise with bugs, weird errors, etc. However, I personally believe that what comes out of all that tinkering is a deeper understanding of the concept itself. And guess what? Bayesian nonparametric (BNP) methods such as Hierarchical Dirichlet Processes (HDP) aren’t the exception.

Before you think I’m about to throw you in at the deep end of the coding pool, don’t fret. I wrote this article with the overall goal that by the end you can confidently implement an HDP model that drives value in your projects (or allows you to humble brag to your friends). Here’s what I’ll cover:

A step-by-step tutorial on how to implement an HDP model using an existing Python library
How its performance compares to Latent Dirichlet Allocation (LDA) models
Some key considerations when implementing an HDP, pitfalls and potential fixes

I’ll be using the 20 Newsgroup dataset available through sklearn.datasets. This is a great toy dataset given the collection of approximately 20,000 documents is split up almost evenly across 20 different topics (newsgroups). Thus, in some sense we already know the true topics the models should infer.

The article is broken down into the following sections:

Data Preprocessing
HDP Model Training and Evaluation
Model Comparison
Cautionary Tales

Before we dive in, I’d recommend you install the following dependencies: spaCy, nltk, gensim, tomotopy, plotnine, and wordcloud. You can either install each individually using pip or you can use the requirements file I created to make your life easier, as follows:

pip3 install -r requirements.txt

If you haven’t preprocessed text data for NLP projects before, I highly recommend you check out this step-by-step tutorial beforehand.

Let’s start by first loading the 20 Newsgroup data set, specifying that we only want the train subset for our model.

from sklearn.datasets import fetch_20newsgroups

# Read in train subset (11,314 observations)
news = fetch_20newsgroups(subset='train')

This news instance contains both text data (news.data) and labels (news.target and news.target_names), which in pandas might look like a dataframe with each document’s text next to its numeric label and newsgroup name.

Rather than boring you with a step-by-step on how I preprocessed the content data with standard methods (e.g. tokenization, stop word removal, etc.), I will instead summarize these steps below. As we move forward in this section (and in future projects) I’d like to ask that you keep the following in mind:

Do these preprocessing steps make sense based on the end goal(s) and modeling method of choice?

This can easily be overlooked given standard preprocessing methods work quite well in many applications. For example, in our case we want to remove noise generated by stop words (on, and, etc.) so that our models can better capture latent topics that resemble the true topics. However, is this always the case? No. [Check out this article on why this isn’t a good idea in sentiment analysis]

First, I removed any special characters inherent to this data set such as @ and \n, as well as single quotes ' using regular expression substitutions.
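As a rough illustration of that cleanup step, the substitutions could be done with re.sub along these lines; the exact patterns below are my own assumptions, since the original regular expressions are not shown in the article.

import re

def remove_special_chars(doc):
    # illustrative patterns only -- the article's actual regexes are not shown
    doc = re.sub(r"\S*@\S*\s?", "", doc)  # drop tokens containing '@' (e.g. e-mail handles)
    doc = re.sub(r"\s+", " ", doc)        # collapse newlines and extra whitespace
    doc = re.sub(r"'", "", doc)           # remove single quotes
    return doc

# apply to the raw training documents loaded above
cleaned_docs = [remove_special_chars(doc) for doc in news.data]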
For tokenization I used the function gensim.simple_preprocess, which very efficiently tokenizes each document (i.e. splits the text into individual words). To remove any potential accents, I ran it with the deacc=True parameter.

Bigrams are also helpful to build, given they account for word co-occurrences. For example, in our data this would generate ['oil_leak'] instead of ['oil', 'leak']. You can use gensim’s built-in Phrases and Phraser functions to achieve this.

Inferring latent topics will be easier if we first prune the documents of non-informative words (i.e. stop words). I downloaded nltk’s English stop words and added a few simple ones unique to this data set: ['from', 'subject', 're', 'edu', 'use'].

Given the data set is relatively small (i.e. ~10k observations), I implemented a lemmatization scheme for nouns, verbs, adjectives and adverbs using spaCy and part-of-speech (POS) tags. Simply put, I removed any inflectional endings and returned the base/dictionary word as shown below.

"A letter has been written asking him to be released"

[ex. Original ==> Lemmatized, POS tag]
A ==> a, DET
letter ==> letter, NOUN
has ==> have, AUX
been ==> be, AUX
written ==> write, VERB
asking ==> ask, VERB
him ==> -PRON-, PRON
to ==> to, PART
be ==> be, AUX
released ==> release, VERB

Although lemmatization provides better tokens, it does so at the expense of potentially taking longer than you’d like to sit waiting for it to finish. Thus, an alternative when time is of the essence, or if you have a large data set, is stemming, which uses crude heuristics to chop off the ends of words in hopes of getting the base word. (For example, in this data set lemmatization took about 4 mins while stemming took only 14 secs.)

I built a custom script, newsgrp_preprocess (linked in the original post), that incorporates all the above steps and outputs ready-to-use data for our HDP model with over 1M tokens (i.e. word_list_lemmatized). Additionally, it tidies up the information in the news instance and outputs it as a convenient pandas dataframe (news_df).

> from scripts.newsgrp_preprocess import run_preprocess
> news_df, word_list_lemmatized = run_preprocess(news)

# Showing first document, first seven tokens
> word_list_lemmatized[0][:7]
> ['where', 'thing', 'car', 'nntp_poste', 'host', 'park', 'line']
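The actual newsgrp_preprocess script is not reproduced here, but a minimal sketch of the steps it wraps (tokenization, bigram detection, stop-word removal and spaCy lemmatization) could look roughly like the following. Function names, parameter values and the keep_pos set are illustrative assumptions on my part, not the article's code.

import gensim
import spacy
from nltk.corpus import stopwords  # assumes nltk.download('stopwords') has been run

stop_words = stopwords.words('english') + ['from', 'subject', 're', 'edu', 'use']
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

def rough_preprocess(docs):
    # tokenize (and strip accents) with gensim's simple_preprocess
    tokenized = [gensim.utils.simple_preprocess(doc, deacc=True) for doc in docs]

    # learn bigrams such as 'oil_leak' from word co-occurrences
    bigram = gensim.models.Phrases(tokenized, min_count=5, threshold=100)
    bigram_mod = gensim.models.phrases.Phraser(bigram)
    tokenized = [bigram_mod[doc] for doc in tokenized]

    # drop stop words, then lemmatize nouns, verbs, adjectives and adverbs
    keep_pos = {'NOUN', 'VERB', 'ADJ', 'ADV'}
    lemmatized = []
    for doc in tokenized:
        parsed = nlp(" ".join(w for w in doc if w not in stop_words))
        lemmatized.append([tok.lemma_ for tok in parsed if tok.pos_ in keep_pos])
    return lemmatized

# e.g. word_list_lemmatized = rough_preprocess(cleaned_docs)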
If you’ve had any experience implementing HDPs, at this point you might ask, “But why didn’t you use the gensim.HdpModel function?” Well, I tried (trust me, real hard). Remember the frustrations I mentioned at the beginning? Well, for me this was one of them, because even after extensive tuning I wasn’t able to get gensim’s method to generate quality results with this dataset.

Training a tomotopy model is quite simple. First you initialize a model object by setting some parameters like how the model will weight tokens, thresholds related to token frequency, and the HDP model’s concentration parameters alpha and gamma (see the snippet below). For this dataset, I restricted the model to using only words that appeared in at least 5 documents with the min_cf argument, while excluding the 7 most frequent words with rm_top. Similarly, I set the concentration parameters gamma=1 and alpha=0.1 given I assume documents share many topics while individual documents only talk about a few topics. I initialized the number of topics with initial_k=10, which acts as a sort of prior. I chose this given the data’s 20 topics are grouped in 6 overarching groups (e.g. recommendation under rec.car, rec.bike) and I assumed the misc group could have some additional topics that should be accounted for.

import tomotopy as tp

term_weight = tp.TermWeight.ONE
hdp = tp.HDPModel(tw=term_weight, min_cf=5, rm_top=7, gamma=1, alpha=0.1, initial_k=10, seed=99999)

Once we have the hdp object instantiated, we can add the documents it will use to train the model and run the training iterations. (In case it’s helpful, I’ve automated these steps with a custom function train_HDPModel; a minimal sketch of these steps is included below.) The training output provides useful early-diagnostic information. For example, seeing the per-word log-likelihood increasing tells us the model is learning adequately.

Extracting topics from the model isn’t as straightforward as in other packages, so I’ve built a custom script get_hdp_topics to ease this process.

The following section may sound tedious; however, I highly encourage you to walk through it. It covers best practices on how to evaluate these kinds of models.

Given topic models are unsupervised methods, we are unable to use common performance metrics (e.g. RMSE) to evaluate them. Instead, we use a metric called coherence, which provides an objective measure of whether words grouped together as a topic make sense. There are multiple ways to compute this metric but basically, a topic is said to have high coherence if the words defining a topic have a high probability of appearing together (co-occurring) across documents. In general, the CV method is preferred given it takes into account how close words appear together through a sliding window to compute these probabilities.

Given we know the topics in our data set, we can evaluate two things:
Whether the model’s topics represent the true topics (coherence)
How well the model infers the topics of unseen (out-of-sample) documents

Given tomotopy offers three distinct token weighting schemes, I tested each to compare their performance based on the above criteria. First, I compared their coherence using gensim.CoherenceModel with coherence='c_v' using a custom script. The CV metric scores range from 0 to 1 (good topic coherence scores typically range between 0.5 and 0.65). For this dataset, the Inverse Document Frequency model seems to perform the best based on topic coherence.
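Since the original code figures for these steps are not reproduced in this text, here is the minimal sketch promised above of the add-documents / train / evaluate loop. It is not the author’s train_HDPModel or get_hdp_topics code (those are in the linked repository); it simply chains tomotopy’s documented add_doc / train / ll_per_word / get_topic_words calls with gensim’s CoherenceModel, so treat the exact argument names as assumptions to verify against the library versions you install.

import tomotopy as tp
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

def train_hdp_sketch(docs, iterations=1000, chunk=100):
    # docs = list of token lists (e.g. word_list_lemmatized); IDF weighting shown
    # here because it ended up performing best for this dataset.
    mdl = tp.HDPModel(tw=tp.TermWeight.IDF, min_cf=5, rm_top=7,
                      gamma=1, alpha=0.1, initial_k=10, seed=99999)
    for words in docs:
        mdl.add_doc(words)
    mdl.burn_in = 100          # discard the early, unstable samples
    mdl.train(0)               # initialize before the main loop
    for i in range(0, iterations, chunk):
        mdl.train(chunk)
        print(f'Iter: {i + chunk}\tLog-likelihood per word: {mdl.ll_per_word:.4f}')
    return mdl

def hdp_coherence_sketch(mdl, docs, top_n=10):
    # keep only the topics the HDP sampler still considers "alive"
    topics = [[w for w, _ in mdl.get_topic_words(k, top_n=top_n)]
              for k in range(mdl.k) if mdl.is_live_topic(k)]
    dictionary = Dictionary(docs)
    cm = CoherenceModel(topics=topics, texts=docs,
                        dictionary=dictionary, coherence='c_v')
    return cm.get_coherence()

# hdp_model = train_hdp_sketch(word_list_lemmatized)
# print(hdp_coherence_sketch(hdp_model, word_list_lemmatized))

Filtering on is_live_topic matters for HDP specifically, because the sampler keeps slots for topics that have effectively died out during training.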
If your goal is to only understand some latent topics from a specific dataset, then great — you’re done! Choose the model with highest coherence. However, be aware this does not necessarily mean the model will generalize well (i.e. accurately assign topics to unseen documents). In our case, we have a labeled test set we can use to verify how the models generalize. (If you don’t have labeled data you can do something similar as a sanity check. Grab an unseen document to predict on, select the most dominant topic assigned and see if that assignment makes sense based on the text.) gets predicted to one You can also use the training set as a sanity check if you’d like even if it’s unlabeled data, just predict on a document and see if it makes sense based on the content.) The word clouds below provide a great example. Using the same steps as before, we get a test set (subset='test') and evaluate how well each model generalizes. We see that the model with the highest coherence (IDF) doesn’t necessarily assign the “correct” topic (rec.autos), it instead seems to think this document talks about computers. Although this was a very specific example, I chose to continue with the HDP IDF model given it tended to produce topics that resembled the true labels and assigned the right topics to test documents more often than the others. As a quick side note, topic labels can be subjective (i.e. one person might interpret the words as referring to a hardware topic while another to a software topic). To avoid that tomotopy offers an interesting method that objectively labels the topics. You can find some examples here. Congratulations! If you’ve made it this far you pretty much know how to implement an HDP model in Python! However, how does it compare to LDA? Let’s compare our HDP model versus a MALLET LDA model (fun!). I used this version instead of the default gensim LDA because it allows an apples-to-apples comparison (i.e. it also uses a Collapsed Gibbs sampler). In this case gensim does have an easy wrapper LdaMallet to quickly implement this model once you’ve downloaded the MALLET binary. To understand how to run it I suggest you look at the following Jupyter notebook . Using the same dataset, I compared several LDA models’ performance as above (i.e. coherence + generalizability) to those of the HDP models. Below (left), we can see that topic coherence increases as we increase the topic parameter in the LDA model. This comparison demonstrates how without specifying topics off-hand HDP models can achieve similar or higher topic coherence than LDA models Similarly, we can see that our best HDP model (IDF) has similar performance to the best LDA model (topics=26) in terms of assigning the correct topic to an unseen documents (right). What is a Data Science article without some good ol’ cautionary tales. Let’s face it every modeling technique has advantages and disadvantages, and there are situations were things can go wrong. Here I just want to share a couple of things to keep in mind when implementing HDP models (or topic models in general). Really think about your preprocessing steps. An easy way to start is to consider what problem you’re aiming to solve is, the data you have, and the model you’ll use. Many times we can just go with the flow and use similar preprocessing steps across projects, however as I mentioned above this might lead to bad outputs. For example, the goal of this project was to implement a model that could learn 20(ish) topics from text data. 
Here it made sense to remove stop words to reduce noise and lemmatize given the data set size. However, if the goal were to do some sentiment analysis then removing stop words wouldn’t have been a good choice. Coherence, as one of many metrics you can build to evaluate topic models, provides an intuitive way to gauge performance. I don’t know about you, but I think sometimes we tend to fixate on statistics (e.g. RMSE or classification for predictive models), especially when they seem to indicate good performance. Here’s a clear example. Remember that we took coherence as a good indicator of performance? Well, this was certainly not true when I used gensim.HdpModel. Since their implementation is based on variational inference, I tested (as one of hundreds of tuning combinations) how changing one of the learning parameters (kappa) would affected the model’s topic coherence and performance on unseen documents. The result was that as I increased kappa the model’s coherence score increased. You might ask, “Well great, that’s what we wanted right?” Yes, but in this case, when you try to test the model on an unseen document you observe that 1) every single document is assigned a the same topic and 2) this topic is so general that you wouldn’t be able to make sense of it. The model had converged to a solution that performed “well” (e.g. a local optima) by the given metric but which nonetheless wasn’t useful. Therefore, if there is one key thing I’d like for you to learn from this article is Before fully trusting a statistic, take a step back to think whether that number makes sense and why it makes sense Previously I mentioned there were two main camps used in Bayesian inference: Monte Carlo methods and variational methods. Choosing which package implementation to use varies greatly on your specific project and goals. However, I believe it boils down to a speed/memory vs. accuracy trade-off. tomotopy models: Fast and accurateAs previously mentioned this packages uses a Collapsed Gibbs sampler. The main advantage is that it inherently produces unbiased and accurate results. However, this method doesn’t scale well (memory costs increase linearly with the number of observations). In our case it made sense but if you were to take a dataset of 1M documents you will probably be waiting a while for it to finish running. gensim models: Faster and scalableThis package uses an online variational inference method which, to keep it simple, allows you to use classic variational inference approximation tools at scale. The main advantages of this method are speed and low memory consumption (memory doesn’t grow linearly with the number of observations, so this is great for large datasets). However, it gains speed at the expense of accuracy given you are approximating a target distribution with usually a “simpler distribution” rather than sampling directly from the target as in Collapsed Gibbs sampling. This not only introduces bias but also might require extensive parameter tuning to get useful results. Below is an example of how two parameters affect the learning rate and thus convergence (you can access the Dash app here). (Given these limitations new methods have been developed to try to address the approximation bias of variational inference.) HDP models are powerful alternatives to LDA models when you don’t wish to specify topics beforehand and there are several packages out there that can help you easily implement them. 
However, remember that some due diligence before, during, and after implementing these models can ensure your results are in line with your initial objectives.

So, now that you’ve learned how to implement an HDP model (and count fluently in Spanish), go ahead and test your newly learned skills!

Wang, Chong, John Paisley, and David M. Blei. “Online variational inference for the hierarchical Dirichlet process.” Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in PMLR. 2011.
Griffiths, Thomas L., and Mark Steyvers. “Finding scientific topics.” Proceedings of the National Academy of Sciences 101, suppl 1 (2004): 5228–5235.
Tomotopy. https://github.com/bab2min/tomotopy. https://doi.org/10.5281/zenodo.3816629
Bryant, Michael, and Erik B. Sudderth. “Truly nonparametric online variational inference for hierarchical Dirichlet processes.” Advances in Neural Information Processing Systems. 2012.

If you liked the article feel free to share! Comment or tweet (@ecoronado92) if you have any questions or you see anything incorrect/questionable. All code and notebooks used in this article can be found here.
How to check if a string contains only lower case letters in Python?
We can check if a string contains only lower case letters using two methods. The first is the built-in string method islower(), which returns True when every cased character in the string is lower case (and there is at least one cased character). For example:

print('Hello world'.islower())
print('hello world'.islower())

OUTPUT

False
True

You can also use a regular expression. Matching against "^[a-z]+$" with re.match(regex, string) checks the stricter condition that the string is made up only of the letters a-z (unlike islower(), it rejects spaces, digits and punctuation). For example,

import re

print(bool(re.match('^[a-z]+$', '123abc')))
print(bool(re.match('^[a-z]+$', 'abc')))

OUTPUT

False
True
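Two edge cases are worth keeping in mind when choosing between the approaches (the values below are illustrative and easy to verify in a Python shell):

print('123'.islower())                               # False - no cased characters at all
print('hello world!'.islower())                      # True  - spaces/punctuation are ignored
print(bool(re.fullmatch('[a-z]+', 'hello world!')))  # False - only the letters a-z are allowed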
AngularJS – isString() method
The isString() method in AngularJS checks whether a reference is a string value or not. This method returns true if the reference passed to the function is a string, otherwise it returns false.

angular.isString(value)

Create a file "isString.html" in your Angular project directory and copy-paste the following code snippet.

<!DOCTYPE html>
<html>
   <head>
      <title>angular.isString()</title>
      <script src= "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.2/angular.min.js">
      </script>
   </head>
   <body ng-app="app" style="text-align:center">
      <h1 style="color:green">
         Welcome to Tutorials Point
      </h1>
      <h2>AngularJS | angular.isString()</h2>

      <div ng-controller="example">
         <b>Name: {{name}}</b>
         <br><br>
         {{isString}}
         <br><br>
         <b>Name: {{name2}}</b>
         <br><br>
         {{isString1}}
         <br><br>
         <b>Name: {{name3}}</b>
         <br><br>
         {{isString2}}
      </div>

      <!-- Script for passing the values and checking... -->
      <script>
         var app = angular.module("app", []);
         app.controller('example',['$scope', function ($scope) {
            // Defining the keys & values
            $scope.name = "SIMPLY LEARNING";
            $scope.name2 = "";
            $scope.name3 = {"name": "tutorialsPoint"};

            $scope.isString = angular.isString($scope.name) == true
               ? "$scope.name is a String."
               : "$scope.name is not a String.";

            $scope.isString1 = angular.isString($scope.name2) == true
               ? "$scope.name2 is a String."
               : "$scope.name2 is not a String.";

            $scope.isString2 = angular.isString($scope.name3) == true
               ? "$scope.name3 is a String."
               : "$scope.name3 is not a String.";
         }]);
      </script>
   </body>
</html>

To run the above code, just go to your file and run it as a normal HTML file. You will see the following output on the browser window.

Observe that name and name2 are strings, whereas name3 is an object, which is why we got the output "$scope.name3 is not a String." for it.
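For a quick mental model (illustrative values, not from the original tutorial), angular.isString simply reports whether the value's type is string:

angular.isString("SIMPLY LEARNING");         // true
angular.isString("");                        // true  - an empty string is still a string
angular.isString(123);                       // false
angular.isString({name: "tutorialsPoint"});  // false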
How to Create a RAID 5 Storage Array with ‘mdadm’ on Ubuntu 16.04
In this article, we will learn how to create a RAID 5 array configuration using the ‘mdadm’ utility.

mdadm is a utility used to create and manage software RAID storage arrays on Linux. It gives administrators great flexibility in managing the individual storage devices and creating logical storage with high performance and redundancy.

A RAID 5 array is built by striping the data across all of the available devices. Each stripe also carries a parity block; if any device fails, the parity block and the remaining data blocks are used to reconstruct the missing data. The device that receives the parity block rotates from stripe to stripe, so that each device holds a balanced share of the parity information.

The primary benefits of RAID 5 are redundancy combined with more usable storage capacity. In RAID 5 the parity information is distributed across the array, and the equivalent of one disk’s capacity is used for parity.

An Ubuntu machine with a non-root user with sudo permission.
Multiple raw storage devices for creating RAID storage. To accomplish this demo, we need a minimum of 3 storage devices.

Before we start anything, we will check the existing disks attached to the machine. Below is the command to list the available disks.

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME    SIZE FSTYPE            TYPE MOUNTPOINT
sda     20G                    disk
sdb     20G                    disk
sdc     20G  linux_raid_member disk
vda     20G                    disk
├─vda1  20G  ext4              part /
└─vda15 1M                     part

As we can see in the above output, we have three 20GB disks without a filesystem in use, and the devices are named /dev/sda, /dev/sdb and /dev/sdc on this machine.

To create the RAID 5 array, we will use the mdadm --create command with the name of the array device we want to create, the RAID level and the number of devices to attach to the array.

$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

The mdadm tool will start the creation of the array. It will take some time to complete the configuration; we can monitor the progress using the below command.

$ cat /proc/mdstat
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
      24792064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [===>.................] recovery = 15.6% (16362536/24792064) finish=7.3min speed=200808K/sec
unused devices: <none>

In the above output we can see that the /dev/md0 device is being created with RAID 5 using the /dev/sda, /dev/sdb and /dev/sdc storage devices; it also shows the rebuild progress of the RAID device.

Before we mount the array disk, we need to create a filesystem on the array we created in the above steps.

$ sudo mkfs.ext4 -F /dev/md0

We will now create a mount point and attach the new RAID disk created in the above steps.

$ sudo mkdir -p /mnt/raiddisk1
$ sudo mount /dev/md0 /mnt/raiddisk1
$ df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G   6%   /
/dev/md0   40G  60M  39G   1%   /mnt/raiddisk1

As we can see, the new filesystem is mounted and accessible.

Now we can scan the active array and append its definition to the mdadm configuration file with the below command.

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

We need to update the ‘initramfs’ image so that the RAID array will be available when the machine starts the boot process.

$ sudo update-initramfs -u

To mount the RAID array automatically at boot time, add the below line to /etc/fstab.

/dev/md0 /mnt/raiddisk1 ext4 defaults,nofail,discard 0 0

With the above setup and configuration we have configured a RAID 5 array using three disks and mounted it at boot time, so that whenever we restart the server the RAID disk will be loaded.
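As an optional sanity check (not part of the original walkthrough, but standard mdadm/lsblk usage), the state of the new array can be reviewed at any time, for example after a reboot:

$ sudo mdadm --detail /dev/md0
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/md0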
Annotation Based After Returning Advice
@AfterReturning is an advice type, which ensures that an advice runs after the method executes successfully. Following is the syntax of @AfterReturning advice.

@AfterReturning(pointcut = "execution(* com.tutorialspoint.Student.*(..))", returning = "retVal")
public void afterReturningAdvice(JoinPoint jp, Object retVal){
   System.out.println("Method Signature: " + jp.getSignature());
   System.out.println("Returning:" + retVal.toString() );
}

Where,

@AfterReturning − Mark a function as an advice to be executed after method(s) covered by the pointcut, if the method returns successfully.

pointcut − Provides an expression to select a function.

execution( expression ) − Expression covering methods on which advice is to be applied.

returning − Name of the variable holding the returned value.

To understand the above-mentioned concepts related to @AfterReturning Advice, let us write an example, which will implement @AfterReturning Advice. To write our example with few advices, let us have a working Eclipse IDE in place and use the following steps to create a Spring application.

Following is the content of Logging.java file. This is actually a sample of aspect module, which defines the methods to be called at various points.

package com.tutorialspoint;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;

@Aspect
public class Logging {
   /**
    * This is the method which I would like to execute
    * after a selected method execution.
    */
   @AfterReturning(pointcut = "execution(* com.tutorialspoint.Student.*(..))", returning = "retVal")
   public void afterReturningAdvice(JoinPoint jp, Object retVal){
      System.out.println("Method Signature: " + jp.getSignature());
      System.out.println("Returning:" + retVal.toString() );
   }
}

Following is the content of the Student.java file.

package com.tutorialspoint;

public class Student {
   private Integer age;
   private String name;

   public void setAge(Integer age) {
      this.age = age;
   }
   public Integer getAge() {
      System.out.println("Age : " + age );
      return age;
   }
   public void setName(String name) {
      this.name = name;
   }
   public String getName() {
      System.out.println("Name : " + name );
      return name;
   }
   public void printThrowException(){
      System.out.println("Exception raised");
      throw new IllegalArgumentException();
   }
}

Following is the content of the MainApp.java file.

package com.tutorialspoint;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
   public static void main(String[] args) {
      ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");

      Student student = (Student) context.getBean("student");
      student.getAge();
   }
}

Following is the configuration file Beans.xml.
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xmlns:aop = "http://www.springframework.org/schema/aop"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
   http://www.springframework.org/schema/aop
   http://www.springframework.org/schema/aop/spring-aop-3.0.xsd ">

   <aop:aspectj-autoproxy/>

   <!-- Definition for student bean -->
   <bean id = "student" class = "com.tutorialspoint.Student">
      <property name = "name" value = "Zara" />
      <property name = "age" value = "11"/>
   </bean>

   <!-- Definition for logging aspect -->
   <bean id = "logging" class = "com.tutorialspoint.Logging"/>

</beans>

Once you are done creating the source and configuration files, run your application. Right-click on MainApp.java in your application and use the Run As > Java Application command. If everything is fine with your application, it will print the following message.

Age : 11
Method Signature: Integer com.tutorialspoint.Student.getAge()
Returning:11
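One practical note: the @AspectJ annotation style enabled by <aop:aspectj-autoproxy/> needs the AspectJ runtime/weaver libraries on the classpath in addition to the core Spring jars. If you build with Maven, dependencies along these lines should work; the artifact coordinates are standard, but treat the version properties as placeholders to align with your Spring release.

<!-- assumed versions; adjust ${spring.version} and ${aspectj.version} to your setup -->
<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-context</artifactId>
   <version>${spring.version}</version>
</dependency>
<dependency>
   <groupId>org.aspectj</groupId>
   <artifactId>aspectjweaver</artifactId>
   <version>${aspectj.version}</version>
</dependency>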
GATE | GATE CS 1997 | Question 21 - GeeksforGeeks
25 Oct, 2018

The correct matching for the following pairs is

(A) Disk Scheduling          (1) Round robin
(B) Batch Processing         (2) SCAN
(C) Time sharing             (3) LIFO
(D) Interrupt processing     (4) FIFO

Codes:
     A   B   C   D
a    3   4   2   1
b    4   3   2   1
c    2   4   1   3
d    3   4   3   2

(A) a (B) b (C) c (D) d

Answer: (C)

Explanation: Round-Robin is also called Time-sharing. Disk scheduling algorithms are used to reduce the total seek time of a request; SCAN is one of these algorithms. Interrupt processing is LIFO because, while an interrupt is being processed, interrupts originating from lower-priority devices are disabled, so a lower-priority interrupt cannot be raised. If a new interrupt is detected, it must have a higher priority than the interrupt currently being serviced, so it preempts the current interrupt; hence LIFO. Batch processing – FIFO.
[ { "code": null, "e": 24466, "s": 24438, "text": "\n25 Oct, 2018" }, { "code": null, "e": 24514, "s": 24466, "text": "The correct matching for the following pairs is" }, { "code": null, "e": 24666, "s": 24514, "text": "(A) Disk Scheduling (1) Round robin\n(B) Batch Processing (2) SCAN\n(C) Time sharing (3) LIFO\n(D) Interrupt processing (4) FIFO\n" }, { "code": null, "e": 24764, "s": 24666, "text": "Codes:\n A B C D\na 3 4 2 1\nb 4 3 2 1\nc 2 4 1 3\nd 3 4 3 2\n" }, { "code": null, "e": 24849, "s": 24764, "text": "(A) a(B) b(C) c(D) dAnswer: (C)Explanation: Round-Robin is also called Time-sharing." }, { "code": null, "e": 24962, "s": 24849, "text": "Disk Scheduling Algorithms are used to reduce the total seek time of any request. SCAN is one of the Algorithms." }, { "code": null, "e": 25321, "s": 24962, "text": "Interrupt processing is LIFO because when we are processing an interrupt, we disable the interrupts originating from lower priority devices so lower priority interrupts cannot be raised. If an interrupt is detected then it means that it has higher priority than currently executing interrupt so this new interrupt will preempt the current interrupt so, LIFO." }, { "code": null, "e": 25366, "s": 25321, "text": "Batch processing – FIFOQuiz of this Question" }, { "code": null, "e": 25379, "s": 25366, "text": "GATE CS 1997" }, { "code": null, "e": 25397, "s": 25379, "text": "GATE-GATE CS 1997" }, { "code": null, "e": 25402, "s": 25397, "text": "GATE" }, { "code": null, "e": 25500, "s": 25402, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25534, "s": 25500, "text": "GATE | GATE-IT-2004 | Question 66" }, { "code": null, "e": 25576, "s": 25534, "text": "GATE | GATE-CS-2014-(Set-3) | Question 65" }, { "code": null, "e": 25610, "s": 25576, "text": "GATE | GATE-CS-2006 | Question 49" }, { "code": null, "e": 25643, "s": 25610, "text": "GATE | GATE-CS-2004 | Question 3" }, { "code": null, "e": 25677, "s": 25643, "text": "GATE | GATE CS 2011 | Question 65" }, { "code": null, "e": 25711, "s": 25677, "text": "GATE | GATE CS 2019 | Question 27" }, { "code": null, "e": 25753, "s": 25711, "text": "GATE | GATE CS 2021 | Set 1 | Question 47" }, { "code": null, "e": 25786, "s": 25753, "text": "GATE | GATE CS 2011 | Question 7" }, { "code": null, "e": 25828, "s": 25786, "text": "GATE | GATE-CS-2017 (Set 2) | Question 42" } ]
How to clear form after submission in Angular 2? - GeeksforGeeks
09 Jun, 2020

In Angular 2, there are two types of forms:

Template-driven forms.
Reactive forms.

In template-driven forms, most of the content is defined in the .html file. In reactive forms, most of the functionality and content is handled in the .ts file. The main advantage of reactive forms is that we can create custom validations; a second pivotal advantage shows up in unit testing, because the HTML stays clean, it is easier to compose unit tests.

Resetting a form in the template-driven approach: In the template-driven approach, we need to import NgForm from '@angular/forms' and we use the [(ngModel)] directive for two-way data binding; we should also import FormsModule from '@angular/forms' in the app.module.ts file, as in the line below. In addition, when we use the ngModel directive we need to add a name attribute to the input element.

import { FormsModule } from '@angular/forms';

In reactive forms, we need to import FormGroup from '@angular/forms'.

After importing the above-mentioned modules in the respective approach, the Angular forms module provides an inbuilt method called reset(). We can call this method to reset the form.

Example: .html file

<!-- In .html file -->
<form #login="ngForm" (ngSubmit)="completeLogin(login)">
  <h3>Login Form</h3>
  <label for="name">Username :</label>
  <input type="text" [(ngModel)]="username" name="name" id="name">
  <label for="password">Password :</label>
  <input type="password" [(ngModel)]="password" name="password" id="password">
  <button type="submit">Submit</button>
</form>

Example: .ts file

import { NgForm } from '@angular/forms';
import { Component, OnInit } from '@angular/core';

@Component({
  selector: "app-login",
  templateUrl: "./login.html",
  styleUrls: [],
})
export class Sample implements OnInit {
  constructor() {}
  ngOnInit() {}
  username = '';
  password = '';
  completeLogin(login: NgForm) {
    // In .ts file
    login.reset(); // call this inbuilt method to reset the form
  }
}

Resetting a form in reactive forms:

Example: .html file

<!-- In .html file -->
<form [formGroup]="login" (ngSubmit)="completeLogin()">
  <h3>Login Form</h3>
  <label for="name">Username :</label>
  <input type="text" formControlName="username" id="name">
  <label for="password">Password :</label>
  <input type="password" formControlName="password" id="password">
  <button type="submit">Submit</button>
</form>

Example: .ts file

import { FormGroup, FormControl } from '@angular/forms';
import { Component, OnInit } from '@angular/core';

@Component({
  selector: "app-signin",
  templateUrl: "./signin.html",
  styleUrls: [],
})
export class Sample implements OnInit {
  // In .ts file
  login: FormGroup;
  constructor() {}
  ngOnInit() {
    this.login = new FormGroup({
      username: new FormControl(''),
      password: new FormControl(''),
    });
  }
  completeLogin() {
    this.login.reset(); // calling this method will reset the form
  }
}

Output: After submitting the form, the form fields are cleared.
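The reset() method of a FormGroup also accepts an optional value map, which is useful when the form should return to specific defaults rather than to empty fields. A small sketch, reusing the control names from the reactive example above (the default values here are just placeholders):

// Sketch: reset the reactive form back to chosen default values.
completeLogin() {
  // Passing a value map restores these defaults instead of clearing to null.
  this.login.reset({
    username: '',
    password: ''
  });
}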
[ { "code": null, "e": 24471, "s": 24443, "text": "\n09 Jun, 2020" }, { "code": null, "e": 24514, "s": 24471, "text": "In Angular 2, they are two types of forms:" }, { "code": null, "e": 24537, "s": 24514, "text": "Template-driven forms." }, { "code": null, "e": 24553, "s": 24537, "text": "Reactive forms." }, { "code": null, "e": 24988, "s": 24553, "text": "In template-driven forms, most of the content will be populated in .html file.In Reactive forms, most of the functionalities and content will be performed in .ts file. The main advantage of reactive forms is, we can create custom validations and the second pivotal advantage is when we are performing unit testing, as the HTML code will be clean, it is more feasible to compose unit tests.Resetting a form in template-driven approach:" }, { "code": null, "e": 25355, "s": 24988, "text": "In template driven approach, we need to import NgForm from ‘@angular/forms’ and we use [(ngModel)] directive for two way data-binding and we should also import FormsModule from ‘@angular/forms’ in app.module.ts file. In below line the input format is present. In addition to it, when we mention ngModel directive then we need to add name attribute to the input type." }, { "code": null, "e": 25401, "s": 25355, "text": "import { FormsModule } from '@angular/forms';" }, { "code": null, "e": 25471, "s": 25401, "text": "In Reactive forms, we need to import FormGroup from '@angular/forms'." }, { "code": null, "e": 25658, "s": 25471, "text": "After importing the above-mentioned modules in the respective approach, angular forms module provides an inbuilt method called reset(). We can use the method and we can reset the form. " }, { "code": null, "e": 25678, "s": 25658, "text": "Example: .html file" }, { "code": "// In .html file<form #login=\"ngForm\" (ngSubmit)=\"completeLogin(login)\"> <h3>Login Form</h3> <label for=\"name\">Username :</label> <input type=\"text\" [(ngModel)]=\"username\" name=\"name\" id=\"name\"> <label for=\"password\">Password :</label> <input type=\"password\" [(ngModel)]=\"password\" name=\"name\" id=\"password\"> <button type=\"submit\">Submit</button> </form>", "e": 26072, "s": 25678, "text": null }, { "code": null, "e": 26090, "s": 26072, "text": "Example: .ts file" }, { "code": "import {NgForm} from '@angular/forms'import { Component, OnInit } from '@angular/core'; @Component({ selector: \"app-login\", templateUrl: \"./login.html\", styleUrls: [],}) export class Sample implements OnInit{ constructor(){} ngOninit(){} username='';password=''; completeLogin(login :NgForm){ // In .ts file login.reset() // call this inbuilt method to reset the form } }", "e": 26476, "s": 26090, "text": null }, { "code": null, "e": 26512, "s": 26476, "text": "Resetting a form in Reactive forms:" }, { "code": null, "e": 26532, "s": 26512, "text": "Example: .html file" }, { "code": "<form [formGroup]=\"login\" (ngSubmit)=\"completeLogin()\"> // In.html file <h3>Login Form</h3>.. 
<label for=\"name\">Username :</label> <input type=\"text\" formControlName=\"username\" id=\"name\"> <label for=\"password\">Password :</label> <input type=\"password\" formControlName=\"password\" id=\"password\"> <button type=\"submit\">Submit</button></form>", "e": 26898, "s": 26532, "text": null }, { "code": null, "e": 26916, "s": 26898, "text": "Example: .ts file" }, { "code": "import {FormGroup, FormControl} from '@angular/forms' import { Component, OnInit } from '@angular/core'; @Component({ selector: \"app-signin\", templateUrl: \"./signin.html\", styleUrls: [\"],}) export class Sample implements OnInit{ // In.ts file login:FormGroup;constructor(){} ngOninit(){ login=newFormGroup({username:new FormControl(''),password:new FormControl(''),}) } completeLogin(){ this.login.reset(); // calling this method will reset the method } }", "e": 27395, "s": 26916, "text": null }, { "code": null, "e": 27403, "s": 27395, "text": "Output:" }, { "code": null, "e": 27451, "s": 27403, "text": "After submitting the form, the output would be:" }, { "code": null, "e": 27466, "s": 27451, "text": "AngularJS-Misc" }, { "code": null, "e": 27473, "s": 27466, "text": "Picked" }, { "code": null, "e": 27483, "s": 27473, "text": "AngularJS" }, { "code": null, "e": 27500, "s": 27483, "text": "Web Technologies" }, { "code": null, "e": 27598, "s": 27500, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27642, "s": 27598, "text": "Top 10 Angular Libraries For Web Developers" }, { "code": null, "e": 27673, "s": 27642, "text": "Auth Guards in Angular 9/10/11" }, { "code": null, "e": 27715, "s": 27673, "text": "What is AOT and JIT Compiler in Angular ?" }, { "code": null, "e": 27789, "s": 27715, "text": "How to set focus on input field automatically on page load in AngularJS ?" }, { "code": null, "e": 27834, "s": 27789, "text": "How to bundle an Angular app for production?" }, { "code": null, "e": 27876, "s": 27834, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 27909, "s": 27876, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 27952, "s": 27909, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 28014, "s": 27952, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
MySQL Tryit Editor v1.0
SELECT CustomerName, ContactName, Address
FROM Customers
WHERE Address IS NULL;

Edit the SQL statement, and click "Run SQL" to see the result.

Our Try-SQL Editor uses WebSQL to demonstrate SQL. A Database object is created in your browser, for testing purposes. You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the "Restore Database" button.

WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object. WebSQL is supported in Chrome, Safari, and Opera. If you use another browser you will still be able to use our Try SQL Editor, but a different version, using a server-based ASP application, with a read-only Access Database, where users are not allowed to make any changes to the data.
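For contrast with the query above, the complementary filter selects only the rows that do have an address. A small sketch against the same sample Customers table:

-- Sketch: rows where Address has a value (the complement of IS NULL).
SELECT CustomerName, ContactName, Address
FROM Customers
WHERE Address IS NOT NULL;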
[ { "code": null, "e": 42, "s": 0, "text": "SELECT CustomerName, ContactName, Address" }, { "code": null, "e": 57, "s": 42, "text": "FROM Customers" }, { "code": null, "e": 80, "s": 57, "text": "WHERE Address IS NULL;" }, { "code": null, "e": 82, "s": 80, "text": "​" }, { "code": null, "e": 154, "s": 91, "text": "Edit the SQL Statement, and click \"Run SQL\" to see the result." }, { "code": null, "e": 214, "s": 154, "text": "This SQL-Statement is not supported in the WebSQL Database." }, { "code": null, "e": 282, "s": 214, "text": "The example still works, because it uses a modified version of SQL." }, { "code": null, "e": 320, "s": 282, "text": "Your browser does not support WebSQL." }, { "code": null, "e": 405, "s": 320, "text": "Your are now using a light-version of the Try-SQL Editor, with a read-only Database." }, { "code": null, "e": 579, "s": 405, "text": "If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time." }, { "code": null, "e": 630, "s": 579, "text": "Our Try-SQL Editor uses WebSQL to demonstrate SQL." }, { "code": null, "e": 698, "s": 630, "text": "A Database-object is created in your browser, for testing purposes." }, { "code": null, "e": 869, "s": 698, "text": "You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the \"Restore Database\" button." }, { "code": null, "e": 969, "s": 869, "text": "WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object." }, { "code": null, "e": 1019, "s": 969, "text": "WebSQL is supported in Chrome, Safari, and Opera." } ]
C# Program to Find the Value of Sin(x) - GeeksforGeeks
30 Nov, 2021 Sin(x) is also known as Sine. It is a trigonometric function of an angle. In a right-angled triangle, the ratio of the length of the perpendicular to the length of the hypotenuse is known as the sine of an angle. sin θ = perpendicular / hypotenuse The values of sine of some of the comman angles are given below, sin 0° = 0sin 30° = 1 / 2sin 45° = 1 / √2sin 60° = √3 / 2sin 90° = 1 sin 0° = 0 sin 30° = 1 / 2 sin 45° = 1 / √2 sin 60° = √3 / 2 sin 90° = 1 This article focuses upon how we can calculate the sine of an angle by in C#. We can calculate the sine of an angle by using the inbuilt sin() method. This method is defined under the Math class and is a part of the system namespace. Math class is quite useful as it provides constants and some of the static methods for trigonometric, logarithmic, etc. Syntax: public static double Sin (double angle); Parameter: angle: A double value (angle in radian) Return type: double: If “angle” is double NaN: If “angle” is equal to NaN, NegativeInfinity, or PositiveInfinity Example 1: C# // C# program to illustrate how we can // calculate the value of sin(x)// using Sin() methodusing System.IO;using System; class GFG{ static void Main(){ // Angle in degree double angleInDegree1 = 0; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian1 = (angleInDegree1 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angleInDegree1, Math.Sin(angleInRadian1)); // Angle in degree double angleInDegree2 = 45; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian2 = (angleInDegree2 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angleInDegree2, Math.Sin(angleInRadian2)); // Angle in degree double angleInDegree3 = 90; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian3 = (angleInDegree3 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angleInDegree3, Math.Sin(angleInRadian3)); // Angle in degree double angleInDegree4 = 135; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian4 = (angleInDegree4 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angleInDegree4, Math.Sin(angleInRadian4));}} The value of sin(0) = 0 The value of sin(45) = 0.707106781186547 The value of sin(90) = 1 The value of sin(135) = 0.707106781186548 Example 2: C# // C# program to illustrate how we can // calculate the value of sin(x)// using Sin() methodusing System; class GFG{ static public void Main(){ // Angle in radian double angle1 = Double.NegativeInfinity; // Angle in radian double angle2 = Double.PositiveInfinity; // Angle in radian double angle3 = Double.NaN; // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angle1, Math.Sin(angle1)); // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angle2, Math.Sin(angle2)); // Using Math.Sin() method to calculate value of sine Console.WriteLine("The value of sin({0}) = {1} ", angle3, Math.Sin(angle3));}} Output Sine of angle1: NaN Sine of angle2: NaN Sine of angle3: NaN We can calculate the value of sine of an angle using Maclaurin expansion. 
So the Maclaurin series expansion for sin(x) is:

sin(x) = x - x³/3! + x⁵/5! - x⁷/7! + ....

Follow the steps given below to find the value of sin(x):

Initialize a variable angleInDegree that stores the angle (in degrees) to be calculated.
Initialize another variable terms that stores the number of terms for which we can approximate the value of sin(x).
Declare a global function findSinX.
Declare a variable current. It stores the angle in radians.
Initialize a variable answer with current. It will store our final answer.
Initialize another variable temp with current.
Iterate from i = 1 to i = terms. At each step update temp as ((-temp) * current * current) / ((2 * i) * (2 * i + 1)) and answer as answer + temp.
Eventually, return the answer from the findSinX function.
Print the answer.

This formula can compute the value of sine for all real values of x.

Example: C#

// C# program to illustrate how we can 
// calculate the value of sin(x)
// using Maclaurin's method
using System;

class GFG{

static double findSinX(int angleInDegree, int terms)
{
    // Converting angle in degree into radian
    double current = Math.PI * angleInDegree / 180f;

    // Declaring variable to calculate final answer
    double answer = current;
    double temp = current;

    // Loop till number of steps provided by the user
    for(int i = 1; i <= terms; i++)
    {
        // Updating temp and answer accordingly
        temp = ((-temp) * current * current) /
               ((2 * i) * (2 * i + 1));
        answer = answer + temp;
    }

    // Return the final answer
    return answer;
}

// Driver code
static public void Main()
{
    // Angle in degree
    int angleInDegree1 = 45;

    // Number of steps
    int terms1 = 10;

    // Calling function to calculate sine of angle
    double answer1 = findSinX(angleInDegree1, terms1);

    // Print the final answer
    Console.WriteLine("The value of sin({0}) = {1}", angleInDegree1, answer1);

    // Angle in degree
    int angleInDegree2 = 90;

    // Number of steps
    int terms2 = 20;

    // Calling function to calculate sine of angle
    double result2 = findSinX(angleInDegree2, terms2);

    // Print the final answer
    Console.WriteLine("The value of sin({0}) = {1}", angleInDegree2, result2);

    // Angle in degree
    int angleInDegree3 = 135;

    // Number of steps
    int terms3 = 30;

    // Calling function to calculate sine of angle
    double result3 = findSinX(angleInDegree3, terms3);

    // Print the final answer
    Console.WriteLine("The value of sin({0}) = {1}", angleInDegree3, result3);

    // Angle in degree
    int angleInDegree4 = 180;

    // Number of steps
    int terms4 = 40;

    // Calling function to calculate sine of angle
    double result4 = findSinX(angleInDegree4, terms4);

    // Print the final answer
    Console.WriteLine("The value of sin({0}) = {1}", angleInDegree4, result4);
}
}

The value of sin(45) = 0.707106781186547
The value of sin(90) = 1
The value of sin(135) = 0.707106781186548
The value of sin(180) = 2.34898825287367E-16
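One simple way to check the series approximation is to compare it with Math.Sin from Example 1. A small sketch that could be dropped into the Main method of the program above (the angle and the number of terms chosen here are arbitrary):

// Sketch: compare the Maclaurin approximation with Math.Sin
// for the same angle; the difference should be very small.
int angleInDegree = 60;   // arbitrary test angle
int terms = 15;           // arbitrary number of series terms

double approx = findSinX(angleInDegree, terms);
double exact = Math.Sin(Math.PI * angleInDegree / 180.0);

Console.WriteLine("Approximation : {0}", approx);
Console.WriteLine("Math.Sin      : {0}", exact);
Console.WriteLine("Difference    : {0}", Math.Abs(approx - exact));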
[ { "code": null, "e": 24222, "s": 24194, "text": "\n30 Nov, 2021" }, { "code": null, "e": 24435, "s": 24222, "text": "Sin(x) is also known as Sine. It is a trigonometric function of an angle. In a right-angled triangle, the ratio of the length of the perpendicular to the length of the hypotenuse is known as the sine of an angle." }, { "code": null, "e": 24470, "s": 24435, "text": "sin θ = perpendicular / hypotenuse" }, { "code": null, "e": 24535, "s": 24470, "text": "The values of sine of some of the comman angles are given below," }, { "code": null, "e": 24605, "s": 24535, "text": "sin 0° = 0sin 30° = 1 / 2sin 45° = 1 / √2sin 60° = √3 / 2sin 90° = 1" }, { "code": null, "e": 24617, "s": 24605, "text": "sin 0° = 0" }, { "code": null, "e": 24633, "s": 24617, "text": "sin 30° = 1 / 2" }, { "code": null, "e": 24650, "s": 24633, "text": "sin 45° = 1 / √2" }, { "code": null, "e": 24667, "s": 24650, "text": "sin 60° = √3 / 2" }, { "code": null, "e": 24679, "s": 24667, "text": "sin 90° = 1" }, { "code": null, "e": 24757, "s": 24679, "text": "This article focuses upon how we can calculate the sine of an angle by in C#." }, { "code": null, "e": 25033, "s": 24757, "text": "We can calculate the sine of an angle by using the inbuilt sin() method. This method is defined under the Math class and is a part of the system namespace. Math class is quite useful as it provides constants and some of the static methods for trigonometric, logarithmic, etc." }, { "code": null, "e": 25041, "s": 25033, "text": "Syntax:" }, { "code": null, "e": 25082, "s": 25041, "text": "public static double Sin (double angle);" }, { "code": null, "e": 25093, "s": 25082, "text": "Parameter:" }, { "code": null, "e": 25133, "s": 25093, "text": "angle: A double value (angle in radian)" }, { "code": null, "e": 25146, "s": 25133, "text": "Return type:" }, { "code": null, "e": 25175, "s": 25146, "text": "double: If “angle” is double" }, { "code": null, "e": 25247, "s": 25175, "text": "NaN: If “angle” is equal to NaN, NegativeInfinity, or PositiveInfinity " }, { "code": null, "e": 25258, "s": 25247, "text": "Example 1:" }, { "code": null, "e": 25261, "s": 25258, "text": "C#" }, { "code": "// C# program to illustrate how we can // calculate the value of sin(x)// using Sin() methodusing System.IO;using System; class GFG{ static void Main(){ // Angle in degree double angleInDegree1 = 0; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian1 = (angleInDegree1 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angleInDegree1, Math.Sin(angleInRadian1)); // Angle in degree double angleInDegree2 = 45; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian2 = (angleInDegree2 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angleInDegree2, Math.Sin(angleInRadian2)); // Angle in degree double angleInDegree3 = 90; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian3 = (angleInDegree3 * (Math.PI)) / 180; // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angleInDegree3, Math.Sin(angleInRadian3)); // Angle in degree double angleInDegree4 = 135; // Converting angle in radian // since Math.sin() method accepts // angle in radian double angleInRadian4 = (angleInDegree4 * (Math.PI)) / 180; // Using 
Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angleInDegree4, Math.Sin(angleInRadian4));}}", "e": 27058, "s": 25261, "text": null }, { "code": null, "e": 27194, "s": 27058, "text": "The value of sin(0) = 0 \nThe value of sin(45) = 0.707106781186547 \nThe value of sin(90) = 1 \nThe value of sin(135) = 0.707106781186548 " }, { "code": null, "e": 27205, "s": 27194, "text": "Example 2:" }, { "code": null, "e": 27208, "s": 27205, "text": "C#" }, { "code": "// C# program to illustrate how we can // calculate the value of sin(x)// using Sin() methodusing System; class GFG{ static public void Main(){ // Angle in radian double angle1 = Double.NegativeInfinity; // Angle in radian double angle2 = Double.PositiveInfinity; // Angle in radian double angle3 = Double.NaN; // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angle1, Math.Sin(angle1)); // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angle2, Math.Sin(angle2)); // Using Math.Sin() method to calculate value of sine Console.WriteLine(\"The value of sin({0}) = {1} \", angle3, Math.Sin(angle3));}}", "e": 28054, "s": 27208, "text": null }, { "code": null, "e": 28061, "s": 28054, "text": "Output" }, { "code": null, "e": 28121, "s": 28061, "text": "Sine of angle1: NaN\nSine of angle2: NaN\nSine of angle3: NaN" }, { "code": null, "e": 28244, "s": 28121, "text": "We can calculate the value of sine of an angle using Maclaurin expansion. So the Maclaurin series expansion for sin(x) is:" }, { "code": null, "e": 28292, "s": 28244, "text": "sin(x) = x - x3 / 3! + x5 / 5! - x7 / 7! + ...." }, { "code": null, "e": 28350, "s": 28292, "text": "Follow the steps given below to find the value of sin(x):" }, { "code": null, "e": 28990, "s": 28350, "text": "Initialize a variable angleInDegree that stores the angle (in degree) to be calculated.Initialize another variable terms that stores the number of terms for which we can approximate the value of sin(x).Declare a global function findSinx.Declare a variable current. It stores the angle in radians.Initialize a variable answer with current. It will store our final answer.Initialize another variable temp with current.Iterate from i = 1 to i = terms. At each step update temp as temp as ((-temp) * current * current) / ((2 * i) * (2 * i + 1)) and answer as answer + temp.Eventually, return the answer from findSinX function.Print the answer." }, { "code": null, "e": 29078, "s": 28990, "text": "Initialize a variable angleInDegree that stores the angle (in degree) to be calculated." }, { "code": null, "e": 29194, "s": 29078, "text": "Initialize another variable terms that stores the number of terms for which we can approximate the value of sin(x)." }, { "code": null, "e": 29230, "s": 29194, "text": "Declare a global function findSinx." }, { "code": null, "e": 29290, "s": 29230, "text": "Declare a variable current. It stores the angle in radians." }, { "code": null, "e": 29365, "s": 29290, "text": "Initialize a variable answer with current. It will store our final answer." }, { "code": null, "e": 29412, "s": 29365, "text": "Initialize another variable temp with current." }, { "code": null, "e": 29566, "s": 29412, "text": "Iterate from i = 1 to i = terms. At each step update temp as temp as ((-temp) * current * current) / ((2 * i) * (2 * i + 1)) and answer as answer + temp." 
}, { "code": null, "e": 29620, "s": 29566, "text": "Eventually, return the answer from findSinX function." }, { "code": null, "e": 29638, "s": 29620, "text": "Print the answer." }, { "code": null, "e": 29707, "s": 29638, "text": "This formula can compute the value of sine for all real values of x." }, { "code": null, "e": 29716, "s": 29707, "text": "Example:" }, { "code": null, "e": 29719, "s": 29716, "text": "C#" }, { "code": "// C# program to illustrate how we can // calculate the value of sin(x)// using Maclaurin's methodusing System; class GFG{ static double findSinX(int angleInDegree, int terms){ // Converting angle in degree into radian double current = Math.PI * angleInDegree / 180f; // Declaring variable to calculate final answer double answer = current; double temp = current; // Loop till number of steps provided by the user for(int i = 1; i <= terms; i++) { // Updating temp and answer accordingly temp = ((-temp) * current * current) / ((2 * i) * (2 * i + 1)); answer = answer + temp; } // Return the final answer return answer;} // Driver codestatic public void Main(){ // Angle in degree int angleInDegree1 = 45; // Number of steps int terms1 = 10; // Calling function to calculate sine of angle double answer1 = findSinX(angleInDegree1, terms1); // Print the final answer Console.WriteLine(\"The value of sin({0}) = {1}\", angleInDegree1, answer1); // Angle in degree int angleInDegree2 = 90; // Number of steps int terms2 = 20; // Calling function to calculate sine of angle double result2 = findSinX(angleInDegree2, terms2); // Print the final answer Console.WriteLine(\"The value of sin({0}) = {1}\", angleInDegree2, result2); // Angle in degree int angleInDegree3 = 135; // Number of steps int terms3 = 30; // Calling function to calculate sine of angle double result3 = findSinX(angleInDegree3, terms3); // Print the final answer Console.WriteLine(\"The value of sin({0}) = {1}\", angleInDegree3, result3); // Angle in degree int angleInDegree4 = 180; // Number of steps int terms4 = 40; // Calling function to calculate sine of angle double result4 = findSinX(angleInDegree4, terms4); // Print the final answer Console.WriteLine(\"The value of sin({0}) = {1}\", angleInDegree4, result4);}}", "e": 31870, "s": 29719, "text": null }, { "code": null, "e": 32023, "s": 31870, "text": "The value of sin(45) = 0.707106781186547\nThe value of sin(90) = 1\nThe value of sin(135) = 0.707106781186548\nThe value of sin(180) = 2.34898825287367E-16" }, { "code": null, "e": 32037, "s": 32025, "text": "CSharp-Math" }, { "code": null, "e": 32044, "s": 32037, "text": "Picked" }, { "code": null, "e": 32047, "s": 32044, "text": "C#" }, { "code": null, "e": 32059, "s": 32047, "text": "C# Programs" }, { "code": null, "e": 32157, "s": 32059, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 32166, "s": 32157, "text": "Comments" }, { "code": null, "e": 32179, "s": 32166, "text": "Old Comments" }, { "code": null, "e": 32202, "s": 32179, "text": "Extension Method in C#" }, { "code": null, "e": 32242, "s": 32202, "text": "Top 50 C# Interview Questions & Answers" }, { "code": null, "e": 32264, "s": 32242, "text": "Partial Classes in C#" }, { "code": null, "e": 32292, "s": 32264, "text": "HashSet in C# with Examples" }, { "code": null, "e": 32309, "s": 32292, "text": "C# | Inheritance" }, { "code": null, "e": 32349, "s": 32309, "text": "Convert String to Character Array in C#" }, { "code": null, "e": 32374, "s": 32349, "text": "Socket Programming in C#" }, { "code": null, "e": 32420, "s": 32374, "text": "Getting a Month Name Using Month Number in C#" }, { "code": null, "e": 32454, "s": 32420, "text": "Program to Print a New Line in C#" } ]
How to read the contents of a webpage into a string in java?
You can read the contents of a web page in several ways using Java. Here, we are going to discuss three of them. The URL class of the java.net package represents a Uniform Resource Locator which is used to point a resource (file or, directory or a reference) in the world wide web. The openStream() method of this class opens a connection to the URL represented by the current object and returns an InputStream object using which you can read data from the URL. Therefore, to read data from web page (using the URL class) − Instantiate the java.net.URL class by passing the URL of the desired web page as a parameter to its constructor. Instantiate the java.net.URL class by passing the URL of the desired web page as a parameter to its constructor. Invoke the openStream() method and retrieve the InputStream object. Invoke the openStream() method and retrieve the InputStream object. Instantiate the Scanner class by passing the above retrieved InputStream object as a parameter. Instantiate the Scanner class by passing the above retrieved InputStream object as a parameter. import java.io.IOException; import java.net.URL; import java.util.Scanner; public class ReadingWebPage { public static void main(String args[]) throws IOException { //Instantiating the URL class URL url = new URL("http://www.something.com/"); //Retrieving the contents of the specified page Scanner sc = new Scanner(url.openStream()); //Instantiating the StringBuffer class to hold the result StringBuffer sb = new StringBuffer(); while(sc.hasNext()) { sb.append(sc.next()); //System.out.println(sc.next()); } //Retrieving the String from the String Buffer object String result = sb.toString(); System.out.println(result); //Removing the HTML tags result = result.replaceAll("<[^>]*>", ""); System.out.println("Contents of the web page: "+result); } } <html><body><h1>Itworks!</h1></body></html> Contents of the web page: Itworks! Http client is a transfer library, it resides on the client side, sends and receives HTTP messages. It provides up to date, feature-rich and, efficient implementation which meets the recent HTTP standards. The GET request (of Http protocol) is used to retrieve information from the given server using a given URI. Requests using GET should only retrieve data and should have no other effect on the data. The HttpClient API provides a class named HttpGet which represents the get request method. To execute the GET request and retrieve the contents of a web page − The createDefault() method of the HttpClients class returns a CloseableHttpClient object, which is the base implementation of the HttpClient interface. Using this method, create an HttpClient object. The createDefault() method of the HttpClients class returns a CloseableHttpClient object, which is the base implementation of the HttpClient interface. Using this method, create an HttpClient object. Create a HTTP GET request by instantiating the HttpGet class. The constructor of this class accepts a String value representing the URI of the web page to which you need to send the request. Create a HTTP GET request by instantiating the HttpGet class. The constructor of this class accepts a String value representing the URI of the web page to which you need to send the request. Execute the HttpGet request by invoking the execute() method. Execute the HttpGet request by invoking the execute() method. 
Retrieve an InputStream object representing the content of the web site from the response as − Retrieve an InputStream object representing the content of the web site from the response as − httpresponse.getEntity().getContent() import java.util.Scanner; import org.apache.http.HttpResponse; import org.apache.http.client.methods.HttpGet; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; public class HttpClientExample { public static void main(String args[]) throws Exception{ //Creating a HttpClient object CloseableHttpClient httpclient = HttpClients.createDefault(); //Creating a HttpGet object HttpGet httpget = new HttpGet("http://www.something.com/"); //Executing the Get request HttpResponse httpresponse = httpclient.execute(httpget); Scanner sc = new Scanner(httpresponse.getEntity().getContent()); //Instantiating the StringBuffer class to hold the result StringBuffer sb = new StringBuffer(); while(sc.hasNext()) { sb.append(sc.next()); //System.out.println(sc.next()); } //Retrieving the String from the String Buffer object String result = sb.toString(); System.out.println(result); //Removing the HTML tags result = result.replaceAll("<[^>]*>", ""); System.out.println("Contents of the web page: "+result); } } <html><body><h1>Itworks!</h1></body></html> Contents of the web page: Itworks! Jsoup is a Java based library to work with HTML based content. It provides a very convenient API to extract and manipulate data, using the best of DOM, CSS, and jquery-like methods. It implements the WHATWG HTML5 specification, and parses HTML to the same DOM as modern browsers do. To retrieve the contents of a web page using the Jsoup library − The connect() method of the Jsoup class accepts an URL of a web page and connects to the specified web page and returns the connection object. Connect to the desired web page using the connect() method. The connect() method of the Jsoup class accepts an URL of a web page and connects to the specified web page and returns the connection object. Connect to the desired web page using the connect() method. The get() method of the Connection interface sends/executes the GET request and returns the HTML document as an object of the Document class. Send GET request to the page by invoking the get() method. The get() method of the Connection interface sends/executes the GET request and returns the HTML document as an object of the Document class. Send GET request to the page by invoking the get() method. Retrieve the contents of the obtained document into a String as − Retrieve the contents of the obtained document into a String as − String result = doc.body().text(); import java.io.IOException; import org.jsoup.Connection; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; public class JsoupExample { public static void main(String args[]) throws IOException { String page = "http://www.something.com/"; //Connecting to the web page Connection conn = Jsoup.connect(page); //executing the get request Document doc = conn.get(); //Retrieving the contents (body) of the web page String result = doc.body().text(); System.out.println(result); } } It works!
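Besides the three approaches above, recent JDKs ship a built-in HTTP client (the java.net.http package, Java 11 and later) that can fetch a page body without any third-party library. A minimal sketch, assuming the same example URL used in this article (the class name is just illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JdkHttpClientExample {
   public static void main(String[] args) throws Exception {
      // Create the JDK 11+ HTTP client
      HttpClient client = HttpClient.newHttpClient();

      // Build a GET request for the page
      HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.something.com/")).build();

      // Send the request and read the response body as a String
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

      // The raw HTML of the page
      String html = response.body();

      // Removing the HTML tags, as in the earlier examples
      String result = html.replaceAll("<[^>]*>", "");
      System.out.println("Contents of the web page: " + result);
   }
}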
[ { "code": null, "e": 1175, "s": 1062, "text": "You can read the contents of a web page in several ways using Java. Here, we are going to discuss three of them." }, { "code": null, "e": 1344, "s": 1175, "text": "The URL class of the java.net package represents a Uniform Resource Locator which is used to point a resource (file or, directory or a reference) in the world wide web." }, { "code": null, "e": 1524, "s": 1344, "text": "The openStream() method of this class opens a connection to the URL represented by the current object and returns an InputStream object using which you can read data from the URL." }, { "code": null, "e": 1586, "s": 1524, "text": "Therefore, to read data from web page (using the URL class) −" }, { "code": null, "e": 1699, "s": 1586, "text": "Instantiate the java.net.URL class by passing the URL of the desired web page as a parameter to its constructor." }, { "code": null, "e": 1812, "s": 1699, "text": "Instantiate the java.net.URL class by passing the URL of the desired web page as a parameter to its constructor." }, { "code": null, "e": 1880, "s": 1812, "text": "Invoke the openStream() method and retrieve the InputStream object." }, { "code": null, "e": 1948, "s": 1880, "text": "Invoke the openStream() method and retrieve the InputStream object." }, { "code": null, "e": 2044, "s": 1948, "text": "Instantiate the Scanner class by passing the above retrieved InputStream object as a parameter." }, { "code": null, "e": 2140, "s": 2044, "text": "Instantiate the Scanner class by passing the above retrieved InputStream object as a parameter." }, { "code": null, "e": 3000, "s": 2140, "text": "import java.io.IOException;\nimport java.net.URL;\nimport java.util.Scanner;\npublic class ReadingWebPage {\n public static void main(String args[]) throws IOException {\n //Instantiating the URL class\n URL url = new URL(\"http://www.something.com/\");\n //Retrieving the contents of the specified page\n Scanner sc = new Scanner(url.openStream());\n //Instantiating the StringBuffer class to hold the result\n StringBuffer sb = new StringBuffer();\n while(sc.hasNext()) {\n sb.append(sc.next());\n //System.out.println(sc.next());\n }\n //Retrieving the String from the String Buffer object\n String result = sb.toString();\n System.out.println(result);\n //Removing the HTML tags\n result = result.replaceAll(\"<[^>]*>\", \"\");\n System.out.println(\"Contents of the web page: \"+result);\n }\n}" }, { "code": null, "e": 3079, "s": 3000, "text": "<html><body><h1>Itworks!</h1></body></html>\nContents of the web page: Itworks!" }, { "code": null, "e": 3285, "s": 3079, "text": "Http client is a transfer library, it resides on the client side, sends and receives HTTP messages. It provides up to date, feature-rich and, efficient implementation which meets the recent HTTP standards." }, { "code": null, "e": 3483, "s": 3285, "text": "The GET request (of Http protocol) is used to retrieve information from the given server using a given URI. Requests using GET should only retrieve data and should have no other effect on the data." }, { "code": null, "e": 3643, "s": 3483, "text": "The HttpClient API provides a class named HttpGet which represents the get request method. To execute the GET request and retrieve the contents of a web page −" }, { "code": null, "e": 3843, "s": 3643, "text": "The createDefault() method of the HttpClients class returns a CloseableHttpClient object, which is the base implementation of the HttpClient interface. Using this method, create an HttpClient object." 
}, { "code": null, "e": 4043, "s": 3843, "text": "The createDefault() method of the HttpClients class returns a CloseableHttpClient object, which is the base implementation of the HttpClient interface. Using this method, create an HttpClient object." }, { "code": null, "e": 4234, "s": 4043, "text": "Create a HTTP GET request by instantiating the HttpGet class. The constructor of this class accepts a String value representing the URI of the web page to which you need to send the request." }, { "code": null, "e": 4425, "s": 4234, "text": "Create a HTTP GET request by instantiating the HttpGet class. The constructor of this class accepts a String value representing the URI of the web page to which you need to send the request." }, { "code": null, "e": 4487, "s": 4425, "text": "Execute the HttpGet request by invoking the execute() method." }, { "code": null, "e": 4549, "s": 4487, "text": "Execute the HttpGet request by invoking the execute() method." }, { "code": null, "e": 4644, "s": 4549, "text": "Retrieve an InputStream object representing the content of the web site from the response as −" }, { "code": null, "e": 4739, "s": 4644, "text": "Retrieve an InputStream object representing the content of the web site from the response as −" }, { "code": null, "e": 4777, "s": 4739, "text": "httpresponse.getEntity().getContent()" }, { "code": null, "e": 5955, "s": 4777, "text": "import java.util.Scanner;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.apache.http.impl.client.HttpClients;\npublic class HttpClientExample {\n public static void main(String args[]) throws Exception{\n //Creating a HttpClient object\n CloseableHttpClient httpclient = HttpClients.createDefault();\n //Creating a HttpGet object\n HttpGet httpget = new HttpGet(\"http://www.something.com/\");\n //Executing the Get request\n HttpResponse httpresponse = httpclient.execute(httpget);\n Scanner sc = new Scanner(httpresponse.getEntity().getContent());\n //Instantiating the StringBuffer class to hold the result\n StringBuffer sb = new StringBuffer();\n while(sc.hasNext()) {\n sb.append(sc.next());\n //System.out.println(sc.next());\n }\n //Retrieving the String from the String Buffer object\n String result = sb.toString();\n System.out.println(result);\n //Removing the HTML tags\n result = result.replaceAll(\"<[^>]*>\", \"\");\n System.out.println(\"Contents of the web page: \"+result);\n }\n}" }, { "code": null, "e": 6034, "s": 5955, "text": "<html><body><h1>Itworks!</h1></body></html>\nContents of the web page: Itworks!" }, { "code": null, "e": 6317, "s": 6034, "text": "Jsoup is a Java based library to work with HTML based content. It provides a very convenient API to extract and manipulate data, using the best of DOM, CSS, and jquery-like methods. It implements the WHATWG HTML5 specification, and parses HTML to the same DOM as modern browsers do." }, { "code": null, "e": 6382, "s": 6317, "text": "To retrieve the contents of a web page using the Jsoup library −" }, { "code": null, "e": 6585, "s": 6382, "text": "The connect() method of the Jsoup class accepts an URL of a web page and connects to the specified web page and returns the connection object. Connect to the desired web page using the connect() method." }, { "code": null, "e": 6788, "s": 6585, "text": "The connect() method of the Jsoup class accepts an URL of a web page and connects to the specified web page and returns the connection object. 
Connect to the desired web page using the connect() method." }, { "code": null, "e": 6989, "s": 6788, "text": "The get() method of the Connection interface sends/executes the GET request and returns the HTML document as an object of the Document class. Send GET request to the page by invoking the get() method." }, { "code": null, "e": 7190, "s": 6989, "text": "The get() method of the Connection interface sends/executes the GET request and returns the HTML document as an object of the Document class. Send GET request to the page by invoking the get() method." }, { "code": null, "e": 7256, "s": 7190, "text": "Retrieve the contents of the obtained document into a String as −" }, { "code": null, "e": 7322, "s": 7256, "text": "Retrieve the contents of the obtained document into a String as −" }, { "code": null, "e": 7357, "s": 7322, "text": "String result = doc.body().text();" }, { "code": null, "e": 7895, "s": 7357, "text": "import java.io.IOException;\nimport org.jsoup.Connection;\nimport org.jsoup.Jsoup;\nimport org.jsoup.nodes.Document;\npublic class JsoupExample {\n public static void main(String args[]) throws IOException {\n String page = \"http://www.something.com/\";\n //Connecting to the web page\n Connection conn = Jsoup.connect(page);\n //executing the get request\n Document doc = conn.get();\n //Retrieving the contents (body) of the web page\n String result = doc.body().text();\n System.out.println(result);\n }\n}" }, { "code": null, "e": 7905, "s": 7895, "text": "It works!" } ]
YAML - Block Styles
YAML includes two block scalar styles: literal and folded. Block scalars are controlled with a few indicators given in a header that precedes the content itself. An example of block scalar headers is given below −

%YAML 1.2
---
!!seq [
  !!str "literal\n",
  !!str "·folded\n",
  !!str "keep\n\n",
  !!str "·strip",
]

The output in JSON format with a default behavior is given below −

[
  "literal\n",
  "\u00b7folded\n",
  "keep\n\n",
  "\u00b7strip"
]

There are four types of block styles: literal, folded, keep and strip styles. These block styles are defined with the help of the block chomping scenario. An example of a block chomping scenario is given below −

%YAML 1.2
---
!!map {
  ? !!str "strip"
  : !!str "# text",
  ? !!str "clip"
  : !!str "# text\n",
  ? !!str "keep"
  : !!str "# text\n",
}

You can see the output generated with the three formats in JSON as given below −

{
  "strip": "# text",
  "clip": "# text\n",
  "keep": "# text\n"
}

Chomping in YAML controls how the final line break and trailing empty lines of a block scalar are interpreted. There are three chomping modes.

Stripping − the final line break and any trailing empty lines are excluded from the scalar content. It is specified by the chomping indicator "-".

Clipping − the default behavior if no explicit chomping indicator is specified. The final break character is preserved in the scalar's content, as demonstrated by the "clip" entry in the example above, which terminates with a newline "\n" character.

Keeping − specified by the "+" chomping indicator. The final line break and any trailing empty lines are preserved, and these additional lines are not subject to folding.
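The examples above show the scalars in their parsed (tagged) form; in source YAML the styles are written with the block indicators themselves. A small sketch of what the styles and chomping indicators look like in a document (the key names are arbitrary, added only for illustration):

# Sketch: block scalar indicators and chomping in source YAML.
literal_clip: |     # literal style, default (clip) chomping -> "line one\nline two\n"
  line one
  line two

folded_clip: >      # folded style: the line break becomes a space -> "line one line two\n"
  line one
  line two

literal_strip: |-   # "-" strips the final line break -> "line one\nline two"
  line one
  line two

literal_keep: |+    # "+" keeps the final break and any trailing empty lines
  line one
  line two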
[ { "code": null, "e": 2252, "s": 2048, "text": "YAML includes two block scalar styles: literal and folded. Block scalars are controlled with few indicators with a header preceding the content itself. An example of block scalar headers is given below −" }, { "code": null, "e": 2360, "s": 2252, "text": "%YAML 1.2\n---\n!!seq [\n !!str \"literal\\n\",\n !!str \"·folded\\n\",\n !!str \"keep\\n\\n\",\n !!str \"·strip\",\n]" }, { "code": null, "e": 2427, "s": 2360, "text": "The output in JSON format with a default behavior is given below −" }, { "code": null, "e": 2504, "s": 2427, "text": "[\n \"literal\\n\", \n \"\\u00b7folded\\n\", \n \"keep\\n\\n\", \n \"\\u00b7strip\"\n]\n" }, { "code": null, "e": 2710, "s": 2504, "text": "There are four types of block styles: literal, folded, keep and strip styles. These block styles are defined with the help of Block Chomping scenario. An example of block chomping scenario is given below −" }, { "code": null, "e": 2857, "s": 2710, "text": "%YAML 1.2\n---\n!!map {\n ? !!str \"strip\"\n : !!str \"# text\",\n ? !!str \"clip\"\n : !!str \"# text\\n\",\n ? !!str \"keep\"\n : !!str \"# text\\n\",\n}\n" }, { "code": null, "e": 2934, "s": 2857, "text": "You can see the output generated with three formats in JSON as given below −" }, { "code": null, "e": 3008, "s": 2934, "text": "{\n \"strip\": \"# text\", \n \"clip\": \"# text\\n\", \n \"keep\": \"# text\\n\"\n}\n" }, { "code": null, "e": 3116, "s": 3008, "text": "Chomping in YAML controls the final breaks and trailing empty lines which are interpreted in various forms." }, { "code": null, "e": 3247, "s": 3116, "text": "In this case, the final line break and empty lines are excluded for scalar content. It is specified by the chomping indicator “-“." }, { "code": null, "e": 3514, "s": 3247, "text": "Clipping is considered as a default behavior if no explicit chomping indicator is specified. The final break character is preserved in the scalar’s content. The best example of clipping is demonstrated in the example above. It terminates with newline “\\n” character." }, { "code": null, "e": 3694, "s": 3514, "text": "Keeping refers to the addition with representation of “+” chomping indicator. Additional lines created are not subject to folding. The additional lines are not subject to folding." }, { "code": null, "e": 3726, "s": 3694, "text": "\n 33 Lectures \n 44 mins\n" }, { "code": null, "e": 3740, "s": 3726, "text": " Tarun Telang" }, { "code": null, "e": 3747, "s": 3740, "text": " Print" }, { "code": null, "e": 3758, "s": 3747, "text": " Add Notes" } ]
Compare and Find Differences Between Two Tables in SQL - GeeksforGeeks
23 Apr, 2021

Structured Query Language or SQL is a standard database language that is used to create, maintain and retrieve the data from relational databases like MySQL, Oracle, etc. Here we are going to see how to compare and find differences between two tables in SQL.

Here, we will first create a database named "geeks", then we will create two tables "department_old" and "department_new" in that database. After that, we will execute our query on those tables.

Use the below SQL statement to create a database called geeks:

CREATE DATABASE geeks;
USE geeks;

CREATE TABLE department_old(
  ID int,
  SALARY int,
  NAME Varchar(20),
  DEPT_ID Varchar(255));

Use the below query to add data to the table:

INSERT INTO department_old VALUES (1, 34000, 'ANURAG', 'UI DEVELOPERS');
INSERT INTO department_old VALUES (2, 33000, 'HARSH', 'BACKEND DEVELOPERS');
INSERT INTO department_old VALUES (3, 36000, 'SUMIT', 'BACKEND DEVELOPERS');
INSERT INTO department_old VALUES (4, 36000, 'RUHI', 'UI DEVELOPERS');
INSERT INTO department_old VALUES (5, 37000, 'KAE', 'UI DEVELOPERS');

To verify the contents of the table use the below statement:

SELECT * FROM department_old;

The result from SQL Server Management Studio:

CREATE TABLE department_new(
  ID int,
  SALARY int,
  NAME Varchar(20),
  DEPT_ID Varchar(255));

Use the below query to add data to the table:

INSERT INTO department_new VALUES (1, 34000, 'ANURAG', 'UI DEVELOPERS');
INSERT INTO department_new VALUES (2, 33000, 'HARSH', 'BACKEND DEVELOPERS');
INSERT INTO department_new VALUES (3, 36000, 'SUMIT', 'BACKEND DEVELOPERS');
INSERT INTO department_new VALUES (4, 36000, 'RUHI', 'UI DEVELOPERS');
INSERT INTO department_new VALUES (5, 37000, 'KAE', 'UI DEVELOPERS');
INSERT INTO department_new VALUES (6, 37000, 'REHA', 'BACKEND DEVELOPERS');

To verify the contents of the table use the below statement:

SELECT * FROM department_new;

Output:

Let us suppose we have two tables: table1 and table2. Here, we will use UNION ALL to combine the records based on the columns that need to be compared. If the values in those columns are the same, COUNT(*) returns 2; otherwise COUNT(*) returns 1.

Syntax:

SELECT column1, column2.... columnN
FROM
( SELECT table1.column1, table1.column2
  FROM table1
  UNION ALL
  SELECT table2.column1, table2.column2
  FROM table2
) table1
GROUP BY column1
HAVING COUNT(*) = 1

Example:

SELECT ID FROM
( SELECT * FROM department_old
  UNION ALL
  SELECT * FROM department_new) department_old
GROUP BY ID
HAVING COUNT(*) = 1

Output:

If values in the columns involved in the comparison are identical, no row returns.
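Since the examples here run in SQL Server Management Studio, another option worth knowing is the set operator EXCEPT, which returns the rows of one query that do not appear in the other. A short sketch against the same two tables (each direction has to be checked separately):

-- Sketch: rows present in department_new but missing from department_old
SELECT ID, SALARY, NAME, DEPT_ID FROM department_new
EXCEPT
SELECT ID, SALARY, NAME, DEPT_ID FROM department_old;

-- Sketch: rows present in department_old but missing from department_new
SELECT ID, SALARY, NAME, DEPT_ID FROM department_old
EXCEPT
SELECT ID, SALARY, NAME, DEPT_ID FROM department_new;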
[ { "code": null, "e": 23877, "s": 23849, "text": "\n23 Apr, 2021" }, { "code": null, "e": 24135, "s": 23877, "text": "Structured Query Language or SQL is a standard Database language that is used to create, maintain and retrieve the data from relational databases like MySQL, Oracle, etc. Here we are going to see how to Compare and Find Differences Between Two Tables in SQL" }, { "code": null, "e": 24327, "s": 24135, "text": "Here, we will first create a database named “geeks” then we will create two tables “department_old” and “department_new” in that database. After, that we will execute our query on that table." }, { "code": null, "e": 24390, "s": 24327, "text": "Use the below SQL statement to create a database called geeks:" }, { "code": null, "e": 24404, "s": 24390, "text": "CREATE geeks;" }, { "code": null, "e": 24415, "s": 24404, "text": "USE geeks;" }, { "code": null, "e": 24509, "s": 24415, "text": "CREATE TABLE department_old(\n ID int,\n SALARY int,\n NAME Varchar(20),\n DEPT_ID Varchar(255));" }, { "code": null, "e": 24555, "s": 24509, "text": "Use the below query to add data to the table:" }, { "code": null, "e": 24923, "s": 24555, "text": "INSERT INTO department_old VALUES (1, 34000, 'ANURAG', 'UI DEVELOPERS');\nINSERT INTO department_old VALUES (2, 33000, 'HARSH', 'BACKEND DEVELOPERS');\nINSERT INTO department_old VALUES (3, 36000, 'SUMIT', 'BACKEND DEVELOPERS');\nINSERT INTO department_old VALUES (4, 36000, 'RUHI', 'UI DEVELOPERS');\nINSERT INTO department_old VALUES (5, 37000, 'KAE', 'UI DEVELOPERS');" }, { "code": null, "e": 24984, "s": 24923, "text": "To verify the contents of the table use the below statement:" }, { "code": null, "e": 25014, "s": 24984, "text": "SELECT * FROM department_old;" }, { "code": null, "e": 25061, "s": 25014, "text": "The result from SQL Server Management Studio: " }, { "code": null, "e": 25151, "s": 25061, "text": "CREATE TABLE department_new(\nID int,\nSALARY int,\nNAME Varchar(20),\nDEPT_ID Varchar(255));" }, { "code": null, "e": 25197, "s": 25151, "text": "Use the below query to add data to the table:" }, { "code": null, "e": 25641, "s": 25197, "text": "INSERT INTO department_new VALUES (1, 34000, 'ANURAG', 'UI DEVELOPERS');\nINSERT INTO department_new VALUES (2, 33000, 'HARSH', 'BACKEND DEVELOPERS');\nINSERT INTO department_new VALUES (3, 36000, 'SUMIT', 'BACKEND DEVELOPERS');\nINSERT INTO department_new VALUES (4, 36000, 'RUHI', 'UI DEVELOPERS');\nINSERT INTO department_new VALUES (5, 37000, 'KAE', 'UI DEVELOPERS');\nINSERT INTO department_new VALUES (6, 37000, 'REHA', 'BACKEND DEVELOPERS');" }, { "code": null, "e": 25702, "s": 25641, "text": "To verify the contents of the table use the below statement:" }, { "code": null, "e": 25732, "s": 25702, "text": "SELECT * FROM department_new;" }, { "code": null, "e": 25740, "s": 25732, "text": "Output:" }, { "code": null, "e": 26007, "s": 25740, "text": "Let us suppose, we have two tables: table1 and table2. Here, we will use UNION ALL to combine the records based on columns that need to compare. If the values in the columns that need to compare are the same, the COUNT(*) returns 2, otherwise the COUNT(*) returns 1." }, { "code": null, "e": 26015, "s": 26007, "text": "Syntax:" }, { "code": null, "e": 26219, "s": 26015, "text": "SELECT column1, column2.... 
columnN\nFROM\n( SELECT table1.column1, table1.column2\n FROM table1\n UNION ALL\n SELECT table2.column1, table2.column2\n FROM table2\n) table1\nGROUP BY column1\nHAVING COUNT(*) = 1" }, { "code": null, "e": 26228, "s": 26219, "text": "Example:" }, { "code": null, "e": 26361, "s": 26228, "text": "Select ID from\n( select * from department_old\nUNION ALL\nselect * from department_new)\ndepartment_old\nGROUP BY ID\nHAVING COUNT(*) = 1" }, { "code": null, "e": 26369, "s": 26361, "text": "Output:" }, { "code": null, "e": 26452, "s": 26369, "text": "If values in the columns involved in the comparison are identical, no row returns." }, { "code": null, "e": 26459, "s": 26452, "text": "Picked" }, { "code": null, "e": 26469, "s": 26459, "text": "SQL-Query" }, { "code": null, "e": 26473, "s": 26469, "text": "SQL" }, { "code": null, "e": 26477, "s": 26473, "text": "SQL" }, { "code": null, "e": 26575, "s": 26477, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26584, "s": 26575, "text": "Comments" }, { "code": null, "e": 26597, "s": 26584, "text": "Old Comments" }, { "code": null, "e": 26663, "s": 26597, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 26695, "s": 26663, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 26773, "s": 26695, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 26790, "s": 26773, "text": "SQL using Python" }, { "code": null, "e": 26805, "s": 26790, "text": "SQL | Subquery" }, { "code": null, "e": 26871, "s": 26805, "text": "How to Write a SQL Query For a Specific Date Range and Date Time?" }, { "code": null, "e": 26907, "s": 26871, "text": "SQL Query to Convert VARCHAR to INT" }, { "code": null, "e": 26942, "s": 26907, "text": "SQL Query to Delete Duplicate Rows" }, { "code": null, "e": 26973, "s": 26942, "text": "SQL Query to Compare Two Dates" } ]
Four major chronic diseases and premature mortality | by Daniel Wu | Towards Data Science
In part 2 of our CDC Chronic Disease indicator dataset, our analysis revealed several areas with highly correlated interrelationships — indicators within the cardiovascular disease, chronic kidney disease, diabetes, and select indicators in the overarching conditions “social determinants” category. While there are also highly correlated relationships in other areas such as cancer and COPD, we’ll be focusing primarily on the former set in this final blog post. While Figure 1 from the previous post looked at the relationships among all the indicators, Table 5 from the previous post showed there were a number of top correlation pairs. By looking at the recurring patterns of indicators by specific topic, we can narrow down the scope of the topics of interest. The same code was used previously was used to build the correlation heatmap in the below six figures (red — higher correlation): Below we go through each of the correlation heatmaps pair by pair. At the end of section 6, there will be a reference to the actual Questions and QuestionID (QIDs). Feel free to skip if that helps understand the top specific indicator questions. Summary of Detailed Findings: As we dive into each highly correlated population health chronic indicators by topic and now by individual Question ID pairs, representing the individual indicators, the theme of interrelated chronic illnesses are reinforced. The pattern of comorbidities among diabetes, cardiovascular disease, and chronic kidney disease shows up in general indicators and some more specific indicators. For example, general indicators are broad indicators that appear to capture overall conditions such as DIA1_2 “Mortality due to diabetes reported as any listed cause of death.” These high level indicators with high positive correlation are bound to more discrete mortality outcomes as well, which while obvious, poses questionable ability to bring actionable next steps for individuals. Rather, it implies that the practical route presses towards prevention as a key approach in population health. Interestingly, various overarching conditions (OVC) indicators appeared over and over such as OVC5_0 “Premature mortality among adults aged 45–64 years.” We see OVC5_0 paired with all of the top 5 cardiovascular disease indicators, top 3 chronic kidney disease indicators, and the top 4 diabetes indicators. Such relationship tends to speak to the comorbidities of these three interrelated issues and premature mortality for adults 45–64 years of age. See below for the findings of the pairs, or jump two sections below as we shift gears to an analysis on the stratifications by gender and race, and the corresponding visualizations. 1. Chronic Kidney Disease and Cardiovascular Disease (CKD x CVD) top_corr_pair[(top_corr_pair['Topic1'] == ('CKD')) & (top_corr_pair['Topic2'] == ('CVD'))].head() Chronic kidney disease indicator CKD1_0 is for “Mortality with end-stage renal disease.” On the cardiovascular disease side, CVD1_1 to CVD1_5 are “Mortality from total cardiovascular disease,” “Mortality from diseases of the heart,” “Mortality from coronary heart disease,” “Mortality from heart failure,” and “Mortality from cerebrovascular disease (stroke),” respectively. 2. Chronic Kidney Disease and Diabetes (CKD x DIA) Chronic kidney disease indicator CKD2_1 and CKD2_2 are “Incidence of treated end-stage renal disease” and “Incidence of treated end-stage renal disease attributed to diabetes,” respectively. 
On the diabetes side, DIA1_1 and DIA1_2 are “Mortality due to diabetes reported as any listed cause of death” and “Mortality with diabetes ketoacidosis reported as any listed cause of death,” respectively. DIA9_0 is “Hospitalization with diabetes as a listed diagnosis.” DIA4_0 is “Amputation of a lower extremity attributable to diabetes.” 3. Chronic Kidney Disease and Overarching Conditions (CKD x OVC) Overarching Condition indicators OVC5_0 and OVC6_1 are “Premature mortality among adults aged 45–64 years” and “Fair or poor self-rated health status among adults aged ≥ 18 years,” respectively. These indicators are highly correlated with the same Diabetes indicators DIA1_1, DIA1_2, DIA4_0, and DIA9_0. Additionally we’re seeing relatively high correlation between DIA2_1 “High school completion among adults aged 18–24 years” and OVC6_1 with those with fair or poor self-rated health. 4. Cardiovascular Disease and Diabetes (CVD x DIA) When comparing Cardiovascular and Diabetes indicators, the top correlated indicators start with DIA1_1, which is a general QuestionID of any mortality from diabetes, and also include DIA9_0 hospitalization with diabetes in the diagnosis. The related cardiovascular indicators include CVD1_4 (heart failure), CVD3_1 (hospitalization), CVD1_5 (mortality from stroke), CVD1_1 (general mortality from cardiovascular), and CVD1_2 (mortality from diseases of the heart). 5. Cardiovascular Disease and Overarching Conditions (CVD x OVC) When comparing highly correlated Cardiovascular Disease and Overarching Conditions indicator OVC5_0 “premature mortality 45–64 years adults,” the top correlating indicators are belong to the group of CVD1_X Question IDs (QID1), ranging from heart disease to stroke. 6. Diabetes and Overarching Conditions (DIA x OVC) Overarching Condition indicators OVC5_0 and OVC6_1 are “Premature mortality among adults aged 45–64 years” and “Fair or poor self-rated health status among adults aged ≥ 18 years,” respectively. These indicators are highly correlated with the same Diabetes indicators DIA1_1, DIA1_2, DIA4_0, and DIA9_0. Additionally we’re seeing relatively high correlation between DIA2_1 “High school completion among adults aged 18–24 years” and OVC6_1 with those with fair or poor self-rated health. Reference for QuestionID Indicator Codes Since Figures/Tables 1 to 6 visualized or listed the relationships, it would be helpful to take a look again at the respective questions for the Question IDs (QID1/QID2) to better understand the specific indicators. Using groupby() method on the df_new dataframe and then .loc[] to pull out the respective topics, I’m able to view a list of the QuestionID and Questions. df_new_QTQLY = df_new[['QuestionID','Topic','Question','LocationAbbr','YearStart']].groupby(['Topic','QuestionID','Question']).count()df_new_QTQLY.loc['Cardiovascular Disease'] Stratification Analysis on Adult Premature Mortality — Gender Since OVC5_0 indicator “Premature mortality of adults 45–64 years of age” appear repeatedly, let’s look into what this indicator can tell us by stratification of gender and race. We’re going to create df_new_OVC5_0_gender dataframe based on df_new based on the following parameters: df_new_OVC5_0_gender = df_new[ (df_new['QuestionID'] == 'OVC5_0') & (df_new['StratificationCategory1'] == 'Gender') & (df_new['DataValueUnit'] == 'cases per 100,000')]df_new_OVC5_0_gender.info() In the above dataframe, df_new[‘QuestionID’] == ‘OVC5_0’ takes the subset of data related to our specific indicator. 
However, this includes stratifications for both race and gender. To look at the gender stratification, df_new[‘StratificationCategory1’] == ‘Gender’ is included. The remaining dataframe shows in DataValueType to include either a ‘Number’ or an ‘Age adjusted rate.’ The ‘Number’ represents a crude rate, which is the “total burden of a health outcome to a community.” The ‘age adjusted rate’ is used to make more fair comparisons among groups whether it’s age, gender, or race. (More detailed explain here). It appears that when DataValueType = ‘Age adjusted rate,’ DataValueUnit = ‘cases per 100,000’ so I’ve included df_new[‘DataValueUnit’] == ‘cases per 100,000.’ The count shows that the OVC topic and StratificationCategory1 is not as populated as the full dataset which is expected: <class 'pandas.core.frame.DataFrame'>Int64Index: 1020 entries, 268047 to 403039Data columns (total 16 columns):YearStart 1020 non-null int64LocationAbbr 1020 non-null objectTopic 1020 non-null objectQuestion 1020 non-null objectDataValueUnit 1020 non-null objectDataValueType 1020 non-null objectDataValueAlt 1020 non-null float64StratificationCategory1 1020 non-null objectStratification1 1020 non-null objectLocationID 1020 non-null int64TopicID 1020 non-null objectQuestionID 1020 non-null objectDataValueTypeID 1020 non-null objectStratificationCategoryID1 1020 non-null objectStratificationID1 1020 non-null objectQuestionAbbr 1020 non-null objectdtypes: float64(1), int64(2), object(13)df_new_OVC5_0_gender.head() Using the groupby summary, we see that the data available runs for 5 years as opposed to the entire 15 years of the overall dataset. The following will also plot out the trends and in a seaborn barplot. df_new_OVC5_0_gender1 = df_new_OVC5_0_gender.groupby(['Stratification1','YearStart'])df_new_OVC5_0_gender1.mean().drop('LocationID',axis=1).round()plt.figure(figsize=(16, 6))sns.barplot(x='YearStart',y='DataValueAlt',data=df_new_OVC5_0_gender,hue='Stratification1',ci=None,saturation=0.7) Over the course of 2010 to 2014, the data shows that Premature Mortality of 45–64 years of age population to slightly increase year over year. Female population premature mortality increases 5.4% and male population at 3.4%. The vertical axis corresponds to DataValueAlt which is in units of 100K cases. For example, there are 485K cases in 2014 for Female (38%) while there are 784K cases in the same year for Male (62%). Surprisingly, I anticipated the premature mortality rate to increase faster for Male over Female overall. While the correlation pairs gave an overall look, let’s compare indicators between Male and Female populations in the same fashion as by correlation pairs. In the original df_new[‘QuestionID’], there were 201 unique indicators across the dataset. Since we’re focused on the age-adjusted group, our df_new_gender2[‘QuestionID’] shows there are 13 unique indicators. Keeping that in mind, the following analysis shows a subset of indicators. We’re going to start with modifying the dataframe df_new_gender for the following gender and a potential location analysis, thereby dropping several columns that are not useful for this. df_new_gender2 = df_new_gender.drop(columns=['Question','Topic','DataValueUnit','DataValueType','StratificationCategory1','TopicID','DataValueTypeID','StratificationCategoryID1','StratificationID1','QuestionAbbr'])df_new_gender2.head() In order to create a table to visualize the correlation pairs by gender, let’s take Table 9a and apply the pivot_table method. 
Let’s create a dataframe called df_new_gender2_loc_qid where the rows are for each location and columns are each indicator. Since we want to see the data by gender (Stratification1), there needs to be a column with a label Male or Female for each location. Also, since we’re looking at each row by location, it makes sense to summarize the DataValueAlt values for each indicator as an average so we’ll pass in the parameter aggfunc=np.mean into the pivot_table. As this pivot table is created, I encountered issues with LocationIDs or QuestionIDs that don’t have data, resulting in NA values. When this happens, we can assess whether to keep only rows/columns that have values by dropping all the rows and/or columns with NA values. As I did this, the resulting dataframe was severely reduced. Instead, by using the .fillna() with a mean using df_new_gender_loc_qid.mean(), the NA values are replaced with an average. Table 9b is the top 5 rows of the adjusted pivot table. df_new_gender2_loc_qid = df_new_gender2.pivot_table(values='DataValueAlt',index=['LocationID','Stratification1'],columns=['QuestionID'],aggfunc=np.mean)# Create pivot table with rows by Location, columns of each QuestionID, values - DataValueAlt mean by Locationdf_new_gender2_loc_qid.reset_index(level='Stratification1',inplace=True)df_new_gender2_loc_qid.fillna(df_new_gender2_loc_qid.mean(),inplace=True)df_new_gender2_loc_qid.head() With seaborn pairplot on df_new_gender2_loc_qid, the hue parameter allows us to render the plot by gender. Similar to Figure 1 from the Part 2 blog post, we have a plot of each of the 13 indicators along the same X and Y axes. Instead of correlation, we have the average values from DataValueAlt by gender and state (LocationID) for these 13 indicators: ALC6_0 (Alcohol: Chronic Liver Disease Mortality), CKD1_0 (Chronic Kidney Disease: End Stage Renal Disease Mortality), 2 Chronic Obstructive Pulmonary Disease (COPD), 5 Cardiovascular Disease (CVD), 2 Diabetes (DIA), OLD1_0, and OVC5_0 (Overarching Conditions: Premature Mortality Aged 45–64 Years). OVC5_0 came up in the prior sections as a highly correlated indicator of interest. sns.pairplot(df_new_gender2_loc_qid,hue='Stratification1',palette='RdBu_r') As mentioned previously about Figure 9 above, OVC5_0 is the Premature Mortality Aged 45–64 Years indicator of interest. Along the bottom row of Figure 9, the pairplot generally shows a distinct distribution between Male (orange) and Female (blue) populations. In nearly all cases when we compare each plot along the bottom row (OVC5_0 to the other indicators), we see Male populations with higher average DataValueAlt values for most locations, which reflects the generally, consistently poorer population health conditions across states for Male. Stratification Analysis on Adult Premature Mortality —Race Similar to stratification by gender, the analysis on stratification by race will use the following conditions to filter the data. 
We’ll create a new dataframe df_new_OVC5_0_race based on OVC5_0, aged based prevalence using “cases per 100,000,” and the StratificationCategory1 of “Race/Ethnicity.” df_new_OVC5_0_race = df_new[ (df_new['QuestionID'] == 'OVC5_0') & (df_new['DataValueUnit'] == 'cases per 100,000') &(df_new['StratificationCategory1'] == 'Race/Ethnicity')]df_new_OVC5_0_race1 = df_new_OVC5_0_race.groupby(['Stratification1','YearStart']).mean().round(0)df_new_OVC5_0_race1 Further we’ll use the .groupby() method to summarize the subset dataframe df_new_OVC5_0_race into a table based on the race, year, and the DataValueAlt. The categories for race in Stratification1 gives us 5 categories: American Indian or Alaska Native; Asian or Pacific Islander; Black, non-Hispanic; Hispanic; and White, non-Hispanic. For YearStart, we have 5 years of data since this was the result of the available stratification data and the use of the aged-based prevalence. DataValueAlt is an average value of the locations, rounded off to the whole digits. For example,there are 348 cases of Premature Mortality in 45–64 years aged persons per 100K for the Hispanic population in 2010. To visualize Table 10, seaborn barplot will display each of the race stratifications and the values of DataValueAlt. The parameters for the barplot include passing the x=YearStart, y=DataValueAlt, and hue=Stratification1 from data=df_new_OVC5_0_race. With plt.legend(), we’re able to fine tweak where the legend gets rendered, and I prefer outside plot which uses the bbox_to_anchor parameter. plt.figure(figsize=(16, 7))sns.barplot(x='YearStart',y='DataValueAlt',data=df_new_OVC5_0_race,hue='Stratification1',ci=None,saturation=0.7)plt.legend(loc='best',bbox_to_anchor=(0.5, 0,.73, .73)) From both Table 10 and Figure 10, the data shows that premature mortality is slightly increasing year over year with incremental increases. Premature mortality of adults aged 45–64 years of age impact the “Black, non-Hispanic” and the “American Indian or Alaska Native” populations most severely. On the other spectrum, we see “Asian or Pacific Islander” and “Hispanic” populations to be least severe. By comparison, we’re seeing the premature mortality numbers for “Black, non-Hispanic” population to be more than double of the “Hispanic” population. While it’s possible to hypothesize the correlations with the premature mortality numbers for each stratification, it’s beyond the scope of this dataset to tie the descriptive with the causation. Figure 10 gives a general understanding of the seriousness of the factors impacting the overall population, which can be complex and multi-faceted. Another way of looking at the race stratification is by a stacked bar chart. Again, a pivot_table() with the same data from df_new_OVC5_0_race is a starting point for the bar chart, shown in Table 11. df_new_OVC5_0_race2 = df_new_OVC5_0_race.pivot_table(values='DataValueAlt',index='YearStart',columns='Stratification1').round()df_new_OVC5_0_race2 Based on Table 11, we want to create columns of cumulative values instead of the specific values for each stratification. I’m going to modify df_new_OVC5_0_race2 by updating the values for subsequent columns as the sum of the previous columns. For example the values in the column for Asian or Pacific Islander column will be the sum of American Indian or Alaska Native and the Asian or Pacific Islander. Repeat this for the next three columns. 
With a few additional tweaks of dropping columns to keep the new cumulative columns and resetting the index, the new dataframe df_new_OVC5_0_race2c can be used in the following barplot. sns.set_style('darkgrid')plt.figure(figsize=(8, 8))sns.barplot(x='YearStart', y='cum_White', data=df_new_OVC5_0_race2c, color='blue',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Hispanic', data=df_new_OVC5_0_race2c, color='purple',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Black', data=df_new_OVC5_0_race2c, color='red',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Asian', data=df_new_OVC5_0_race2c, color='green',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_AmIndianAK', data=df_new_OVC5_0_race2c, color='orange',ci=None,saturation=0.7) Since the year to year changes for each race stratification are not significantly different, the bars are mostly similar. However, this bar chart gives another view into the overall premature mortality growth from 2010 to 2014. The chart also provides a view of the proportion of the premature mortality amongst the categories of race. The colors of the hues use the same legend as in Figure 10: Blue (White, non-Hispanic), Purple (Hispanic), Red (Black non-Hispanic), Green (Asian Pacific Islander), and Yellow/Orange (American Indian or Alaska Native). I hope you’ve enjoyed following the reading on the US Chronic Indicators population dataset from the CDC. This wraps up our series. Follow me for more digital health data science related topics. Feel free to suggest any ideas or recommendations!
[ { "code": null, "e": 635, "s": 171, "text": "In part 2 of our CDC Chronic Disease indicator dataset, our analysis revealed several areas with highly correlated interrelationships — indicators within the cardiovascular disease, chronic kidney disease, diabetes, and select indicators in the overarching conditions “social determinants” category. While there are also highly correlated relationships in other areas such as cancer and COPD, we’ll be focusing primarily on the former set in this final blog post." }, { "code": null, "e": 1066, "s": 635, "text": "While Figure 1 from the previous post looked at the relationships among all the indicators, Table 5 from the previous post showed there were a number of top correlation pairs. By looking at the recurring patterns of indicators by specific topic, we can narrow down the scope of the topics of interest. The same code was used previously was used to build the correlation heatmap in the below six figures (red — higher correlation):" }, { "code": null, "e": 1312, "s": 1066, "text": "Below we go through each of the correlation heatmaps pair by pair. At the end of section 6, there will be a reference to the actual Questions and QuestionID (QIDs). Feel free to skip if that helps understand the top specific indicator questions." }, { "code": null, "e": 1342, "s": 1312, "text": "Summary of Detailed Findings:" }, { "code": null, "e": 2228, "s": 1342, "text": "As we dive into each highly correlated population health chronic indicators by topic and now by individual Question ID pairs, representing the individual indicators, the theme of interrelated chronic illnesses are reinforced. The pattern of comorbidities among diabetes, cardiovascular disease, and chronic kidney disease shows up in general indicators and some more specific indicators. For example, general indicators are broad indicators that appear to capture overall conditions such as DIA1_2 “Mortality due to diabetes reported as any listed cause of death.” These high level indicators with high positive correlation are bound to more discrete mortality outcomes as well, which while obvious, poses questionable ability to bring actionable next steps for individuals. Rather, it implies that the practical route presses towards prevention as a key approach in population health." }, { "code": null, "e": 2680, "s": 2228, "text": "Interestingly, various overarching conditions (OVC) indicators appeared over and over such as OVC5_0 “Premature mortality among adults aged 45–64 years.” We see OVC5_0 paired with all of the top 5 cardiovascular disease indicators, top 3 chronic kidney disease indicators, and the top 4 diabetes indicators. Such relationship tends to speak to the comorbidities of these three interrelated issues and premature mortality for adults 45–64 years of age." }, { "code": null, "e": 2862, "s": 2680, "text": "See below for the findings of the pairs, or jump two sections below as we shift gears to an analysis on the stratifications by gender and race, and the corresponding visualizations." }, { "code": null, "e": 2927, "s": 2862, "text": "1. 
Chronic Kidney Disease and Cardiovascular Disease (CKD x CVD)" }, { "code": null, "e": 3025, "s": 2927, "text": "top_corr_pair[(top_corr_pair['Topic1'] == ('CKD')) & (top_corr_pair['Topic2'] == ('CVD'))].head()" }, { "code": null, "e": 3400, "s": 3025, "text": "Chronic kidney disease indicator CKD1_0 is for “Mortality with end-stage renal disease.” On the cardiovascular disease side, CVD1_1 to CVD1_5 are “Mortality from total cardiovascular disease,” “Mortality from diseases of the heart,” “Mortality from coronary heart disease,” “Mortality from heart failure,” and “Mortality from cerebrovascular disease (stroke),” respectively." }, { "code": null, "e": 3451, "s": 3400, "text": "2. Chronic Kidney Disease and Diabetes (CKD x DIA)" }, { "code": null, "e": 3983, "s": 3451, "text": "Chronic kidney disease indicator CKD2_1 and CKD2_2 are “Incidence of treated end-stage renal disease” and “Incidence of treated end-stage renal disease attributed to diabetes,” respectively. On the diabetes side, DIA1_1 and DIA1_2 are “Mortality due to diabetes reported as any listed cause of death” and “Mortality with diabetes ketoacidosis reported as any listed cause of death,” respectively. DIA9_0 is “Hospitalization with diabetes as a listed diagnosis.” DIA4_0 is “Amputation of a lower extremity attributable to diabetes.”" }, { "code": null, "e": 4048, "s": 3983, "text": "3. Chronic Kidney Disease and Overarching Conditions (CKD x OVC)" }, { "code": null, "e": 4535, "s": 4048, "text": "Overarching Condition indicators OVC5_0 and OVC6_1 are “Premature mortality among adults aged 45–64 years” and “Fair or poor self-rated health status among adults aged ≥ 18 years,” respectively. These indicators are highly correlated with the same Diabetes indicators DIA1_1, DIA1_2, DIA4_0, and DIA9_0. Additionally we’re seeing relatively high correlation between DIA2_1 “High school completion among adults aged 18–24 years” and OVC6_1 with those with fair or poor self-rated health." }, { "code": null, "e": 4586, "s": 4535, "text": "4. Cardiovascular Disease and Diabetes (CVD x DIA)" }, { "code": null, "e": 5051, "s": 4586, "text": "When comparing Cardiovascular and Diabetes indicators, the top correlated indicators start with DIA1_1, which is a general QuestionID of any mortality from diabetes, and also include DIA9_0 hospitalization with diabetes in the diagnosis. The related cardiovascular indicators include CVD1_4 (heart failure), CVD3_1 (hospitalization), CVD1_5 (mortality from stroke), CVD1_1 (general mortality from cardiovascular), and CVD1_2 (mortality from diseases of the heart)." }, { "code": null, "e": 5116, "s": 5051, "text": "5. Cardiovascular Disease and Overarching Conditions (CVD x OVC)" }, { "code": null, "e": 5382, "s": 5116, "text": "When comparing highly correlated Cardiovascular Disease and Overarching Conditions indicator OVC5_0 “premature mortality 45–64 years adults,” the top correlating indicators are belong to the group of CVD1_X Question IDs (QID1), ranging from heart disease to stroke." }, { "code": null, "e": 5433, "s": 5382, "text": "6. Diabetes and Overarching Conditions (DIA x OVC)" }, { "code": null, "e": 5920, "s": 5433, "text": "Overarching Condition indicators OVC5_0 and OVC6_1 are “Premature mortality among adults aged 45–64 years” and “Fair or poor self-rated health status among adults aged ≥ 18 years,” respectively. These indicators are highly correlated with the same Diabetes indicators DIA1_1, DIA1_2, DIA4_0, and DIA9_0. 
Additionally we’re seeing relatively high correlation between DIA2_1 “High school completion among adults aged 18–24 years” and OVC6_1 with those with fair or poor self-rated health." }, { "code": null, "e": 5961, "s": 5920, "text": "Reference for QuestionID Indicator Codes" }, { "code": null, "e": 6332, "s": 5961, "text": "Since Figures/Tables 1 to 6 visualized or listed the relationships, it would be helpful to take a look again at the respective questions for the Question IDs (QID1/QID2) to better understand the specific indicators. Using groupby() method on the df_new dataframe and then .loc[] to pull out the respective topics, I’m able to view a list of the QuestionID and Questions." }, { "code": null, "e": 6509, "s": 6332, "text": "df_new_QTQLY = df_new[['QuestionID','Topic','Question','LocationAbbr','YearStart']].groupby(['Topic','QuestionID','Question']).count()df_new_QTQLY.loc['Cardiovascular Disease']" }, { "code": null, "e": 6571, "s": 6509, "text": "Stratification Analysis on Adult Premature Mortality — Gender" }, { "code": null, "e": 6854, "s": 6571, "text": "Since OVC5_0 indicator “Premature mortality of adults 45–64 years of age” appear repeatedly, let’s look into what this indicator can tell us by stratification of gender and race. We’re going to create df_new_OVC5_0_gender dataframe based on df_new based on the following parameters:" }, { "code": null, "e": 7049, "s": 6854, "text": "df_new_OVC5_0_gender = df_new[ (df_new['QuestionID'] == 'OVC5_0') & (df_new['StratificationCategory1'] == 'Gender') & (df_new['DataValueUnit'] == 'cases per 100,000')]df_new_OVC5_0_gender.info()" }, { "code": null, "e": 7954, "s": 7049, "text": "In the above dataframe, df_new[‘QuestionID’] == ‘OVC5_0’ takes the subset of data related to our specific indicator. However, this includes stratifications for both race and gender. To look at the gender stratification, df_new[‘StratificationCategory1’] == ‘Gender’ is included. The remaining dataframe shows in DataValueType to include either a ‘Number’ or an ‘Age adjusted rate.’ The ‘Number’ represents a crude rate, which is the “total burden of a health outcome to a community.” The ‘age adjusted rate’ is used to make more fair comparisons among groups whether it’s age, gender, or race. (More detailed explain here). It appears that when DataValueType = ‘Age adjusted rate,’ DataValueUnit = ‘cases per 100,000’ so I’ve included df_new[‘DataValueUnit’] == ‘cases per 100,000.’ The count shows that the OVC topic and StratificationCategory1 is not as populated as the full dataset which is expected:" }, { "code": null, "e": 8916, "s": 7954, "text": "<class 'pandas.core.frame.DataFrame'>Int64Index: 1020 entries, 268047 to 403039Data columns (total 16 columns):YearStart 1020 non-null int64LocationAbbr 1020 non-null objectTopic 1020 non-null objectQuestion 1020 non-null objectDataValueUnit 1020 non-null objectDataValueType 1020 non-null objectDataValueAlt 1020 non-null float64StratificationCategory1 1020 non-null objectStratification1 1020 non-null objectLocationID 1020 non-null int64TopicID 1020 non-null objectQuestionID 1020 non-null objectDataValueTypeID 1020 non-null objectStratificationCategoryID1 1020 non-null objectStratificationID1 1020 non-null objectQuestionAbbr 1020 non-null objectdtypes: float64(1), int64(2), object(13)df_new_OVC5_0_gender.head()" }, { "code": null, "e": 9119, "s": 8916, "text": "Using the groupby summary, we see that the data available runs for 5 years as opposed to the entire 15 years of the overall dataset. 
The following will also plot out the trends and in a seaborn barplot." }, { "code": null, "e": 9408, "s": 9119, "text": "df_new_OVC5_0_gender1 = df_new_OVC5_0_gender.groupby(['Stratification1','YearStart'])df_new_OVC5_0_gender1.mean().drop('LocationID',axis=1).round()plt.figure(figsize=(16, 6))sns.barplot(x='YearStart',y='DataValueAlt',data=df_new_OVC5_0_gender,hue='Stratification1',ci=None,saturation=0.7)" }, { "code": null, "e": 9937, "s": 9408, "text": "Over the course of 2010 to 2014, the data shows that Premature Mortality of 45–64 years of age population to slightly increase year over year. Female population premature mortality increases 5.4% and male population at 3.4%. The vertical axis corresponds to DataValueAlt which is in units of 100K cases. For example, there are 485K cases in 2014 for Female (38%) while there are 784K cases in the same year for Male (62%). Surprisingly, I anticipated the premature mortality rate to increase faster for Male over Female overall." }, { "code": null, "e": 10564, "s": 9937, "text": "While the correlation pairs gave an overall look, let’s compare indicators between Male and Female populations in the same fashion as by correlation pairs. In the original df_new[‘QuestionID’], there were 201 unique indicators across the dataset. Since we’re focused on the age-adjusted group, our df_new_gender2[‘QuestionID’] shows there are 13 unique indicators. Keeping that in mind, the following analysis shows a subset of indicators. We’re going to start with modifying the dataframe df_new_gender for the following gender and a potential location analysis, thereby dropping several columns that are not useful for this." }, { "code": null, "e": 10800, "s": 10564, "text": "df_new_gender2 = df_new_gender.drop(columns=['Question','Topic','DataValueUnit','DataValueType','StratificationCategory1','TopicID','DataValueTypeID','StratificationCategoryID1','StratificationID1','QuestionAbbr'])df_new_gender2.head()" }, { "code": null, "e": 11389, "s": 10800, "text": "In order to create a table to visualize the correlation pairs by gender, let’s take Table 9a and apply the pivot_table method. Let’s create a dataframe called df_new_gender2_loc_qid where the rows are for each location and columns are each indicator. Since we want to see the data by gender (Stratification1), there needs to be a column with a label Male or Female for each location. Also, since we’re looking at each row by location, it makes sense to summarize the DataValueAlt values for each indicator as an average so we’ll pass in the parameter aggfunc=np.mean into the pivot_table." }, { "code": null, "e": 11901, "s": 11389, "text": "As this pivot table is created, I encountered issues with LocationIDs or QuestionIDs that don’t have data, resulting in NA values. When this happens, we can assess whether to keep only rows/columns that have values by dropping all the rows and/or columns with NA values. As I did this, the resulting dataframe was severely reduced. Instead, by using the .fillna() with a mean using df_new_gender_loc_qid.mean(), the NA values are replaced with an average. Table 9b is the top 5 rows of the adjusted pivot table." 
}, { "code": null, "e": 12338, "s": 11901, "text": "df_new_gender2_loc_qid = df_new_gender2.pivot_table(values='DataValueAlt',index=['LocationID','Stratification1'],columns=['QuestionID'],aggfunc=np.mean)# Create pivot table with rows by Location, columns of each QuestionID, values - DataValueAlt mean by Locationdf_new_gender2_loc_qid.reset_index(level='Stratification1',inplace=True)df_new_gender2_loc_qid.fillna(df_new_gender2_loc_qid.mean(),inplace=True)df_new_gender2_loc_qid.head()" }, { "code": null, "e": 13075, "s": 12338, "text": "With seaborn pairplot on df_new_gender2_loc_qid, the hue parameter allows us to render the plot by gender. Similar to Figure 1 from the Part 2 blog post, we have a plot of each of the 13 indicators along the same X and Y axes. Instead of correlation, we have the average values from DataValueAlt by gender and state (LocationID) for these 13 indicators: ALC6_0 (Alcohol: Chronic Liver Disease Mortality), CKD1_0 (Chronic Kidney Disease: End Stage Renal Disease Mortality), 2 Chronic Obstructive Pulmonary Disease (COPD), 5 Cardiovascular Disease (CVD), 2 Diabetes (DIA), OLD1_0, and OVC5_0 (Overarching Conditions: Premature Mortality Aged 45–64 Years). OVC5_0 came up in the prior sections as a highly correlated indicator of interest." }, { "code": null, "e": 13151, "s": 13075, "text": "sns.pairplot(df_new_gender2_loc_qid,hue='Stratification1',palette='RdBu_r')" }, { "code": null, "e": 13699, "s": 13151, "text": "As mentioned previously about Figure 9 above, OVC5_0 is the Premature Mortality Aged 45–64 Years indicator of interest. Along the bottom row of Figure 9, the pairplot generally shows a distinct distribution between Male (orange) and Female (blue) populations. In nearly all cases when we compare each plot along the bottom row (OVC5_0 to the other indicators), we see Male populations with higher average DataValueAlt values for most locations, which reflects the generally, consistently poorer population health conditions across states for Male." }, { "code": null, "e": 13758, "s": 13699, "text": "Stratification Analysis on Adult Premature Mortality —Race" }, { "code": null, "e": 14055, "s": 13758, "text": "Similar to stratification by gender, the analysis on stratification by race will use the following conditions to filter the data. We’ll create a new dataframe df_new_OVC5_0_race based on OVC5_0, aged based prevalence using “cases per 100,000,” and the StratificationCategory1 of “Race/Ethnicity.”" }, { "code": null, "e": 14344, "s": 14055, "text": "df_new_OVC5_0_race = df_new[ (df_new['QuestionID'] == 'OVC5_0') & (df_new['DataValueUnit'] == 'cases per 100,000') &(df_new['StratificationCategory1'] == 'Race/Ethnicity')]df_new_OVC5_0_race1 = df_new_OVC5_0_race.groupby(['Stratification1','YearStart']).mean().round(0)df_new_OVC5_0_race1" }, { "code": null, "e": 15037, "s": 14344, "text": "Further we’ll use the .groupby() method to summarize the subset dataframe df_new_OVC5_0_race into a table based on the race, year, and the DataValueAlt. The categories for race in Stratification1 gives us 5 categories: American Indian or Alaska Native; Asian or Pacific Islander; Black, non-Hispanic; Hispanic; and White, non-Hispanic. For YearStart, we have 5 years of data since this was the result of the available stratification data and the use of the aged-based prevalence. DataValueAlt is an average value of the locations, rounded off to the whole digits. 
For example,there are 348 cases of Premature Mortality in 45–64 years aged persons per 100K for the Hispanic population in 2010." }, { "code": null, "e": 15431, "s": 15037, "text": "To visualize Table 10, seaborn barplot will display each of the race stratifications and the values of DataValueAlt. The parameters for the barplot include passing the x=YearStart, y=DataValueAlt, and hue=Stratification1 from data=df_new_OVC5_0_race. With plt.legend(), we’re able to fine tweak where the legend gets rendered, and I prefer outside plot which uses the bbox_to_anchor parameter." }, { "code": null, "e": 15626, "s": 15431, "text": "plt.figure(figsize=(16, 7))sns.barplot(x='YearStart',y='DataValueAlt',data=df_new_OVC5_0_race,hue='Stratification1',ci=None,saturation=0.7)plt.legend(loc='best',bbox_to_anchor=(0.5, 0,.73, .73))" }, { "code": null, "e": 16178, "s": 15626, "text": "From both Table 10 and Figure 10, the data shows that premature mortality is slightly increasing year over year with incremental increases. Premature mortality of adults aged 45–64 years of age impact the “Black, non-Hispanic” and the “American Indian or Alaska Native” populations most severely. On the other spectrum, we see “Asian or Pacific Islander” and “Hispanic” populations to be least severe. By comparison, we’re seeing the premature mortality numbers for “Black, non-Hispanic” population to be more than double of the “Hispanic” population." }, { "code": null, "e": 16521, "s": 16178, "text": "While it’s possible to hypothesize the correlations with the premature mortality numbers for each stratification, it’s beyond the scope of this dataset to tie the descriptive with the causation. Figure 10 gives a general understanding of the seriousness of the factors impacting the overall population, which can be complex and multi-faceted." }, { "code": null, "e": 16722, "s": 16521, "text": "Another way of looking at the race stratification is by a stacked bar chart. Again, a pivot_table() with the same data from df_new_OVC5_0_race is a starting point for the bar chart, shown in Table 11." }, { "code": null, "e": 16870, "s": 16722, "text": "df_new_OVC5_0_race2 = df_new_OVC5_0_race.pivot_table(values='DataValueAlt',index='YearStart',columns='Stratification1').round()df_new_OVC5_0_race2 " }, { "code": null, "e": 17501, "s": 16870, "text": "Based on Table 11, we want to create columns of cumulative values instead of the specific values for each stratification. I’m going to modify df_new_OVC5_0_race2 by updating the values for subsequent columns as the sum of the previous columns. For example the values in the column for Asian or Pacific Islander column will be the sum of American Indian or Alaska Native and the Asian or Pacific Islander. Repeat this for the next three columns. With a few additional tweaks of dropping columns to keep the new cumulative columns and resetting the index, the new dataframe df_new_OVC5_0_race2c can be used in the following barplot." 
}, { "code": null, "e": 18090, "s": 17501, "text": "sns.set_style('darkgrid')plt.figure(figsize=(8, 8))sns.barplot(x='YearStart', y='cum_White', data=df_new_OVC5_0_race2c, color='blue',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Hispanic', data=df_new_OVC5_0_race2c, color='purple',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Black', data=df_new_OVC5_0_race2c, color='red',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_Asian', data=df_new_OVC5_0_race2c, color='green',ci=None,saturation=0.7)sns.barplot(x='YearStart', y='cum_AmIndianAK', data=df_new_OVC5_0_race2c, color='orange',ci=None,saturation=0.7)" }, { "code": null, "e": 18645, "s": 18090, "text": "Since the year to year changes for each race stratification are not significantly different, the bars are mostly similar. However, this bar chart gives another view into the overall premature mortality growth from 2010 to 2014. The chart also provides a view of the proportion of the premature mortality amongst the categories of race. The colors of the hues use the same legend as in Figure 10: Blue (White, non-Hispanic), Purple (Hispanic), Red (Black non-Hispanic), Green (Asian Pacific Islander), and Yellow/Orange (American Indian or Alaska Native)." } ]
Get the range of elements in a C# list
Use the GetRange() method to get a range of elements from a list. Firstly, create a list and add elements − List<int> arr1 = new List<int>(); arr1.Add(10); arr1.Add(20); arr1.Add(30); arr1.Add(40); arr1.Add(50); Now, copy three elements starting at index 1 into a new list; note that the second argument of GetRange() is a count, not an end index − List<int> myList = arr1.GetRange(1, 3); Here is the complete code − using System; using System.Collections.Generic; public class Demo { public static void Main() { List<int> arr1 = new List<int>(); arr1.Add(10); arr1.Add(20); arr1.Add(30); arr1.Add(40); arr1.Add(50); Console.WriteLine("Initial List ..."); foreach (int i in arr1) { Console.WriteLine(i); } Console.WriteLine("Getting elements between a range..."); List<int> myList = arr1.GetRange(1, 3); foreach (int res in myList) { Console.WriteLine(res); } } } Initial List ... 10 20 30 40 50 Getting elements between a range... 20 30 40
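One caveat worth noting (a hedged sketch, not part of the original example): GetRange() throws an ArgumentException when index plus count runs past the end of the list, and an ArgumentOutOfRangeException when either argument is negative, so a user-supplied range can be validated first. The variable names below are illustrative only:
// Validate the requested range before copying it out of arr1
int index = 1, count = 3;
if (index >= 0 && count >= 0 && index + count <= arr1.Count) {
   List<int> safeRange = arr1.GetRange(index, count);
} else {
   Console.WriteLine("Requested range is outside the list");
}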
[ { "code": null, "e": 1119, "s": 1062, "text": "Use the GetRange() method to get the range of elements −" }, { "code": null, "e": 1158, "s": 1119, "text": "Firstly, set a list and add elements −" }, { "code": null, "e": 1262, "s": 1158, "text": "List<int> arr1 = new List<int>();\narr1.Add(10);\narr1.Add(20);\narr1.Add(30);\narr1.Add(40);\narr1.Add(50);" }, { "code": null, "e": 1334, "s": 1262, "text": "Now, under a new list get the range of elements between index 1 and 3 −" }, { "code": null, "e": 1374, "s": 1334, "text": "List<int> myList = arr1.GetRange(1, 3);" }, { "code": null, "e": 1402, "s": 1374, "text": "Here is the complete code −" }, { "code": null, "e": 1413, "s": 1402, "text": " Live Demo" }, { "code": null, "e": 1962, "s": 1413, "text": "using System;\nusing System.Collections.Generic;\npublic class Demo {\n public static void Main() {\n List<int> arr1 = new List<int>();\n arr1.Add(10);\n arr1.Add(20);\n arr1.Add(30);\n arr1.Add(40);\n arr1.Add(50);\n Console.WriteLine(\"Initial List ...\");\n foreach (int i in arr1) {\n Console.WriteLine(i);\n }\n Console.WriteLine(\"Getting elements between a range...\");\n List<int> myList = arr1.GetRange(1, 3);\n foreach (int res in myList) {\n Console.WriteLine(res);\n }\n }\n}" }, { "code": null, "e": 2039, "s": 1962, "text": "Initial List ...\n10\n20\n30\n40\n50\nGetting elements between a range...\n20\n30\n40" } ]
Angular 6 - Routing
Routing basically means navigating between pages. You have seen many sites with links that direct you to a new page. This can be achieved using routing. Here the pages that we are referring to will be in the form of components. We have already seen how to create a component. Let us now create a component and see how to use routing with it. In the main parent component app.module.ts, we have to now include the router module as shown below − import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { RouterModule} from '@angular/router'; import { AppComponent } from './app.component'; import { NewCmpComponent } from './new-cmp/new-cmp.component'; import { ChangeTextDirective } from './change-text.directive'; import { SqrtPipe } from './app.sqrt'; @NgModule({ declarations: [ SqrtPipe, AppComponent, NewCmpComponent, ChangeTextDirective ], imports: [ BrowserModule, RouterModule.forRoot([ { path: 'new-cmp', component: NewCmpComponent } ]) ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } Here, the RouterModule is imported from angular/router. The module is included in the imports as shown below − RouterModule.forRoot([ { path: 'new-cmp', component: NewCmpComponent } ]) RouterModule refers to the forRoot which takes an input as an array, which in turn has the object of the path and the component. Path is the name of the router and component is the name of the class, i.e., the component created. Let us now see the component created file − import { Component, OnInit } from '@angular/core'; @Component({ selector: 'app-new-cmp', templateUrl: './new-cmp.component.html', styleUrls: ['./new-cmp.component.css'] }) export class NewCmpComponent implements OnInit { newcomponent = "Entered in new component created"; constructor() {} ngOnInit() { } } The highlighted class is mentioned in the imports of the main module. <p> {{newcomponent}} </p> <p> new-cmp works! </p> Now, we need the above content from the html file to be displayed whenever required or clicked from the main module. For this, we need to add the router details in the app.component.html. <h1>Custom Pipe</h1> <b>Square root of 25 is: {{25 | sqrt}}</b><br/> <b>Square root of 729 is: {{729 | sqrt}}</b> <br /> <br /> <br /> <a routerLink = "new-cmp">New component</a> <br /> <br/> <router-outlet></router-outlet> In the above code, we have created the anchor link tag and given routerLink as "new-cmp". This is referred in app.module.ts as the path. When a user clicks new component, the page should display the content. For this, we need the following tag - <router-outlet> </router-outlet>. The above tag ensures that the content in the new-cmp.component.html will be displayed on the page when a user clicks new component. Let us now see how the output is displayed on the browser. When a user clicks New component, you will see the following in the browser. The url contains http://localhost:4200/new-cmp. Here, the new-cmp gets appended to the original url, which is the path given in the app.module.ts and the router-link in the app.component.html. When a user clicks New component, the page is not refreshed and the contents are shown to the user without any reloading. Only a particular piece of the site code will be reloaded when clicked. This feature helps when we have heavy content on the page and needs to be loaded based on the user interaction. The feature also gives a good user experience as the page is not reloaded. 
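A common extension of this route table, shown here only as a hedged sketch rather than as part of the original example, is a default redirect plus a wildcard fallback; PageNotFoundComponent is a hypothetical component you would have to create and declare yourself:
// Sketch only: default redirect and wildcard route added to the same forRoot array
RouterModule.forRoot([
   { path: 'new-cmp', component: NewCmpComponent },
   { path: '', redirectTo: '/new-cmp', pathMatch: 'full' },
   { path: '**', component: PageNotFoundComponent }
])
The empty path routes the bare URL to the new component, while the '**' wildcard (kept last so it does not shadow real routes) catches any URL that matches no other route.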
[ { "code": null, "e": 2337, "s": 1995, "text": "Routing basically means navigating between pages. You have seen many sites with links that direct you to a new page. This can be achieved using routing. Here the pages that we are referring to will be in the form of components. We have already seen how to create a component. Let us now create a component and see how to use routing with it." }, { "code": null, "e": 2439, "s": 2337, "text": "In the main parent component app.module.ts, we have to now include the router module as shown below −" }, { "code": null, "e": 3168, "s": 2439, "text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { RouterModule} from '@angular/router';\nimport { AppComponent } from './app.component';\nimport { NewCmpComponent } from './new-cmp/new-cmp.component';\nimport { ChangeTextDirective } from './change-text.directive';\nimport { SqrtPipe } from './app.sqrt';\n@NgModule({\n declarations: [\n SqrtPipe,\n AppComponent,\n NewCmpComponent,\n ChangeTextDirective\n ],\n imports: [\n BrowserModule,\n RouterModule.forRoot([\n {\n path: 'new-cmp',\n component: NewCmpComponent\n }\n ])\n ],\n providers: [],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }" }, { "code": null, "e": 3279, "s": 3168, "text": "Here, the RouterModule is imported from angular/router. The module is included in the imports as shown below −" }, { "code": null, "e": 3371, "s": 3279, "text": "RouterModule.forRoot([\n {\n path: 'new-cmp',\n component: NewCmpComponent\n }\n])" }, { "code": null, "e": 3600, "s": 3371, "text": "RouterModule refers to the forRoot which takes an input as an array, which in turn has the object of the path and the component. Path is the name of the router and component is the name of the class, i.e., the component created." }, { "code": null, "e": 3644, "s": 3600, "text": "Let us now see the component created file −" }, { "code": null, "e": 3968, "s": 3644, "text": "import { Component, OnInit } from '@angular/core';\n@Component({\n selector: 'app-new-cmp',\n templateUrl: './new-cmp.component.html',\n styleUrls: ['./new-cmp.component.css']\n})\nexport class NewCmpComponent implements OnInit {\n newcomponent = \"Entered in new component created\";\n constructor() {}\n ngOnInit() { }\n}" }, { "code": null, "e": 4038, "s": 3968, "text": "The highlighted class is mentioned in the imports of the main module." }, { "code": null, "e": 4096, "s": 4038, "text": "<p>\n {{newcomponent}}\n</p>\n\n<p>\n new-cmp works!\n</p>\n" }, { "code": null, "e": 4284, "s": 4096, "text": "Now, we need the above content from the html file to be displayed whenever required or clicked from the main module. For this, we need to add the router details in the app.component.html." }, { "code": null, "e": 4508, "s": 4284, "text": "<h1>Custom Pipe</h1>\n<b>Square root of 25 is: {{25 | sqrt}}</b><br/>\n<b>Square root of 729 is: {{729 | sqrt}}</b>\n<br />\n<br />\n<br />\n<a routerLink = \"new-cmp\">New component</a>\n<br />\n<br/>\n<router-outlet></router-outlet>" }, { "code": null, "e": 4645, "s": 4508, "text": "In the above code, we have created the anchor link tag and given routerLink as \"new-cmp\". This is referred in app.module.ts as the path." }, { "code": null, "e": 4788, "s": 4645, "text": "When a user clicks new component, the page should display the content. For this, we need the following tag - <router-outlet> </router-outlet>." 
}, { "code": null, "e": 4921, "s": 4788, "text": "The above tag ensures that the content in the new-cmp.component.html will be displayed on the page when a user clicks new component." }, { "code": null, "e": 4980, "s": 4921, "text": "Let us now see how the output is displayed on the browser." }, { "code": null, "e": 5057, "s": 4980, "text": "When a user clicks New component, you will see the following in the browser." }, { "code": null, "e": 5250, "s": 5057, "text": "The url contains http://localhost:4200/new-cmp. Here, the new-cmp gets appended to the original url, which is the path given in the app.module.ts and the router-link in the app.component.html." }, { "code": null, "e": 5631, "s": 5250, "text": "When a user clicks New component, the page is not refreshed and the contents are shown to the user without any reloading. Only a particular piece of the site code will be reloaded when clicked. This feature helps when we have heavy content on the page and needs to be loaded based on the user interaction. The feature also gives a good user experience as the page is not reloaded." }, { "code": null, "e": 5666, "s": 5631, "text": "\n 16 Lectures \n 1.5 hours \n" }, { "code": null, "e": 5680, "s": 5666, "text": " Anadi Sharma" }, { "code": null, "e": 5715, "s": 5680, "text": "\n 28 Lectures \n 2.5 hours \n" }, { "code": null, "e": 5729, "s": 5715, "text": " Anadi Sharma" }, { "code": null, "e": 5764, "s": 5729, "text": "\n 11 Lectures \n 7.5 hours \n" }, { "code": null, "e": 5784, "s": 5764, "text": " SHIVPRASAD KOIRALA" }, { "code": null, "e": 5819, "s": 5784, "text": "\n 16 Lectures \n 2.5 hours \n" }, { "code": null, "e": 5836, "s": 5819, "text": " Frahaan Hussain" }, { "code": null, "e": 5869, "s": 5836, "text": "\n 69 Lectures \n 5 hours \n" }, { "code": null, "e": 5881, "s": 5869, "text": " Senol Atac" }, { "code": null, "e": 5916, "s": 5881, "text": "\n 53 Lectures \n 3.5 hours \n" }, { "code": null, "e": 5928, "s": 5916, "text": " Senol Atac" }, { "code": null, "e": 5935, "s": 5928, "text": " Print" }, { "code": null, "e": 5946, "s": 5935, "text": " Add Notes" } ]
Sending message through WhatsApp in android?
This example demonstrates how to send a message through WhatsApp in Android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version = "1.0" encoding = "utf-8"?> <LinearLayout xmlns:android = "http://schemas.android.com/apk/res/android" android:orientation = "vertical" android:layout_width = "match_parent" android:gravity = "center" android:layout_height = "match_parent"> <TextView android:id = "@+id/text" android:layout_width = "match_parent" android:layout_height = "wrap_content" android:text = "click" android:textSize = "30sp" /> </LinearLayout> In the above code, we have taken a TextView. Step 3 − Add the following code to src/MainActivity.java (note that a Java source file does not take an XML declaration). import android.app.Activity; import android.content.Intent; import android.content.pm.PackageInfo; import android.content.pm.PackageManager; import android.os.Bundle; import android.view.View; import android.widget.TextView; import android.widget.Toast; public class MainActivity extends Activity { TextView textView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); textView = findViewById(R.id.text); textView.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { PackageManager pm = MainActivity.this.getPackageManager(); try { Intent waIntent = new Intent(Intent.ACTION_SEND); waIntent.setType("text/plain"); String text = "YOUR TEXT HERE"; PackageInfo info = pm.getPackageInfo("com.whatsapp", PackageManager.GET_META_DATA); waIntent.setPackage("com.whatsapp"); waIntent.putExtra(Intent.EXTRA_TEXT, text); startActivity(Intent.createChooser(waIntent, "Share with")); } catch (PackageManager.NameNotFoundException e) { Toast.makeText(MainActivity.this, "WhatsApp not Installed", Toast.LENGTH_SHORT) .show(); } } }); } } The getPackageInfo() call verifies that WhatsApp is installed before the share intent is launched; if it is missing, the catch block shows a toast instead. Let's try to run the application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen. Now click on the TextView to open WhatsApp.
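If you only need to open a chat with a prefilled message rather than the generic share sheet, WhatsApp's click-to-chat link can be used instead. This is a hedged alternative sketch, not part of the example above; the phone number is a placeholder in international format (digits only), and android.net.Uri must be imported:
// Sketch: open a specific chat with a prefilled message via the wa.me link
String url = "https://wa.me/15551234567?text=" + Uri.encode("YOUR TEXT HERE");
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
startActivity(intent);
The ACTION_VIEW intent hands the link to WhatsApp (or to a browser page if WhatsApp is not installed), so the explicit package check from the example is optional with this approach.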
[ { "code": null, "e": 1137, "s": 1062, "text": "This example demonstrate about sending message through WhatsApp in android" }, { "code": null, "e": 1266, "s": 1137, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1331, "s": 1266, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1812, "s": 1331, "text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<LinearLayout xmlns:android = \"http://schemas.android.com/apk/res/android\"\n android:orientation = \"vertical\"\n android:layout_width = \"match_parent\"\n android:gravity = \"center\"\n android:layout_height = \"match_parent\">\n <TextView\n android:id = \"@+id/text\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\"\n android:text = \"click\"\n android:textSize = \"30sp\" />\n</LinearLayout>" }, { "code": null, "e": 1856, "s": 1812, "text": "In the above code, we have taken text view." }, { "code": null, "e": 1913, "s": 1856, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 3326, "s": 1913, "text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\nimport android.app.Activity;\nimport android.content.Intent;\nimport android.content.pm.PackageInfo;\nimport android.content.pm.PackageManager;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.TextView;\nimport android.widget.Toast;\n\npublic class MainActivity extends Activity {\n TextView textView;\n @Override\n public void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n textView = findViewById(R.id.text);\n textView.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n PackageManager pm = MainActivity.this.getPackageManager();\n try {\n Intent waIntent = new Intent(Intent.ACTION_SEND);\n waIntent.setType(\"text/plain\");\n String text = \"YOUR TEXT HERE\";\n PackageInfo info = pm.getPackageInfo(\"com.whatsapp\", PackageManager.GET_META_DATA);\n waIntent.setPackage(\"com.whatsapp\");\n waIntent.putExtra(Intent.EXTRA_TEXT, text);\n startActivity(Intent.createChooser(waIntent, \"Share with\"));\n } catch (PackageManager.NameNotFoundException e) {\n Toast.makeText(MainActivity.this, \"WhatsApp not Installed\", Toast.LENGTH_SHORT)\n .show();\n }\n }\n });\n }\n}" }, { "code": null, "e": 3673, "s": 3326, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –" }, { "code": null, "e": 3712, "s": 3673, "text": "Now click on textview to open whatsApp" } ]
How to handle Java Array Index Out of Bounds Exception?
Generally, an array is of fixed size and each element is accessed using the indices. For example, if we have created an array with size 9, then the valid expressions to access the elements of this array will be a[0] to a[8] (length-1). Whenever you use a negative value, or a value greater than or equal to the size of the array, the ArrayIndexOutOfBoundsException is thrown. For example, if you execute the following code, it displays the elements in the array and asks you to give the index of the element to select. Since the size of the array is 7, the valid indices will be 0 to 6. import java.util.Arrays; import java.util.Scanner; public class AIOBSample { public static void main(String args[]) { int[] myArray = {897, 56, 78, 90, 12, 123, 75}; System.out.println("Elements in the array are:: "); System.out.println(Arrays.toString(myArray)); Scanner sc = new Scanner(System.in); System.out.println("Enter the index of the required element ::"); int element = sc.nextInt(); System.out.println("Element in the given index is :: "+myArray[element]); } } But if you observe the below output, we have requested the element with the index 7. Since it is an invalid index (the valid indices are only 0 to 6), an ArrayIndexOutOfBoundsException is raised and the execution terminates. Elements in the array are:: [897, 56, 78, 90, 12, 123, 75] Enter the index of the required element :: 7 Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 7 at AIOBSample.main(AIOBSample.java:12) You can handle this exception using try catch as shown below. import java.util.Arrays; import java.util.Scanner; public class AIOBSampleHandled { public static void main(String args[]) { int[] myArray = {897, 56, 78, 90, 12, 123, 75}; System.out.println("Elements in the array are:: "); System.out.println(Arrays.toString(myArray)); Scanner sc = new Scanner(System.in); System.out.println("Enter the index of the required element ::"); try { int element = sc.nextInt(); System.out.println("Element in the given index is :: "+myArray[element]); } catch(ArrayIndexOutOfBoundsException e) { System.out.println("The index you have entered is invalid"); System.out.println("Please enter an index number between 0 and 6"); } } } Elements in the array are:: [897, 56, 78, 90, 12, 123, 75] Enter the index of the required element :: 7 The index you have entered is invalid Please enter an index number between 0 and 6
[ { "code": null, "e": 1295, "s": 1062, "text": "Generally, an array is of fixed size and each element is accessed using the indices. For example, we have created an array with size 9. Then the valid expressions to access the elements of this array will be a[0] to a[8] (length-1)." }, { "code": null, "e": 1439, "s": 1295, "text": "Whenever you used an –ve value or, the value greater than or equal to the size of the array, then the ArrayIndexOutOfBoundsException is thrown." }, { "code": null, "e": 1640, "s": 1439, "text": "For Example, if you execute the following code, it displays the elements in the array asks you to give the index to select an element. Since the size of the array is 7, the valid index will be 0 to 6." }, { "code": null, "e": 2162, "s": 1640, "text": "import java.util.Arrays;\nimport java.util.Scanner;\n\npublic class AIOBSample {\n public static void main(String args[]) {\n int[] myArray = {897, 56, 78, 90, 12, 123, 75};\n System.out.println(\"Elements in the array are:: \");\n System.out.println(Arrays.toString(myArray));\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter the index of the required element ::\");\n int element = sc.nextInt();\n System.out.println(\"Element in the given index is :: \"+myArray[element]);\n }\n}" }, { "code": null, "e": 2345, "s": 2162, "text": "But if you observe the below output we have requested the element with the index 9 since it is an invalid index an ArrayIndexOutOfBoundsException raised and the execution terminated." }, { "code": null, "e": 2559, "s": 2345, "text": "Elements in the array are::\n[897, 56, 78, 90, 12, 123, 75]\nEnter the index of the required element ::\n7\nException in thread \"main\" java.lang.ArrayIndexOutOfBoundsException: 7\nat AIOBSample.main(AIOBSample.java:12)" }, { "code": null, "e": 2621, "s": 2559, "text": "You can handle this exception using try catch as shown below." }, { "code": null, "e": 3373, "s": 2621, "text": "import java.util.Arrays;\nimport java.util.Scanner;\n\npublic class AIOBSampleHandled {\n public static void main(String args[]) {\n int[] myArray = {897, 56, 78, 90, 12, 123, 75};\n System.out.println(\"Elements in the array are:: \");\n System.out.println(Arrays.toString(myArray));\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter the index of the required element ::\");\n try {\n int element = sc.nextInt();\n System.out.println(\"Element in the given index is :: \"+myArray[element]);\n } catch(ArrayIndexOutOfBoundsException e) {\n System.out.println(\"The index you have entered is invalid\");\n System.out.println(\"Please enter an index number between 0 and 6\");\n }\n }\n}" }, { "code": null, "e": 3560, "s": 3373, "text": "Elements in the array are::\n[897, 56, 78, 90, 12, 123, 75]\nEnter the index of the required element ::\n7\nThe index you have entered is invalid\nPlease enter an index number between 0 and 6" } ]
How To Set Up Multiple SSL Host With A Single Apache Server
In this article, we will show you how to set up multiple SSL certificates on CentOS with Apache using a single IP address only. In general, a website administrator is restricted to using a single SSL certificate per socket (one certificate per IP address and port), which can cost the company a lot of investment. This restriction may lead them to buy multiple IP addresses for their HTTPS websites or to buy hardware that allows them to utilize multiple network adapters. Serving several certificates from one IP address is made possible by an extension to the SSL protocol called Server Name Indication (SNI). Most current desktop and mobile web browsers support SNI. The main benefit of using SNI is the ability to secure multiple websites without purchasing more IP addresses. Make sure the mod_ssl security module is installed and enabled so the Apache web server can use the OpenSSL library and toolkit: # yum install mod_ssl openssl # mkdir -p /etc/httpd/ssl/ # mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak # cd /etc/httpd/ssl/ # openssl genrsa -out mydomain1.key 2048 # openssl req -new -key mydomain1.key -out mydomain1.csr # openssl genrsa -out mydomain2.key 2048 # openssl req -new -key mydomain2.key -out mydomain2.csr Enter the following details for your certificates: Country Name (2 letter code) [AU]:IN State or Province Name (full name) [Some-State]:Telangana Locality Name (eg, city) []:Hyderabad Organization Name (eg, company) [Internet Widgits Pty Ltd]:mydomain1.com Organizational Unit Name (eg, section) []:mydomain1.com Common Name (e.g. server FQDN or YOUR name) []:mydomain1.com Email Address []:[email protected] It is recommended to install commercial SSL certificates when deploying in a production environment. For development or for staging a website, we can instead generate self-signed SSL certificates using the below commands: # openssl x509 -req -days 365 -in mydomain1.csr -signkey mydomain1.key -out mydomain1.crt # openssl x509 -req -days 365 -in mydomain2.csr -signkey mydomain2.key -out mydomain2.crt # vi /etc/httpd/conf.d/ssl.conf LoadModule ssl_module modules/mod_ssl.so Listen 443 NameVirtualHost *:443 SSLPassPhraseDialog builtin SSLSessionCacheTimeout 300 SSLMutex default SSLRandomSeed startup file:/dev/urandom 256 SSLRandomSeed connect builtin SSLCryptoDevice builtin SSLStrictSNIVHostCheck off <VirtualHost *:443> DocumentRoot /var/www/html/mydomain1 ServerName mydomain1.com ServerAlias www.mydomain1.com SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /etc/httpd/ssl/mydomain1.crt SSLCertificateKeyFile /etc/httpd/ssl/mydomain1.key ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel warn <Files ~ "\.(cgi|shtml|phtml|php3?)$"> SSLOptions +StdEnvVars </Files> SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 CustomLog logs/ssl_request_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> <VirtualHost *:443> DocumentRoot /var/www/html/mydomain2 ServerName mydomain2.com ServerAlias www.mydomain2.com SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /etc/httpd/ssl/mydomain2.crt SSLCertificateKeyFile /etc/httpd/ssl/mydomain2.key ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel warn <Files ~ "\.(cgi|shtml|phtml|php3?)$"> SSLOptions +StdEnvVars </Files> SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 CustomLog logs/ssl_request_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> When we are using a commercial SSL certificate, it is likely that the signing authority will include an intermediate CA certificate. In that case, we create a new ‘/etc/httpd/ssl/ca.crt’ file, paste the contents of the intermediate CA into it, and then edit the ‘ssl.conf’ configuration file and uncomment the following line so that the Apache web server can find your CA certificate: SSLCertificateChainFile /etc/httpd/ssl/ca.crt # /etc/init.d/httpd configtest Syntax OK # service httpd restart Open https://mydomain1.com and https://mydomain2.com in your favorite web browser and verify that the SSL certificates are installed correctly. After this setup and restarting Apache, you can access the HTTPS sites with a browser that supports SNI. If you have set everything up correctly, then you will be able to access the sites without any warnings or problems. You can add as many websites or SSL certificates as you need using the above process.
[ { "code": null, "e": 1524, "s": 1062, "text": "In this article, we will show you how to set up multiple SSL Certificates on a CentOS with Apache using a single IP address only. In general, a website administrator is restricted to use a single SSL Certificate per socket with an IP which will cost a lot of investment to the company. This restriction may lead them to buy multiple IP addresses for HTTP’s websites for their domain hosting or buy hardware that allows them to utilize multiple network adapters." }, { "code": null, "e": 1783, "s": 1524, "text": "This is allowed by an extension to the SSL protocol called Server Name Indication (SNI). Most current desktops and mobile web browsers support SNI. The main benefit of using SNI is the ability to secure multiple websites without purchasing more IP addresses." }, { "code": null, "e": 1912, "s": 1783, "text": "Make sure the mod_ssl security module is installed and enabled so the Apache web server can use the OpenSSL library and toolkit:" }, { "code": null, "e": 1942, "s": 1912, "text": "# yum install mod_ssl openssl" }, { "code": null, "e": 2053, "s": 1942, "text": "# mkdir -p /etc/httpd/ssl/\n# mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak\n# cd /etc/httpd/ssl/" }, { "code": null, "e": 2664, "s": 2053, "text": "# openssl genrsa -out mydomain1.key 2048\n# openssl req -new -key mydomain1.key -out mydomain1.csr\n# openssl genrsa -out domain2.key 2048\n# openssl req -new -key mydomain2.key -out mydomain2.csr\nEnter the following details for your certificates:\nCountry Name (2 letter code) [AU]:IN\nState or Province Name (full name) [Some-State]:Telengana\nLocality Name (eg, city) []:Hyderabad\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:mydomain1.com\nOrganizational Unit Name (eg, section) []:mydomain.com\nCommon Name (e.g. server FQDN or YOUR name) []:mydomain1.com\nEmail Address []:[email protected]" }, { "code": null, "e": 2905, "s": 2664, "text": "It is recommended to install commercial SSL certificates when we are deploying in a production environment. 
Or, we just generate self-signed SSL certificate which is used for development purpose or staging a website using the below commands" }, { "code": null, "e": 3083, "s": 2905, "text": "# openssl x509 -req -days 365 -in mydomain1.csr -signkey mydomain1.key -out domain1.crt\n# openssl x509 -req -days 365 -in mydomain2.csr -signkey mydomain2.key -out mydomain2.crt" }, { "code": null, "e": 4834, "s": 3083, "text": "# vi /etc/httpd/conf.d/ssl.conf\nLoadModule ssl_module modules/mod_ssl.so\nListen 443\nNameVirtualHost *:443\n SSLPassPhraseDialog builtin\n SSLSessionCacheTimeout 300\n SSLMutex default\n SSLRandomSeed startup file:/dev/urandom 256\n SSLRandomSeed connect builtin\n SSLCryptoDevice builtin\n SSLStrictSNIVHostCheck off\n<VirtualHost *:443>\n DocumentRoot /var/www/html/mydomain1\n ServerName mydomain1.com\n ServerAlias www.mydomain1.com\n SSLEngine on\n SSLProtocol all -SSLv2\n SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW\n SSLCertificateFile /etc/httpd/ssl/mydomain1.cr\n SSLCertificateKeyFile /etc/httpd/ssl/mydomain1.key\n ErrorLog logs/ssl_error_log\n TransferLog logs/ssl_access_log\n LogLevel warn\n <Files ~ \"\\.(cgi|shtml|phtml|php3?)$\">\n SSLOptions +StdEnvVars\n </Files>\n SetEnvIf User-Agent \".*MSIE.*\" \\\n nokeepalive ssl-unclean-shutdown \\\n downgrade-1.0 force-response-1.0\n CustomLog logs/ssl_request_log \\\n \"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \\\"%r\\\" %b\"\n</VirtualHost>\n<VirtualHost *:443>\n DocumentRoot /var/www/html/mydomain2\n ServerName mydomain2.com\n ServerAlias www.mydomain2.com\n SSLEngine on\n SSLProtocol all -SSLv2\n SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW\n SSLCertificateFile /etc/httpd/ssl/mydomain2.crt\n SSLCertificateKeyFile /etc/httpd/ssl/mydomain2.key\n ErrorLog logs/ssl_error_log\n TransferLog logs/ssl_access_log\n LogLevel warn\n <Files ~ \"\\.(cgi|shtml|phtml|php3?)$\">\n SSLOptions +StdEnvVars\n </Files>\n SetEnvIf User-Agent \".*MSIE.*\" \\\n nokeepalive ssl-unclean-shutdown \\\n Downgrade-1.0 force-response-1.0\n CustomLog logs/ssl_request_log \\\n \"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \\\"%r\\\" %b\"\n</VirtualHost>" }, { "code": null, "e": 5173, "s": 4834, "text": "When we are using a commercial SSL certificate, it is likely that, the signing authority will include an intermediate CA certificate. In that case, we create a new ‘/etc/httpd/ssl/ca.crt’ file and paste the contents of the Intermediate CA into it, then we needed to edit the ‘ssl.conf’ configuration file and uncomment the following line." }, { "code": null, "e": 5219, "s": 5173, "text": "SSLCertificateChainFile /etc/httpd/ssl/ca.crt" }, { "code": null, "e": 5274, "s": 5219, "text": "So the Apache web server can find your CA certificate." }, { "code": null, "e": 5315, "s": 5274, "text": "# /etc/init.d/httpd configtest\nSyntax OK" }, { "code": null, "e": 5339, "s": 5315, "text": "# service httpd restart" }, { "code": null, "e": 5481, "s": 5339, "text": "Open https://mydomain1.com and https://mymydomain2.com in your favorite web browser and verify that SSL certificates are installed correctly." }, { "code": null, "e": 5776, "s": 5481, "text": "After this setup and restarting Apache, you can access http’s site with a browser that supports SNI. If you have setup correctly, then you will be able to access the site without any warnings or problems. You can add as many as websites or SSL Certificates as you need to use the above process." } ]
Java Program to count all vowels in a string
Let’s say the following is our string. String str = "!Demo Text!"; To count the vowels, loop through each and every character and check whether it is any of the vowels, i.e. if(c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u') { ++vowelsCount; } The variable “vowelsCount” will give the count of all vowels. Live Demo public class Demo { public static void main(String[] args) { String str = "!Demo Text!"; int vowelsCount = 0; for(char c : str.toCharArray()) { c = Character.toLowerCase(c); if(c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u') { ++vowelsCount; } } System.out.println("String "+str+" has "+ vowelsCount + " vowels."); } } String !Demo Text! has 3 vowels.
[ { "code": null, "e": 1101, "s": 1062, "text": "Let’s say the following is our string." }, { "code": null, "e": 1129, "s": 1101, "text": "String str = \"!Demo Text!\";" }, { "code": null, "e": 1216, "s": 1129, "text": "To count vowels, loop through each every character and check for any of the vowel i.e." }, { "code": null, "e": 1296, "s": 1216, "text": "if(c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u') {\n++vowelsCount;\n}" }, { "code": null, "e": 1358, "s": 1296, "text": "The variable “vowelsCount” will give the count of all vowels." }, { "code": null, "e": 1369, "s": 1358, "text": " Live Demo" }, { "code": null, "e": 1773, "s": 1369, "text": "public class Demo {\n public static void main(String[] args) {\n String str = \"!Demo Text!\";\n int vowelsCount = 0;\n for(char c : str.toCharArray()) {\n c = Character.toLowerCase(c);\n if(c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u') {\n ++vowelsCount;\n }\n }\n System.out.println(\"String \"+str+\" has \"+ vowelsCount + \" vowels.\");\n }\n}" }, { "code": null, "e": 1806, "s": 1773, "text": "String !Demo Text! has 3 vowels." } ]
Python Tricks for Competitive Coding
Python is one of the preferred languages among coders for most of the competitive programming challenges. Most of the problems are easily computed in a reasonable time frame using Python. For some of the complex problems, writing fast-enough Python code is often a challenge. Below are some of the pythonic code constructs that help to improve the performance of your code in competitive coding − 1. Strings concatenation: Do not use the below construct. str1 = "" some_list = ["Welcome ", "To ", "Tutorialspoint "] for x in some_list: str1 += x print(str1) The above method gives a huge time overhead. Instead, try to use this (the join method) − str1 = "" some_list = ["Welcome ", "To ", "Tutorialspoint "] print(str1.join(some_list)) 2. The Map function Generally, you have an input in competitive coding, something like − 1 2 3 4 5 6 7 To get them as a list of numbers, simply use list(map(int, input().split())) Always use the input() function irrespective of the type of input and then convert it using the map function. >>> list(map(int, input("enter numbers:").split())) enter numbers:1 2 3 4 5 6 7 [1, 2, 3, 4, 5, 6, 7] >>> The map function is one of the beautiful built-in functions of Python, which comes in handy many times. Worth knowing. 3. Collections module In case we want to remove duplicates from a list, while in other languages like Java you may have to use a HashMap or some other roundabout way, in Python it's simply >>> print(list(set([1,2,3,4,3,4,5,6]))) [1, 2, 3, 4, 5, 6] Also, be careful with extend() and append() while merging two or more lists. >>> a = [1, 2, 3, 4] # list 1 >>> b = [5, 6, 7] # list 2 >>> a.extend(b) # gives one list >>> a [1, 2, 3, 4, 5, 6, 7] >>> a.append(b) # gives list of list >>> a [1, 2, 3, 4, [5, 6, 7]] 4. Language constructs It's better to write your code within functions, although procedural code is supported in Python. def main(): for x in range(2**3): print(x) main() is much better than for x in range(2**3): print(x) It is faster to store local variables than globals because of the underlying CPython implementation. 5. Use the standard library: It's better to use built-in functions and standard library packages as much as possible. So, instead of − newlist = [] for x in somelist: newlist.append(myfunc(x)) Use this − newlist = map(myfunc, somelist) Likewise, try to use itertools (standard library), as it is much faster for common tasks. For example, you can get something like the permutations of a list with just a few lines of code. >>> import itertools >>> iter = itertools.permutations(["a","b","c"]) >>> list(iter) [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'a', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b'), ('c', 'b', 'a')] 6. Generators Generators are excellent constructs to reduce both the memory footprint and, often, the running time of the code you've written. def fib(): a, b = 0, 1 while 1: yield a a, b = b, a+b
[ { "code": null, "e": 1250, "s": 1062, "text": "Python is one of the preferred languages among coders for most of the competitive programming challenges. Most of the problems are easily computed in a reasonable time frame using python." }, { "code": null, "e": 1458, "s": 1250, "text": "For some of the complex problem, writing fast-enough python code is often a challenge. Below are some of the pythonic code constructs that help to improve the performance of your code in competitive coding −" }, { "code": null, "e": 1516, "s": 1458, "text": "1. Strings concatenation: Do not use the below construct." }, { "code": null, "e": 1622, "s": 1516, "text": "str1 = \"\"\nsome_list = [\"Welcome \", \"To \", \"Tutorialspoint \"]\nfor x in some_list:\n str1 += x\nprint(str1)" }, { "code": null, "e": 1701, "s": 1622, "text": "Above method gives huge time overhead.Instead, try to use this (join method) −" }, { "code": null, "e": 1790, "s": 1701, "text": "str1 = \"\"\nsome_list = [\"Welcome \", \"To \", \"Tutorialspoint \"]\nprint(str1.join(some_list))" }, { "code": null, "e": 1810, "s": 1790, "text": "2. The Map function" }, { "code": null, "e": 1879, "s": 1810, "text": "Generally, you have an input in competitive coding, something like −" }, { "code": null, "e": 1887, "s": 1879, "text": "1234567" }, { "code": null, "e": 1927, "s": 1887, "text": "To get them as a list of numbers simply" }, { "code": null, "e": 1960, "s": 1927, "text": "list(map (int, input().split()))" }, { "code": null, "e": 2070, "s": 1960, "text": "Always use the input() function irrespective of the type of input and then convert it using the map function." }, { "code": null, "e": 2176, "s": 2070, "text": ">>> list(map(int, input(\"enter numbers:\").split()))\nenter numbers:1 2 3 4 5 6 7\n[1, 2, 3, 4, 5, 6, 7]\n>>>" }, { "code": null, "e": 2291, "s": 2176, "text": "The map function is one of the beautiful in-built function of python, which comes handy many times. Worth knowing." }, { "code": null, "e": 2313, "s": 2291, "text": "3. Collections module" }, { "code": null, "e": 2481, "s": 2313, "text": "In case we want to remove duplicates from a list. While in other languages like Java you may have to use HashMap or any other freaky way, however, in pytho it's simply" }, { "code": null, "e": 2540, "s": 2481, "text": ">>> print(list(set([1,2,3,4,3,4,5,6])))\n[1, 2, 3, 4, 5, 6]" }, { "code": null, "e": 2629, "s": 2540, "text": "Also, be careful to use extend() and append() in lists, while merging two or more lists." }, { "code": null, "e": 2811, "s": 2629, "text": ">>> a = [1, 2, 3,4] # list 1\n>>> b = [5, 6, 7] # list 2\n>>> a.extend(b)#gives one list\n>>> a\n[1, 2, 3, 4, 5, 6, 7]\n>>> a.append(b) # gives list of list\n>>> a\n[1, 2, 3, 4, [5, 6, 7]]" }, { "code": null, "e": 2834, "s": 2811, "text": "4. Language constructs" }, { "code": null, "e": 2936, "s": 2834, "text": "It's better to write your code within functions, although the procedural code is supported in Python." }, { "code": null, "e": 2995, "s": 2936, "text": "def main():\n for i in range(2**3):\n print(x)\nmain()" }, { "code": null, "e": 3015, "s": 2995, "text": "is much better than" }, { "code": null, "e": 3049, "s": 3015, "text": "for x in range(2**3):\n print(x)" }, { "code": null, "e": 3150, "s": 3049, "text": "It is faster to store local variables than globals because of the underlying Cpython implementation." }, { "code": null, "e": 3179, "s": 3150, "text": "5. 
Use the standard library:" }, { "code": null, "e": 3287, "s": 3179, "text": "It’s better to use built-in functions and standard library package as much as possible. There, instead of −" }, { "code": null, "e": 3348, "s": 3287, "text": "newlist = []\nfor x in somelist:\n newlist.append(myfunc(x))" }, { "code": null, "e": 3359, "s": 3348, "text": "Use this −" }, { "code": null, "e": 3391, "s": 3359, "text": "newlist = map(myfunc, somelist)" }, { "code": null, "e": 3578, "s": 3391, "text": "Likewise, try to use the itertools(standard library), as they are much faster for a common task. For example, you can have something like permutation for a loop with a few lines of code." }, { "code": null, "e": 3766, "s": 3578, "text": ">>> import itertools\n>>> iter = itertools.permutations([\"a\",\"b\",\"c\"])\n>>> list(iter)\n[('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'a', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b'), ('c', 'b', 'a')]" }, { "code": null, "e": 3780, "s": 3766, "text": "6. Generators" }, { "code": null, "e": 3913, "s": 3780, "text": "Generators are excellent constructs to reduce both, the memory footprint and the average time complexity of the code you’ve written." }, { "code": null, "e": 3985, "s": 3913, "text": "def fib():\n a, b = 0, 1\n while 1:\n yield a\n a, b = b, a+b" } ]
Check if a directory is not empty in Java
The method java.io.File.list() is used to obtain the list of the files and directories in the specified directory defined by its path name. This list of files is stored in a string array. If the length of this string array is greater than 0, then the specified directory is not empty. Otherwise, it is empty. A program that demonstrates this is given as follows − Live Demo import java.io.File; public class Demo { public static void main(String[] args) { File directory = new File("C:\\JavaProgram"); if (directory.isDirectory()) { String[] files = directory.list(); if (files.length > 0) { System.out.println("The directory " + directory.getPath() + " is not empty"); } else { System.out.println("The directory " + directory.getPath() + " is empty"); } } } } The output of the above program is as follows − The directory C:\JavaProgram is not empty Now let us understand the above program. The method java.io.File.list() is used to obtain the list of the files and directories in the directory "C:\\JavaProgram". Then this list of files is stored in the string array files[]. If the length of this string array is greater than 0, then "The directory is not empty" is printed. Otherwise, "The directory is empty" is printed. A code snippet that demonstrates this is given as follows − File directory = new File("C:\\JavaProgram"); if (directory.isDirectory()) { String[] files = directory.list(); if (files.length > 0) { System.out.println("The directory " + directory.getPath() + " is not empty"); } else { System.out.println("The directory " + directory.getPath() + " is empty"); } }
[ { "code": null, "e": 1371, "s": 1062, "text": "The method java.io.File.list() is used to obtain the list of the files and directories in the specified directory defined by its path name. This list of files is stored in a string array. If the length of this string array is greater than 0, then the specified directory is not empty. Otherwise, it is empty." }, { "code": null, "e": 1426, "s": 1371, "text": "A program that demonstrates this is given as follows −" }, { "code": null, "e": 1437, "s": 1426, "text": " Live Demo" }, { "code": null, "e": 1914, "s": 1437, "text": "import java.io.File;\npublic class Demo {\n public static void main(String[] args) {\n File directory = new File(\"C:\\\\JavaProgram\");\n if (directory.isDirectory()) {\n String[] files = directory.list();\n if (directory.length() > 0) {\n System.out.println(\"The directory \" + directory.getPath() + \" is not empty\");\n } else {\n System.out.println(\"The directory \" + directory.getPath() + \" is empty\");\n }\n }\n }\n}" }, { "code": null, "e": 1962, "s": 1914, "text": "The output of the above program is as follows −" }, { "code": null, "e": 2004, "s": 1962, "text": "The directory C:\\JavaProgram is not empty" }, { "code": null, "e": 2045, "s": 2004, "text": "Now let us understand the above program." }, { "code": null, "e": 2448, "s": 2045, "text": "The method java.io.File.list() is used to obtain the list of the files and directories in the directory \"C:\\\\JavaProgram\". Then this list of files is stored in a string array files[]. If the length of this string array is greater than 0, then the specified directory is not empty is this is printed. Otherwise, it is empty is that is printed. A code snippet that demonstrates this is given as follows −" }, { "code": null, "e": 2779, "s": 2448, "text": "File directory = new File(\"C:\\\\JavaProgram\");\nif (directory.isDirectory()) {\n String[] files = directory.list();\n if (directory.length() > 0) {\n System.out.println(\"The directory \" + directory.getPath() + \" is not empty\");\n } else {\n System.out.println(\"The directory \" + directory.getPath() + \" is empty\");\n }\n}" } ]
3-way comparison operator (Space Ship Operator) in C++ 20 - GeeksforGeeks
24 Nov, 2020 The three-way comparison operator “<=>” is called a spaceship operator. The spaceship operator determines for two objects A and B whether A < B, A = B, or A > B. The spaceship operator or the compiler can auto-generate it for us. Also, a three-way comparison is a function that will give the entire relationship in one query. Traditionally, strcmp() is such a function. Given two strings it will return an integer where, < 0 means the first string is less == 0 if both are equal > 0 if the first string is greater. It can give one of the three results, hence it’s a three-way comparison. From the above table, it can be seen that the spaceship operator is a primary operator i.e., it can be reversed and corresponding secondary operators can be written in terms of it. (A <=> B) < 0 is true if A < B(A <=> B) > 0 is true if A > B(A <=> B) == 0 is true if A and B are equal/equivalent. Program 1: Below is the implementation of the three-way comparison operator for two float variables: C++ // C++ 20 program to illustrate the// 3 way comparison operator#include <bits/stdc++.h>using namespace std; // Driver Codeint main(){ float A = -0.0; float B = 0.0; // Find the value of 3 way comparison auto ans = A <= > B; // If ans is less than zero if (ans < 0) cout << "-0 is less than 0"; // If ans is equal to zero else if (ans == 0) cout << "-0 and 0 are equal"; // If ans is greater than zero else if (ans > 0) cout << "-0 is greater than 0"; return 0;} Output: Program 2: Below is the implementation of the three-way comparison operator for two vectors: C++ // C++ 20 program for the illustration of the// 3-way comparison operator for 2 vectors#include <bits/stdc++.h>using namespace std; // Driver Codeint main(){ // Given vectors vector<int> v1{ 3, 6, 9 }; vector<int> v2{ 3, 6, 9 }; auto ans2 = v1 <= > v2; // If ans is less than zero if (ans2 < 0) { cout << "v1 < v2" << endl; } // If ans is equal to zero else if (ans2 == 0) { cout << "v1 == v2" << endl; } // If ans is greater than zero else if (ans2 > 0) { cout << "v1 > v2" << endl; } return 0;} Output: Note: You should download the adequate latest compiler to run C++ 20. Needs of Spaceship Operators: It’s the common generalization of all other comparison operators (for totally-ordered domains): >, >=, ==, <=, <. Using <=>, every operation can be implemented in a completely generic way in the case of user-defined data type like a structure where one has to define the other 6 comparison operators one by one instead. For strings, it’s equivalent to the old strcmp() function of the C standard library. So it is useful for lexicographic order checks, such as data in vectors, or lists, or other ordered containers. cpp-operator Articles C++ C++ Programs cpp-operator CPP Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Time Complexity and Space Complexity Docker - COPY Instruction Time complexities of different data structures SQL | Date functions Difference between Class and Object Vector in C++ STL Arrays in C/C++ Initialize a vector in C++ (6 different ways) Inheritance in C++ Map in C++ Standard Template Library (STL)
[ { "code": null, "e": 24342, "s": 24314, "text": "\n24 Nov, 2020" }, { "code": null, "e": 24763, "s": 24342, "text": "The three-way comparison operator “<=>” is called a spaceship operator. The spaceship operator determines for two objects A and B whether A < B, A = B, or A > B. The spaceship operator or the compiler can auto-generate it for us. Also, a three-way comparison is a function that will give the entire relationship in one query. Traditionally, strcmp() is such a function. Given two strings it will return an integer where," }, { "code": null, "e": 24798, "s": 24763, "text": "< 0 means the first string is less" }, { "code": null, "e": 24821, "s": 24798, "text": "== 0 if both are equal" }, { "code": null, "e": 24857, "s": 24821, "text": "> 0 if the first string is greater." }, { "code": null, "e": 24930, "s": 24857, "text": "It can give one of the three results, hence it’s a three-way comparison." }, { "code": null, "e": 25111, "s": 24930, "text": "From the above table, it can be seen that the spaceship operator is a primary operator i.e., it can be reversed and corresponding secondary operators can be written in terms of it." }, { "code": null, "e": 25227, "s": 25111, "text": "(A <=> B) < 0 is true if A < B(A <=> B) > 0 is true if A > B(A <=> B) == 0 is true if A and B are equal/equivalent." }, { "code": null, "e": 25238, "s": 25227, "text": "Program 1:" }, { "code": null, "e": 25328, "s": 25238, "text": "Below is the implementation of the three-way comparison operator for two float variables:" }, { "code": null, "e": 25332, "s": 25328, "text": "C++" }, { "code": "// C++ 20 program to illustrate the// 3 way comparison operator#include <bits/stdc++.h>using namespace std; // Driver Codeint main(){ float A = -0.0; float B = 0.0; // Find the value of 3 way comparison auto ans = A <= > B; // If ans is less than zero if (ans < 0) cout << \"-0 is less than 0\"; // If ans is equal to zero else if (ans == 0) cout << \"-0 and 0 are equal\"; // If ans is greater than zero else if (ans > 0) cout << \"-0 is greater than 0\"; return 0;}", "e": 25859, "s": 25332, "text": null }, { "code": null, "e": 25867, "s": 25859, "text": "Output:" }, { "code": null, "e": 25878, "s": 25867, "text": "Program 2:" }, { "code": null, "e": 25960, "s": 25878, "text": "Below is the implementation of the three-way comparison operator for two vectors:" }, { "code": null, "e": 25964, "s": 25960, "text": "C++" }, { "code": "// C++ 20 program for the illustration of the// 3-way comparison operator for 2 vectors#include <bits/stdc++.h>using namespace std; // Driver Codeint main(){ // Given vectors vector<int> v1{ 3, 6, 9 }; vector<int> v2{ 3, 6, 9 }; auto ans2 = v1 <= > v2; // If ans is less than zero if (ans2 < 0) { cout << \"v1 < v2\" << endl; } // If ans is equal to zero else if (ans2 == 0) { cout << \"v1 == v2\" << endl; } // If ans is greater than zero else if (ans2 > 0) { cout << \"v1 > v2\" << endl; } return 0;}", "e": 26541, "s": 25964, "text": null }, { "code": null, "e": 26549, "s": 26541, "text": "Output:" }, { "code": null, "e": 26619, "s": 26549, "text": "Note: You should download the adequate latest compiler to run C++ 20." }, { "code": null, "e": 26649, "s": 26619, "text": "Needs of Spaceship Operators:" }, { "code": null, "e": 26969, "s": 26649, "text": "It’s the common generalization of all other comparison operators (for totally-ordered domains): >, >=, ==, <=, <. 
Using <=>, every operation can be implemented in a completely generic way in the case of user-defined data type like a structure where one has to define the other 6 comparison operators one by one instead." }, { "code": null, "e": 27167, "s": 26969, "text": "For strings, it’s equivalent to the old strcmp() function of the C standard library. So it is useful for lexicographic order checks, such as data in vectors, or lists, or other ordered containers. " }, { "code": null, "e": 27180, "s": 27167, "text": "cpp-operator" }, { "code": null, "e": 27189, "s": 27180, "text": "Articles" }, { "code": null, "e": 27193, "s": 27189, "text": "C++" }, { "code": null, "e": 27206, "s": 27193, "text": "C++ Programs" }, { "code": null, "e": 27219, "s": 27206, "text": "cpp-operator" }, { "code": null, "e": 27223, "s": 27219, "text": "CPP" }, { "code": null, "e": 27321, "s": 27223, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27330, "s": 27321, "text": "Comments" }, { "code": null, "e": 27343, "s": 27330, "text": "Old Comments" }, { "code": null, "e": 27380, "s": 27343, "text": "Time Complexity and Space Complexity" }, { "code": null, "e": 27406, "s": 27380, "text": "Docker - COPY Instruction" }, { "code": null, "e": 27453, "s": 27406, "text": "Time complexities of different data structures" }, { "code": null, "e": 27474, "s": 27453, "text": "SQL | Date functions" }, { "code": null, "e": 27510, "s": 27474, "text": "Difference between Class and Object" }, { "code": null, "e": 27528, "s": 27510, "text": "Vector in C++ STL" }, { "code": null, "e": 27544, "s": 27528, "text": "Arrays in C/C++" }, { "code": null, "e": 27590, "s": 27544, "text": "Initialize a vector in C++ (6 different ways)" }, { "code": null, "e": 27609, "s": 27590, "text": "Inheritance in C++" } ]
Dynamic Connectivity | Set 1 (Incremental) - GeeksforGeeks
11 Mar, 2022 Dynamic connectivity is a data structure that dynamically maintains the information about the connected components of graph. In simple words suppose there is a graph G(V, E) in which no. of vertices V is constant but no. of edges E is variable. There are three ways in which we can change the number of edges Incremental Connectivity : Edges are only added to the graph.Decremental Connectivity : Edges are only deleted from the graph.Fully Dynamic Connectivity : Edges can both be deleted and added to the graph. Incremental Connectivity : Edges are only added to the graph. Decremental Connectivity : Edges are only deleted from the graph. Fully Dynamic Connectivity : Edges can both be deleted and added to the graph. In this article only Incremental connectivity is discussed. There are mainly two operations that need to be handled. An edge is added to the graph.Information about two nodes x and y whether they are in the same connected components or not. An edge is added to the graph. Information about two nodes x and y whether they are in the same connected components or not. Example: Input : V = 7 Number of operations = 11 1 0 1 2 0 1 2 1 2 1 0 2 2 0 2 2 2 3 2 3 4 1 0 5 2 4 5 2 5 6 1 2 6 Note: 7 represents number of nodes, 11 represents number of queries. There are two types of queries Type 1: 1 x y in this if the node x and y are connected print Yes else No Type 2: 2 x y in this add an edge between node x and y Output: No Yes No Yes Explanation : Initially there are no edges so node 0 and 1 will be disconnected so answer will be No Node 0 and 2 will be connected through node 1 so answer will be Yes similarly for other queries we can find whether two nodes are connected or not To solve the problems of incremental connectivity disjoint data structure is used. Here each connected component represents a set and if the two nodes belong to the same set it means that they are connected. Implementation is given below here we are using union by rank and path compression C++ Java Python3 C# // C++ implementation of incremental connectivity#include<bits/stdc++.h>using namespace std; // Finding the root of node iint root(int arr[], int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bvoid weighted_union(int arr[], int rank[], int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootbool areSame(int arr[], int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation according to query typevoid query(int type, int x, int y, int arr[], int rank[]){ // type 1 query means checking if node x and y // are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) cout << "Yes" << endl; else cout << "No" << endl; } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver functionint main(){ // No.of nodes int n = 7; // The following two arrays are used to // implement disjoint set data structure. 
// arr[] holds the parent nodes while rank // array holds the rank of subset int arr[n], rank[n]; // initializing both array and rank for (int i=0; i<n; i++) { arr[i] = i; rank[i] = 1; } // number of queries int q = 11; query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank); return 0;} // Java implementation of// incremental connectivityimport java.util.*; class GFG{ // Finding the root of node istatic int root(int arr[], int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bstatic void weighted_union(int arr[], int rank[], int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootstatic boolean areSame(int arr[], int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation// according to query typestatic void query(int type, int x, int y, int arr[], int rank[]){ // type 1 query means checking if // node x and y are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) System.out.println("Yes"); else System.out.println("No"); } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver Codepublic static void main(String[] args){ // No.of nodes int n = 7; // The following two arrays are used to // implement disjoint set data structure. 
// arr[] holds the parent nodes while rank // array holds the rank of subset int []arr = new int[n]; int []rank = new int[n]; // initializing both array and rank for (int i = 0; i < n; i++) { arr[i] = i; rank[i] = 1; } // number of queries int q = 11; query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank);}} // This code is contributed by Rajput-Ji # Python3 implementation of# incremental connectivity # Finding the root of node idef root(arr, i): while (arr[i] != i): arr[i] = arr[arr[i]] i = arr[i] return i # union of two nodes a and bdef weighted_union(arr, rank, a, b): root_a = root (arr, a) root_b = root (arr, b) # union based on rank if (rank[root_a] < rank[root_b]): arr[root_a] = arr[root_b] rank[root_b] += rank[root_a] else: arr[root_b] = arr[root_a] rank[root_a] += rank[root_b] # Returns true if two nodes have# same rootdef areSame(arr, a, b): return (root(arr, a) == root(arr, b)) # Performing an operation according# to query typedef query(type, x, y, arr, rank): # type 1 query means checking if # node x and y are connected or not if (type == 1): # If roots of x and y is same # then yes is the answer if (areSame(arr, x, y) == True): print("Yes") else: print("No") # type 2 query refers union of # x and y elif (type == 2): # If x and y have different # roots then union them if (areSame(arr, x, y) == False): weighted_union(arr, rank, x, y) # Driver Codeif __name__ == '__main__': # No.of nodes n = 7 # The following two arrays are used to # implement disjoint set data structure. # arr[] holds the parent nodes while rank # array holds the rank of subset arr = [None] * n rank = [None] * n # initializing both array # and rank for i in range(n): arr[i] = i rank[i] = 1 # number of queries q = 11 query(1, 0, 1, arr, rank) query(2, 0, 1, arr, rank) query(2, 1, 2, arr, rank) query(1, 0, 2, arr, rank) query(2, 0, 2, arr, rank) query(2, 2, 3, arr, rank) query(2, 3, 4, arr, rank) query(1, 0, 5, arr, rank) query(2, 4, 5, arr, rank) query(2, 5, 6, arr, rank) query(1, 2, 6, arr, rank) # This code is contributed by PranchalK // C# implementation of// incremental connectivityusing System; class GFG{ // Finding the root of node istatic int root(int []arr, int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bstatic void weighted_union(int []arr, int []rank, int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootstatic Boolean areSame(int []arr, int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation// according to query typestatic void query(int type, int x, int y, int []arr, int []rank){ // type 1 query means checking if // node x and y are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) Console.WriteLine("Yes"); else Console.WriteLine("No"); } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver Codepublic static void Main(String[] args){ 
// No.of nodes int n = 7; // The following two arrays are used to // implement disjoint set data structure. // arr[] holds the parent nodes while rank // array holds the rank of subset int []arr = new int[n]; int []rank = new int[n]; // initializing both array and rank for (int i = 0; i < n; i++) { arr[i] = i; rank[i] = 1; } // number of queries query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank);}} // This code is contributed by PrinciRaj1992 Output: No Yes No Yes Time Complexity:The amortized time complexity is O(alpha(n)) per operation where alpha is inverse ackermann function which is nearly constant. Reference: https://en.wikipedia.org/wiki/Dynamic_connectivityThis article is contributed by Ayush Jha. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. PranchalKatiyar Rajput-Ji princiraj1992 sumitgumber28 simmytarika5 union-find Advanced Data Structure Graph Graph union-find Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Agents in Artificial Intelligence Decision Tree Introduction with example AVL Tree | Set 2 (Deletion) Red-Black Tree | Set 2 (Insert) Segment Tree | Set 1 (Sum of given range) Breadth First Search or BFS for a Graph Depth First Search or DFS for a Graph Graph and its representations Topological Sorting Detect Cycle in a Directed Graph
[ { "code": null, "e": 24853, "s": 24825, "text": "\n11 Mar, 2022" }, { "code": null, "e": 25163, "s": 24853, "text": "Dynamic connectivity is a data structure that dynamically maintains the information about the connected components of graph. In simple words suppose there is a graph G(V, E) in which no. of vertices V is constant but no. of edges E is variable. There are three ways in which we can change the number of edges " }, { "code": null, "e": 25368, "s": 25163, "text": "Incremental Connectivity : Edges are only added to the graph.Decremental Connectivity : Edges are only deleted from the graph.Fully Dynamic Connectivity : Edges can both be deleted and added to the graph." }, { "code": null, "e": 25430, "s": 25368, "text": "Incremental Connectivity : Edges are only added to the graph." }, { "code": null, "e": 25496, "s": 25430, "text": "Decremental Connectivity : Edges are only deleted from the graph." }, { "code": null, "e": 25575, "s": 25496, "text": "Fully Dynamic Connectivity : Edges can both be deleted and added to the graph." }, { "code": null, "e": 25694, "s": 25575, "text": "In this article only Incremental connectivity is discussed. There are mainly two operations that need to be handled. " }, { "code": null, "e": 25818, "s": 25694, "text": "An edge is added to the graph.Information about two nodes x and y whether they are in the same connected components or not." }, { "code": null, "e": 25849, "s": 25818, "text": "An edge is added to the graph." }, { "code": null, "e": 25943, "s": 25849, "text": "Information about two nodes x and y whether they are in the same connected components or not." }, { "code": null, "e": 25954, "s": 25943, "text": "Example: " }, { "code": null, "e": 26760, "s": 25954, "text": "Input : V = 7\n Number of operations = 11\n 1 0 1\n 2 0 1\n 2 1 2\n 1 0 2\n 2 0 2\n 2 2 3\n 2 3 4\n 1 0 5\n 2 4 5\n 2 5 6\n 1 2 6\nNote: 7 represents number of nodes, \n 11 represents number of queries. \n There are two types of queries \n Type 1: 1 x y in this if the node \n x and y are connected print \n Yes else No\n Type 2: 2 x y in this add an edge \n between node x and y\nOutput: No\n Yes\n No\n Yes\nExplanation :\nInitially there are no edges so node 0 and 1\nwill be disconnected so answer will be No\nNode 0 and 2 will be connected through node \n1 so answer will be Yes similarly for other\nqueries we can find whether two nodes are \nconnected or not" }, { "code": null, "e": 27055, "s": 26762, "text": "To solve the problems of incremental connectivity disjoint data structure is used. Here each connected component represents a set and if the two nodes belong to the same set it means that they are connected. 
Implementation is given below here we are using union by rank and path compression " }, { "code": null, "e": 27059, "s": 27055, "text": "C++" }, { "code": null, "e": 27064, "s": 27059, "text": "Java" }, { "code": null, "e": 27072, "s": 27064, "text": "Python3" }, { "code": null, "e": 27075, "s": 27072, "text": "C#" }, { "code": "// C++ implementation of incremental connectivity#include<bits/stdc++.h>using namespace std; // Finding the root of node iint root(int arr[], int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bvoid weighted_union(int arr[], int rank[], int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootbool areSame(int arr[], int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation according to query typevoid query(int type, int x, int y, int arr[], int rank[]){ // type 1 query means checking if node x and y // are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) cout << \"Yes\" << endl; else cout << \"No\" << endl; } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver functionint main(){ // No.of nodes int n = 7; // The following two arrays are used to // implement disjoint set data structure. // arr[] holds the parent nodes while rank // array holds the rank of subset int arr[n], rank[n]; // initializing both array and rank for (int i=0; i<n; i++) { arr[i] = i; rank[i] = 1; } // number of queries int q = 11; query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank); return 0;}", "e": 29241, "s": 27075, "text": null }, { "code": "// Java implementation of// incremental connectivityimport java.util.*; class GFG{ // Finding the root of node istatic int root(int arr[], int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bstatic void weighted_union(int arr[], int rank[], int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootstatic boolean areSame(int arr[], int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation// according to query typestatic void query(int type, int x, int y, int arr[], int rank[]){ // type 1 query means checking if // node x and y are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) System.out.println(\"Yes\"); else System.out.println(\"No\"); } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver Codepublic static void main(String[] args){ // No.of nodes int n = 7; // 
The following two arrays are used to // implement disjoint set data structure. // arr[] holds the parent nodes while rank // array holds the rank of subset int []arr = new int[n]; int []rank = new int[n]; // initializing both array and rank for (int i = 0; i < n; i++) { arr[i] = i; rank[i] = 1; } // number of queries int q = 11; query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank);}} // This code is contributed by Rajput-Ji", "e": 31569, "s": 29241, "text": null }, { "code": "# Python3 implementation of# incremental connectivity # Finding the root of node idef root(arr, i): while (arr[i] != i): arr[i] = arr[arr[i]] i = arr[i] return i # union of two nodes a and bdef weighted_union(arr, rank, a, b): root_a = root (arr, a) root_b = root (arr, b) # union based on rank if (rank[root_a] < rank[root_b]): arr[root_a] = arr[root_b] rank[root_b] += rank[root_a] else: arr[root_b] = arr[root_a] rank[root_a] += rank[root_b] # Returns true if two nodes have# same rootdef areSame(arr, a, b): return (root(arr, a) == root(arr, b)) # Performing an operation according# to query typedef query(type, x, y, arr, rank): # type 1 query means checking if # node x and y are connected or not if (type == 1): # If roots of x and y is same # then yes is the answer if (areSame(arr, x, y) == True): print(\"Yes\") else: print(\"No\") # type 2 query refers union of # x and y elif (type == 2): # If x and y have different # roots then union them if (areSame(arr, x, y) == False): weighted_union(arr, rank, x, y) # Driver Codeif __name__ == '__main__': # No.of nodes n = 7 # The following two arrays are used to # implement disjoint set data structure. 
# arr[] holds the parent nodes while rank # array holds the rank of subset arr = [None] * n rank = [None] * n # initializing both array # and rank for i in range(n): arr[i] = i rank[i] = 1 # number of queries q = 11 query(1, 0, 1, arr, rank) query(2, 0, 1, arr, rank) query(2, 1, 2, arr, rank) query(1, 0, 2, arr, rank) query(2, 0, 2, arr, rank) query(2, 2, 3, arr, rank) query(2, 3, 4, arr, rank) query(1, 0, 5, arr, rank) query(2, 4, 5, arr, rank) query(2, 5, 6, arr, rank) query(1, 2, 6, arr, rank) # This code is contributed by PranchalK", "e": 33547, "s": 31569, "text": null }, { "code": "// C# implementation of// incremental connectivityusing System; class GFG{ // Finding the root of node istatic int root(int []arr, int i){ while (arr[i] != i) { arr[i] = arr[arr[i]]; i = arr[i]; } return i;} // union of two nodes a and bstatic void weighted_union(int []arr, int []rank, int a, int b){ int root_a = root (arr, a); int root_b = root (arr, b); // union based on rank if (rank[root_a] < rank[root_b]) { arr[root_a] = arr[root_b]; rank[root_b] += rank[root_a]; } else { arr[root_b] = arr[root_a]; rank[root_a] += rank[root_b]; }} // Returns true if two nodes have same rootstatic Boolean areSame(int []arr, int a, int b){ return (root(arr, a) == root(arr, b));} // Performing an operation// according to query typestatic void query(int type, int x, int y, int []arr, int []rank){ // type 1 query means checking if // node x and y are connected or not if (type == 1) { // If roots of x and y is same then yes // is the answer if (areSame(arr, x, y) == true) Console.WriteLine(\"Yes\"); else Console.WriteLine(\"No\"); } // type 2 query refers union of x and y else if (type == 2) { // If x and y have different roots then // union them if (areSame(arr, x, y) == false) weighted_union(arr, rank, x, y); }} // Driver Codepublic static void Main(String[] args){ // No.of nodes int n = 7; // The following two arrays are used to // implement disjoint set data structure. // arr[] holds the parent nodes while rank // array holds the rank of subset int []arr = new int[n]; int []rank = new int[n]; // initializing both array and rank for (int i = 0; i < n; i++) { arr[i] = i; rank[i] = 1; } // number of queries query(1, 0, 1, arr, rank); query(2, 0, 1, arr, rank); query(2, 1, 2, arr, rank); query(1, 0, 2, arr, rank); query(2, 0, 2, arr, rank); query(2, 2, 3, arr, rank); query(2, 3, 4, arr, rank); query(1, 0, 5, arr, rank); query(2, 4, 5, arr, rank); query(2, 5, 6, arr, rank); query(1, 2, 6, arr, rank);}} // This code is contributed by PrinciRaj1992", "e": 35867, "s": 33547, "text": null }, { "code": null, "e": 35877, "s": 35867, "text": "Output: " }, { "code": null, "e": 35891, "s": 35877, "text": "No\nYes\nNo\nYes" }, { "code": null, "e": 36034, "s": 35891, "text": "Time Complexity:The amortized time complexity is O(alpha(n)) per operation where alpha is inverse ackermann function which is nearly constant." }, { "code": null, "e": 36517, "s": 36034, "text": "Reference: https://en.wikipedia.org/wiki/Dynamic_connectivityThis article is contributed by Ayush Jha. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. 
" }, { "code": null, "e": 36533, "s": 36517, "text": "PranchalKatiyar" }, { "code": null, "e": 36543, "s": 36533, "text": "Rajput-Ji" }, { "code": null, "e": 36557, "s": 36543, "text": "princiraj1992" }, { "code": null, "e": 36571, "s": 36557, "text": "sumitgumber28" }, { "code": null, "e": 36584, "s": 36571, "text": "simmytarika5" }, { "code": null, "e": 36595, "s": 36584, "text": "union-find" }, { "code": null, "e": 36619, "s": 36595, "text": "Advanced Data Structure" }, { "code": null, "e": 36625, "s": 36619, "text": "Graph" }, { "code": null, "e": 36631, "s": 36625, "text": "Graph" }, { "code": null, "e": 36642, "s": 36631, "text": "union-find" }, { "code": null, "e": 36740, "s": 36642, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 36749, "s": 36740, "text": "Comments" }, { "code": null, "e": 36762, "s": 36749, "text": "Old Comments" }, { "code": null, "e": 36796, "s": 36762, "text": "Agents in Artificial Intelligence" }, { "code": null, "e": 36836, "s": 36796, "text": "Decision Tree Introduction with example" }, { "code": null, "e": 36864, "s": 36836, "text": "AVL Tree | Set 2 (Deletion)" }, { "code": null, "e": 36896, "s": 36864, "text": "Red-Black Tree | Set 2 (Insert)" }, { "code": null, "e": 36938, "s": 36896, "text": "Segment Tree | Set 1 (Sum of given range)" }, { "code": null, "e": 36978, "s": 36938, "text": "Breadth First Search or BFS for a Graph" }, { "code": null, "e": 37016, "s": 36978, "text": "Depth First Search or DFS for a Graph" }, { "code": null, "e": 37046, "s": 37016, "text": "Graph and its representations" }, { "code": null, "e": 37066, "s": 37046, "text": "Topological Sorting" } ]
Python - Data Aggregation
Python has several methods are available to perform aggregations on data. It is done using the pandas and numpy libraries. The data must be available or converted to a dataframe to apply the aggregation functions. Let us create a DataFrame and apply aggregations on it. import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(10, 4), index = pd.date_range('1/1/2000', periods=10), columns = ['A', 'B', 'C', 'D']) print df r = df.rolling(window=3,min_periods=1) print r Its output is as follows − A B C D 2000-01-01 1.088512 -0.650942 -2.547450 -0.566858 2000-01-02 0.790670 -0.387854 -0.668132 0.267283 2000-01-03 -0.575523 -0.965025 0.060427 -2.179780 2000-01-04 1.669653 1.211759 -0.254695 1.429166 2000-01-05 0.100568 -0.236184 0.491646 -0.466081 2000-01-06 0.155172 0.992975 -1.205134 0.320958 2000-01-07 0.309468 -0.724053 -1.412446 0.627919 2000-01-08 0.099489 -1.028040 0.163206 -1.274331 2000-01-09 1.639500 -0.068443 0.714008 -0.565969 2000-01-10 0.326761 1.479841 0.664282 -1.361169 Rolling [window=3,min_periods=1,center=False,axis=0] We can aggregate by passing a function to the entire DataFrame, or select a column via the standard get item method. import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(10, 4), index = pd.date_range('1/1/2000', periods=10), columns = ['A', 'B', 'C', 'D']) print df r = df.rolling(window=3,min_periods=1) print r.aggregate(np.sum) Its output is as follows − A B C D 2000-01-01 1.088512 -0.650942 -2.547450 -0.566858 2000-01-02 1.879182 -1.038796 -3.215581 -0.299575 2000-01-03 1.303660 -2.003821 -3.155154 -2.479355 2000-01-04 1.884801 -0.141119 -0.862400 -0.483331 2000-01-05 1.194699 0.010551 0.297378 -1.216695 2000-01-06 1.925393 1.968551 -0.968183 1.284044 2000-01-07 0.565208 0.032738 -2.125934 0.482797 2000-01-08 0.564129 -0.759118 -2.454374 -0.325454 2000-01-09 2.048458 -1.820537 -0.535232 -1.212381 2000-01-10 2.065750 0.383357 1.541496 -3.201469 A B C D 2000-01-01 1.088512 -0.650942 -2.547450 -0.566858 2000-01-02 1.879182 -1.038796 -3.215581 -0.299575 2000-01-03 1.303660 -2.003821 -3.155154 -2.479355 2000-01-04 1.884801 -0.141119 -0.862400 -0.483331 2000-01-05 1.194699 0.010551 0.297378 -1.216695 2000-01-06 1.925393 1.968551 -0.968183 1.284044 2000-01-07 0.565208 0.032738 -2.125934 0.482797 2000-01-08 0.564129 -0.759118 -2.454374 -0.325454 2000-01-09 2.048458 -1.820537 -0.535232 -1.212381 2000-01-10 2.065750 0.383357 1.541496 -3.201469 import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(10, 4), index = pd.date_range('1/1/2000', periods=10), columns = ['A', 'B', 'C', 'D']) print df r = df.rolling(window=3,min_periods=1) print r['A'].aggregate(np.sum) Its output is as follows − A B C D 2000-01-01 1.088512 -0.650942 -2.547450 -0.566858 2000-01-02 1.879182 -1.038796 -3.215581 -0.299575 2000-01-03 1.303660 -2.003821 -3.155154 -2.479355 2000-01-04 1.884801 -0.141119 -0.862400 -0.483331 2000-01-05 1.194699 0.010551 0.297378 -1.216695 2000-01-06 1.925393 1.968551 -0.968183 1.284044 2000-01-07 0.565208 0.032738 -2.125934 0.482797 2000-01-08 0.564129 -0.759118 -2.454374 -0.325454 2000-01-09 2.048458 -1.820537 -0.535232 -1.212381 2000-01-10 2.065750 0.383357 1.541496 -3.201469 2000-01-01 1.088512 2000-01-02 1.879182 2000-01-03 1.303660 2000-01-04 1.884801 2000-01-05 1.194699 2000-01-06 1.925393 2000-01-07 0.565208 2000-01-08 0.564129 2000-01-09 2.048458 2000-01-10 2.065750 Freq: D, Name: A, dtype: float64 import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(10, 4), index = pd.date_range('1/1/2000', 
periods=10), columns = ['A', 'B', 'C', 'D']) print df r = df.rolling(window=3,min_periods=1) print r[['A','B']].aggregate(np.sum) Its output is as follows − A B C D 2000-01-01 1.088512 -0.650942 -2.547450 -0.566858 2000-01-02 1.879182 -1.038796 -3.215581 -0.299575 2000-01-03 1.303660 -2.003821 -3.155154 -2.479355 2000-01-04 1.884801 -0.141119 -0.862400 -0.483331 2000-01-05 1.194699 0.010551 0.297378 -1.216695 2000-01-06 1.925393 1.968551 -0.968183 1.284044 2000-01-07 0.565208 0.032738 -2.125934 0.482797 2000-01-08 0.564129 -0.759118 -2.454374 -0.325454 2000-01-09 2.048458 -1.820537 -0.535232 -1.212381 2000-01-10 2.065750 0.383357 1.541496 -3.201469 A B 2000-01-01 1.088512 -0.650942 2000-01-02 1.879182 -1.038796 2000-01-03 1.303660 -2.003821 2000-01-04 1.884801 -0.141119 2000-01-05 1.194699 0.010551 2000-01-06 1.925393 1.968551 2000-01-07 0.565208 0.032738 2000-01-08 0.564129 -0.759118 2000-01-09 2.048458 -1.820537 2000-01-10 2.065750 0.383357 187 Lectures 17.5 hours Malhar Lathkar 55 Lectures 8 hours Arnab Chakraborty 136 Lectures 11 hours In28Minutes Official 75 Lectures 13 hours Eduonix Learning Solutions 70 Lectures 8.5 hours Lets Kode It 63 Lectures 6 hours Abhilash Nelson Print Add Notes Bookmark this page
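The examples above pass a single function to aggregate. Pandas can also apply several aggregation functions in one call by passing a list, in which case the result contains one column per function. The snippet below is a small illustrative sketch and is not part of the original tutorial −

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(10, 4),
    index = pd.date_range('1/1/2000', periods=10),
    columns = ['A', 'B', 'C', 'D'])

r = df.rolling(window=3, min_periods=1)

# Several rolling statistics for column 'A' in one call:
# the output has one column per requested function.
print(r['A'].aggregate(['sum', 'mean', 'std']))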
[ { "code": null, "e": 2744, "s": 2529, "text": "Python has several methods are available to perform aggregations on data. It is done using the pandas and numpy libraries. The data must be available or converted to \na dataframe to apply the aggregation functions." }, { "code": null, "e": 2800, "s": 2744, "text": "Let us create a DataFrame and apply aggregations on it." }, { "code": null, "e": 3031, "s": 2800, "text": "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10, 4),\n index = pd.date_range('1/1/2000', periods=10),\n columns = ['A', 'B', 'C', 'D'])\n\nprint df\n\nr = df.rolling(window=3,min_periods=1)\nprint r" }, { "code": null, "e": 3058, "s": 3031, "text": "Its output is as follows −" }, { "code": null, "e": 3767, "s": 3058, "text": " A B C D\n2000-01-01 1.088512 -0.650942 -2.547450 -0.566858\n2000-01-02 0.790670 -0.387854 -0.668132 0.267283\n2000-01-03 -0.575523 -0.965025 0.060427 -2.179780\n2000-01-04 1.669653 1.211759 -0.254695 1.429166\n2000-01-05 0.100568 -0.236184 0.491646 -0.466081\n2000-01-06 0.155172 0.992975 -1.205134 0.320958\n2000-01-07 0.309468 -0.724053 -1.412446 0.627919\n2000-01-08 0.099489 -1.028040 0.163206 -1.274331\n2000-01-09 1.639500 -0.068443 0.714008 -0.565969\n2000-01-10 0.326761 1.479841 0.664282 -1.361169\n\nRolling [window=3,min_periods=1,center=False,axis=0] \n" }, { "code": null, "e": 3884, "s": 3767, "text": "We can aggregate by passing a function to the entire DataFrame, or select a column via the standard get item method." }, { "code": null, "e": 4132, "s": 3884, "text": "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10, 4),\n index = pd.date_range('1/1/2000', periods=10),\n columns = ['A', 'B', 'C', 'D'])\nprint df\n\nr = df.rolling(window=3,min_periods=1)\nprint r.aggregate(np.sum)" }, { "code": null, "e": 4159, "s": 4132, "text": "Its output is as follows −" }, { "code": null, "e": 5437, "s": 4159, "text": " A B C D\n2000-01-01 1.088512 -0.650942 -2.547450 -0.566858\n2000-01-02 1.879182 -1.038796 -3.215581 -0.299575\n2000-01-03 1.303660 -2.003821 -3.155154 -2.479355\n2000-01-04 1.884801 -0.141119 -0.862400 -0.483331\n2000-01-05 1.194699 0.010551 0.297378 -1.216695\n2000-01-06 1.925393 1.968551 -0.968183 1.284044\n2000-01-07 0.565208 0.032738 -2.125934 0.482797\n2000-01-08 0.564129 -0.759118 -2.454374 -0.325454\n2000-01-09 2.048458 -1.820537 -0.535232 -1.212381\n2000-01-10 2.065750 0.383357 1.541496 -3.201469\n\n A B C D\n2000-01-01 1.088512 -0.650942 -2.547450 -0.566858\n2000-01-02 1.879182 -1.038796 -3.215581 -0.299575\n2000-01-03 1.303660 -2.003821 -3.155154 -2.479355\n2000-01-04 1.884801 -0.141119 -0.862400 -0.483331\n2000-01-05 1.194699 0.010551 0.297378 -1.216695\n2000-01-06 1.925393 1.968551 -0.968183 1.284044\n2000-01-07 0.565208 0.032738 -2.125934 0.482797\n2000-01-08 0.564129 -0.759118 -2.454374 -0.325454\n2000-01-09 2.048458 -1.820537 -0.535232 -1.212381\n2000-01-10 2.065750 0.383357 1.541496 -3.201469\n" }, { "code": null, "e": 5689, "s": 5437, "text": "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10, 4),\n index = pd.date_range('1/1/2000', periods=10),\n columns = ['A', 'B', 'C', 'D'])\nprint df\nr = df.rolling(window=3,min_periods=1)\nprint r['A'].aggregate(np.sum)" }, { "code": null, "e": 5716, "s": 5689, "text": "Its output is as follows −" }, { "code": null, "e": 6605, "s": 5716, "text": " A B C D\n2000-01-01 1.088512 -0.650942 -2.547450 -0.566858\n2000-01-02 1.879182 -1.038796 -3.215581 -0.299575\n2000-01-03 1.303660 -2.003821 -3.155154 
-2.479355\n2000-01-04 1.884801 -0.141119 -0.862400 -0.483331\n2000-01-05 1.194699 0.010551 0.297378 -1.216695\n2000-01-06 1.925393 1.968551 -0.968183 1.284044\n2000-01-07 0.565208 0.032738 -2.125934 0.482797\n2000-01-08 0.564129 -0.759118 -2.454374 -0.325454\n2000-01-09 2.048458 -1.820537 -0.535232 -1.212381\n2000-01-10 2.065750 0.383357 1.541496 -3.201469\n2000-01-01 1.088512\n2000-01-02 1.879182\n2000-01-03 1.303660\n2000-01-04 1.884801\n2000-01-05 1.194699\n2000-01-06 1.925393\n2000-01-07 0.565208\n2000-01-08 0.564129\n2000-01-09 2.048458\n2000-01-10 2.065750\nFreq: D, Name: A, dtype: float64\n" }, { "code": null, "e": 6863, "s": 6605, "text": "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10, 4),\n index = pd.date_range('1/1/2000', periods=10),\n columns = ['A', 'B', 'C', 'D'])\nprint df\nr = df.rolling(window=3,min_periods=1)\nprint r[['A','B']].aggregate(np.sum)" }, { "code": null, "e": 6890, "s": 6863, "text": "Its output is as follows −" }, { "code": null, "e": 7900, "s": 6890, "text": " A B C D\n2000-01-01 1.088512 -0.650942 -2.547450 -0.566858\n2000-01-02 1.879182 -1.038796 -3.215581 -0.299575\n2000-01-03 1.303660 -2.003821 -3.155154 -2.479355\n2000-01-04 1.884801 -0.141119 -0.862400 -0.483331\n2000-01-05 1.194699 0.010551 0.297378 -1.216695\n2000-01-06 1.925393 1.968551 -0.968183 1.284044\n2000-01-07 0.565208 0.032738 -2.125934 0.482797\n2000-01-08 0.564129 -0.759118 -2.454374 -0.325454\n2000-01-09 2.048458 -1.820537 -0.535232 -1.212381\n2000-01-10 2.065750 0.383357 1.541496 -3.201469\n A B\n2000-01-01 1.088512 -0.650942\n2000-01-02 1.879182 -1.038796\n2000-01-03 1.303660 -2.003821\n2000-01-04 1.884801 -0.141119\n2000-01-05 1.194699 0.010551\n2000-01-06 1.925393 1.968551\n2000-01-07 0.565208 0.032738\n2000-01-08 0.564129 -0.759118\n2000-01-09 2.048458 -1.820537\n2000-01-10 2.065750 0.383357\n" }, { "code": null, "e": 7937, "s": 7900, "text": "\n 187 Lectures \n 17.5 hours \n" }, { "code": null, "e": 7953, "s": 7937, "text": " Malhar Lathkar" }, { "code": null, "e": 7986, "s": 7953, "text": "\n 55 Lectures \n 8 hours \n" }, { "code": null, "e": 8005, "s": 7986, "text": " Arnab Chakraborty" }, { "code": null, "e": 8040, "s": 8005, "text": "\n 136 Lectures \n 11 hours \n" }, { "code": null, "e": 8062, "s": 8040, "text": " In28Minutes Official" }, { "code": null, "e": 8096, "s": 8062, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 8124, "s": 8096, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 8159, "s": 8124, "text": "\n 70 Lectures \n 8.5 hours \n" }, { "code": null, "e": 8173, "s": 8159, "text": " Lets Kode It" }, { "code": null, "e": 8206, "s": 8173, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 8223, "s": 8206, "text": " Abhilash Nelson" }, { "code": null, "e": 8230, "s": 8223, "text": " Print" }, { "code": null, "e": 8241, "s": 8230, "text": " Add Notes" } ]
How to match multiple criteria inside an array with MongoDB?
To match multiple criteria inside an array, use aggregate(). Let us create a collection with documents − > db.demo84.insertOne({ ... "EmployeeDetails": [ ... {Name: 'John', Salary:45000, isMarried: true}, ... {Name: 'Chris', Salary:50000, isMarried: false} ... ] ... } ... ); { "acknowledged" : true, "insertedId" : ObjectId("5e2c0a3471bf0181ecc422a5") } > db.demo84.insertOne({ ... "EmployeeDetails": [ ... {Name: 'Sam', Salary:56000, isMarried: false}, ... {Name: 'Bob', Salary:50000, isMarried: false} ... ] ... } ... ); { "acknowledged" : true, "insertedId" : ObjectId("5e2c0a4071bf0181ecc422a6") } Display all documents from a collection with the help of find() method − > db.demo84.find(); This will produce the following output − { "_id" : ObjectId("5e2c0a3471bf0181ecc422a5"), "EmployeeDetails" : [ { "Name" : "John", "Salary" : 45000, "isMarried" : true }, { "Name" : "Chris", "Salary" : 50000, "isMarried" : false } ] } { "_id" : ObjectId("5e2c0a4071bf0181ecc422a6"), "EmployeeDetails" : [ { "Name" : "Sam", "Salary" : 56000, "isMarried" : false }, { "Name" : "Bob", "Salary" : 50000, "isMarried" : false } ] } Following is the query to match multiple criteria inside an array − > db.demo84.aggregate( ... { "$match": { ... "EmployeeDetails": { ... "$elemMatch": { ... "Name": "Chris", ... "isMarried": false ... } ... } ... }} ... ); This will produce the following output − { "_id" : ObjectId("5e2c0a3471bf0181ecc422a5"), "EmployeeDetails" : [ { "Name" : "John", "Salary" : 45000, "isMarried" : true }, { "Name" : "Chris", "Salary" : 50000, "isMarried" : false } ] }
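The same $elemMatch match can also be issued from Python through the pymongo driver. The following is a brief illustrative sketch rather than part of the original answer; the connection string and the 'test' database name are assumptions, while the demo84 collection name comes from the example above −

from pymongo import MongoClient

# Assumed local server and database name; demo84 is the collection created above.
client = MongoClient("mongodb://localhost:27017")
collection = client["test"]["demo84"]

# $elemMatch requires a single array element to satisfy both conditions at once.
pipeline = [
    {"$match": {
        "EmployeeDetails": {
            "$elemMatch": {"Name": "Chris", "isMarried": False}
        }
    }}
]

for doc in collection.aggregate(pipeline):
    print(doc)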
[ { "code": null, "e": 1167, "s": 1062, "text": "To match multiple criteria inside an array, use aggregate(). Let us create a collection with documents −" }, { "code": null, "e": 1725, "s": 1167, "text": "> db.demo84.insertOne({\n... \"EmployeeDetails\": [\n... {Name: 'John', Salary:45000, isMarried: true},\n... {Name: 'Chris', Salary:50000, isMarried: false}\n... ]\n... }\n... );\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e2c0a3471bf0181ecc422a5\")\n}\n> db.demo84.insertOne({\n... \"EmployeeDetails\": [\n... {Name: 'Sam', Salary:56000, isMarried: false},\n... {Name: 'Bob', Salary:50000, isMarried: false}\n... ]\n... }\n... );\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e2c0a4071bf0181ecc422a6\")\n}" }, { "code": null, "e": 1798, "s": 1725, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 1818, "s": 1798, "text": "> db.demo84.find();" }, { "code": null, "e": 1859, "s": 1818, "text": "This will produce the following output −" }, { "code": null, "e": 2279, "s": 1859, "text": "{\n \"_id\" : ObjectId(\"5e2c0a3471bf0181ecc422a5\"), \"EmployeeDetails\" : [\n { \"Name\" : \"John\", \"Salary\" : 45000, \"isMarried\" : true },\n { \"Name\" : \"Chris\", \"Salary\" : 50000, \"isMarried\" : false }\n ]\n}\n{\n \"_id\" : ObjectId(\"5e2c0a4071bf0181ecc422a6\"), \"EmployeeDetails\" : [\n { \"Name\" : \"Sam\", \"Salary\" : 56000, \"isMarried\" : false },\n { \"Name\" : \"Bob\", \"Salary\" : 50000, \"isMarried\" : false }\n ]\n}" }, { "code": null, "e": 2347, "s": 2279, "text": "Following is the query to match multiple criteria inside an array −" }, { "code": null, "e": 2560, "s": 2347, "text": "> db.demo84.aggregate(\n... { \"$match\": {\n... \"EmployeeDetails\": {\n... \"$elemMatch\": {\n... \"Name\": \"Chris\",\n... \"isMarried\": false\n... }\n... }\n... }}\n... );" }, { "code": null, "e": 2601, "s": 2560, "text": "This will produce the following output −" }, { "code": null, "e": 2813, "s": 2601, "text": "{\n \"_id\" : ObjectId(\"5e2c0a3471bf0181ecc422a5\"), \"EmployeeDetails\" : [\n { \"Name\" : \"John\", \"Salary\" : 45000, \"isMarried\" : true },\n { \"Name\" : \"Chris\", \"Salary\" : 50000, \"isMarried\" : false }\n ] \n}" } ]
Explain the basics of the scikit-learn library in Python?
Scikit-learn, commonly known as sklearn, is a library in Python that is used for implementing machine learning algorithms. It is an open-source library, hence it can be used free of cost. It is powerful and robust, since it provides a wide variety of tools to perform statistical modelling. This includes classification, regression, clustering, dimensionality reduction, and much more, with the help of a powerful and stable interface in Python. The library is built on the NumPy, SciPy and Matplotlib libraries. It can be installed using the ‘pip’ command as shown below − pip install scikit-learn This library focuses on data modelling. There are many models used in scikit-learn, and some of them are summarized below. A supervised learning algorithm is taught to behave in a certain way. A desirable output is mapped to a given input, thereby providing human supervision. This could be by labelling the features (the variables present in the input dataset), by providing feedback on the data (whether the output was predicted correctly by the algorithm, and if not, what the right prediction should be), and so on. Once the algorithm is completely trained on such input data, it can be generalized to work for similar kinds of data, and it gains the ability to predict results for never-before-seen inputs if the trained model has good performance metrics. Supervised learning is an expensive approach, since humans need to manually label the input dataset, which adds to the cost. Sklearn helps implement Linear Regression, Support Vector Machine, Decision Tree, and so on. Unsupervised learning is the opposite of supervised learning, i.e. the input dataset is not labelled, indicating zero human supervision. The algorithm learns from such unlabelled data, extracts patterns, performs predictions, gives insights into the data, and performs other operations on its own. Most of the time, real-world data is unstructured and unlabelled. Sklearn helps implement clustering, factor analysis, principal component analysis, neural networks, and so on. In clustering, similar data is grouped into a structure, and any noise (outliers or unusual data) falls outside the clusters and can later be eliminated or disregarded. Cross-validation is a process in which the original dataset is divided into two parts, the ‘training dataset’ and the ‘testing dataset’; the need for a separate ‘validation dataset’ is eliminated when cross-validation is used. There are many variations of the cross-validation method, and the most commonly used one is ‘k’-fold cross-validation. Dimensionality reduction refers to the techniques used to reduce the number of features in a dataset. If the number of features in a dataset is high, it is often difficult to model the algorithm. If the input dataset has too many variables, the performance of machine learning algorithms can degrade by a considerable amount. Having a large number of dimensions in the feature space requires a large amount of memory, and it means that not all of the data can be aptly represented in that space (as rows of data). As a result, the performance of the machine learning algorithm is affected; this is known as the ‘curse of dimensionality’. Hence it is suggested to reduce the number of input features in the dataset, hence the name ‘dimensionality reduction’.
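As a concrete illustration of the ideas above, the short sketch below (not part of the original answer) trains a supervised classifier on scikit-learn's built-in iris dataset, scores it with k-fold cross-validation, clusters the same samples without using the labels, and reduces the feature space with PCA −

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Supervised learning: a classifier trained on labelled data and
# evaluated with 5-fold cross-validation instead of a single holdout split.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())

# Unsupervised learning: group the same samples into clusters without using y.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("first ten cluster labels:", kmeans.labels_[:10])

# Dimensionality reduction: project the 4 input features down to 2 components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("reduced shape:", X_reduced.shape)   # (150, 2)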
[ { "code": null, "e": 1199, "s": 1062, "text": "Scikit-learn, commonly known as sklearn is a library in Python that is used for the purpose of implementing machine learning algorithms." }, { "code": null, "e": 1579, "s": 1199, "text": "It is an open-source library hence it can be used free of cost. Powerful and robust, since it provides a wide variety of tools to perform statistical modelling. This includes classification, regression, clustering, dimensionality reduction, and much more with the help of a powerful, and stable interface in Python. This library is built on Numpy, SciPy and Matplotlib libraries." }, { "code": null, "e": 1640, "s": 1579, "text": "It can be installed using the ‘pip’ command as shown below −" }, { "code": null, "e": 1665, "s": 1640, "text": "pip install scikit-learn" }, { "code": null, "e": 1705, "s": 1665, "text": "This library focuses on data modelling." }, { "code": null, "e": 1794, "s": 1705, "text": "There are many models used in scikit-learn, and some of them have been summarized below." }, { "code": null, "e": 2190, "s": 1794, "text": "Supervised learning algorithm is taught to behave in a certain way. A certain desirable output is mapped to a given input thereby providing human supervision. This could be by labelling the features (variables present in the input dataset), by providing feedback to the data (whether the output was predicted correctly by the algorithm, and if not what the right prediction has to be) and so on." }, { "code": null, "e": 2570, "s": 2190, "text": "Once the algorithm is completely trained on such input data, it can be generalized to work for similar kinds of data. It will gain the ability to predict results for never-before-seen inputs if the model that is trained has good performance metrics. It is an expensive learning algorithm since humans need to physically label the input dataset thereby adding to additional costs." }, { "code": null, "e": 2662, "s": 2570, "text": "Sklearn helps implement Linear Regression Support Vector Machine, Decision Tree, and so on." }, { "code": null, "e": 3014, "s": 2662, "text": "This is opposite to supervised learning, i.e. the input data set is not labelled, thereby indicating zero human supervision. The algorithm learns from such unlabelled data, extracts patterns, performs predictions, gives insights into the data and performs other operations on its own. Most of the times, real-world data is unstructured and unlabelled." }, { "code": null, "e": 3125, "s": 3014, "text": "Sklearn helps implement clustering, factor analysis, principal component analysis, neural networks, and so on." }, { "code": null, "e": 3283, "s": 3125, "text": "Similar data is grouped into a structure and any noise (outlier or unusual data) will fall outside this cluster which can later be eliminated or disregarded." }, { "code": null, "e": 3619, "s": 3283, "text": "It is a process in which the original dataset is divided into two parts- the ‘training dataset’ and the ‘testing dataset’. The need of a ‘validation dataset’ is eliminated when cross-validation is used. There are many variations of ‘cross-validation’ method. The most commonly used cross-validation method is ‘k’ fold cross-validation." }, { "code": null, "e": 3959, "s": 3619, "text": "Dimensionality reduction tells about the techniques that are used to reduce the number of features in a dataset. If the number of features are higher in a dataset, it is often difficult to model the algorithm. 
If the input dataset has too many variables, the performance of machine learning algorithms can degrade by a considerable amount." }, { "code": null, "e": 4395, "s": 3959, "text": "Having a large number of dimensions in the feature space requires large amount of memory, and this means not all of the data can be aptly represented on the space (rows of data). This means, the performance of the machine learning algorithm will be affected, and this is also known as the ‘curse of dimensionality’. Hence it is suggested to reduce the number of input features in the dataset. Hence the name ‘dimensionality reduction’." } ]
Material - Notification Icons
This chapter explains the usage of Google's (Material) Notification icons. Assume that custom is the CSS class name where we defined the size and color, as shown in the example given below. <!DOCTYPE html> <html> <head> <link href = "https://fonts.googleapis.com/icon?family=Material+Icons" rel = "stylesheet"> <style> i.custom {font-size: 2em; color: green;} </style> </head> <body> <i class = "material-icons custom">accessibility</i> </body> </html> The following table contains the usage and results of Google's (Material) Notification icons. Replace the < body > tag of the above program with the code given in the table to get the respective outputs −
[ { "code": null, "e": 2756, "s": 2566, "text": "This chapter explains the usage of Google's (Material) Notification icons. Assume that custom is the CSS class name where we defined the size and color, as shown in the example given below." }, { "code": null, "e": 3074, "s": 2756, "text": "<!DOCTYPE html>\n<html>\n <head>\n <link href = \"https://fonts.googleapis.com/icon?family=Material+Icons\" rel = \"stylesheet\">\n\t\t\n <style>\n i.custom {font-size: 2em; color: green;}\n </style>\n\t\t\n </head>\n\t\n <body>\n <i class = \"material-icons custom\">accessibility</i>\n </body>\n\t\n</html>" }, { "code": null, "e": 3279, "s": 3074, "text": "The following table contains the usage and results of Google's (Material) Notification icons. Replace the < body > tag of the above program with the code given in the table to get the respective outputs −" }, { "code": null, "e": 3312, "s": 3279, "text": "\n 26 Lectures \n 2 hours \n" }, { "code": null, "e": 3324, "s": 3312, "text": " Neha Gupta" }, { "code": null, "e": 3357, "s": 3324, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 3371, "s": 3357, "text": " Asif Hussain" }, { "code": null, "e": 3404, "s": 3371, "text": "\n 43 Lectures \n 5 hours \n" }, { "code": null, "e": 3418, "s": 3404, "text": " Sharad Kumar" }, { "code": null, "e": 3455, "s": 3418, "text": "\n 411 Lectures \n 38.5 hours \n" }, { "code": null, "e": 3477, "s": 3455, "text": " In28Minutes Official" }, { "code": null, "e": 3511, "s": 3477, "text": "\n 71 Lectures \n 10 hours \n" }, { "code": null, "e": 3526, "s": 3511, "text": " Chaand Sheikh" }, { "code": null, "e": 3561, "s": 3526, "text": "\n 207 Lectures \n 33 hours \n" }, { "code": null, "e": 3589, "s": 3561, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 3596, "s": 3589, "text": " Print" }, { "code": null, "e": 3607, "s": 3596, "text": " Add Notes" } ]
JUnit - Test Framework
JUnit is a Regression Testing Framework used by developers to implement unit testing in Java, and accelerate programming speed and increase the quality of code. JUnit Framework can be easily integrated with either of the following − Eclipse Ant Maven JUnit test framework provides the following important features − Fixtures Test suites Test runners JUnit classes Fixtures is a fixed state of a set of objects used as a baseline for running tests. The purpose of a test fixture is to ensure that there is a well-known and fixed environment in which tests are run so that results are repeatable. It includes − setUp() method, which runs before every test invocation. tearDown() method, which runs after every test method. Let's check one example − import junit.framework.*; public class JavaTest extends TestCase { protected int value1, value2; // assigning the values protected void setUp(){ value1 = 3; value2 = 3; } // test method to add two values public void testAdd(){ double result = value1 + value2; assertTrue(result == 6); } } A test suite bundles a few unit test cases and runs them together. In JUnit, both @RunWith and @Suite annotation are used to run the suite test. Given below is an example that uses TestJunit1 & TestJunit2 test classes. import org.junit.runner.RunWith; import org.junit.runners.Suite; //JUnit Suite Test @RunWith(Suite.class) @Suite.SuiteClasses({ TestJunit1.class ,TestJunit2.class }) public class JunitTestSuite { } import org.junit.Test; import org.junit.Ignore; import static org.junit.Assert.assertEquals; public class TestJunit1 { String message = "Robert"; MessageUtil messageUtil = new MessageUtil(message); @Test public void testPrintMessage() { System.out.println("Inside testPrintMessage()"); assertEquals(message, messageUtil.printMessage()); } } import org.junit.Test; import org.junit.Ignore; import static org.junit.Assert.assertEquals; public class TestJunit2 { String message = "Robert"; MessageUtil messageUtil = new MessageUtil(message); @Test public void testSalutationMessage() { System.out.println("Inside testSalutationMessage()"); message = "Hi!" + "Robert"; assertEquals(message,messageUtil.salutationMessage()); } } Test runner is used for executing the test cases. Here is an example that assumes the test class TestJunit already exists. import org.junit.runner.JUnitCore; import org.junit.runner.Result; import org.junit.runner.notification.Failure; public class TestRunner { public static void main(String[] args) { Result result = JUnitCore.runClasses(TestJunit.class); for (Failure failure : result.getFailures()) { System.out.println(failure.toString()); } System.out.println(result.wasSuccessful()); } } JUnit classes are important classes, used in writing and testing JUnits. Some of the important classes are − Assert − Contains a set of assert methods. Assert − Contains a set of assert methods. TestCase − Contains a test case that defines the fixture to run multiple tests. TestCase − Contains a test case that defines the fixture to run multiple tests. TestResult − Contains methods to collect the results of executing a test case. TestResult − Contains methods to collect the results of executing a test case. 24 Lectures 2.5 hours Nishita Bhatt 56 Lectures 7.5 hours Dinesh Varyani Print Add Notes Bookmark this page
[ { "code": null, "e": 2205, "s": 1972, "text": "JUnit is a Regression Testing Framework used by developers to implement unit testing in Java, and accelerate programming speed and increase the quality of code. JUnit Framework can be easily integrated with either of the following −" }, { "code": null, "e": 2213, "s": 2205, "text": "Eclipse" }, { "code": null, "e": 2217, "s": 2213, "text": "Ant" }, { "code": null, "e": 2223, "s": 2217, "text": "Maven" }, { "code": null, "e": 2288, "s": 2223, "text": "JUnit test framework provides the following important features −" }, { "code": null, "e": 2297, "s": 2288, "text": "Fixtures" }, { "code": null, "e": 2309, "s": 2297, "text": "Test suites" }, { "code": null, "e": 2322, "s": 2309, "text": "Test runners" }, { "code": null, "e": 2336, "s": 2322, "text": "JUnit classes" }, { "code": null, "e": 2581, "s": 2336, "text": "Fixtures is a fixed state of a set of objects used as a baseline for running tests. The purpose of a test fixture is to ensure that there is a well-known and fixed environment in which tests are run so that results are repeatable. It includes −" }, { "code": null, "e": 2638, "s": 2581, "text": "setUp() method, which runs before every test invocation." }, { "code": null, "e": 2693, "s": 2638, "text": "tearDown() method, which runs after every test method." }, { "code": null, "e": 2719, "s": 2693, "text": "Let's check one example −" }, { "code": null, "e": 3059, "s": 2719, "text": "import junit.framework.*;\n\npublic class JavaTest extends TestCase {\n protected int value1, value2;\n \n // assigning the values\n protected void setUp(){\n value1 = 3;\n value2 = 3;\n }\n\n // test method to add two values\n public void testAdd(){\n double result = value1 + value2;\n assertTrue(result == 6);\n }\n}" }, { "code": null, "e": 3278, "s": 3059, "text": "A test suite bundles a few unit test cases and runs them together. In JUnit, both @RunWith and @Suite annotation are used to run the suite test. Given below is an example that uses TestJunit1 & TestJunit2 test classes." }, { "code": null, "e": 3483, "s": 3278, "text": "import org.junit.runner.RunWith;\nimport org.junit.runners.Suite;\n\n//JUnit Suite Test\n@RunWith(Suite.class)\n\[email protected]({ \n TestJunit1.class ,TestJunit2.class\n})\n\npublic class JunitTestSuite {\n}" }, { "code": null, "e": 3868, "s": 3483, "text": "import org.junit.Test;\nimport org.junit.Ignore;\nimport static org.junit.Assert.assertEquals;\n\npublic class TestJunit1 {\n\n String message = \"Robert\";\t\n MessageUtil messageUtil = new MessageUtil(message);\n \n @Test\n public void testPrintMessage() {\t\n System.out.println(\"Inside testPrintMessage()\"); \n assertEquals(message, messageUtil.printMessage()); \n }\n}" }, { "code": null, "e": 4289, "s": 3868, "text": "import org.junit.Test;\nimport org.junit.Ignore;\nimport static org.junit.Assert.assertEquals;\n\npublic class TestJunit2 {\n\n String message = \"Robert\";\t\n MessageUtil messageUtil = new MessageUtil(message);\n \n @Test\n public void testSalutationMessage() {\n System.out.println(\"Inside testSalutationMessage()\");\n message = \"Hi!\" + \"Robert\";\n assertEquals(message,messageUtil.salutationMessage());\n }\n}" }, { "code": null, "e": 4412, "s": 4289, "text": "Test runner is used for executing the test cases. Here is an example that assumes the test class TestJunit already exists." 
}, { "code": null, "e": 4830, "s": 4412, "text": "import org.junit.runner.JUnitCore;\nimport org.junit.runner.Result;\nimport org.junit.runner.notification.Failure;\n\npublic class TestRunner {\n public static void main(String[] args) {\n Result result = JUnitCore.runClasses(TestJunit.class);\n\t\t\n for (Failure failure : result.getFailures()) {\n System.out.println(failure.toString());\n }\n\t\t\n System.out.println(result.wasSuccessful());\n }\n}" }, { "code": null, "e": 4939, "s": 4830, "text": "JUnit classes are important classes, used in writing and testing JUnits. Some of the important classes are −" }, { "code": null, "e": 4982, "s": 4939, "text": "Assert − Contains a set of assert methods." }, { "code": null, "e": 5025, "s": 4982, "text": "Assert − Contains a set of assert methods." }, { "code": null, "e": 5105, "s": 5025, "text": "TestCase − Contains a test case that defines the fixture to run multiple tests." }, { "code": null, "e": 5185, "s": 5105, "text": "TestCase − Contains a test case that defines the fixture to run multiple tests." }, { "code": null, "e": 5264, "s": 5185, "text": "TestResult − Contains methods to collect the results of executing a test case." }, { "code": null, "e": 5343, "s": 5264, "text": "TestResult − Contains methods to collect the results of executing a test case." }, { "code": null, "e": 5378, "s": 5343, "text": "\n 24 Lectures \n 2.5 hours \n" }, { "code": null, "e": 5393, "s": 5378, "text": " Nishita Bhatt" }, { "code": null, "e": 5428, "s": 5393, "text": "\n 56 Lectures \n 7.5 hours \n" }, { "code": null, "e": 5444, "s": 5428, "text": " Dinesh Varyani" }, { "code": null, "e": 5451, "s": 5444, "text": " Print" }, { "code": null, "e": 5462, "s": 5451, "text": " Add Notes" } ]
Building a k-Nearest-Neighbors (k-NN) Model with Scikit-learn | by Eijaz Allibhai | Towards Data Science
k-Nearest-Neighbors (k-NN) is a supervised machine learning model. Supervised learning is when a model learns from data that is already labeled. A supervised learning model takes in a set of input objects and output values. The model then trains on that data to learn how to map the inputs to the desired output so it can learn to make predictions on unseen data. k-NN models work by taking a data point and looking at the ‘k’ closest labeled data points. The data point is then assigned the label of the majority of the ‘k’ closest points. For example, if k = 5, and 3 of points are ‘green’ and 2 are ‘red’, then the data point in question would be labeled ‘green’, since ‘green’ is the majority (as shown in the above graph). Scikit-learn is a machine learning library for Python. In this tutorial, we will build a k-NN model using Scikit-learn to predict whether or not a patient has diabetes. For our k-NN model, the first step is to read in the data we will use as input. For this example, we are using the diabetes dataset. To start, we will use Pandas to read in the data. I will not go into detail on Pandas, but it is a library you should become familiar with if you’re looking to dive further into data science and machine learning. import pandas as pd#read in the data using pandasdf = pd.read_csv(‘data/diabetes_data.csv’)#check data has been read in properlydf.head() Next, let’s see how much data we have. We will call the ‘shape’ function on our dataframe to see how many rows and columns there are in our data. The rows indicate the number of patients and the columns indicate the number of features (age, weight, etc.) in the dataset for each patient. #check number of rows and columns in datasetdf.shape We can see that we have 768 rows of data (potential diabetes patients) and 9 columns (8 input features and 1 target output). Now let’s split up our dataset into inputs (X) and our target (y). Our input will be every column except ‘diabetes’ because ‘diabetes’ is what we will be attempting to predict. Therefore, ‘diabetes’ will be our target. We will use pandas ‘drop’ function to drop the column ‘diabetes’ from our dataframe and store it in the variable ‘X’. This will be our input. #create a dataframe with all training data except the target columnX = df.drop(columns=[‘diabetes’])#check that the target variable has been removedX.head() We will insert the ‘diabetes’ column of our dataset into our target variable (y). #separate target valuesy = df[‘diabetes’].values#view target valuesy[0:5] Now we will split the dataset into into training data and testing data. The training data is the data that the model will learn from. The testing data is the data we will use to see how well the model performs on unseen data. Scikit-learn has a function we can use called ‘train_test_split’ that makes it easy for us to split our dataset into training and testing data. from sklearn.model_selection import train_test_split#split dataset into train and test dataX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y) ‘train_test_split’ takes in 5 parameters. The first two parameters are the input and target data we split up earlier. Next, we will set ‘test_size’ to 0.2. This means that 20% of all the data will be used for testing, which leaves 80% of the data as training data for the model to learn from. Setting ‘random_state’ to 1 ensures that we get the same split each time so we can reproduce our results. 
Setting ‘stratify’ to y makes our training split represent the proportion of each value in the y variable. For example, in our dataset, if 25% of patients have diabetes and 75% don’t have diabetes, setting ‘stratify’ to y will ensure that the random split has 25% of patients with diabetes and 75% of patients without diabetes. Next, we have to build the model. Here is the code: from sklearn.neighbors import KNeighborsClassifier# Create KNN classifierknn = KNeighborsClassifier(n_neighbors = 3)# Fit the classifier to the dataknn.fit(X_train,y_train) First, we will create a new k-NN classifier and set ‘n_neighbors’ to 3. To recap, this means that if at least 2 out of the 3 nearest points to an new data point are patients without diabetes, then the new data point will be labeled as ‘no diabetes’, and vice versa. In other words, a new data point is labeled with by majority from the 3 nearest points. We have set ‘n_neighbors’ to 3 as a starting point. We will go into more detail below on how to better select a value for ‘n_neighbors’ so that the model can improve its performance. Next, we need to train the model. In order to train our new model, we will use the ‘fit’ function and pass in our training data as parameters to fit our model to the training data. Once the model is trained, we can use the ‘predict’ function on our model to make predictions on our test data. As seen when inspecting ‘y’ earlier, 0 indicates that the patient does not have diabetes and 1 indicates that the patient does have diabetes. To save space, we will only show print the first 5 predictions of our test set. #show first 5 model predictions on the test dataknn.predict(X_test)[0:5] We can see that the model predicted ‘no diabetes’ for the first 4 patients in the test set and ‘has diabetes’ for the 5th patient. Now let’s see how our accurate our model is on the full test set. To do this, we will use the ‘score’ function and pass in our test input and target data to see how well our model predictions match up to the actual results. #check accuracy of our model on the test dataknn.score(X_test, y_test) Our model has an accuracy of approximately 66.88%. It’s a good start, but we will see how we can increase model performance below. Congrats! You have now built an amazing k-NN model! Cross-validation is when the dataset is randomly split up into ‘k’ groups. One of the groups is used as the test set and the rest are used as the training set. The model is trained on the training set and scored on the test set. Then the process is repeated until each unique group as been used as the test set. For example, for 5-fold cross validation, the dataset would be split into 5 groups, and the model would be trained and tested 5 separate times so each group would get a chance to be the test set. This can be seen in the graph below. The train-test-split method we used in earlier is called ‘holdout’. Cross-validation is better than using the holdout method because the holdout method score is dependent on how the data is split into train and test sets. Cross-validation gives the model an opportunity to test on multiple splits so we can get a better idea on how the model will perform on unseen data. In order to train and test our model using cross-validation, we will use the ‘cross_val_score’ function with a cross-validation value of 5. ‘cross_val_score’ takes in our k-NN model and our data as parameters. Then it splits our data into 5 groups and fits and scores our data 5 seperate times, recording the accuracy score in an array each time. 
We will save the accuracy scores in the ‘cv_scores’ variable. To find the average of the 5 scores, we will use numpy’s mean function, passing in ‘cv_score’. Numpy is a useful math library in Python. from sklearn.model_selection import cross_val_scoreimport numpy as np#create a new KNN modelknn_cv = KNeighborsClassifier(n_neighbors=3)#train model with cv of 5 cv_scores = cross_val_score(knn_cv, X, y, cv=5)#print each cv score (accuracy) and average themprint(cv_scores)print(‘cv_scores mean:{}’.format(np.mean(cv_scores))) Using cross-validation, our mean score is about 71.36%. This is a more accurate representation of how our model will perform on unseen data than our earlier testing using the holdout method. When built our initial k-NN model, we set the parameter ‘n_neighbors’ to 3 as a starting point with no real logic behind that choice. Hypertuning parameters is when you go through a process to find the optimal parameters for your model to improve accuracy. In our case, we will use GridSearchCV to find the optimal value for ‘n_neighbors’. GridSearchCV works by training our model multiple times on a range of parameters that we specify. That way, we can test our model with each parameter and figure out the optimal values to get the best accuracy results. For our model, we will specify a range of values for ‘n_neighbors’ in order to see which value works best for our model. To do this, we will create a dictionary, setting ‘n_neighbors’ as the key and using numpy to create an array of values from 1 to 24. Our new model using grid search will take in a new k-NN classifier, our param_grid and a cross-validation value of 5 in order to find the optimal value for ‘n_neighbors’. from sklearn.model_selection import GridSearchCV#create new a knn modelknn2 = KNeighborsClassifier()#create a dictionary of all values we want to test for n_neighborsparam_grid = {‘n_neighbors’: np.arange(1, 25)}#use gridsearch to test all values for n_neighborsknn_gscv = GridSearchCV(knn2, param_grid, cv=5)#fit model to dataknn_gscv.fit(X, y) After training, we can check which of our values for ‘n_neighbors’ that we tested performed the best. To do this, we will call ‘best_params_’ on our model. #check top performing n_neighbors valueknn_gscv.best_params_ We can see that 14 is the optimal value for ‘n_neighbors’. We can use the ‘best_score_’ function to check the accuracy of our model when ‘n_neighbors’ is 14. ‘best_score_’ outputs the mean accuracy of the scores obtained through cross-validation. #check mean score for the top performing value of n_neighborsknn_gscv.best_score_ By using grid search to find the optimal parameter for our model, we have improved our model accuracy by over 4%! Thanks for reading! The GitHub repository for this tutorial (jupyter notebook and dataset) can be found here. If you would like to keep updated on my machine learning content, follow me :)
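As a possible next step that the article does not cover, the model refit by grid search can be reused directly for predictions: with the default refit behaviour, GridSearchCV stores the retrained classifier in best_estimator_. A brief sketch, assuming the variables created above are still in scope −

# best_estimator_ is the k-NN classifier retrained on the full data
# with the best n_neighbors value found by the grid search.
best_knn = knn_gscv.best_estimator_

# Predict for the first five patients of the earlier test split.
print(best_knn.predict(X_test[0:5]))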
[ { "code": null, "e": 411, "s": 47, "text": "k-Nearest-Neighbors (k-NN) is a supervised machine learning model. Supervised learning is when a model learns from data that is already labeled. A supervised learning model takes in a set of input objects and output values. The model then trains on that data to learn how to map the inputs to the desired output so it can learn to make predictions on unseen data." }, { "code": null, "e": 588, "s": 411, "text": "k-NN models work by taking a data point and looking at the ‘k’ closest labeled data points. The data point is then assigned the label of the majority of the ‘k’ closest points." }, { "code": null, "e": 775, "s": 588, "text": "For example, if k = 5, and 3 of points are ‘green’ and 2 are ‘red’, then the data point in question would be labeled ‘green’, since ‘green’ is the majority (as shown in the above graph)." }, { "code": null, "e": 944, "s": 775, "text": "Scikit-learn is a machine learning library for Python. In this tutorial, we will build a k-NN model using Scikit-learn to predict whether or not a patient has diabetes." }, { "code": null, "e": 1290, "s": 944, "text": "For our k-NN model, the first step is to read in the data we will use as input. For this example, we are using the diabetes dataset. To start, we will use Pandas to read in the data. I will not go into detail on Pandas, but it is a library you should become familiar with if you’re looking to dive further into data science and machine learning." }, { "code": null, "e": 1428, "s": 1290, "text": "import pandas as pd#read in the data using pandasdf = pd.read_csv(‘data/diabetes_data.csv’)#check data has been read in properlydf.head()" }, { "code": null, "e": 1716, "s": 1428, "text": "Next, let’s see how much data we have. We will call the ‘shape’ function on our dataframe to see how many rows and columns there are in our data. The rows indicate the number of patients and the columns indicate the number of features (age, weight, etc.) in the dataset for each patient." }, { "code": null, "e": 1769, "s": 1716, "text": "#check number of rows and columns in datasetdf.shape" }, { "code": null, "e": 1894, "s": 1769, "text": "We can see that we have 768 rows of data (potential diabetes patients) and 9 columns (8 input features and 1 target output)." }, { "code": null, "e": 2113, "s": 1894, "text": "Now let’s split up our dataset into inputs (X) and our target (y). Our input will be every column except ‘diabetes’ because ‘diabetes’ is what we will be attempting to predict. Therefore, ‘diabetes’ will be our target." }, { "code": null, "e": 2255, "s": 2113, "text": "We will use pandas ‘drop’ function to drop the column ‘diabetes’ from our dataframe and store it in the variable ‘X’. This will be our input." }, { "code": null, "e": 2412, "s": 2255, "text": "#create a dataframe with all training data except the target columnX = df.drop(columns=[‘diabetes’])#check that the target variable has been removedX.head()" }, { "code": null, "e": 2494, "s": 2412, "text": "We will insert the ‘diabetes’ column of our dataset into our target variable (y)." }, { "code": null, "e": 2568, "s": 2494, "text": "#separate target valuesy = df[‘diabetes’].values#view target valuesy[0:5]" }, { "code": null, "e": 2794, "s": 2568, "text": "Now we will split the dataset into into training data and testing data. The training data is the data that the model will learn from. The testing data is the data we will use to see how well the model performs on unseen data." 
}, { "code": null, "e": 2938, "s": 2794, "text": "Scikit-learn has a function we can use called ‘train_test_split’ that makes it easy for us to split our dataset into training and testing data." }, { "code": null, "e": 3130, "s": 2938, "text": "from sklearn.model_selection import train_test_split#split dataset into train and test dataX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y)" }, { "code": null, "e": 3529, "s": 3130, "text": "‘train_test_split’ takes in 5 parameters. The first two parameters are the input and target data we split up earlier. Next, we will set ‘test_size’ to 0.2. This means that 20% of all the data will be used for testing, which leaves 80% of the data as training data for the model to learn from. Setting ‘random_state’ to 1 ensures that we get the same split each time so we can reproduce our results." }, { "code": null, "e": 3857, "s": 3529, "text": "Setting ‘stratify’ to y makes our training split represent the proportion of each value in the y variable. For example, in our dataset, if 25% of patients have diabetes and 75% don’t have diabetes, setting ‘stratify’ to y will ensure that the random split has 25% of patients with diabetes and 75% of patients without diabetes." }, { "code": null, "e": 3909, "s": 3857, "text": "Next, we have to build the model. Here is the code:" }, { "code": null, "e": 4082, "s": 3909, "text": "from sklearn.neighbors import KNeighborsClassifier# Create KNN classifierknn = KNeighborsClassifier(n_neighbors = 3)# Fit the classifier to the dataknn.fit(X_train,y_train)" }, { "code": null, "e": 4436, "s": 4082, "text": "First, we will create a new k-NN classifier and set ‘n_neighbors’ to 3. To recap, this means that if at least 2 out of the 3 nearest points to an new data point are patients without diabetes, then the new data point will be labeled as ‘no diabetes’, and vice versa. In other words, a new data point is labeled with by majority from the 3 nearest points." }, { "code": null, "e": 4619, "s": 4436, "text": "We have set ‘n_neighbors’ to 3 as a starting point. We will go into more detail below on how to better select a value for ‘n_neighbors’ so that the model can improve its performance." }, { "code": null, "e": 4800, "s": 4619, "text": "Next, we need to train the model. In order to train our new model, we will use the ‘fit’ function and pass in our training data as parameters to fit our model to the training data." }, { "code": null, "e": 5134, "s": 4800, "text": "Once the model is trained, we can use the ‘predict’ function on our model to make predictions on our test data. As seen when inspecting ‘y’ earlier, 0 indicates that the patient does not have diabetes and 1 indicates that the patient does have diabetes. To save space, we will only show print the first 5 predictions of our test set." }, { "code": null, "e": 5207, "s": 5134, "text": "#show first 5 model predictions on the test dataknn.predict(X_test)[0:5]" }, { "code": null, "e": 5338, "s": 5207, "text": "We can see that the model predicted ‘no diabetes’ for the first 4 patients in the test set and ‘has diabetes’ for the 5th patient." }, { "code": null, "e": 5562, "s": 5338, "text": "Now let’s see how our accurate our model is on the full test set. To do this, we will use the ‘score’ function and pass in our test input and target data to see how well our model predictions match up to the actual results." 
}, { "code": null, "e": 5633, "s": 5562, "text": "#check accuracy of our model on the test dataknn.score(X_test, y_test)" }, { "code": null, "e": 5764, "s": 5633, "text": "Our model has an accuracy of approximately 66.88%. It’s a good start, but we will see how we can increase model performance below." }, { "code": null, "e": 5816, "s": 5764, "text": "Congrats! You have now built an amazing k-NN model!" }, { "code": null, "e": 6128, "s": 5816, "text": "Cross-validation is when the dataset is randomly split up into ‘k’ groups. One of the groups is used as the test set and the rest are used as the training set. The model is trained on the training set and scored on the test set. Then the process is repeated until each unique group as been used as the test set." }, { "code": null, "e": 6361, "s": 6128, "text": "For example, for 5-fold cross validation, the dataset would be split into 5 groups, and the model would be trained and tested 5 separate times so each group would get a chance to be the test set. This can be seen in the graph below." }, { "code": null, "e": 6732, "s": 6361, "text": "The train-test-split method we used in earlier is called ‘holdout’. Cross-validation is better than using the holdout method because the holdout method score is dependent on how the data is split into train and test sets. Cross-validation gives the model an opportunity to test on multiple splits so we can get a better idea on how the model will perform on unseen data." }, { "code": null, "e": 7141, "s": 6732, "text": "In order to train and test our model using cross-validation, we will use the ‘cross_val_score’ function with a cross-validation value of 5. ‘cross_val_score’ takes in our k-NN model and our data as parameters. Then it splits our data into 5 groups and fits and scores our data 5 seperate times, recording the accuracy score in an array each time. We will save the accuracy scores in the ‘cv_scores’ variable." }, { "code": null, "e": 7278, "s": 7141, "text": "To find the average of the 5 scores, we will use numpy’s mean function, passing in ‘cv_score’. Numpy is a useful math library in Python." }, { "code": null, "e": 7605, "s": 7278, "text": "from sklearn.model_selection import cross_val_scoreimport numpy as np#create a new KNN modelknn_cv = KNeighborsClassifier(n_neighbors=3)#train model with cv of 5 cv_scores = cross_val_score(knn_cv, X, y, cv=5)#print each cv score (accuracy) and average themprint(cv_scores)print(‘cv_scores mean:{}’.format(np.mean(cv_scores)))" }, { "code": null, "e": 7796, "s": 7605, "text": "Using cross-validation, our mean score is about 71.36%. This is a more accurate representation of how our model will perform on unseen data than our earlier testing using the holdout method." }, { "code": null, "e": 7930, "s": 7796, "text": "When built our initial k-NN model, we set the parameter ‘n_neighbors’ to 3 as a starting point with no real logic behind that choice." }, { "code": null, "e": 8136, "s": 7930, "text": "Hypertuning parameters is when you go through a process to find the optimal parameters for your model to improve accuracy. In our case, we will use GridSearchCV to find the optimal value for ‘n_neighbors’." }, { "code": null, "e": 8354, "s": 8136, "text": "GridSearchCV works by training our model multiple times on a range of parameters that we specify. That way, we can test our model with each parameter and figure out the optimal values to get the best accuracy results." 
}, { "code": null, "e": 8608, "s": 8354, "text": "For our model, we will specify a range of values for ‘n_neighbors’ in order to see which value works best for our model. To do this, we will create a dictionary, setting ‘n_neighbors’ as the key and using numpy to create an array of values from 1 to 24." }, { "code": null, "e": 8779, "s": 8608, "text": "Our new model using grid search will take in a new k-NN classifier, our param_grid and a cross-validation value of 5 in order to find the optimal value for ‘n_neighbors’." }, { "code": null, "e": 9125, "s": 8779, "text": "from sklearn.model_selection import GridSearchCV#create new a knn modelknn2 = KNeighborsClassifier()#create a dictionary of all values we want to test for n_neighborsparam_grid = {‘n_neighbors’: np.arange(1, 25)}#use gridsearch to test all values for n_neighborsknn_gscv = GridSearchCV(knn2, param_grid, cv=5)#fit model to dataknn_gscv.fit(X, y)" }, { "code": null, "e": 9281, "s": 9125, "text": "After training, we can check which of our values for ‘n_neighbors’ that we tested performed the best. To do this, we will call ‘best_params_’ on our model." }, { "code": null, "e": 9342, "s": 9281, "text": "#check top performing n_neighbors valueknn_gscv.best_params_" }, { "code": null, "e": 9589, "s": 9342, "text": "We can see that 14 is the optimal value for ‘n_neighbors’. We can use the ‘best_score_’ function to check the accuracy of our model when ‘n_neighbors’ is 14. ‘best_score_’ outputs the mean accuracy of the scores obtained through cross-validation." }, { "code": null, "e": 9671, "s": 9589, "text": "#check mean score for the top performing value of n_neighborsknn_gscv.best_score_" }, { "code": null, "e": 9785, "s": 9671, "text": "By using grid search to find the optimal parameter for our model, we have improved our model accuracy by over 4%!" }, { "code": null, "e": 9895, "s": 9785, "text": "Thanks for reading! The GitHub repository for this tutorial (jupyter notebook and dataset) can be found here." } ]
Maximum Subarray in Python
Suppose we have an integer array A. We have to find the contiguous subarray whose length is at least one and whose sum is the largest, and return that sum. So if the array A is [-2,1,-3,4,-1,2,1,-5,4], then the largest sum is 6 and the corresponding subarray is [4, -1, 2, 1].

To solve this we will use the dynamic programming approach.

define an array dp of the same size as A, and fill it with 0
dp[0] := A[0]
for i = 1 to the size of A – 1: dp[i] := maximum of dp[i – 1] + A[i] and A[i]
return the maximum value in dp

Let us see the following implementation to get a better understanding −

class Solution(object):
   def maxSubArray(self, nums):
      """
      :type nums: List[int]
      :rtype: int
      """
      dp = [0 for i in range(len(nums))]
      dp[0] = nums[0]
      for i in range(1, len(nums)):
         dp[i] = max(dp[i-1] + nums[i], nums[i])
      return max(dp)

nums = [-2,1,-3,7,-2,2,1,-5,4]
ob1 = Solution()
print(ob1.maxSubArray(nums))

Input: nums = [-2,1,-3,7,-2,2,1,-5,4]
Output: 8
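The dp array above needs O(n) extra space. As a side note (a sketch, not part of the original solution), the same recurrence can be carried by two scalars, which is the usual constant-space form of Kadane's algorithm:

def max_sub_array(nums):
   # best_here plays the role of dp[i]; best_overall tracks max(dp)
   best_here = best_overall = nums[0]
   for x in nums[1:]:
      best_here = max(best_here + x, x)
      best_overall = max(best_overall, best_here)
   return best_overall

print(max_sub_array([-2,1,-3,7,-2,2,1,-5,4]))   # 8

It returns the same answer as the dp-based version while storing only two numbers.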
[ { "code": null, "e": 1347, "s": 1062, "text": "Suppose we have an integer array A. We have to find the contiguous subarrays which length will be at least one, and that has the largest sum, and also return its sum. So if the array A is like A = [-2,1,-3,4,-1,2,1,-5,4], then the sum will be 6. And the subarray will be [4, -1, 2, 1]" }, { "code": null, "e": 1414, "s": 1347, "text": "To solve this we will try to use the Dynamic programming approach." }, { "code": null, "e": 1475, "s": 1414, "text": "define an array dp same as the size of A, and fill it with 0" }, { "code": null, "e": 1489, "s": 1475, "text": "dp[0] := A[0]" }, { "code": null, "e": 1565, "s": 1489, "text": "for i = 1 to the size of A – 1dp[i] := maximum of dp[i – 1] + A[i] and A[i]" }, { "code": null, "e": 1611, "s": 1565, "text": "dp[i] := maximum of dp[i – 1] + A[i] and A[i]" }, { "code": null, "e": 1628, "s": 1611, "text": "return max in dp" }, { "code": null, "e": 1700, "s": 1628, "text": "Let us see the following implementation to get a better understanding −" }, { "code": null, "e": 1711, "s": 1700, "text": " Live Demo" }, { "code": null, "e": 2092, "s": 1711, "text": "class Solution(object):\n def maxSubArray(self, nums):\n \"\"\"\n :type nums: List[int]\n :rtype: int\n \"\"\"\n dp = [0 for i in range(len(nums))]\n dp[0] = nums[0]\n for i in range(1,len(nums)):\n dp[i] = max(dp[i-1]+nums[i],nums[i])\n #print(dp)\n return max(dp)\nnums = [-2,1,-3,7,-2,2,1,-5,4]\nob1 = Solution()\nprint(ob1.maxSubArray(nums))" }, { "code": null, "e": 2123, "s": 2092, "text": "nums = [-2,1,-3,7,-2,2,1,-5,4]" }, { "code": null, "e": 2125, "s": 2123, "text": "8" } ]
Text to Image. This article will explain an... | by Connor Shorten | Towards Data Science
This article will explain the experiments and theory behind an interesting paper that converts natural language text descriptions such as “A small bird has a short, pointy orange beak and white belly” into 64x64 RGB images. Following is a link to the paper “Generative Adversarial Text to Image Synthesis” from Reed et al.: arxiv.org

Introduction
Architecture Used
Constructing a Text Embedding for Visual Attributes
Manifold Interpolation
Results / Conclusions

Converting natural language text descriptions into images is an amazing demonstration of Deep Learning. Text classification tasks such as sentiment analysis have been successful with Deep Recurrent Neural Networks that are able to learn discriminative vector representations from text. In another domain, Deep Convolutional GANs are able to synthesize images such as interiors of bedrooms from a random noise vector sampled from a normal distribution. The focus of Reed et al. [1] is to connect advances in Deep RNN text embeddings and image synthesis with DCGANs, inspired by the idea of Conditional-GANs.

Conditional-GANs work by inputting a one-hot class label vector as input to the generator and discriminator in addition to the randomly sampled noise vector. This results in higher training stability, more visually appealing results, as well as controllable generator outputs. The difference between traditional Conditional-GANs and the Text-to-Image model presented is in the conditioning input. Instead of trying to construct a sparse visual attribute descriptor to condition GANs, the GANs are conditioned on a text embedding learned with a Deep Neural Network. A sparse visual attribute descriptor might describe “a small bird with an orange beak” as something like:

[ 0 0 0 1 . . . 0 0 . . . 1 . . . 0 0 0 . . . 0 0 1 . . . 0 0 0]

The ones in the vector would represent attribute questions such as: orange (1/0)? small (1/0)? bird (1/0)? This description is difficult to collect and doesn’t work well in practice.

Word embeddings have been the hero of natural language processing through the use of concepts such as Word2Vec. Word2Vec forms embeddings by learning to predict the context of a given word. Unfortunately, Word2Vec doesn’t quite translate to text-to-image, since the context of a word doesn’t capture visual properties as well as an embedding explicitly trained to do so. Reed et al. [1] present a novel symmetric structured joint embedding of images and text descriptions to overcome this challenge, which is presented in further detail later in the article.

In addition to constructing good text embeddings, translating from text to images is highly multi-modal. The term ‘multi-modal’ is an important one to become familiar with in Deep Learning research. It refers to the fact that there are many different images of birds which correspond to the text description “bird”. Another example in speech is that there are many different accents, etc. that would result in different sounds corresponding to the text “bird”. Multi-modal learning is also present in image captioning (image-to-text). However, captioning is greatly facilitated by the sequential structure of text, such that the model can predict the next word conditioned on the image as well as the previously predicted words.
Multi-modal learning is traditionally very difficult, but is made much easier with the advancement of GANs (Generative Adversarial Networks); this framework creates an adaptive loss function which is well-suited for multi-modal tasks such as text-to-image.

The picture above shows the architecture Reed et al. used to train this text-to-image GAN model. The most noteworthy takeaway from this diagram is the visualization of how the text embedding fits into the sequential processing of the model. In the Generator network, the text embedding is filtered through a fully connected layer and concatenated with the random noise vector z. In this case, the text embedding is converted from a 1024x1 vector to 128x1 and concatenated with the 100x1 random noise vector z. On the side of the discriminator network, the text embedding is also compressed through a fully connected layer into a 128x1 vector and then reshaped into a 4x4 matrix and depth-wise concatenated with the image representation. This image representation is derived after the input image has been convolved over multiple times, reducing the spatial resolution and extracting information. This embedding strategy for the discriminator is different from the conditional-GAN model, in which the embedding is concatenated into the original image matrix and then convolved over.

One general thing to note about the architecture diagram is how the DCGAN upsamples vectors or low-resolution images to produce high-resolution images. You can see that each de-convolutional layer increases the spatial resolution of the image. Additionally, the depth of the feature maps decreases per layer. Lastly, you can see how the convolutional layers in the discriminator network decrease the spatial resolution and increase the depth of the feature maps as they process the image.

An interesting thing about this training process is that it is difficult to separate loss based on the generated image not looking realistic from loss based on the generated image not matching the text description. The authors of the paper describe the training dynamics as follows: initially the discriminator does not pay any attention to the text embedding, since the images created by the generator do not look real at all. Once G can generate images that at least pass the real vs. fake criterion, the text embedding is factored in as well.

The authors smooth out the training dynamics by adding pairs of real images with incorrect text descriptions which are labeled as ‘fake’. The discriminator is solely focused on the binary task of real versus fake and does not consider the image separately from the text. This is in contrast to an approach such as AC-GAN with one-hot encoded class labels. The AC-GAN discriminator outputs real vs. fake and uses an auxiliary classifier sharing the intermediate features to classify the class label of the image.

The most interesting component of this paper is how they construct a unique text embedding that contains visual attributes of the image to be represented. This vector is constructed through the following process:

The loss function noted as equation (2) represents the overall objective of a text classifier that is optimizing the gated loss between two loss functions. These loss functions are shown in equations 3 and 4. The paper describes the intuition for this process as “A text encoding should have a higher compatibility score with images of the corresponding class compared to any other class and vice-versa”.
The two terms each represent an image encoder and a text encoder. The image encoder is taken from the GoogLeNet image classification model. This classifier reduces the dimensionality of images until they are compressed to a 1024x1 vector. The objective function thus aims to minimize the distance between the image representation from GoogLeNet and the text representation from a character-level CNN or LSTM. Essentially, the vector encoding for image classification is used to guide the text encodings based on similarity to similar images.

The details of this are expanded on in the following paper, “Learning Deep Representations of Fine-Grained Visual Descriptions”, also from Reed et al.: arxiv.org

Note the term ‘fine-grained’; it is used to separate tasks over closely related categories, such as different types of birds or flowers, from tasks over completely different objects such as cats, airplanes, boats, mountains, dogs, etc., as in the ImageNet challenges.

One of the interesting characteristics of Generative Adversarial Networks is that the latent vector z can be used to interpolate new instances. This is commonly referred to as “latent space addition”. An example would be to do “man with glasses” - “man without glasses” + “woman without glasses” and achieve a woman with glasses. In this paper, the authors aim to interpolate between the text embeddings, mixing two embeddings t1 and t2 as βt1 + (1 − β)t2.

The discriminator has been trained to predict whether image and text pairs match or not. Therefore the images from interpolated text embeddings can fill in the gaps in the data manifold that were present during training. Using this as a regularization method for the training data space is paramount for the successful result of the model presented in this paper. This is a form of data augmentation, since the interpolated text embeddings can expand the dataset used for training the text-to-image GAN.

The experiments are conducted with three datasets: the CUB dataset of bird images, containing 11,788 images from 200 categories; the Oxford-102 dataset of flowers, containing 8,189 images from 102 different categories; and the MS-COCO dataset, to demonstrate the generalizability of the algorithm presented. Each of the images from CUB and Oxford-102 comes with 5 text captions.

All of the results presented above are on the Zero-Shot Learning task, meaning that the model has never seen that text description before during training. Each of the images above is fairly low-resolution at 64x64x3. Nevertheless, it is very encouraging to see this algorithm having some success on the very difficult multi-modal task of text-to-image. Thanks for reading this article, I highly recommend checking out the paper to learn more!

[1] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee. Generative Adversarial Text to Image Synthesis. 2016.

[2] Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee. Learning Deep Representations of Fine-Grained Visual Descriptions. 2016.
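To make the conditioning and interpolation described above concrete, here is a minimal PyTorch-style sketch (not the authors' code; the layer sizes simply follow the 1024-to-128 projection and 100-dimensional z mentioned in the text, and all names are made up) of projecting a text embedding, optionally mixing two embeddings, and concatenating the result with the noise vector before it is fed to a generator:

import torch
import torch.nn as nn

text_dim, proj_dim, z_dim = 1024, 128, 100
project = nn.Linear(text_dim, proj_dim)   # compress the sentence embedding

def generator_input(text_emb, batch_size=16, beta=0.5, other_emb=None):
    # optional manifold interpolation between two text embeddings
    if other_emb is not None:
        text_emb = beta * text_emb + (1 - beta) * other_emb
    cond = torch.relu(project(text_emb))        # (batch, 128)
    z = torch.randn(batch_size, z_dim)          # (batch, 100)
    return torch.cat([z, cond], dim=1)          # (batch, 228), fed to the deconv stack

t1 = torch.randn(16, text_dim)
t2 = torch.randn(16, text_dim)
print(generator_input(t1, other_emb=t2).shape)  # torch.Size([16, 228])

The real model upsamples this concatenated vector through several de-convolutional layers; the sketch only shows how the conditioning input is assembled.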
[ { "code": null, "e": 494, "s": 172, "text": "This article will explain the experiments and theory behind an interesting paper that converts natural language text descriptions such as “A small bird has a short, point orange beak and white belly” into 64x64 RGB images. Following is a link to the paper “Generative Adversarial Text to Image Synthesis” from Reed et al." }, { "code": null, "e": 504, "s": 494, "text": "arxiv.org" }, { "code": null, "e": 628, "s": 504, "text": "IntroductionArchitecture UsedConstructing a Text Embedding for Visual AttributesManifold InterpolationResults / Conclusions" }, { "code": null, "e": 641, "s": 628, "text": "Introduction" }, { "code": null, "e": 659, "s": 641, "text": "Architecture Used" }, { "code": null, "e": 711, "s": 659, "text": "Constructing a Text Embedding for Visual Attributes" }, { "code": null, "e": 734, "s": 711, "text": "Manifold Interpolation" }, { "code": null, "e": 756, "s": 734, "text": "Results / Conclusions" }, { "code": null, "e": 1363, "s": 756, "text": "Converting natural language text descriptions into images is an amazing demonstration of Deep Learning. Text classification tasks such as sentiment analysis have been successful with Deep Recurrent Neural Networks that are able to learn discriminative vector representations from text. In another domain, Deep Convolutional GANs are able to synthesize images such as interiors of bedrooms from a random noise vector sampled from a normal distribution. The focus of Reed et al. [1] is to connect advances in Deep RNN text embeddings and image synthesis with DCGANs, inspired by the idea of Conditional-GANs." }, { "code": null, "e": 2034, "s": 1363, "text": "Conditional-GANs work by inputting a one-hot class label vector as input to the generator and discriminator in addition to the randomly sampled noise vector. This results in higher training stability, more visually appealing results, as well as controllable generator outputs. The difference between traditional Conditional-GANs and the Text-to-Image model presented is in the conditioning input. Instead of trying to construct a sparse visual attribute descriptor to condition GANs, the GANs are conditioned on a text embedding learned with a Deep Neural Network. A sparse visual attribute descriptor might describe “a small bird with an orange beak” as something like:" }, { "code": null, "e": 2099, "s": 2034, "text": " [ 0 0 0 1 . . . 0 0 . . . 1 . . . 0 0 0 . . . 0 0 1 . . .0 0 0]" }, { "code": null, "e": 2282, "s": 2099, "text": "The ones in the vector would represent attribute questions such as, orange (1/0)? small (1/0)? bird (1/0)? This description is difficult to collect and doesn’t work well in practice." }, { "code": null, "e": 2850, "s": 2282, "text": "Word embeddings have been the hero of natural language processing through the use of concepts such as Word2Vec. Word2Vec forms embeddings by learning to predict the context of a given word. Unfortunately, Word2Vec doesn’t quite translate to text-to-image since the context of the word doesn’t capture the visual properties as well as an embedding explicitly trained to do so does. Reed et al. [1] present a novel symmetric structured joint embedding of images and text descriptions to overcome this challenge which is presented in further detail later in the article." }, { "code": null, "e": 3835, "s": 2850, "text": "In addition to constructing good text embeddings, translating from text to images is highly multi-modal. 
The term ‘multi-modal’ is an important one to become familiar with in Deep Learning research. This refers to the fact that there are many different images of birds with correspond to the text description “bird”. Another example in speech is that there are many different accents, etc. that would result in different sounds corresponding to the text “bird”. Multi-modal learning is also present in image captioning, (image-to-text). However, this is greatly facilitated due to the sequential structure of text such that the model can predict the next word conditioned on the image as well as the previously predicted words. Multi-modal learning is traditionally very difficult, but is made much easier with the advancement of GANs (Generative Adversarial Networks), this framework creates an adaptive loss function which is well-suited for multi-modal tasks such as text-to-image." }, { "code": null, "e": 4913, "s": 3835, "text": "The picture above shows the architecture Reed et al. used to train this text-to-image GAN model. The most noteworthy takeaway from this diagram is the visualization of how the text embedding fits into the sequential processing of the model. In the Generator network, the text embedding is filtered trough a fully connected layer and concatenated with the random noise vector z. In this case, the text embedding is converted from a 1024x1 vector to 128x1 and concatenated with the 100x1 random noise vector z. On the side of the discriminator network, the text-embedding is also compressed through a fully connected layer into a 128x1 vector and then reshaped into a 4x4 matrix and depth-wise concatenated with the image representation. This image representation is derived after the input image has been convolved over multiple times, reduce the spatial resolution and extracting information. This embedding strategy for the discriminator is different from the conditional-GAN model in which the embedding is concatenated into the original image matrix and then convolved over." }, { "code": null, "e": 5411, "s": 4913, "text": "One general thing to note about the architecture diagram is to visualize how the DCGAN upsamples vectors or low-resolution images to produce high-resolution images. You can see each de-convolutional layer increases the spatial resolution of the image. Additionally, the depth of the feature maps decreases per layer. Lastly, you can see how the convolutional layers in the discriminator network decreases the spatial resolution and increase the depth of the feature maps as it processes the image." }, { "code": null, "e": 5959, "s": 5411, "text": "An interesting thing about this training process is that it is difficult to separate loss based on the generated image not looking realistic or loss based on the generated image not matching the text description. The authors of the paper describe the training dynamics being that initially the discriminator does not pay any attention to the text embedding, since the images created by the generator do not look real at all. Once G can generate images that at least pass the real vs. fake criterion, then the text embedding is factored in as well." }, { "code": null, "e": 6486, "s": 5959, "text": "The authors smooth out the training dynamics of this by adding pairs of real images with incorrect text descriptions which are labeled as ‘fake’. The discriminator is solely focused on the binary task of real versus fake and is not separately considering the image apart from the text. 
This is in contrast to an approach such as AC-GAN with one-hot encoded class labels. The AC-GAN discriminator outputs real vs. fake and uses an auxiliary classifier sharing the intermediate features to classify the class label of the image." }, { "code": null, "e": 6699, "s": 6486, "text": "The most interesting component of this paper is how they construct a unique text embedding that contains visual attributes of the image to be represented. This vector is constructed through the following process:" }, { "code": null, "e": 7647, "s": 6699, "text": "The loss function noted as equation (2) represents the overall objective of a text classifier that is optimizing the gated loss between two loss functions. These loss functions are shown in equations 3 and 4. The paper describes the intuition for this process as “A text encoding should have a higher compatibility score with images of the corresponding class compared to any other class and vice-versa”. The two terms each represent an image encoder and a text encoder. The image encoder is taken from the GoogLeNet image classification model. This classifier reduces the dimensionality of images until it is compressed to a 1024x1 vector. The objective function thus aims to minimize the distance between the image representation from GoogLeNet and the text representation from a character-level CNN or LSTM. Essentially, the vector encoding for the image classification is used to guide the text encodings based on similarity to similar images." }, { "code": null, "e": 7797, "s": 7647, "text": "The details of this are expanded on in the following paper, “Learning Deep Representations of Fine-Grained Visual Descriptions” also from Reed et al." }, { "code": null, "e": 7807, "s": 7797, "text": "arxiv.org" }, { "code": null, "e": 8051, "s": 7807, "text": "Note the term ‘Fine-grained’, this is used to separate tasks such as different types of birds and flowers compared to completely different objects such as cats, airplanes, boats, mountains, dogs, etc. as in what is used in ImageNet challenges." }, { "code": null, "e": 8499, "s": 8051, "text": "One of the interesting characteristics of Generative Adversarial Networks is that the latent vector z can be used to interpolate new instances. This is commonly referred to as “latent space addition”. An example would be to do “man with glasses” — “man without glasses” + “woman without glasses” and achieve a woman with glasses. In this paper, the authors aims to interpolate between the text embeddings. This is done with the following equation:" }, { "code": null, "e": 9002, "s": 8499, "text": "The discriminator has been trained to predict whether image and text pairs match or not. Therefore the images from interpolated text embeddings can fill in the gaps in the data manifold that were present during training. Using this as a regularization method for the training data space is paramount for the successful result of the model presented in this paper. This is a form of data augmentation since the interpolated text embeddings can expand the dataset used for training the text-to-image GAN." }, { "code": null, "e": 9292, "s": 9002, "text": "The experiments are conducted with three datasets, CUB dataset of bird images containing 11,788 bird images from 200 categories, Oxford-102 of Flowers containing 8,189 images from 102 different categories, and the MS-COCO dataset to demonstrate generalizability of the algorithm presented." 
}, { "code": null, "e": 9363, "s": 9292, "text": "Each of these images from CUB and Oxford-102 contains 5 text captions." }, { "code": null, "e": 9807, "s": 9363, "text": "All of the results presented above are on the Zero-Shot Learning task, meaning that the model has never seen that text description before during training. Each of the images above are fairly low-resolution at 64x64x3. Nevertheless, it is very encouraging to see this algorithm having some success on the very difficult multi-modal task of text-to-image. Thanks for reading this article, I highly recommend checking out the paper to learn more!" }, { "code": null, "e": 9954, "s": 9807, "text": "[1] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee. Generative Adversarial Text to Image Synthesis. 2016." } ]
How to use a click() method in Selenium with python?
While working on an application and navigating to different pages or different sections of a page, we need to click on various UI elements on a page, like a link or a button. All these actions are performed with the help of the click() method. Thus the click() method typically works with elements like buttons and links.

driver.find_element_by_xpath("//button[@id='value']").click()

Coding implementation with the click() method for clicking a link.

from selenium import webdriver
# browser exposes an executable file
# through the Selenium test we will invoke the executable file,
# which will then invoke the actual browser
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
# to maximize the browser window
driver.maximize_window()
# get method to launch the URL
driver.get("https://www.tutorialspoint.com/about/about_careers.htm")
# to refresh the browser
driver.refresh()
# identifying the link, then using the click() method
driver.find_element_by_link_text("Company").click()
# to close the browser
driver.close()

Coding implementation with the click() method for clicking a button.

from selenium import webdriver
# browser exposes an executable file
# through the Selenium test we will invoke the executable file,
# which will then invoke the actual browser
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
# to maximize the browser window
driver.maximize_window()
# get method to launch the URL
driver.get("https://www.tutorialspoint.com/about/about_careers.htm")
# to refresh the browser
driver.refresh()
# identifying the button, then using the click() method
driver.find_element_by_xpath("//button[contains(@class,'gsc-search')]").click()
# to close the browser
driver.close()
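As a side note (a sketch using Selenium's standard explicit-wait API, not part of the original example), a click is more reliable when the element is first confirmed to be clickable, which avoids failures on slow-loading pages:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
driver.get("https://www.tutorialspoint.com/about/about_careers.htm")
# wait up to 10 seconds for the link to become clickable, then click it
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "Company"))).click()
driver.close()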
[ { "code": null, "e": 1293, "s": 1062, "text": "While working on an application and navigating to different pages or different sections of a page, we need to click on various UI elements on a page like a link or a button. All these are performed with the help of click() method." }, { "code": null, "e": 1369, "s": 1293, "text": "Thus a click() method typically works with elements like buttons and links." }, { "code": null, "e": 1431, "s": 1369, "text": "driver.find_element_by_xpath(\"//button[id ='value']\").click()" }, { "code": null, "e": 1494, "s": 1431, "text": "Coding Implementation with click() method for clicking a link." }, { "code": null, "e": 2061, "s": 1494, "text": "from selenium import webdriver\n#browser exposes an executable file\n#Through Selenium test we will invoke the executable file which will then #invoke #actual browser\ndriver = webdriver.Chrome(executable_path=\"C:\\\\chromedriver.exe\")\n# to maximize the browser window\ndriver.maximize_window()\n#get method to launch the URL\ndriver.get(\"https://www.tutorialspoint.com/about/about_careers.htm\")\n#to refresh the browser\ndriver.refresh()\n# identifying the link then using click() method\ndriver.find_element_by_link_text(\"Company\").click()\n#to close the browser\ndriver.close()" }, { "code": null, "e": 2126, "s": 2061, "text": "Coding Implementation with click() method for clicking a button." }, { "code": null, "e": 2724, "s": 2126, "text": "from selenium import webdriver\n#browser exposes an executable file\n#Through Selenium test we will invoke the executable file which will then #invoke #actual browser\ndriver = webdriver.Chrome(executable_path=\"C:\\\\chromedriver.exe\")\n# to maximize the browser window\ndriver.maximize_window()\n#get method to launch the URL\ndriver.get(\"https://www.tutorialspoint.com/about/about_careers.htm\")\n#to refresh the browser\ndriver.refresh()\n# identifying the button then using click() method\ndriver.find_element_by_xpath(\"//button[contains(@class,'gsc-search')]\") .click()\n#to close the browser\ndriver.close()" } ]
Fantasy Football Data Analysis | Anish Kasam | Towards Data Science
With the 2020–2021 NFL fantasy football season about to come to a close, I was inspired to analyze data from the past few years.

Before I start jumping into the data analysis, here’s a summary of fantasy football so we are all on the same page:

To play fantasy football, you need to create or join a league on one of many websites (ESPN, Yahoo, Sleeper, CBS, NFL, etc). Each member of the league is the owner/general manager of their team. Each league has two parts to a roster: the starting lineup and the bench. The starting lineup includes a combination of quarterbacks (QB), running backs (RB), wide receivers (WR), tight ends (TE), flexes (FLEX), kickers (K), and defenses (D/ST) based on league settings. The bench can be made up of any players the owner chooses.

There are 3 types of leagues:

Redraft Leagues: each season, rosters completely reset and all players are available to draft
Keeper Leagues: each season, owners can keep a certain number of players for the following season, and then draft the rest of their roster
Dynasty Leagues: each season, owners keep their entire roster for the next season and draft rookies in the draft

There are 2 types of drafts:

Snake (Traditional) Drafts: each owner gets a chance to draft and the draft order reverses each round of the draft
Auction Draft: each owner gets a set amount of money, and each owner can bid for each player as long as they have a sufficient amount of money

There are 2 types of scoring systems:

Standard: 1 point per 25 passing yards, 4 points per passing touchdown, 1 point per 10 rushing or receiving yards, 6 points per rushing or receiving touchdown, -2 points per fumble lost or interception.
Point Per Reception (PPR): scoring is the same as Standard, except players get 1 additional point per reception.

After the draft, you can add unrostered players to your team or make trades with other owners to improve your team.

The first 13 weeks of the NFL season are known as the fantasy regular season; each week you play in a head-to-head matchup with another owner in your league. Whoever has the most points scored by their players that week receives a win. After the first 13 weeks, the owners with the most wins make the playoffs and are placed into a bracket. After the fantasy playoffs (weeks 14–16), the champion is crowned.

A vital part of fantasy football is deciding which players you are starting based on their matchups. Some players’ output is heavily reliant on the strength of the defense they play, while others are “matchup-proof”, meaning that regardless of the strength of the opposing team, they will perform well. I wanted to figure out which players were “matchup-proof” and which players were matchup-reliant.

This led me to ask the question: what is the effect of a defense’s strength on the fantasy output of a player?

The first step to any data analysis project is collecting the data. The data necessary is the yearly stats for every player, the weekly stats for every week for every player, the rankings of every defense against QBs, RBs, WRs, and TEs, and the schedules for each team from 2017–2019.
Yearly & Weekly Stats: https://www.fantasyfootballdatapros.com/csv_files Defense Rankings: https://www.fantasypros.com/nfl/points-allowed.php?year=2018 Schedules: https://www.4for4.com/teams/schedule/2017/grid The next step is to iterate through all the data files and transform all of the stats into PPR fantasy points: fantasypoints = 0# negative statsfantasypoints -= (stats["FL"][i] * 2)fantasypoints -= (stats["Int"][i] * 2)# positive statsfantasypoints += (stats["PassingYds"][i] * 0.04fantasypoints += (stats["PassingTD"][i] * 4)fantasypoints += (stats["RushingYds"][i] * 0.1)fantasypoints += (stats["RushingTD"][i] * 6)fantasypoints += (stats["ReceivingYds"][i] * 0.1)fantasypoints += (stats["ReceivingTD"][i] * 6)fantasypoints += (stats["Rec"][i]) After converting player stats into fantasy points, the data files looked like: Player Name, Position, Team, Games Played, Total Fantasy Points, Average Fantasy PointsTodd Gurley,RB,LAR,15.0,383.3,25.55Le'Veon Bell,RB,PIT,15.0,341.6,22.77Kareem Hunt,RB,KAN,16.0,295.2,18.45Alvin Kamara,RB,NOR,16.0,312.4,19.52 Once, all the fantasy points were listed for each player in the data files, I needed to pull the defense ranking of the team they played when they scored those points. The rankings are determined by the average number of fantasy points a defense gives to each position (QB, RB, WR, TE) throughout the year. This means that each defense has 4 different rankings: Team, QB Rank, RB Rank, WR Rank, TE RankARI,18,4,18,14ATL,23,7,14,13BAL,2,22,2,21BUF,5,32,5,22 To add the ranking of the defense to every weekly stat file, I had to iterate through all the weeks and all the schedule files to find the opposing team and then add the ranking of the defense to the weekly stat file. Once added, the weekly files looked like: Player Name, Position, Team, Total Fantasy Points, Opposing Team RankKirk Cousins,WAS,QB,26.8,17Tom Brady,NWE,QB,33.72,30Jared Goff,LAR,QB,23.58,31Case Keenum,MIN,QB,28.56,19 After the defensive rankings were added to the weekly files, all that was left to do was to iterate through all the weekly stat files and plot every player’s fantasy points against the ranking of the defense they played: plt.scatter(xdata, ydata)plt.title("Effect of Defense Strength on " + str(playername) + " in 2017")plt.xlabel("Defense Ranking (1-32) | Correlation = " + str(correlation))plt.ylabel("Fantasy Production Above/Below Yearly Mean")x = np.array(xdata)y = np.array(ydata)m, b = np.polyfit(x, y, 1)plt.plot(x, m*x + b)plt.plot(flatlinex, 0*flatlinex, linestyle = "--", dashes = (5, 5), color = "black")plt.show() Now all that was left to do is it interpret the data that I compiled. But, before I share my findings, let me explain the significance of a correlation coefficient. A correlation coefficient (r) quantifies the strength and direction of a linear relationship. A positive r indicates a positive linear relationship, and a negative r indicates a negative linear relationship. When r is greater than 0.6 or less than -0.6, it means that there is a strong correlation between the two variables. There were a few players each year that had a strong correlation between their fantasy output and defense strength: 2017: Todd Gurley: 0.02 (#1 Overall Player) Dak Prescott: 0.62 Ezekiel Elliot: 0.64 Alex Collins: 0.65 Drew Brees: 0.65 Charles Clay: 0.65 Marlon Mack: 0.67 OJ Howard: 0.68 Rex Burkhead: 0.71 Jared Goff: 0.85 Todd Gurley was the highest fantasy scorer in 2017 and had virtually 0 correlation between his production and the defense he played. 
On the other hand, Jared Goff had a correlation of 0.85, and his fantasy production was incredibly reliant on his matchup.

2018:

Todd Gurley: 0.28 (#1 Overall Player)
Josh Reynolds: 0.61
Davante Adams: 0.62
Duke Johnson: 0.62
Marcus Mariota: 0.68
Corey Davis: 0.69
Dalvin Cook: 0.69
Mitchell Trubisky: 0.72
Carson Wentz: -0.73
Russell Wilson: 0.74
Gus Edwards: 0.83

Todd Gurley was the highest fantasy scorer in 2018 and had little to no correlation between his fantasy output and the defense he played against. Surprisingly, Carson Wentz had a strong negative correlation between his fantasy output and the defense he played against. This means that he played better against better defenses. Although there is an outlier in his data, there is still a somewhat clear negative trend.

2019:

Christian McCaffrey: 0.42 (#1 Overall Player)
Tony Pollard: 0.62
Eric Ebron: 0.63
Andy Dalton: 0.64
Tevin Coleman: 0.65
Marquise Brown: 0.65
Chris Carson: 0.67
Jimmy Garoppolo: 0.68
Alshon Jeffery: 0.68
Odell Beckham: 0.68
Devonta Freeman: 0.71
Melvin Gordon: 0.73
Adam Thielen: 0.74
Jared Goff: 0.79

Christian McCaffrey scored the most fantasy points in 2019 and had only a slight correlation between his performance and the defense he played against. On the other hand, just like in 2017, Jared Goff had a very strong correlation between his performance and the strength of the defense he played.

However, in football, there are many factors that go into how a player plays other than the defense they’re playing. Just to name a few: game script, injuries, coaching, etc. Although some of these numbers may seem convincing, many, many other factors are playing a role.

This was my first time using pandas, numpy, and matplotlib in Python. You can check out my code on GitHub. Let me know if you have any feedback. Thanks!
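As an illustration of the correlation calculation itself (a minimal sketch with made-up numbers standing in for one player's weekly data, not the project's actual code), numpy can produce the Pearson r described above directly:

import numpy as np

# hypothetical example: opposing-defense rank (1 = toughest, 32 = easiest)
# and fantasy points scored in each of a player's games
defense_rank = np.array([3, 28, 15, 7, 31, 22, 10, 26])
fantasy_points = np.array([9.4, 27.1, 16.0, 11.2, 30.5, 21.8, 13.7, 24.9])

# off-diagonal entry of the 2x2 correlation matrix is r
r = np.corrcoef(defense_rank, fantasy_points)[0, 1]
print(round(r, 2))   # close to 1 here, i.e. a heavily matchup-dependent player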
[ { "code": null, "e": 301, "s": 172, "text": "With the 2020–2021 NFL fantasy football season about to come to a close, I was inspired to analyze data from the past few years:" }, { "code": null, "e": 417, "s": 301, "text": "Before I start jumping into the data analysis, here’s a summary of fantasy football so we are all on the same page:" }, { "code": null, "e": 942, "s": 417, "text": "To play fantasy football, you need to create or join a league on one of many websites (ESPN, Yahoo, Sleeper, CBS, NFL, etc). Each member of the league is the owner/general manager of their team. Each league has two parts to a roster: the starting lineup and the bench. The starting lineup includes a combination of quarterbacks (QB), running Backs (RB), wide Receivers (WR), tight ends (TE), flexes (FLEX), kickers (K), and defenses (D/ST) based on league settings. The bench can be made up of any players the owner chooses." }, { "code": null, "e": 972, "s": 942, "text": "There are 3 types of leagues:" }, { "code": null, "e": 1066, "s": 972, "text": "Redraft Leagues: each season, rosters completely reset and all players are available to draft" }, { "code": null, "e": 1205, "s": 1066, "text": "Keeper Leagues: each season, owners can keep a certain number of players for the following season, and then draft the rest of their roster" }, { "code": null, "e": 1318, "s": 1205, "text": "Dynasty Leagues: each season, owners keep their entire roster for the next season and draft rookies in the draft" }, { "code": null, "e": 1347, "s": 1318, "text": "There are 2 types of drafts:" }, { "code": null, "e": 1462, "s": 1347, "text": "Snake (Traditional) Drafts: each owner gets a chance to draft and the draft order reverses each round of the draft" }, { "code": null, "e": 1605, "s": 1462, "text": "Auction Draft: each owner gets a set amount of money, and each owner can bid for each player as long as they have a sufficient amount of money" }, { "code": null, "e": 1643, "s": 1605, "text": "There are 2 types of scoring systems:" }, { "code": null, "e": 1846, "s": 1643, "text": "Standard: 1 point per 25 passing yards, 4 points per passing touchdown, 1 point per 10 rushing or receiving yards, 6 points per rushing or receiving touchdown, -2 points per fumble lost or interception." }, { "code": null, "e": 1958, "s": 1846, "text": "Point Per Reception (PPR): Scoring is the same as Standard, except players get 1 additional point per reception" }, { "code": null, "e": 2074, "s": 1958, "text": "After the draft, you can add unrostered players to your team or make trades with other owners to improve your team." }, { "code": null, "e": 2481, "s": 2074, "text": "The first 13 weeks of the NFL season is known as the fantasy regular season, each week you play in a head to head matchup with another owner in your league. Whoever has the most points scored by their players that week receives a win. After the first 13 weeks, the owners with the most wins make the playoffs and are placed into a bracket. After the fantasy playoffs (weeks 14–16), the champion is crowned." }, { "code": null, "e": 2881, "s": 2481, "text": "A vital part of fantasy football is deciding which players you are starting based on their matchups. Some players’ output is heavily reliant on the strength of the defense they play while others are “matchup-proof”, meaning that regardless of the strength of the opposing team, they will perform well. I wanted to figure out which players were “matchup-proof” and which players were matchup reliant." 
}, { "code": null, "e": 2992, "s": 2881, "text": "This led me to ask the question: what is the effect of a defense’s strength on the fantasy output of a player?" }, { "code": null, "e": 3277, "s": 2992, "text": "The first step to any data analysis project is collecting the data. The data necessary is the yearly stats for every player, the weekly stats for every week for every player, the rankings of every defense against QBs, RBs, WRs, and TEs, and the schedules for each team from 2017–2019." }, { "code": null, "e": 3350, "s": 3277, "text": "Yearly & Weekly Stats: https://www.fantasyfootballdatapros.com/csv_files" }, { "code": null, "e": 3429, "s": 3350, "text": "Defense Rankings: https://www.fantasypros.com/nfl/points-allowed.php?year=2018" }, { "code": null, "e": 3487, "s": 3429, "text": "Schedules: https://www.4for4.com/teams/schedule/2017/grid" }, { "code": null, "e": 3598, "s": 3487, "text": "The next step is to iterate through all the data files and transform all of the stats into PPR fantasy points:" }, { "code": null, "e": 4034, "s": 3598, "text": "fantasypoints = 0# negative statsfantasypoints -= (stats[\"FL\"][i] * 2)fantasypoints -= (stats[\"Int\"][i] * 2)# positive statsfantasypoints += (stats[\"PassingYds\"][i] * 0.04fantasypoints += (stats[\"PassingTD\"][i] * 4)fantasypoints += (stats[\"RushingYds\"][i] * 0.1)fantasypoints += (stats[\"RushingTD\"][i] * 6)fantasypoints += (stats[\"ReceivingYds\"][i] * 0.1)fantasypoints += (stats[\"ReceivingTD\"][i] * 6)fantasypoints += (stats[\"Rec\"][i])" }, { "code": null, "e": 4113, "s": 4034, "text": "After converting player stats into fantasy points, the data files looked like:" }, { "code": null, "e": 4343, "s": 4113, "text": "Player Name, Position, Team, Games Played, Total Fantasy Points, Average Fantasy PointsTodd Gurley,RB,LAR,15.0,383.3,25.55Le'Veon Bell,RB,PIT,15.0,341.6,22.77Kareem Hunt,RB,KAN,16.0,295.2,18.45Alvin Kamara,RB,NOR,16.0,312.4,19.52" }, { "code": null, "e": 4511, "s": 4343, "text": "Once, all the fantasy points were listed for each player in the data files, I needed to pull the defense ranking of the team they played when they scored those points." }, { "code": null, "e": 4705, "s": 4511, "text": "The rankings are determined by the average number of fantasy points a defense gives to each position (QB, RB, WR, TE) throughout the year. This means that each defense has 4 different rankings:" }, { "code": null, "e": 4800, "s": 4705, "text": "Team, QB Rank, RB Rank, WR Rank, TE RankARI,18,4,18,14ATL,23,7,14,13BAL,2,22,2,21BUF,5,32,5,22" }, { "code": null, "e": 5018, "s": 4800, "text": "To add the ranking of the defense to every weekly stat file, I had to iterate through all the weeks and all the schedule files to find the opposing team and then add the ranking of the defense to the weekly stat file." 
}, { "code": null, "e": 5060, "s": 5018, "text": "Once added, the weekly files looked like:" }, { "code": null, "e": 5235, "s": 5060, "text": "Player Name, Position, Team, Total Fantasy Points, Opposing Team RankKirk Cousins,WAS,QB,26.8,17Tom Brady,NWE,QB,33.72,30Jared Goff,LAR,QB,23.58,31Case Keenum,MIN,QB,28.56,19" }, { "code": null, "e": 5456, "s": 5235, "text": "After the defensive rankings were added to the weekly files, all that was left to do was to iterate through all the weekly stat files and plot every player’s fantasy points against the ranking of the defense they played:" }, { "code": null, "e": 5862, "s": 5456, "text": "plt.scatter(xdata, ydata)plt.title(\"Effect of Defense Strength on \" + str(playername) + \" in 2017\")plt.xlabel(\"Defense Ranking (1-32) | Correlation = \" + str(correlation))plt.ylabel(\"Fantasy Production Above/Below Yearly Mean\")x = np.array(xdata)y = np.array(ydata)m, b = np.polyfit(x, y, 1)plt.plot(x, m*x + b)plt.plot(flatlinex, 0*flatlinex, linestyle = \"--\", dashes = (5, 5), color = \"black\")plt.show()" }, { "code": null, "e": 6027, "s": 5862, "text": "Now all that was left to do is it interpret the data that I compiled. But, before I share my findings, let me explain the significance of a correlation coefficient." }, { "code": null, "e": 6352, "s": 6027, "text": "A correlation coefficient (r) quantifies the strength and direction of a linear relationship. A positive r indicates a positive linear relationship, and a negative r indicates a negative linear relationship. When r is greater than 0.6 or less than -0.6, it means that there is a strong correlation between the two variables." }, { "code": null, "e": 6468, "s": 6352, "text": "There were a few players each year that had a strong correlation between their fantasy output and defense strength:" }, { "code": null, "e": 6474, "s": 6468, "text": "2017:" }, { "code": null, "e": 6512, "s": 6474, "text": "Todd Gurley: 0.02 (#1 Overall Player)" }, { "code": null, "e": 6531, "s": 6512, "text": "Dak Prescott: 0.62" }, { "code": null, "e": 6552, "s": 6531, "text": "Ezekiel Elliot: 0.64" }, { "code": null, "e": 6571, "s": 6552, "text": "Alex Collins: 0.65" }, { "code": null, "e": 6588, "s": 6571, "text": "Drew Brees: 0.65" }, { "code": null, "e": 6607, "s": 6588, "text": "Charles Clay: 0.65" }, { "code": null, "e": 6625, "s": 6607, "text": "Marlon Mack: 0.67" }, { "code": null, "e": 6641, "s": 6625, "text": "OJ Howard: 0.68" }, { "code": null, "e": 6660, "s": 6641, "text": "Rex Burkhead: 0.71" }, { "code": null, "e": 6677, "s": 6660, "text": "Jared Goff: 0.85" }, { "code": null, "e": 6932, "s": 6677, "text": "Todd Gurley was the highest fantasy scorer in 2017 and had virtually 0 correlation between his production and the defense he played. On the other hand, Jared Goff had a correlation of 0.85 and his fantasy production was incredibly reliant on his matchup." 
}, { "code": null, "e": 6938, "s": 6932, "text": "2018:" }, { "code": null, "e": 6976, "s": 6938, "text": "Todd Gurley: 0.28 (#1 Overall Player)" }, { "code": null, "e": 6996, "s": 6976, "text": "Josh Reynolds: 0.61" }, { "code": null, "e": 7016, "s": 6996, "text": "Davante Adams: 0.62" }, { "code": null, "e": 7035, "s": 7016, "text": "Duke Johnson: 0.62" }, { "code": null, "e": 7056, "s": 7035, "text": "Marcus Mariota: 0.68" }, { "code": null, "e": 7074, "s": 7056, "text": "Corey Davis: 0.69" }, { "code": null, "e": 7092, "s": 7074, "text": "Dalvin Cook: 0.69" }, { "code": null, "e": 7116, "s": 7092, "text": "Mitchell Trubisky: 0.72" }, { "code": null, "e": 7136, "s": 7116, "text": "Carson Wentz: -0.73" }, { "code": null, "e": 7156, "s": 7136, "text": "Russel Wilson: 0.74" }, { "code": null, "e": 7174, "s": 7156, "text": "Gus Edwards: 0.83" }, { "code": null, "e": 7591, "s": 7174, "text": "Todd Gurley was the highest fantasy scorer in 2018 and had little to no correlation between his fantasy output and the defense he played against. Surprisingly, Carson Wentz had a strong negative correlation between his fantasy output and the defense he played against. This means that he played better against better defenses. Although there is an outlier in his data, there is still a somewhat clear negative trend." }, { "code": null, "e": 7597, "s": 7591, "text": "2019:" }, { "code": null, "e": 7643, "s": 7597, "text": "Christian McCaffrey: 0.42 (#1 Overall Player)" }, { "code": null, "e": 7662, "s": 7643, "text": "Tony Pollard: 0.62" }, { "code": null, "e": 7679, "s": 7662, "text": "Eric Ebron: 0.63" }, { "code": null, "e": 7697, "s": 7679, "text": "Andy Dalton: 0.64" }, { "code": null, "e": 7717, "s": 7697, "text": "Tevin Coleman: 0.65" }, { "code": null, "e": 7738, "s": 7717, "text": "Marquise Brown: 0.65" }, { "code": null, "e": 7757, "s": 7738, "text": "Chris Carson: 0.67" }, { "code": null, "e": 7779, "s": 7757, "text": "Jimmy Garoppolo: 0.68" }, { "code": null, "e": 7799, "s": 7779, "text": "Alshon Jeffery 0.68" }, { "code": null, "e": 7819, "s": 7799, "text": "Odell Beckham: 0.68" }, { "code": null, "e": 7841, "s": 7819, "text": "Devonta Freeman: 0.71" }, { "code": null, "e": 7861, "s": 7841, "text": "Melvin Gordon: 0.73" }, { "code": null, "e": 7880, "s": 7861, "text": "Adam Thielen: 0.74" }, { "code": null, "e": 7897, "s": 7880, "text": "Jared Goff: 0.79" }, { "code": null, "e": 8195, "s": 7897, "text": "Christian McCaffrey scored the most fantasy points in 2019 and had a very slight correlation between his performance and the defense he played against. On the other hand, just like in 2017, Jared Goff had a very strong correlation between his performance and the strength of the defense he played." }, { "code": null, "e": 8456, "s": 8195, "text": "However, in football, many factors that go into how a player plays other than the defense they’re playing. Just to name a few: game script, injuries, coaching, etc. Although some of these numbers may seem convincing, many many other factors are playing a role." }, { "code": null, "e": 8563, "s": 8456, "text": "This was my first time using pandas, numpy, and matplotlib in Python. You can check out my code on Github." } ]
Set Up Virtual Environment in Julia With Playground.jl | by Emmett Boudreau | Towards Data Science
There are over a billion reasons why you would want to use a virtual environment when working on a project in any language, and that sentiment is no different in Julia. Developing web apps can be catastrophic when working with a team without a virtual environment. Fortunately, Julia has a package virtualization tool equivalent to virtualenv/pipenv called Playground.jl.

Before we can use Playground, of course we need to set it up. If you haven’t added the package yet, you can do it with:

julia> using Pkg
julia> Pkg.add("Playground")

Interestingly, this didn’t work for me, so I ended up switching to the Pkg REPL by simply pressing ], and then adding it through the URL, like this:

julia> ]
pkg> add https://github.com/rofinn/Playground.jl
 Cloning git-repo `https://github.com/rofinn/Playground.jl`
 Updating git-repo `https://github.com/rofinn/Playground.jl`
[ Info: Assigning UUID f8d4ef19-13c9-5673-8ace-5f74ae9cf246 to Playground
 Resolving package versions...
 Installed Syslogs ─ v0.3.0
 Installed Memento ─ v0.12.1
 Updating `~/.julia/environments/v1.0/Project.toml`
 [f8d4ef19] + Playground v0.0.0 #master (https://github.com/rofinn/Playground.jl)
 Updating `~/.julia/environments/v1.0/Manifest.toml`
 [f28f55f0] + Memento v0.12.1
 [f8d4ef19] + Playground v0.0.0 #master (https://github.com/rofinn/Playground.jl)
 [cea106d9] + Syslogs v0.3.0
 Building Playground → `~/.julia/packages/Playground/AhsNg/deps/build.log`

Cool!

Building Playground also threw an error for me, a Stacktrace: Pkg not defined error. To counter this, I had to first run:

ENV["PLAYGROUND_INSTALL"] = true

Then I had to build it with Pkg from the Julia REPL... I’m not sure why this is, but I imagine it has something to do with a Playground dependency.

Although this is not essential, I also added the following to my .bashrc file, as sometimes it will be required:

echo "PATH=~/.playground/bin/:$PATH" >> ~/.bashrc

Usage is relatively simple; however, note that you might need to close and reopen your terminal for the .bashrc change to take effect. To create our environment, we use playground create:

playground create --name example

You can also create an environment from a REQUIREMENTS file:

playground create --requirements /path

If using DECLARE files, you should make sure that DeclarativePackages.jl is already installed. We can activate our playground environment like this:

playground activate /path/to/your/playground

Also, playground saves the name of our environment, so we can use the --name flag in our terminal:

playground activate --name example

In order to remove our environment, we’ll use rm:

playground rm [playground-name|julia-version] --dir /path

Also, we have playground list and playground clean:

playground list
playground clean

That’s the rundown! That’s about all there is to it; Playground is pretty easy to use and not too in-depth. So hopefully, given the lack of Playground documentation resources, this was relatively valuable, and you now know how to create and manage virtual environments in Julia!
[ { "code": null, "e": 540, "s": 171, "text": "There are over a billion reasons why you would want to use a virtual environment when working on a project in any language, and that sentiment is no different in Julia. Developing web-apps can be catastrophic when working with a team without a virtual environment. Fortunately, Julia has a package virtualization tool equivalent to virtual/pipenv called Playground.jl." }, { "code": null, "e": 660, "s": 540, "text": "Before we can use Playground, of course we need to set it up. If you haven’t added the package yet, you can do it with:" }, { "code": null, "e": 705, "s": 660, "text": "julia> using Pkgjulia> Pkg.add(\"Playground\")" }, { "code": null, "e": 854, "s": 705, "text": "Interestingly, this didn’t work for me, so I ended up switching to the Pkg REPL by simply pressing ], and then adding it through the URL, like this:" }, { "code": null, "e": 1594, "s": 854, "text": "julia> ]pkg> add https://github.com/rofinn/Playground.jl Cloning git-repo `https://github.com/rofinn/Playground.jl` Updating git-repo `https://github.com/rofinn/Playground.jl`[ Info: Assigning UUID f8d4ef19-13c9-5673-8ace-5f74ae9cf246 to Playground Resolving package versions... Installed Syslogs ─ v0.3.0 Installed Memento ─ v0.12.1 Updating `~/.julia/environments/v1.0/Project.toml` [f8d4ef19] + Playground v0.0.0 #master (https://github.com/rofinn/Playground.jl) Updating `~/.julia/environments/v1.0/Manifest.toml` [f28f55f0] + Memento v0.12.1 [f8d4ef19] + Playground v0.0.0 #master (https://github.com/rofinn/Playground.jl) [cea106d9] + Syslogs v0.3.0 Building Playground → `~/.julia/packages/Playground/AhsNg/deps/build.log`" }, { "code": null, "e": 1600, "s": 1594, "text": "Cool!" }, { "code": null, "e": 1724, "s": 1600, "text": "Building Playground for me also through an error, a Stacktrace: Pkg not defined error. To counter this, I had to first run:" }, { "code": null, "e": 1757, "s": 1724, "text": "ENV[\"PLAYGROUND_INSTALL\"] = true" }, { "code": null, "e": 1905, "s": 1757, "text": "Then I had to build it with Pkg from the Julia REPL... I’m not sure why this is, but I imagine it has something to do with a Playground dependency." }, { "code": null, "e": 2018, "s": 1905, "text": "Although this is not essential, I also added the following to my .bashrc file, as sometimes it will be required:" }, { "code": null, "e": 2068, "s": 2018, "text": "echo \"PATH=~/.playground/bin/:$PATH\" >> ~/.bashrc" }, { "code": null, "e": 2272, "s": 2068, "text": "Usage is relatively simple, however something to note is you might need to close and reopen your terminal for the bashrc file to echo the new command. To create our environment, we use playground create:" }, { "code": null, "e": 2305, "s": 2272, "text": "playground create --name example" }, { "code": null, "e": 2361, "s": 2305, "text": "You can also create an environment from a REQUIREMENTS:" }, { "code": null, "e": 2400, "s": 2361, "text": "playground create --requirements /path" }, { "code": null, "e": 2548, "s": 2400, "text": "If using DECLARE files you should make sure that DeclarativePackages.jl is already installed. We can activate our playground environment like this:" }, { "code": null, "e": 2593, "s": 2548, "text": "playground activate /path/to/your/playground" }, { "code": null, "e": 2694, "s": 2593, "text": "Also, playground saves the name of our environment, so we can use the name bash tag in our terminal." 
}, { "code": null, "e": 2729, "s": 2694, "text": "playground activate --name example" }, { "code": null, "e": 2779, "s": 2729, "text": "In order to remove our environment, we’ll use rm." }, { "code": null, "e": 2837, "s": 2779, "text": "playground rm [playground-name|julia-version] --dir /path" }, { "code": null, "e": 2889, "s": 2837, "text": "Also, we have playground list and playground clean." }, { "code": null, "e": 2921, "s": 2889, "text": "playground listplayground clean" } ]
Program to find leaf and non-leaf nodes of a binary tree in Python
Suppose we have a binary tree; we have to find a list of two numbers where the first number is the count of leaves in the tree and the second number is the count of non-leaf nodes. So, if the input is the tree constructed in the example below, then the output will be (3, 2), as there are 3 leaves and 2 non-leaf nodes. To solve this, we will follow these steps − if n is null, then return (0, 0) if left of n is null and right of n is null, then return (1, 0) left := solve(left of n) right := solve(right of n) return (left[0] + right[0], 1 + left[1] + right[1]) Let us see the following implementation to get a better understanding − class TreeNode: def __init__(self, data, left = None, right = None): self.val = data self.left = left self.right = right class Solution: def solve(self, n): if not n: return 0, 0 if not n.left and not n.right: return 1, 0 left, right = self.solve(n.left), self.solve(n.right) return left[0] + right[0], 1 + left[1] + right[1] ob = Solution() root = TreeNode(6) root.left = TreeNode(2) root.right = TreeNode(6) root.right.left = TreeNode(10) root.right.right = TreeNode(2) print(ob.solve(root)) root = TreeNode(6) root.left = TreeNode(2) root.right = TreeNode(6) root.right.left = TreeNode(10) root.right.right = TreeNode(2) (3, 2)
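The recursive solution above is the one given in the article; for very deep trees it can hit Python's recursion limit, so the same pair of counts can also be computed iteratively. A minimal breadth-first sketch (it reuses the TreeNode class from the example; the helper name count_leaf_and_internal is my own):

from collections import deque

def count_leaf_and_internal(root):
    # Returns (leaf_count, non_leaf_count) without recursion.
    if root is None:
        return (0, 0)
    leaves = internal = 0
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node.left is None and node.right is None:
            leaves += 1      # no children -> leaf
        else:
            internal += 1    # at least one child -> non-leaf
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return (leaves, internal)

# With the tree built above, count_leaf_and_internal(root) also returns (3, 2).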
[ { "code": null, "e": 1243, "s": 1062, "text": "Suppose we have a binary tree, we have to find a list of two numbers where the first number is the count of leaves in the tree and the second number is the count of non-leaf nodes." }, { "code": null, "e": 1268, "s": 1243, "text": "So, if the input is like" }, { "code": null, "e": 1344, "s": 1268, "text": "then the output will be (3, 2), as there are 3 leaves and 2 non-leaf nodes." }, { "code": null, "e": 1388, "s": 1344, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 1420, "s": 1388, "text": "if n is null, thenreturn (0, 0)" }, { "code": null, "e": 1434, "s": 1420, "text": "return (0, 0)" }, { "code": null, "e": 1497, "s": 1434, "text": "if left of n is null and right of n is null, thenreturn (1, 0)" }, { "code": null, "e": 1511, "s": 1497, "text": "return (1, 0)" }, { "code": null, "e": 1536, "s": 1511, "text": "left := solve(left of n)" }, { "code": null, "e": 1563, "s": 1536, "text": "right := solve(right of n)" }, { "code": null, "e": 1615, "s": 1563, "text": "return (left[0] + right[0], 1 + left[1] + right[1])" }, { "code": null, "e": 1685, "s": 1615, "text": "Let us see the following implementation to get better understanding −" }, { "code": null, "e": 1696, "s": 1685, "text": " Live Demo" }, { "code": null, "e": 2256, "s": 1696, "text": "class TreeNode:\n def __init__(self, data, left = None, right = None):\n self.val = data\n self.left = left\n self.right = right\nclass Solution:\n def solve(self, n):\n if not n:\n return 0, 0\n if not n.left and not n.right:\n return 1, 0\n left, right = self.solve(n.left), self.solve(n.right)\n return left[0] + right[0], 1 + left[1] + right[1]\nob = Solution()\nroot = TreeNode(6)\nroot.left = TreeNode(2)\nroot.right = TreeNode(6)\nroot.right.left = TreeNode(10)\nroot.right.right = TreeNode(2)\nprint(ob.solve(root))" }, { "code": null, "e": 2386, "s": 2256, "text": "root = TreeNode(6)\nroot.left = TreeNode(2)\nroot.right = TreeNode(6)\nroot.right.left = TreeNode(10)\nroot.right.right = TreeNode(2)" }, { "code": null, "e": 2393, "s": 2386, "text": "(3, 2)" } ]
Detecting the first non-repeating string in Array in JavaScript
Suppose we have an array of strings like this, where the strings might contain duplicate characters − const arr = ['54gdgdfe3', '434ffd', '43frdf', '43fdhnh', 'wgcxhjny', 'fsdf34']; We are required to write a JavaScript function that takes in one such array and returns the very first element of the array that contains no duplicate characters. If no such string exists, we should return false. The code for this will be − const arr = ['54gdgdfe3', '434ffd', '43frdf', '43fdhnh', 'wgcxhjny', 'fsdf34']; const isUnique = str => { return str.split('').every(el => str.indexOf(el) === str.lastIndexOf(el)); }; const findUniqueString = arr => { for(let i = 0; i < arr.length; i++){ if(isUnique(arr[i])){ return arr[i]; }; }; return false; }; console.log(findUniqueString(arr)); The output in the console will be − wgcxhjny
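The isUnique helper above rescans the string with indexOf and lastIndexOf for every character, which is quadratic in the string length. The same first-unique-string idea can be written with a single counting pass per string; a small sketch of that alternative, shown here in Python with collections.Counter (the counting idea carries over directly to JavaScript with a Map):

from collections import Counter

def find_unique_string(strings):
    # Return the first string whose characters are all distinct, else False.
    for s in strings:
        counts = Counter(s)  # one pass over the string
        if all(c == 1 for c in counts.values()):
            return s
    return False

arr = ['54gdgdfe3', '434ffd', '43frdf', '43fdhnh', 'wgcxhjny', 'fsdf34']
print(find_unique_string(arr))  # -> wgcxhjny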
[ { "code": null, "e": 1160, "s": 1062, "text": "Suppose, we have an array of strings like this where strings might contain duplicate characters −" }, { "code": null, "e": 1240, "s": 1160, "text": "const arr = ['54gdgdfe3', '434ffd', '43frdf', '43fdhnh', 'wgcxhjny', 'fsdf34'];" }, { "code": null, "e": 1469, "s": 1240, "text": "We are required to write a JavaScript function that takes in one such array and returns the very first element from the array that contains 0 duplicate characters. If there does not exist any such string, we should return false." }, { "code": null, "e": 1521, "s": 1469, "text": "Therefore, let’s write the code for this function −" }, { "code": null, "e": 1549, "s": 1521, "text": "The code for this will be −" }, { "code": null, "e": 1933, "s": 1549, "text": "const arr = ['54gdgdfe3', '434ffd', '43frdf', '43fdhnh', 'wgcxhjny', 'fsdf34'];\nconst isUnique = str => {\n return str.split('').every(el => str.indexOf(el) === str.lastIndexOf(el));\n};\nconst findUniqueString = arr => {\n for(let i = 0; i < arr.length; i++){\n if(isUnique(arr[i])){\n return arr[i];\n };\n };\n return false;\n};\nconsole.log(findUniqueString(arr));" }, { "code": null, "e": 1969, "s": 1933, "text": "The output in the console will be −" }, { "code": null, "e": 1978, "s": 1969, "text": "wgcxhjny" } ]
Master the art of subplots in Python | by Ankit Gupta | Towards Data Science
Often while working with data, no matter big or small, sometimes you want to compare things side-by-side or plot different attributes or features individually. In such cases, a single figure is rendered insufficient. Thus, you need to know the art of working with subplots. This article will focus on the concept of subplots. It will teach you six unique ways to create very simple and very complex grids in Python using Matplotlib. “For every failure, there’s an alternative course of action. You just have to find it. When you come to a roadblock, take a detour” — Mary Kay Ash Let’s first import some basic modules and use a fancy style sheet to give an artistic touch to our figures. %matplotlib inline # To enable inline plotting in Jupyter Notebookimport numpy as npimport matplotlib.pyplot as pltplt.style.use('fivethirtyeight') # For better style Let’s define some data to plot. We use our immortal sin and cosine curves for x∈(0, 3π). x = np.linspace(0., 3*np.pi, 100) # 0 to 3*Pi in 100 stepsy_1 = np.sin(x) y_2 = np.cos(x) Let’s now create our very first two subplots with a single row and two columns. Since theaxes object contains two subplots, you can access them using indices [0] and [1] because indexing starts at 0 in Python. fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(9, 3))axes[0].plot(x, y_1, '-', c='orange', label='sin(x)')axes[1].plot(x, y_2, '-', c='magenta', label='cos(x)')axes[0].legend(fontsize=16, frameon=False)axes[1].legend(fontsize=16, frameon=False)fig.suptitle('Subplots without shared y-axis') Note: If you don’t like the indices notation, you can also use names for your axes as shown below, and then use them directly for plotting. The tuple (ax1, ax2) below represents the axis handles to individual subplots. Since both the above subplots have the same y-axis limits, you can remove the redundant y-axis values from the right-hand side subplot using the keyword sharey=True. fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3), sharey=True)ax1.plot(...)ax2.plot(...) Subplots spanning multiple rows: In the above figure, the subplots were plotted in a columnar way. To plot them in two rows, you can use nrows=2, ncols=1. Now you will have to use the keyword sharex. When you have more than 1 row and 1 column, you need two indices to access the individual subplots as shown in the code below. The indices start at 0. So, for 2 rows and 2 columns, the indices will be 0 and 1. The first and the second indices in the slice notation [i, j] correspond to the row (i) and column (j) number, respectively. fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(9, 5), sharey='row', sharex='row')axes[0, 0].plot(x+1, y_1+1, '-', c='orange')axes[0, 1].plot(x, y_2, '-', c='magenta')axes[1, 0].plot(x, y_1**2, '--', c='orange')axes[1, 1].plot(x, y_2**2, '--', c='magenta')axes[0, 0].set_ylabel(r'sin(x)')axes[0, 1].set_ylabel(r'cos(x)')axes[1, 0].set_ylabel(r'sin$^2$(x)')axes[1, 1].set_ylabel(r'cos$^2$(x)')fig.tight_layout() In the above figure, you can choose how you want to share the x and the y-axes. I have chosen sharex='col' and sharey='row' which means the x-axis is shared across each column and the y-axis is shared across each row. Notice the different axes limits in the above figure to make sense of this. As mentioned earlier, you can also use tuples to name your axes and avoid the index notation. The first tuple (ax1, ax2) corresponds to the first row subplots. Likewise, (ax3, ax4) corresponds to the second row. 
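One practical note on the indexing discussed above: plt.subplots returns a single Axes object for a 1x1 grid, a 1-D array when either nrows or ncols is 1, and a 2-D array otherwise. When the same styling should be applied to every subplot regardless of grid shape, iterating over axes.flat avoids special-casing the indexing; a short sketch:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 3 * np.pi, 100)
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 5))

# axes is a 2-D array here; axes.flat walks it row by row.
for i, ax in enumerate(axes.flat):
    ax.plot(x, np.sin((i + 1) * x))
    ax.set_title(f"sin({i + 1}x)")
    ax.grid(True)

fig.tight_layout()
plt.show()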
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(9, 5)) In this method, you first create the figure object and then add the subplots manually one after the other. The following example creates a 2 x 2 grid. If you want multiple subplots to share the same x or y-axis, you can specify the corresponding axis while creating the subplot as demonstrated below. NOTE: Here, the numbering starts at 1. So, for a 2 x 2 grid, the upper row will use the numbers(2, 2, 1) , (2, 2, 2) and the second row will use the numbers (2, 2, 3) , (2, 2, 4) , respectively. The first two indices are the number of total rows and columns, respectively while the third number specifies the subplot. fig = plt.figure(figsize=(8, 6))ax1 = plt.subplot(2, 2, 1, frameon=True) ax1.plot(x+1, y_1+1)ax1.set_title('ax1')ax2 = plt.subplot(2, 2, 2, sharex=ax1, facecolor='orange')ax2.plot(x, y_2, '-r')ax2.set_title('Shares x-axis with ax1')ax3 = plt.subplot(2, 2, 3, sharey=ax1)ax3.plot(x, y_1**2, '-g')ax3.set_title('Shares y-axis with ax1')ax4 = plt.subplot(2, 2, 4, facecolor='orchid')ax4.plot(x, y_2**2, '-b')fig.tight_layout() This method is useful for generating complex grids where the subplots span multiple rows or columns. Here, you create subplots at specified locations within the overall grid. You have to first specify the overall grid size like (3, 3) in the example code below. Then you specify the starting location of the subplot using a tuple of indices in the order (row, column) where the indices start at 0. So, for a 3 x 3 grid, both the row and the column indices will be 0, 1, and 2. If you want a subplot to span multiple rows or columns, you specify the length of the span using the keywords rowspan or colspan. def add_title(axes): for i, ax in enumerate(axes): ax.set_title("ax%d" % (i+1), fontsize=18)fig = plt.figure(figsize=(8, 8))ax1 = plt.subplot2grid((3, 3), (0, 0), colspan=2)ax2 = plt.subplot2grid((3, 3), (0, 2), rowspan=3)ax3 = plt.subplot2grid((3, 3), (1, 0), rowspan=2)ax4 = plt.subplot2grid((3, 3), (1, 1))ax5 = plt.subplot2grid((3, 3), (2, 1))add_title(fig.axes) This method is also useful for generating complex grids. You need a basic understanding of the slicing and indexing notation of NumPy arrays to work with this method. For example, the slice [0, :] means the first row (index 0) and all the columns (: represents all), the slice[1, :-1] means the second row (index 1) and all the columns except the last one (:-1 represents all but last). import matplotlib.gridspec as gridspecfig = plt.figure(constrained_layout=True, figsize=(8, 8))spec = gridspec.GridSpec(ncols=3, nrows=3, figure=fig)ax1 = fig.add_subplot(spec[0, :])ax2 = fig.add_subplot(spec[1, :-1])ax3 = fig.add_subplot(spec[1, -1])ax4 = fig.add_subplot(spec[2, 1:])ax5 = fig.add_subplot(spec[2, 0])# Now you can plot individually as ax1.plot(), ax2.plot() etc. This method is quite similar to Way 3 and uses the same index notation as explained above. This feature is available only in Matplotlib 3+ versions. fig = plt.figure(constrained_layout=True, figsize=(8, 8))spec = fig.add_gridspec(3, 3)ax1 = fig.add_subplot(spec[0, :-1])ax1.set_title('ax1')ax2 = fig.add_subplot(spec[:, -1])ax2.set_title('ax2')ax3 = fig.add_subplot(spec[1:, 0])ax3.set_title('ax3')ax4 = fig.add_subplot(spec[1, 1])ax4.set_title('ax4')ax5 = fig.add_subplot(spec[-1, 1])ax5.set_title('ax5') The latest version of Matplotlib 3.3 has introduced a new, less verbose, and a semantic way to generate complex, subplot grids. via subplot_mosaic(). You can also name your subplots as you like. 
You can also use short-hand ASCII notations to recreate the figure below. The cool part is that, to generate the subplot grid shown below, you can pass the layout in the form of a list. A missing subplot is indicated as '.'. To span a subplot over two columns, you can repeat the name as I did for 'bar'. To span across rows (vertically), repeat the name vertically below in the second list. You can also use the names 'bar', 'hist', and 'scatter' to control/modify the properties of the corresponding subplots using a dictionary. axes = plt.figure(constrained_layout=True).subplot_mosaic( [['.', 'bar', 'bar'], # Note repitition of 'bar' ['hist', '.', 'scatter']])for k, ax in axes.items(): ax.text(0.5, 0.5, k, ha='center', va='center', fontsize=36, color='magenta')# Using dictionary to change subplot propertiesaxes['bar'].set_title('A bar plot', fontsize=24) axes['hist'].set_title('A histogram', fontsize=24) axes['scatter'].set_title('A scatter plot', fontsize=24) This brings me to the end of my post. If you are interested in knowing more about the latest features of Matplotlib, refer to my following articles.
[ { "code": null, "e": 446, "s": 172, "text": "Often while working with data, no matter big or small, sometimes you want to compare things side-by-side or plot different attributes or features individually. In such cases, a single figure is rendered insufficient. Thus, you need to know the art of working with subplots." }, { "code": null, "e": 605, "s": 446, "text": "This article will focus on the concept of subplots. It will teach you six unique ways to create very simple and very complex grids in Python using Matplotlib." }, { "code": null, "e": 752, "s": 605, "text": "“For every failure, there’s an alternative course of action. You just have to find it. When you come to a roadblock, take a detour” — Mary Kay Ash" }, { "code": null, "e": 860, "s": 752, "text": "Let’s first import some basic modules and use a fancy style sheet to give an artistic touch to our figures." }, { "code": null, "e": 1027, "s": 860, "text": "%matplotlib inline # To enable inline plotting in Jupyter Notebookimport numpy as npimport matplotlib.pyplot as pltplt.style.use('fivethirtyeight') # For better style" }, { "code": null, "e": 1116, "s": 1027, "text": "Let’s define some data to plot. We use our immortal sin and cosine curves for x∈(0, 3π)." }, { "code": null, "e": 1206, "s": 1116, "text": "x = np.linspace(0., 3*np.pi, 100) # 0 to 3*Pi in 100 stepsy_1 = np.sin(x) y_2 = np.cos(x)" }, { "code": null, "e": 1416, "s": 1206, "text": "Let’s now create our very first two subplots with a single row and two columns. Since theaxes object contains two subplots, you can access them using indices [0] and [1] because indexing starts at 0 in Python." }, { "code": null, "e": 1712, "s": 1416, "text": "fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(9, 3))axes[0].plot(x, y_1, '-', c='orange', label='sin(x)')axes[1].plot(x, y_2, '-', c='magenta', label='cos(x)')axes[0].legend(fontsize=16, frameon=False)axes[1].legend(fontsize=16, frameon=False)fig.suptitle('Subplots without shared y-axis')" }, { "code": null, "e": 2097, "s": 1712, "text": "Note: If you don’t like the indices notation, you can also use names for your axes as shown below, and then use them directly for plotting. The tuple (ax1, ax2) below represents the axis handles to individual subplots. Since both the above subplots have the same y-axis limits, you can remove the redundant y-axis values from the right-hand side subplot using the keyword sharey=True." }, { "code": null, "e": 2190, "s": 2097, "text": "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3), sharey=True)ax1.plot(...)ax2.plot(...)" }, { "code": null, "e": 2390, "s": 2190, "text": "Subplots spanning multiple rows: In the above figure, the subplots were plotted in a columnar way. To plot them in two rows, you can use nrows=2, ncols=1. Now you will have to use the keyword sharex." }, { "code": null, "e": 2725, "s": 2390, "text": "When you have more than 1 row and 1 column, you need two indices to access the individual subplots as shown in the code below. The indices start at 0. So, for 2 rows and 2 columns, the indices will be 0 and 1. The first and the second indices in the slice notation [i, j] correspond to the row (i) and column (j) number, respectively." 
}, { "code": null, "e": 3164, "s": 2725, "text": "fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(9, 5), sharey='row', sharex='row')axes[0, 0].plot(x+1, y_1+1, '-', c='orange')axes[0, 1].plot(x, y_2, '-', c='magenta')axes[1, 0].plot(x, y_1**2, '--', c='orange')axes[1, 1].plot(x, y_2**2, '--', c='magenta')axes[0, 0].set_ylabel(r'sin(x)')axes[0, 1].set_ylabel(r'cos(x)')axes[1, 0].set_ylabel(r'sin$^2$(x)')axes[1, 1].set_ylabel(r'cos$^2$(x)')fig.tight_layout()" }, { "code": null, "e": 3458, "s": 3164, "text": "In the above figure, you can choose how you want to share the x and the y-axes. I have chosen sharex='col' and sharey='row' which means the x-axis is shared across each column and the y-axis is shared across each row. Notice the different axes limits in the above figure to make sense of this." }, { "code": null, "e": 3670, "s": 3458, "text": "As mentioned earlier, you can also use tuples to name your axes and avoid the index notation. The first tuple (ax1, ax2) corresponds to the first row subplots. Likewise, (ax3, ax4) corresponds to the second row." }, { "code": null, "e": 3737, "s": 3670, "text": "fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(9, 5))" }, { "code": null, "e": 4038, "s": 3737, "text": "In this method, you first create the figure object and then add the subplots manually one after the other. The following example creates a 2 x 2 grid. If you want multiple subplots to share the same x or y-axis, you can specify the corresponding axis while creating the subplot as demonstrated below." }, { "code": null, "e": 4356, "s": 4038, "text": "NOTE: Here, the numbering starts at 1. So, for a 2 x 2 grid, the upper row will use the numbers(2, 2, 1) , (2, 2, 2) and the second row will use the numbers (2, 2, 3) , (2, 2, 4) , respectively. The first two indices are the number of total rows and columns, respectively while the third number specifies the subplot." }, { "code": null, "e": 4780, "s": 4356, "text": "fig = plt.figure(figsize=(8, 6))ax1 = plt.subplot(2, 2, 1, frameon=True) ax1.plot(x+1, y_1+1)ax1.set_title('ax1')ax2 = plt.subplot(2, 2, 2, sharex=ax1, facecolor='orange')ax2.plot(x, y_2, '-r')ax2.set_title('Shares x-axis with ax1')ax3 = plt.subplot(2, 2, 3, sharey=ax1)ax3.plot(x, y_1**2, '-g')ax3.set_title('Shares y-axis with ax1')ax4 = plt.subplot(2, 2, 4, facecolor='orchid')ax4.plot(x, y_2**2, '-b')fig.tight_layout()" }, { "code": null, "e": 4955, "s": 4780, "text": "This method is useful for generating complex grids where the subplots span multiple rows or columns. Here, you create subplots at specified locations within the overall grid." }, { "code": null, "e": 5387, "s": 4955, "text": "You have to first specify the overall grid size like (3, 3) in the example code below. Then you specify the starting location of the subplot using a tuple of indices in the order (row, column) where the indices start at 0. So, for a 3 x 3 grid, both the row and the column indices will be 0, 1, and 2. If you want a subplot to span multiple rows or columns, you specify the length of the span using the keywords rowspan or colspan." 
}, { "code": null, "e": 5764, "s": 5387, "text": "def add_title(axes): for i, ax in enumerate(axes): ax.set_title(\"ax%d\" % (i+1), fontsize=18)fig = plt.figure(figsize=(8, 8))ax1 = plt.subplot2grid((3, 3), (0, 0), colspan=2)ax2 = plt.subplot2grid((3, 3), (0, 2), rowspan=3)ax3 = plt.subplot2grid((3, 3), (1, 0), rowspan=2)ax4 = plt.subplot2grid((3, 3), (1, 1))ax5 = plt.subplot2grid((3, 3), (2, 1))add_title(fig.axes)" }, { "code": null, "e": 5931, "s": 5764, "text": "This method is also useful for generating complex grids. You need a basic understanding of the slicing and indexing notation of NumPy arrays to work with this method." }, { "code": null, "e": 6151, "s": 5931, "text": "For example, the slice [0, :] means the first row (index 0) and all the columns (: represents all), the slice[1, :-1] means the second row (index 1) and all the columns except the last one (:-1 represents all but last)." }, { "code": null, "e": 6532, "s": 6151, "text": "import matplotlib.gridspec as gridspecfig = plt.figure(constrained_layout=True, figsize=(8, 8))spec = gridspec.GridSpec(ncols=3, nrows=3, figure=fig)ax1 = fig.add_subplot(spec[0, :])ax2 = fig.add_subplot(spec[1, :-1])ax3 = fig.add_subplot(spec[1, -1])ax4 = fig.add_subplot(spec[2, 1:])ax5 = fig.add_subplot(spec[2, 0])# Now you can plot individually as ax1.plot(), ax2.plot() etc." }, { "code": null, "e": 6681, "s": 6532, "text": "This method is quite similar to Way 3 and uses the same index notation as explained above. This feature is available only in Matplotlib 3+ versions." }, { "code": null, "e": 7038, "s": 6681, "text": "fig = plt.figure(constrained_layout=True, figsize=(8, 8))spec = fig.add_gridspec(3, 3)ax1 = fig.add_subplot(spec[0, :-1])ax1.set_title('ax1')ax2 = fig.add_subplot(spec[:, -1])ax2.set_title('ax2')ax3 = fig.add_subplot(spec[1:, 0])ax3.set_title('ax3')ax4 = fig.add_subplot(spec[1, 1])ax4.set_title('ax4')ax5 = fig.add_subplot(spec[-1, 1])ax5.set_title('ax5')" }, { "code": null, "e": 7307, "s": 7038, "text": "The latest version of Matplotlib 3.3 has introduced a new, less verbose, and a semantic way to generate complex, subplot grids. via subplot_mosaic(). You can also name your subplots as you like. You can also use short-hand ASCII notations to recreate the figure below." }, { "code": null, "e": 7764, "s": 7307, "text": "The cool part is that, to generate the subplot grid shown below, you can pass the layout in the form of a list. A missing subplot is indicated as '.'. To span a subplot over two columns, you can repeat the name as I did for 'bar'. To span across rows (vertically), repeat the name vertically below in the second list. You can also use the names 'bar', 'hist', and 'scatter' to control/modify the properties of the corresponding subplots using a dictionary." }, { "code": null, "e": 8257, "s": 7764, "text": "axes = plt.figure(constrained_layout=True).subplot_mosaic( [['.', 'bar', 'bar'], # Note repitition of 'bar' ['hist', '.', 'scatter']])for k, ax in axes.items(): ax.text(0.5, 0.5, k, ha='center', va='center', fontsize=36, color='magenta')# Using dictionary to change subplot propertiesaxes['bar'].set_title('A bar plot', fontsize=24) axes['hist'].set_title('A histogram', fontsize=24) axes['scatter'].set_title('A scatter plot', fontsize=24)" } ]
GANfolk: Using AI to Create Portraits for NFTs | Towards Data Science
I have previously written on Medium about using AI to create visual art, like abstract paintings and landscapes. During my research, I noticed that Generative Adversarial Networks (GANs) seem to have a hard time creating images of people. I decided to take this challenge [ahem] head-on, so I trained two GANs using paintings and photographs of people. I then put the images for sale as NFTs in a collection on OpenSea called GANfolk. Note that I am releasing the dataset of training images, the source code, and the trained models under the Creative Commons Attribution Share-alike license. This means you can use my code to create your own digital paintings of people and sell them, as long as you give me attribution. See the Source Code section below for details. Here is an overview of the GANfolk system. You can find details of the components and processes in the sections further below. I started by writing a script to collect paintings of people that are in the public domain on WikiArt and open-source images of people from Google’s Open Images dataset. I pre-processed the images by aligning facial features and filling in any blank spots using the LaMa system for inpainting [1]. I then trained two GANs, StyleGAN 2 ADA [2] and VQGAN [3], with the GANfolk training set of 5,400 images. I collected 2,700 old paintings of people and 2,700 new photos of people. For the first step in the creation process, the trained StyleGAN 2 system generated 1,000 images as a baseline set. I used GPT-3 from OpenAI [4] to generate text prompts for the pictures, like “drawing of a thoughtful Brazilian girl.” I then used the CLIP system [5], also from OpenAI, to find the best images that match the prompt. I chose the best picture and fed it into the trained VQGAN system for further modification to get the image to more closely match the text prompt. I went back to GPT-3 and asked it to write a name and a brief backstory for each portrait. As a post-processing step, I added a vignette effect and resized the image up by four times (from 512x512 to 2048x2048). After a mild editing pass, I uploaded the pictures and backstories to OpenSea for sale as the GANfolk NFTs. Before I get into the details of GANfolk, here is a brief section on what other people have done to generate portraits of people. In conjunction with developing their StyleGAN series of generative networks, NVidia released a dataset of photographs of people called Flickr-Faces-HQ Dataset (FFHQ). According to NVidia... ... [the] dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. Although the quality and variety of the FFHQ images are excellent, NVidia released the dataset under non-commercial terms. Also, I found that the faces seem too “tightly cropped” to make good portraits. NVidia also released the MetFaces dataset of faces from paintings in the Metropolitan Museum of Art. They write that... [the] dataset consists of 1,336 high-quality PNG images at 1024×1024 resolution. The images were downloaded via the Metropolitan Museum of Art Collection API, and automatically aligned and cropped using dlib. Various automatic filters were used to prune the set. Again, NVidia released the dataset under non-commercial terms, and they used similarly tight cropping for the faces. Here’s what newly generated images look like with StyleGAN 2 ADA trained on the FFHQ dataset and fine-tuned with the MetFaces dataset. 
Although the results are impressive, not surprisingly, the results seem to be too tightly cropped. In addition to the datasets, NVidia released the official source code under non-commercial terms, so these faces cannot be sold as NFTs. Also, there seems to be a distinct lack of cultural diversity in the generated faces. I will discuss the details of the components and processes used in the GANfolk system in the following sections. I wrote two scripts to gather the source images for GANfolk. The first gathers public domain paintings on WikiArt from the 19th and early 20th centuries. The second collects portraits from Google’s Open Images dataset. The dataset consists of photos on Flickr released under the CC-BY-SA license, which allows for commercial use. To find and orient the faces in the images, I used a face-finding algorithm from a package called DLIB. I modified the face-finding code to crop the faces more loosely. Here are some of the results from the paintings from WikiArt. Although there is some more negative space around the portraits, it did pick up a lot of empty black areas around most of the images. This is because the original pictures didn’t have enough background area to account for the rotation and scaling for the face orientation. To compensate for this, I used an inpainting AI system called Large Mask (LaMa) inpainting [7] by Roman Suvorov et al. The system will automatically “fill in” portions of an image specified by a second mask image. Here is the code I wrote to do the inpainting with LaMa. And below are the results of inpainting the sample portraits from WikiArt. And here is a sample of photos from Open Images after the same pre-processing steps. There seems to be a good variety of styles, ages, and ethnicities in the photos. And, for the most part, the effects of the inpainting are not visible. I used two GANs for this project, an independent implementation of StyleGAN2 by Kim Seonghyeon (aka rosinality) and VQGAN by Patrick Esser et al. I trained both GANs for three weeks using the 5,400 training images in the GANfolk dataset. In my prior research, I found that StyleGAN2 does an excellent job creating a global structure in the generated images that roughly resembles the types in the training data. However, the image details are often hazy or missing. But VQGAN is entirely complementary. It doesn’t know how to create a global structure, but it does a good job filling in realistic image details. Using both GANs is the best of both worlds. To kick off the creation process, I used GPT-3 to suggest prompts to make the images. I literally asked the GPT-3 davinci-instruct model to do this: "Create prompts to render fictional people at different ages and nationalities." The source code is here. And typical results are below. drawing of a thoughtful Brazilian girlacrylic painting of a sassy Mexican girlcharcoal sketch of an inquisitive Turkish boypencil drawing of a determined Indian womanink drawing of a playful Japanese girlacrylic painting of an optimistic Spanish boy I used these prompts for two purposes: (1) as input to CLIP to direct the GANs to generate the corresponding images and (2) as the titles of the NFTs. After generating the prompts, I ran StyleGAN2 to generate 1,000 random paintings of people. I then used the CLIP system from OpenAI to find the images that most closely match the prompt. As I described in an earlier article, the CLIP system has a text encoder and an image encoder for determining the similarities between phrases and images. 
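The CLIP ranking step described above can be sketched with OpenAI's open-source clip package. This is only an illustration of the idea, not the GANfolk source code; the file paths and the choice of the ViT-B/32 checkpoint are assumptions:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompt = "drawing of a thoughtful Brazilian girl"
text = clip.tokenize([prompt]).to(device)
image_paths = [f"stylegan_out/seed_{i:04d}.png" for i in range(1000)]  # placeholder paths

scores = []
with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    for path in image_paths:
        img = preprocess(Image.open(path)).unsqueeze(0).to(device)
        img_feat = model.encode_image(img)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        scores.append((path, (img_feat @ text_feat.T).item()))

# Keep the candidates that best match the text prompt.
best_matches = sorted(scores, key=lambda t: t[1], reverse=True)[:14]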
Here are the top 14 images generated by StyleGAN 2 that best match the prompt, “drawing of a thoughtful Brazilian girl.” You can see a variety of styles in the results. Some look like drawings; some look like photos. Although some may appear thoughtful, it would be difficult to say that they look like girls from Brazil. So I picked the first one in the series to see what VQGAN could do with it. To home in on an image that matches the prompt, I took a resultant image from StyleGAN2 and refined it iteratively using VQGAN. Thanks to work done by Katherine Crowson with further modifications by Justin John, I used CLIP again to analyze each iteration of the image and steer VQGAN to modify the image to better match the prompt. Here are the results. The images above are iterations number 0, 25, and 50 using VQGAN and CLIP to match the prompt, “drawing of a thoughtful Brazilian girl.” You can see how the woman in the picture became younger and perhaps “more Brazilian.” Also, the portrait style changed to be more like a drawing than a painting. I performed several post-processing steps on each of the images to give them a finished, unifying look: Added a vignette to shade down the corners of the image;Used unsharp mask to make the details a bit more clear (optional);Performed super-resolution resize to increase the size of the image from 512x512 to 2048x2048; Added a vignette to shade down the corners of the image; Used unsharp mask to make the details a bit more clear (optional); Performed super-resolution resize to increase the size of the image from 512x512 to 2048x2048; It’s a bit subtle, but you can see how the vignette effect emphasized the subject in the center, and the unsharp mask effect sharpened the details. Here is the source code for the vignette and the unsharp mask effect. The images coming out of StyleGAN2 and VQGAN have a resolution of 512x512 pixels. I used a super-resolution resize system from a German company called Idealo to increase the resolution. Before the resize-up, I added a little random noise to the image to create a painterly effect. The source code is here. Below is the sample drawing resized up to 2048x2048. You can click on the image to zoom in and see the details. Before posting the images to OpenSea marketplace, I decided to “add interest” to the NFTs by using the GPT-3 davinci-instruct system to create names and backstories for the fictional people. The prompt I used for creating a name is below. Create a name for a thoughtful Brazilian girl.First name: And the GPT-3 system responded with: Sophia Last name: Santos I noticed that the system would sometimes come up with the name of a famous person, so I wrote some code to check if the person has a Wikipedia page or not. I then created a loop to keep generating names until it created one not on Wikipedia. The code is here. I then used a similar technique to create a backstory for each fictional person. The prompt I used is below. Create a brief backstory for Sophia Santos, a thoughtful Brazilian girl. And GPT-3 wrote: Sofia Santos was born in the heart of Brazil to a Brazilian father and an American mother. Sofia grew up in a bilingual household and learned to appreciate both of her cultures. Sofia is a thoughtful and intelligent young woman who always looks for ways to help others. The results were pretty good! My collaborator and reviewer, Jennifer, took the time to read all 100 backstories for GANfolk, and only a few of them needed some editing. 
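Before moving on to minting, here is a rough sketch of the vignette and unsharp-mask post-processing steps described earlier. It approximates the effect with NumPy and Pillow and is not the project's actual post-processing code; the file names are placeholders, and the super-resolution step is omitted:

import numpy as np
from PIL import Image, ImageFilter

def add_vignette(img, strength=0.35):
    # Darken the corners with a smooth radial falloff (1.0 at the center).
    w, h = img.size
    y, x = np.ogrid[:h, :w]
    cx, cy = w / 2.0, h / 2.0
    dist = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    mask = 1.0 - strength * (dist / dist.max()) ** 2
    arr = np.asarray(img).astype(np.float32) * mask[..., None]
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

img = Image.open("portrait.png").convert("RGB")  # placeholder file name
img = add_vignette(img)
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
img.save("portrait_post.png")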
Minting GANfolk on the OpenSea NFT Marketplace For my earlier NFT project, GANshare, I chose to use the Polygon blockchain because it is better for the environment than Etherium. And I decided to put the GANfolk collection on the Polygon chain for the same reason. Once I had created all 100 images with names and backstories, it was pretty straightforward to upload and mint them as NFTs. You can now see the entire GANfolk collection on OpenSea. Here are the first 5 GANfolk, showing the output images from StyleGAN2 and VQGAN with post-processing. GANfolk #1 — Painting of an Enigmatic French Woman, Mathilde DuboisMathilde Dubois was born in the small town of Saint-Jean-de-Luz in the south of France. She was the only child of a wealthy shipping magnate and an opera singer. From a young age, Mathilde showed a passion for the arts.Listing on OpenSea GANfolk #2 — Painting of a Focused Portuguese Man, João SilvaJoão Silva was born in Lisbon, Portugal, in 1984. He was interested in learning about the world around him and different cultures from a young age. When he was 16, he and his family moved to the United States, and he finished high school in California.Listing on OpenSea GANfolk #3 — Photo of a Mischievous English Teenager, Nigel LarkspurNigel Larkspur was born to wealthy English parents who spoiled him rotten. He quickly became a troublesome teenager, always getting into trouble at school and with the law. His parents tried everything to get him to behave, but nothing worked. Nigel loved to cause trouble and loved to see the look of disappointment on his parents’ faces.Listing on OpenSea GANfolk #4 — Photo of a Stoic Bulgarian Teenager, Stefan StoichkovStefan Stoichkov was born in a small town in the heart of Bulgaria. His parents were hard-working farmers who taught him the importance of honesty, integrity, and self-reliance. Stefan was always a quiet, introspective child who preferred to spend his time reading, studying, and exploring the natural world around him.Listing on OpenSea GANfolk #5 — Painting of a Concerned Korean Woman, Choon-Hee KimChoon-Hee Kim grew up in a suburb of Seoul. From a young age, she was taught the importance of family and traditional values. Her parents instilled in her a strong work ethic, and she quickly developed a reputation as a hard worker.Listing on OpenSea You can see all 100 GANfolk here, https://opensea.io/collection/ganfolk I learned a lot about GANs with their weaknesses and strengths while working on this project. As I mentioned above, StyleGAN2 generated decent images that have good overall form, although they often lack fine details. VQGAN is complementary in that it doesn’t know how to create images with global form, but if it started with a picture with decent form, it did a good job adding details, primarily when directed with the CLIP system. I also noticed a bias towards European people while working on this project. StyleGAN2 seemed to struggle when creating images of people from diverse nationalities. This is probably due to a lack of diversity in the training images, especially the paintings from WikiArt. But CLIP seemed to know what people from around the world look like, and VQGAN was up to the task of modifying the images appropriately. The 5,400 images I collected can be found on Kaggle. The source code for this project is available on GitHub. I am releasing the training images, the source code, and the trained models under the CC BY-SA license. 
If you use these resources to create new images, please give attribution like this: This image was created with GANfolk by Robert A. Gonsalves. I want to thank Jennifer Lim and Oliver Strimpel for their help with this article. [1] LaMa by R.Suvorov et al., Resolution-robust Large Mask Inpainting with Fourier Convolutions (2021) [2] StyleGAN2 ADA by T. Karras et al., Training Generative Adversarial Networks with Limited Data (2020) [3] VQGAN by P. Esser, R. Rombach, and B. Ommer, Taming Transformers for High-Resolution Image Synthesis (2020) [4] GPT-3 by Tom B. Brown et al., Language Models are Few-Shot Learners (2020) [5] CLIP by A. Radford et al., Learning Transferable Visual Models From Natural Language Supervision (2021) To get unlimited access to all articles on Medium, become a member for $5/month. Non-members can only read three locked stories each month. Some rights reserved
[ { "code": null, "e": 601, "s": 166, "text": "I have previously written on Medium about using AI to create visual art, like abstract paintings and landscapes. During my research, I noticed that Generative Adversarial Networks (GANs) seem to have a hard time creating images of people. I decided to take this challenge [ahem] head-on, so I trained two GANs using paintings and photographs of people. I then put the images for sale as NFTs in a collection on OpenSea called GANfolk." }, { "code": null, "e": 934, "s": 601, "text": "Note that I am releasing the dataset of training images, the source code, and the trained models under the Creative Commons Attribution Share-alike license. This means you can use my code to create your own digital paintings of people and sell them, as long as you give me attribution. See the Source Code section below for details." }, { "code": null, "e": 1061, "s": 934, "text": "Here is an overview of the GANfolk system. You can find details of the components and processes in the sections further below." }, { "code": null, "e": 1539, "s": 1061, "text": "I started by writing a script to collect paintings of people that are in the public domain on WikiArt and open-source images of people from Google’s Open Images dataset. I pre-processed the images by aligning facial features and filling in any blank spots using the LaMa system for inpainting [1]. I then trained two GANs, StyleGAN 2 ADA [2] and VQGAN [3], with the GANfolk training set of 5,400 images. I collected 2,700 old paintings of people and 2,700 new photos of people." }, { "code": null, "e": 2019, "s": 1539, "text": "For the first step in the creation process, the trained StyleGAN 2 system generated 1,000 images as a baseline set. I used GPT-3 from OpenAI [4] to generate text prompts for the pictures, like “drawing of a thoughtful Brazilian girl.” I then used the CLIP system [5], also from OpenAI, to find the best images that match the prompt. I chose the best picture and fed it into the trained VQGAN system for further modification to get the image to more closely match the text prompt." }, { "code": null, "e": 2339, "s": 2019, "text": "I went back to GPT-3 and asked it to write a name and a brief backstory for each portrait. As a post-processing step, I added a vignette effect and resized the image up by four times (from 512x512 to 2048x2048). After a mild editing pass, I uploaded the pictures and backstories to OpenSea for sale as the GANfolk NFTs." }, { "code": null, "e": 2469, "s": 2339, "text": "Before I get into the details of GANfolk, here is a brief section on what other people have done to generate portraits of people." }, { "code": null, "e": 2659, "s": 2469, "text": "In conjunction with developing their StyleGAN series of generative networks, NVidia released a dataset of photographs of people called Flickr-Faces-HQ Dataset (FFHQ). According to NVidia..." }, { "code": null, "e": 2829, "s": 2659, "text": "... [the] dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background." }, { "code": null, "e": 3032, "s": 2829, "text": "Although the quality and variety of the FFHQ images are excellent, NVidia released the dataset under non-commercial terms. Also, I found that the faces seem too “tightly cropped” to make good portraits." }, { "code": null, "e": 3152, "s": 3032, "text": "NVidia also released the MetFaces dataset of faces from paintings in the Metropolitan Museum of Art. They write that..." 
}, { "code": null, "e": 3415, "s": 3152, "text": "[the] dataset consists of 1,336 high-quality PNG images at 1024×1024 resolution. The images were downloaded via the Metropolitan Museum of Art Collection API, and automatically aligned and cropped using dlib. Various automatic filters were used to prune the set." }, { "code": null, "e": 3532, "s": 3415, "text": "Again, NVidia released the dataset under non-commercial terms, and they used similarly tight cropping for the faces." }, { "code": null, "e": 3667, "s": 3532, "text": "Here’s what newly generated images look like with StyleGAN 2 ADA trained on the FFHQ dataset and fine-tuned with the MetFaces dataset." }, { "code": null, "e": 3989, "s": 3667, "text": "Although the results are impressive, not surprisingly, the results seem to be too tightly cropped. In addition to the datasets, NVidia released the official source code under non-commercial terms, so these faces cannot be sold as NFTs. Also, there seems to be a distinct lack of cultural diversity in the generated faces." }, { "code": null, "e": 4102, "s": 3989, "text": "I will discuss the details of the components and processes used in the GANfolk system in the following sections." }, { "code": null, "e": 4432, "s": 4102, "text": "I wrote two scripts to gather the source images for GANfolk. The first gathers public domain paintings on WikiArt from the 19th and early 20th centuries. The second collects portraits from Google’s Open Images dataset. The dataset consists of photos on Flickr released under the CC-BY-SA license, which allows for commercial use." }, { "code": null, "e": 4663, "s": 4432, "text": "To find and orient the faces in the images, I used a face-finding algorithm from a package called DLIB. I modified the face-finding code to crop the faces more loosely. Here are some of the results from the paintings from WikiArt." }, { "code": null, "e": 4936, "s": 4663, "text": "Although there is some more negative space around the portraits, it did pick up a lot of empty black areas around most of the images. This is because the original pictures didn’t have enough background area to account for the rotation and scaling for the face orientation." }, { "code": null, "e": 5282, "s": 4936, "text": "To compensate for this, I used an inpainting AI system called Large Mask (LaMa) inpainting [7] by Roman Suvorov et al. The system will automatically “fill in” portions of an image specified by a second mask image. Here is the code I wrote to do the inpainting with LaMa. And below are the results of inpainting the sample portraits from WikiArt." }, { "code": null, "e": 5367, "s": 5282, "text": "And here is a sample of photos from Open Images after the same pre-processing steps." }, { "code": null, "e": 5519, "s": 5367, "text": "There seems to be a good variety of styles, ages, and ethnicities in the photos. And, for the most part, the effects of the inpainting are not visible." }, { "code": null, "e": 5757, "s": 5519, "text": "I used two GANs for this project, an independent implementation of StyleGAN2 by Kim Seonghyeon (aka rosinality) and VQGAN by Patrick Esser et al. I trained both GANs for three weeks using the 5,400 training images in the GANfolk dataset." }, { "code": null, "e": 6175, "s": 5757, "text": "In my prior research, I found that StyleGAN2 does an excellent job creating a global structure in the generated images that roughly resembles the types in the training data. However, the image details are often hazy or missing. But VQGAN is entirely complementary. 
It doesn’t know how to create a global structure, but it does a good job filling in realistic image details. Using both GANs is the best of both worlds." }, { "code": null, "e": 6324, "s": 6175, "text": "To kick off the creation process, I used GPT-3 to suggest prompts to make the images. I literally asked the GPT-3 davinci-instruct model to do this:" }, { "code": null, "e": 6405, "s": 6324, "text": "\"Create prompts to render fictional people at different ages and nationalities.\"" }, { "code": null, "e": 6461, "s": 6405, "text": "The source code is here. And typical results are below." }, { "code": null, "e": 6711, "s": 6461, "text": "drawing of a thoughtful Brazilian girlacrylic painting of a sassy Mexican girlcharcoal sketch of an inquisitive Turkish boypencil drawing of a determined Indian womanink drawing of a playful Japanese girlacrylic painting of an optimistic Spanish boy" }, { "code": null, "e": 6862, "s": 6711, "text": "I used these prompts for two purposes: (1) as input to CLIP to direct the GANs to generate the corresponding images and (2) as the titles of the NFTs." }, { "code": null, "e": 7204, "s": 6862, "text": "After generating the prompts, I ran StyleGAN2 to generate 1,000 random paintings of people. I then used the CLIP system from OpenAI to find the images that most closely match the prompt. As I described in an earlier article, the CLIP system has a text encoder and an image encoder for determining the similarities between phrases and images." }, { "code": null, "e": 7325, "s": 7204, "text": "Here are the top 14 images generated by StyleGAN 2 that best match the prompt, “drawing of a thoughtful Brazilian girl.”" }, { "code": null, "e": 7602, "s": 7325, "text": "You can see a variety of styles in the results. Some look like drawings; some look like photos. Although some may appear thoughtful, it would be difficult to say that they look like girls from Brazil. So I picked the first one in the series to see what VQGAN could do with it." }, { "code": null, "e": 7957, "s": 7602, "text": "To home in on an image that matches the prompt, I took a resultant image from StyleGAN2 and refined it iteratively using VQGAN. Thanks to work done by Katherine Crowson with further modifications by Justin John, I used CLIP again to analyze each iteration of the image and steer VQGAN to modify the image to better match the prompt. Here are the results." }, { "code": null, "e": 8256, "s": 7957, "text": "The images above are iterations number 0, 25, and 50 using VQGAN and CLIP to match the prompt, “drawing of a thoughtful Brazilian girl.” You can see how the woman in the picture became younger and perhaps “more Brazilian.” Also, the portrait style changed to be more like a drawing than a painting." 
}, { "code": null, "e": 8360, "s": 8256, "text": "I performed several post-processing steps on each of the images to give them a finished, unifying look:" }, { "code": null, "e": 8577, "s": 8360, "text": "Added a vignette to shade down the corners of the image;Used unsharp mask to make the details a bit more clear (optional);Performed super-resolution resize to increase the size of the image from 512x512 to 2048x2048;" }, { "code": null, "e": 8634, "s": 8577, "text": "Added a vignette to shade down the corners of the image;" }, { "code": null, "e": 8701, "s": 8634, "text": "Used unsharp mask to make the details a bit more clear (optional);" }, { "code": null, "e": 8796, "s": 8701, "text": "Performed super-resolution resize to increase the size of the image from 512x512 to 2048x2048;" }, { "code": null, "e": 9014, "s": 8796, "text": "It’s a bit subtle, but you can see how the vignette effect emphasized the subject in the center, and the unsharp mask effect sharpened the details. Here is the source code for the vignette and the unsharp mask effect." }, { "code": null, "e": 9432, "s": 9014, "text": "The images coming out of StyleGAN2 and VQGAN have a resolution of 512x512 pixels. I used a super-resolution resize system from a German company called Idealo to increase the resolution. Before the resize-up, I added a little random noise to the image to create a painterly effect. The source code is here. Below is the sample drawing resized up to 2048x2048. You can click on the image to zoom in and see the details." }, { "code": null, "e": 9623, "s": 9432, "text": "Before posting the images to OpenSea marketplace, I decided to “add interest” to the NFTs by using the GPT-3 davinci-instruct system to create names and backstories for the fictional people." }, { "code": null, "e": 9671, "s": 9623, "text": "The prompt I used for creating a name is below." }, { "code": null, "e": 9729, "s": 9671, "text": "Create a name for a thoughtful Brazilian girl.First name:" }, { "code": null, "e": 9766, "s": 9729, "text": "And the GPT-3 system responded with:" }, { "code": null, "e": 9793, "s": 9766, "text": "Sophia Last name: Santos" }, { "code": null, "e": 10054, "s": 9793, "text": "I noticed that the system would sometimes come up with the name of a famous person, so I wrote some code to check if the person has a Wikipedia page or not. I then created a loop to keep generating names until it created one not on Wikipedia. The code is here." }, { "code": null, "e": 10163, "s": 10054, "text": "I then used a similar technique to create a backstory for each fictional person. The prompt I used is below." }, { "code": null, "e": 10236, "s": 10163, "text": "Create a brief backstory for Sophia Santos, a thoughtful Brazilian girl." }, { "code": null, "e": 10253, "s": 10236, "text": "And GPT-3 wrote:" }, { "code": null, "e": 10523, "s": 10253, "text": "Sofia Santos was born in the heart of Brazil to a Brazilian father and an American mother. Sofia grew up in a bilingual household and learned to appreciate both of her cultures. Sofia is a thoughtful and intelligent young woman who always looks for ways to help others." }, { "code": null, "e": 10692, "s": 10523, "text": "The results were pretty good! My collaborator and reviewer, Jennifer, took the time to read all 100 backstories for GANfolk, and only a few of them needed some editing." 
}, { "code": null, "e": 10739, "s": 10692, "text": "Minting GANfolk on the OpenSea NFT Marketplace" }, { "code": null, "e": 11140, "s": 10739, "text": "For my earlier NFT project, GANshare, I chose to use the Polygon blockchain because it is better for the environment than Etherium. And I decided to put the GANfolk collection on the Polygon chain for the same reason. Once I had created all 100 images with names and backstories, it was pretty straightforward to upload and mint them as NFTs. You can now see the entire GANfolk collection on OpenSea." }, { "code": null, "e": 11243, "s": 11140, "text": "Here are the first 5 GANfolk, showing the output images from StyleGAN2 and VQGAN with post-processing." }, { "code": null, "e": 11548, "s": 11243, "text": "GANfolk #1 — Painting of an Enigmatic French Woman, Mathilde DuboisMathilde Dubois was born in the small town of Saint-Jean-de-Luz in the south of France. She was the only child of a wealthy shipping magnate and an opera singer. From a young age, Mathilde showed a passion for the arts.Listing on OpenSea" }, { "code": null, "e": 11882, "s": 11548, "text": "GANfolk #2 — Painting of a Focused Portuguese Man, João SilvaJoão Silva was born in Lisbon, Portugal, in 1984. He was interested in learning about the world around him and different cultures from a young age. When he was 16, he and his family moved to the United States, and he finished high school in California.Listing on OpenSea" }, { "code": null, "e": 12308, "s": 11882, "text": "GANfolk #3 — Photo of a Mischievous English Teenager, Nigel LarkspurNigel Larkspur was born to wealthy English parents who spoiled him rotten. He quickly became a troublesome teenager, always getting into trouble at school and with the law. His parents tried everything to get him to behave, but nothing worked. Nigel loved to cause trouble and loved to see the look of disappointment on his parents’ faces.Listing on OpenSea" }, { "code": null, "e": 12712, "s": 12308, "text": "GANfolk #4 — Photo of a Stoic Bulgarian Teenager, Stefan StoichkovStefan Stoichkov was born in a small town in the heart of Bulgaria. His parents were hard-working farmers who taught him the importance of honesty, integrity, and self-reliance. Stefan was always a quiet, introspective child who preferred to spend his time reading, studying, and exploring the natural world around him.Listing on OpenSea" }, { "code": null, "e": 13027, "s": 12712, "text": "GANfolk #5 — Painting of a Concerned Korean Woman, Choon-Hee KimChoon-Hee Kim grew up in a suburb of Seoul. From a young age, she was taught the importance of family and traditional values. Her parents instilled in her a strong work ethic, and she quickly developed a reputation as a hard worker.Listing on OpenSea" }, { "code": null, "e": 13099, "s": 13027, "text": "You can see all 100 GANfolk here, https://opensea.io/collection/ganfolk" }, { "code": null, "e": 13534, "s": 13099, "text": "I learned a lot about GANs with their weaknesses and strengths while working on this project. As I mentioned above, StyleGAN2 generated decent images that have good overall form, although they often lack fine details. VQGAN is complementary in that it doesn’t know how to create images with global form, but if it started with a picture with decent form, it did a good job adding details, primarily when directed with the CLIP system." }, { "code": null, "e": 13943, "s": 13534, "text": "I also noticed a bias towards European people while working on this project. 
StyleGAN2 seemed to struggle when creating images of people from diverse nationalities. This is probably due to a lack of diversity in the training images, especially the paintings from WikiArt. But CLIP seemed to know what people from around the world look like, and VQGAN was up to the task of modifying the images appropriately." }, { "code": null, "e": 14157, "s": 13943, "text": "The 5,400 images I collected can be found on Kaggle. The source code for this project is available on GitHub. I am releasing the training images, the source code, and the trained models under the CC BY-SA license." }, { "code": null, "e": 14301, "s": 14157, "text": "If you use these resources to create new images, please give attribution like this: This image was created with GANfolk by Robert A. Gonsalves." }, { "code": null, "e": 14384, "s": 14301, "text": "I want to thank Jennifer Lim and Oliver Strimpel for their help with this article." }, { "code": null, "e": 14487, "s": 14384, "text": "[1] LaMa by R.Suvorov et al., Resolution-robust Large Mask Inpainting with Fourier Convolutions (2021)" }, { "code": null, "e": 14592, "s": 14487, "text": "[2] StyleGAN2 ADA by T. Karras et al., Training Generative Adversarial Networks with Limited Data (2020)" }, { "code": null, "e": 14704, "s": 14592, "text": "[3] VQGAN by P. Esser, R. Rombach, and B. Ommer, Taming Transformers for High-Resolution Image Synthesis (2020)" }, { "code": null, "e": 14783, "s": 14704, "text": "[4] GPT-3 by Tom B. Brown et al., Language Models are Few-Shot Learners (2020)" }, { "code": null, "e": 14891, "s": 14783, "text": "[5] CLIP by A. Radford et al., Learning Transferable Visual Models From Natural Language Supervision (2021)" }, { "code": null, "e": 15031, "s": 14891, "text": "To get unlimited access to all articles on Medium, become a member for $5/month. Non-members can only read three locked stories each month." } ]
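The vignette and unsharp-mask steps described in the post-processing list above can be reproduced with plain OpenCV and NumPy. The sketch below is not the author's published source code (that is linked from the article) and it skips the Idealo super-resolution step; the kernel widths, strength values, and file names are assumptions chosen only for illustration.

import cv2
import numpy as np

def add_vignette(img, strength=0.4):
    # Radial mask that leaves the center at full brightness and darkens the corners.
    rows, cols = img.shape[:2]
    kernel_x = cv2.getGaussianKernel(cols, cols * 0.6)
    kernel_y = cv2.getGaussianKernel(rows, rows * 0.6)
    mask = kernel_y @ kernel_x.T
    mask = mask / mask.max()
    mask = (1.0 - strength) + strength * mask
    return (img * mask[..., np.newaxis]).astype(np.uint8)

def unsharp_mask(img, blur_sigma=2.0, amount=0.6):
    # Sharpen by adding back the difference between the image and a blurred copy.
    blurred = cv2.GaussianBlur(img, (0, 0), blur_sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

img = cv2.imread("ganfolk_sample.png")  # hypothetical 512x512 image generated by the GAN pipeline
out = unsharp_mask(add_vignette(img))
cv2.imwrite("ganfolk_sample_post.png", out)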
Java - String regionMatches() Method
This method has two variants which can be used to test if two string regions are equal. Here is the syntax of this method − public boolean regionMatches(int toffset, String other, int ooffset, int len) Here is the detail of parameters − toffset − the starting offset of the subregion in this string. toffset − the starting offset of the subregion in this string. other − the string argument. other − the string argument. ooffset − the starting offset of the subregion in the string argument. ooffset − the starting offset of the subregion in the string argument. len − the number of characters to compare. len − the number of characters to compare. It returns true if the specified subregion of this string matches the specified subregion of the string argument; false otherwise. Whether the matching is exact or case insensitive depends on the ignoreCase argument. It returns true if the specified subregion of this string matches the specified subregion of the string argument; false otherwise. Whether the matching is exact or case insensitive depends on the ignoreCase argument. import java.io.*; public class Test { public static void main(String args[]) { String Str1 = new String("Welcome to Tutorialspoint.com"); String Str2 = new String("Tutorials"); String Str3 = new String("TUTORIALS"); System.out.print("Return Value :" ); System.out.println(Str1.regionMatches(11, Str2, 0, 9)); System.out.print("Return Value :" ); System.out.println(Str1.regionMatches(11, Str3, 0, 9)); } } This will produce the following result − Return Value :true Return Value :false 16 Lectures 2 hours Malhar Lathkar 19 Lectures 5 hours Malhar Lathkar 25 Lectures 2.5 hours Anadi Sharma 126 Lectures 7 hours Tushar Kale 119 Lectures 17.5 hours Monica Mittal 76 Lectures 7 hours Arnab Chakraborty Print Add Notes Bookmark this page
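The page mentions that regionMatches() has two variants but only documents the case-sensitive one; the second overload takes a leading boolean ignoreCase argument. A small sketch of that overload is shown below; the class name and string values are examples, not part of the original page.

public class RegionMatchesIgnoreCase {

   public static void main(String[] args) {
      String str1 = "Welcome to Tutorialspoint.com";
      String str2 = "TUTORIALS";

      // The case-sensitive form fails, the ignoreCase form succeeds.
      System.out.println(str1.regionMatches(11, str2, 0, 9));        // false
      System.out.println(str1.regionMatches(true, 11, str2, 0, 9));  // true
   }
}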
[ { "code": null, "e": 2465, "s": 2377, "text": "This method has two variants which can be used to test if two string regions are equal." }, { "code": null, "e": 2502, "s": 2465, "text": "Here is the syntax of this method −" }, { "code": null, "e": 2668, "s": 2502, "text": "public boolean regionMatches(int toffset,\n String other,\n int ooffset,\n int len)\n" }, { "code": null, "e": 2703, "s": 2668, "text": "Here is the detail of parameters −" }, { "code": null, "e": 2766, "s": 2703, "text": "toffset − the starting offset of the subregion in this string." }, { "code": null, "e": 2829, "s": 2766, "text": "toffset − the starting offset of the subregion in this string." }, { "code": null, "e": 2858, "s": 2829, "text": "other − the string argument." }, { "code": null, "e": 2887, "s": 2858, "text": "other − the string argument." }, { "code": null, "e": 2958, "s": 2887, "text": "ooffset − the starting offset of the subregion in the string argument." }, { "code": null, "e": 3029, "s": 2958, "text": "ooffset − the starting offset of the subregion in the string argument." }, { "code": null, "e": 3072, "s": 3029, "text": "len − the number of characters to compare." }, { "code": null, "e": 3115, "s": 3072, "text": "len − the number of characters to compare." }, { "code": null, "e": 3332, "s": 3115, "text": "It returns true if the specified subregion of this string matches the specified subregion of the string argument; false otherwise. Whether the matching is exact or case insensitive depends on the ignoreCase argument." }, { "code": null, "e": 3549, "s": 3332, "text": "It returns true if the specified subregion of this string matches the specified subregion of the string argument; false otherwise. Whether the matching is exact or case insensitive depends on the ignoreCase argument." }, { "code": null, "e": 4006, "s": 3549, "text": "import java.io.*;\npublic class Test {\n\n public static void main(String args[]) {\n String Str1 = new String(\"Welcome to Tutorialspoint.com\");\n String Str2 = new String(\"Tutorials\");\n String Str3 = new String(\"TUTORIALS\");\n\n System.out.print(\"Return Value :\" );\n System.out.println(Str1.regionMatches(11, Str2, 0, 9));\n\n System.out.print(\"Return Value :\" );\n System.out.println(Str1.regionMatches(11, Str3, 0, 9));\n }\n}" }, { "code": null, "e": 4047, "s": 4006, "text": "This will produce the following result −" }, { "code": null, "e": 4087, "s": 4047, "text": "Return Value :true\nReturn Value :false\n" }, { "code": null, "e": 4120, "s": 4087, "text": "\n 16 Lectures \n 2 hours \n" }, { "code": null, "e": 4136, "s": 4120, "text": " Malhar Lathkar" }, { "code": null, "e": 4169, "s": 4136, "text": "\n 19 Lectures \n 5 hours \n" }, { "code": null, "e": 4185, "s": 4169, "text": " Malhar Lathkar" }, { "code": null, "e": 4220, "s": 4185, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4234, "s": 4220, "text": " Anadi Sharma" }, { "code": null, "e": 4268, "s": 4234, "text": "\n 126 Lectures \n 7 hours \n" }, { "code": null, "e": 4282, "s": 4268, "text": " Tushar Kale" }, { "code": null, "e": 4319, "s": 4282, "text": "\n 119 Lectures \n 17.5 hours \n" }, { "code": null, "e": 4334, "s": 4319, "text": " Monica Mittal" }, { "code": null, "e": 4367, "s": 4334, "text": "\n 76 Lectures \n 7 hours \n" }, { "code": null, "e": 4386, "s": 4367, "text": " Arnab Chakraborty" }, { "code": null, "e": 4393, "s": 4386, "text": " Print" }, { "code": null, "e": 4404, "s": 4393, "text": " Add Notes" } ]
Searching characters in a String in Java.
You can search for a particular letter in a string using the indexOf() method of the String class. This method which returns a position index of a word within the string if found. Otherwise it returns -1. public class Test { public static void main(String args[]) { String str = new String("hi welcome to Tutorialspoint"); int index = str.indexOf('w'); System.out.println("Index of the letter w :: "+index); } } Index of the letter w :: 3
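indexOf() is overloaded, so besides the indexOf(char) call used above you can start the search at a given offset, look for a whole substring, or search from the end with lastIndexOf(). The class name below is an example; the printed index values assume the same sample string.

public class SearchVariants {

   public static void main(String[] args) {
      String str = "hi welcome to Tutorialspoint";

      System.out.println(str.indexOf('o'));        // 7, first 'o' (in "welcome")
      System.out.println(str.indexOf('o', 10));    // 12, first 'o' at or after index 10
      System.out.println(str.lastIndexOf('o'));    // 24, last 'o' (in "point")
      System.out.println(str.indexOf("come"));     // 6, start of the substring "come"
      System.out.println(str.contains("point"));   // true
   }
}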
[ { "code": null, "e": 1267, "s": 1062, "text": "You can search for a particular letter in a string using the indexOf() method of the String class. This method which returns a position index of a word within the string if found. Otherwise it returns -1." }, { "code": null, "e": 1498, "s": 1267, "text": "public class Test {\n public static void main(String args[]) {\n String str = new String(\"hi welcome to Tutorialspoint\");\n int index = str.indexOf('w');\n System.out.println(\"Index of the letter w :: \"+index);\n }\n}" }, { "code": null, "e": 1525, "s": 1498, "text": "Index of the letter w :: 3" } ]
Deploy a Machine Learning Model using Streamlit Library - GeeksforGeeks
02 Sep, 2020 Machine Learning:A computer is able to learn from experience without being explicitly programmed. Machine Learning is one of the top fields to enter currently and top companies all over the world are using it for improving their services and products. But there is no use of a Machine Learning model which is trained in your Jupyter Notebook. And so we need to deploy these models so that everyone can use them. In this article, we will first train an Iris Species classifier and then deploy the model using Streamlit which is an open-source app framework used to deploy ML models easily. Streamlit Library:Streamlit lets you create apps for your machine learning project using simple python scripts. It also supports hot-reloading, so that your app can update live as you edit and save your file. An app can be built in a few lines of code only(as we will see below) using the Streamlit API. Adding a widget is the same as declaring a variable. There is no need to write a backend, define different routes or handle HTTP requests. It is easy to deploy and manage. More information can be found on their website – https://www.streamlit.io/ So first we will train our model. We will not do much preprocessing as the main aim of this article is not to make an accurate ML model but to show its deployment. Firstly we need to install the following – pip install pandaspip install numpypip install sklearnpip install streamlit The dataset can be found here: https://www.kaggle.com/uciml/iris Code: import pandas as pdimport numpy as np df = pd.read_csv('BankNote_Authentication.csv')df.head() Output:Now we drop the Id column first as it is not important for classifying the Iris species. Then we will split the dataset into training and testing dataset and will use a Random Forest Classifier. You can use any other classifier of your choice, for example, logistic regression, support vector machine, etc. Code: # Dropping the Id columndf.drop('Id', axis = 1, inplace = True) # Renaming the target column into numbers to aid training of the modeldf['Species']= df['Species'].map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2}) # splitting the data into the columns which need to be trained(X) and the target column(y)X = df.iloc[:, :-1]y = df.iloc[:, -1] # splitting data into training and testing data with 30 % of data as testing data respectivelyfrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) # importing the random forest classifier model and training it on the datasetfrom sklearn.ensemble import RandomForestClassifierclassifier = RandomForestClassifier()classifier.fit(X_train, y_train) # predicting on the test datasety_pred = classifier.predict(X_test) # finding out the accuracyfrom sklearn.metrics import accuracy_scorescore = accuracy_score(y_test, y_pred) We get an accuracy of 95.55% which is pretty good. Now, in order to use this model to predict other unknown data, we need to save it. We can save it by using pickle, which is used for serializing and deserializing a Python object structure. Code: # pickling the modelimport picklepickle_out = open("classifier.pkl", "wb")pickle.dump(classifier, pickle_out)pickle_out.close() There will be a new file created called “classifier.pkl” in the same directory. 
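Before wiring the pickled model into the Streamlit front end, it is worth confirming that classifier.pkl loads and predicts on its own. The snippet below is a sanity check rather than part of the app; the four feature values are made-up sample measurements, and the 0/1/2 labels refer to the species mapping defined during training.

import pickle

# Load the model that was pickled after training.
with open("classifier.pkl", "rb") as f:
    classifier = pickle.load(f)

# One hypothetical flower: sepal length, sepal width, petal length, petal width.
sample = [[5.1, 3.5, 1.4, 0.2]]

print(classifier.predict(sample))        # e.g. [0], i.e. Iris-setosa under the 0/1/2 mapping
print(classifier.predict_proba(sample))  # per-class probabilities from the random forest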
Now we can get down to using Streamlit to deploy the model – Paste the below code into another python file.Code: import pandas as pdimport numpy as npimport pickleimport streamlit as stfrom PIL import Image # loading in the model to predict on the datapickle_in = open('classifier.pkl', 'rb')classifier = pickle.load(pickle_in) def welcome(): return 'welcome all' # defining the function which will make the prediction using # the data which the user inputsdef prediction(sepal_length, sepal_width, petal_length, petal_width): prediction = classifier.predict( [[sepal_length, sepal_width, petal_length, petal_width]]) print(prediction) return prediction # this is the main function in which we define our webpage def main(): # giving the webpage a title st.title("Iris Flower Prediction") # here we define some of the front end elements of the web page like # the font and background color, the padding and the text to be displayed html_temp = """ <div style ="background-color:yellow;padding:13px"> <h1 style ="color:black;text-align:center;">Streamlit Iris Flower Classifier ML App </h1> </div> """ # this line allows us to display the front end aspects we have # defined in the above code st.markdown(html_temp, unsafe_allow_html = True) # the following lines create text boxes in which the user can enter # the data required to make the prediction sepal_length = st.text_input("Sepal Length", "Type Here") sepal_width = st.text_input("Sepal Width", "Type Here") petal_length = st.text_input("Petal Length", "Type Here") petal_width = st.text_input("Petal Width", "Type Here") result ="" # the below line ensures that when the button called 'Predict' is clicked, # the prediction function defined above is called to make the prediction # and store it in the variable result if st.button("Predict"): result = prediction(sepal_length, sepal_width, petal_length, petal_width) st.success('The output is {}'.format(result)) if __name__=='__main__': main() You can run this by typing the following command in the terminal – streamlit run app.py app.py is the name of the file where we wrote the Streamlit code. The website will open in your browser and then you can test it. This method can be used to deploy other machine and deep learning models too. Python Framework Machine Learning Python Machine Learning Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. ML | Linear Regression Python | Decision tree implementation Search Algorithms in AI ML | Underfitting and Overfitting Elbow Method for optimal value of k in KMeans Read JSON file using Python Adding new column to existing DataFrame in Pandas Python map() function Python Dictionary Taking input in Python
[ { "code": null, "e": 25002, "s": 24974, "text": "\n02 Sep, 2020" }, { "code": null, "e": 25591, "s": 25002, "text": "Machine Learning:A computer is able to learn from experience without being explicitly programmed. Machine Learning is one of the top fields to enter currently and top companies all over the world are using it for improving their services and products. But there is no use of a Machine Learning model which is trained in your Jupyter Notebook. And so we need to deploy these models so that everyone can use them. In this article, we will first train an Iris Species classifier and then deploy the model using Streamlit which is an open-source app framework used to deploy ML models easily." }, { "code": null, "e": 26142, "s": 25591, "text": "Streamlit Library:Streamlit lets you create apps for your machine learning project using simple python scripts. It also supports hot-reloading, so that your app can update live as you edit and save your file. An app can be built in a few lines of code only(as we will see below) using the Streamlit API. Adding a widget is the same as declaring a variable. There is no need to write a backend, define different routes or handle HTTP requests. It is easy to deploy and manage. More information can be found on their website – https://www.streamlit.io/" }, { "code": null, "e": 26306, "s": 26142, "text": "So first we will train our model. We will not do much preprocessing as the main aim of this article is not to make an accurate ML model but to show its deployment." }, { "code": null, "e": 26349, "s": 26306, "text": "Firstly we need to install the following –" }, { "code": null, "e": 26425, "s": 26349, "text": "pip install pandaspip install numpypip install sklearnpip install streamlit" }, { "code": null, "e": 26490, "s": 26425, "text": "The dataset can be found here: https://www.kaggle.com/uciml/iris" }, { "code": null, "e": 26496, "s": 26490, "text": "Code:" }, { "code": "import pandas as pdimport numpy as np df = pd.read_csv('BankNote_Authentication.csv')df.head()", "e": 26592, "s": 26496, "text": null }, { "code": null, "e": 26906, "s": 26592, "text": "Output:Now we drop the Id column first as it is not important for classifying the Iris species. Then we will split the dataset into training and testing dataset and will use a Random Forest Classifier. You can use any other classifier of your choice, for example, logistic regression, support vector machine, etc." 
}, { "code": null, "e": 26912, "s": 26906, "text": "Code:" }, { "code": "# Dropping the Id columndf.drop('Id', axis = 1, inplace = True) # Renaming the target column into numbers to aid training of the modeldf['Species']= df['Species'].map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2}) # splitting the data into the columns which need to be trained(X) and the target column(y)X = df.iloc[:, :-1]y = df.iloc[:, -1] # splitting data into training and testing data with 30 % of data as testing data respectivelyfrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) # importing the random forest classifier model and training it on the datasetfrom sklearn.ensemble import RandomForestClassifierclassifier = RandomForestClassifier()classifier.fit(X_train, y_train) # predicting on the test datasety_pred = classifier.predict(X_test) # finding out the accuracyfrom sklearn.metrics import accuracy_scorescore = accuracy_score(y_test, y_pred)", "e": 27885, "s": 26912, "text": null }, { "code": null, "e": 27936, "s": 27885, "text": "We get an accuracy of 95.55% which is pretty good." }, { "code": null, "e": 28126, "s": 27936, "text": "Now, in order to use this model to predict other unknown data, we need to save it. We can save it by using pickle, which is used for serializing and deserializing a Python object structure." }, { "code": null, "e": 28132, "s": 28126, "text": "Code:" }, { "code": "# pickling the modelimport picklepickle_out = open(\"classifier.pkl\", \"wb\")pickle.dump(classifier, pickle_out)pickle_out.close()", "e": 28260, "s": 28132, "text": null }, { "code": null, "e": 28401, "s": 28260, "text": "There will be a new file created called “classifier.pkl” in the same directory. 
Now we can get down to using Streamlit to deploy the model –" }, { "code": null, "e": 28453, "s": 28401, "text": "Paste the below code into another python file.Code:" }, { "code": "import pandas as pdimport numpy as npimport pickleimport streamlit as stfrom PIL import Image # loading in the model to predict on the datapickle_in = open('classifier.pkl', 'rb')classifier = pickle.load(pickle_in) def welcome(): return 'welcome all' # defining the function which will make the prediction using # the data which the user inputsdef prediction(sepal_length, sepal_width, petal_length, petal_width): prediction = classifier.predict( [[sepal_length, sepal_width, petal_length, petal_width]]) print(prediction) return prediction # this is the main function in which we define our webpage def main(): # giving the webpage a title st.title(\"Iris Flower Prediction\") # here we define some of the front end elements of the web page like # the font and background color, the padding and the text to be displayed html_temp = \"\"\" <div style =\"background-color:yellow;padding:13px\"> <h1 style =\"color:black;text-align:center;\">Streamlit Iris Flower Classifier ML App </h1> </div> \"\"\" # this line allows us to display the front end aspects we have # defined in the above code st.markdown(html_temp, unsafe_allow_html = True) # the following lines create text boxes in which the user can enter # the data required to make the prediction sepal_length = st.text_input(\"Sepal Length\", \"Type Here\") sepal_width = st.text_input(\"Sepal Width\", \"Type Here\") petal_length = st.text_input(\"Petal Length\", \"Type Here\") petal_width = st.text_input(\"Petal Width\", \"Type Here\") result =\"\" # the below line ensures that when the button called 'Predict' is clicked, # the prediction function defined above is called to make the prediction # and store it in the variable result if st.button(\"Predict\"): result = prediction(sepal_length, sepal_width, petal_length, petal_width) st.success('The output is {}'.format(result)) if __name__=='__main__': main()", "e": 30445, "s": 28453, "text": null }, { "code": null, "e": 30512, "s": 30445, "text": "You can run this by typing the following command in the terminal –" }, { "code": null, "e": 30533, "s": 30512, "text": "streamlit run app.py" }, { "code": null, "e": 30599, "s": 30533, "text": "app.py is the name of the file where we wrote the Streamlit code." }, { "code": null, "e": 30741, "s": 30599, "text": "The website will open in your browser and then you can test it. This method can be used to deploy other machine and deep learning models too." }, { "code": null, "e": 30758, "s": 30741, "text": "Python Framework" }, { "code": null, "e": 30775, "s": 30758, "text": "Machine Learning" }, { "code": null, "e": 30782, "s": 30775, "text": "Python" }, { "code": null, "e": 30799, "s": 30782, "text": "Machine Learning" }, { "code": null, "e": 30897, "s": 30799, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 30920, "s": 30897, "text": "ML | Linear Regression" }, { "code": null, "e": 30958, "s": 30920, "text": "Python | Decision tree implementation" }, { "code": null, "e": 30982, "s": 30958, "text": "Search Algorithms in AI" }, { "code": null, "e": 31016, "s": 30982, "text": "ML | Underfitting and Overfitting" }, { "code": null, "e": 31062, "s": 31016, "text": "Elbow Method for optimal value of k in KMeans" }, { "code": null, "e": 31090, "s": 31062, "text": "Read JSON file using Python" }, { "code": null, "e": 31140, "s": 31090, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 31162, "s": 31140, "text": "Python map() function" }, { "code": null, "e": 31180, "s": 31162, "text": "Python Dictionary" } ]
How to Convert PDF to Image in Linux Command Line? - GeeksforGeeks
25 Mar, 2021 Pdftoppm is a tool that converts PDF document files into .PNG format and many other formats. We can use this tool on Linux to convert the PDF into images. It also provides the features like the cropping image, set resolution, and scale, and many more. Now let’s see how to install the pdftoppm To install the pdftoppm, we need to install the poppler-utils package on the Linux system because the pdftoppm comes with the poppler package. To install the poppler-utils use the following commands: To install the poppler-utils on debian based system like Ubuntu and kali Linux use the following command: sudo apt install poppler-utils To install the poppler-utils on the RHEL/CentOS & Fedora use the following command: sudo dnf install poppler-utils To install the poppler-utils on Arch-based OS uses the following command: sudo pacman -S poppler Now we have installed the pdftoppm on the system. Now let’s see how to use the pdftoppm Now let’s convert the pdf into the images. To convert complete pdf into the images the following is the syntax : pdftoppm -<image_format> <pdf_filename <image_filenane> Here in the place of image_format place the format of the image like png with – and in place of pdf_filename mention the name of pdf and in place of image_filename mention the output filename. Here is one example of the above command: pdftoppm -png gfg.pdf gfg_d We can see in the above image that all pages have the name ends with the page number. This will be automatically done by the pdftoppm. Now let’s see how to convert the range of the PDF pages into the images. To do that the following is the syntax of the command: pdftoppm -<image_format> -f N -l N <pdf_filename> <image_name> Here, -f denote the first and N denote the page number, and -l denotes the last and N to the page number. Here is one example of the above command: pdftoppm -png -f 5 -l 10 gfg.pdf gfg_d We can see in the above image output the specified section of PDF is converted into the .pdf format image. To convert the specific one page into the image we can modify the above command like we will keep the both -f and -l number same as of the page to be converted into the image like pdftoppm -png -f 3 -l 3 gfg.pdf gfg_d To convert the first page into the image we can modify the above command as follows: pdftoppm -png -f 1 -l 1 pdf_name.pdf image_name.png Then the only first page will be converted into an image like: We can generate the images of pdf in grayscale and monochrome by using just simple command: For grayscale image: pdftoppm -png -gray pdf_name.pdf image_name For monochrome image: pdftoppm -png -mono pdf_name.pdf image_name Here is an example of the above command: Now let’s see how to adjust the DPI Quality of the output image. By default, the DPI quality of the output image is 150, but we can change it. To change the DPI quality we can use the -rx option to specify the X resolution and -ry option to specify the Y resolution of DPI. pdftoppm -png -rx 350 -ry 350 To know more about the pdftoppm you can see the man page or use the help command pdftoppm --help or man pdftoppm How To Linux-Unix Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments How to Install FFmpeg on Windows? How to Set Git Username and Password in GitBash? How to Install Jupyter Notebook on MacOS? How to Add External JAR File to an IntelliJ IDEA Project? How to Create and Setup Spring Boot Project in Eclipse IDE? 
Sed Command in Linux/Unix with examples grep command in Unix/Linux TCP Server-Client implementation in C AWK command in Unix/Linux with examples cp command in Linux with examples
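Since pdftoppm converts one PDF at a time, a small shell loop is handy when a whole directory needs converting. The loop below is an illustration only: the 300 DPI value and the *.pdf glob are arbitrary choices, and -r is the shorthand that sets both the X and Y resolution at once instead of -rx/-ry.

# Convert every PDF in the current directory to 300 DPI PNG pages.
# report.pdf becomes report-1.png, report-2.png, ... (page numbers may be zero-padded).
for f in *.pdf; do
    pdftoppm -png -r 300 "$f" "${f%.pdf}"
done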
[ { "code": null, "e": 24561, "s": 24533, "text": "\n25 Mar, 2021" }, { "code": null, "e": 24855, "s": 24561, "text": "Pdftoppm is a tool that converts PDF document files into .PNG format and many other formats. We can use this tool on Linux to convert the PDF into images. It also provides the features like the cropping image, set resolution, and scale, and many more. Now let’s see how to install the pdftoppm" }, { "code": null, "e": 25055, "s": 24855, "text": "To install the pdftoppm, we need to install the poppler-utils package on the Linux system because the pdftoppm comes with the poppler package. To install the poppler-utils use the following commands:" }, { "code": null, "e": 25161, "s": 25055, "text": "To install the poppler-utils on debian based system like Ubuntu and kali Linux use the following command:" }, { "code": null, "e": 25193, "s": 25161, "text": "sudo apt install poppler-utils " }, { "code": null, "e": 25277, "s": 25193, "text": "To install the poppler-utils on the RHEL/CentOS & Fedora use the following command:" }, { "code": null, "e": 25310, "s": 25277, "text": "sudo dnf install poppler-utils " }, { "code": null, "e": 25385, "s": 25310, "text": "To install the poppler-utils on Arch-based OS uses the following command:" }, { "code": null, "e": 25413, "s": 25385, "text": "sudo pacman -S poppler " }, { "code": null, "e": 25501, "s": 25413, "text": "Now we have installed the pdftoppm on the system. Now let’s see how to use the pdftoppm" }, { "code": null, "e": 25614, "s": 25501, "text": "Now let’s convert the pdf into the images. To convert complete pdf into the images the following is the syntax :" }, { "code": null, "e": 25670, "s": 25614, "text": "pdftoppm -<image_format> <pdf_filename <image_filenane>" }, { "code": null, "e": 25864, "s": 25670, "text": "Here in the place of image_format place the format of the image like png with – and in place of pdf_filename mention the name of pdf and in place of image_filename mention the output filename." }, { "code": null, "e": 25906, "s": 25864, "text": "Here is one example of the above command:" }, { "code": null, "e": 25935, "s": 25906, "text": "pdftoppm -png gfg.pdf gfg_d" }, { "code": null, "e": 26070, "s": 25935, "text": "We can see in the above image that all pages have the name ends with the page number. This will be automatically done by the pdftoppm." }, { "code": null, "e": 26198, "s": 26070, "text": "Now let’s see how to convert the range of the PDF pages into the images. To do that the following is the syntax of the command:" }, { "code": null, "e": 26261, "s": 26198, "text": "pdftoppm -<image_format> -f N -l N <pdf_filename> <image_name>" }, { "code": null, "e": 26409, "s": 26261, "text": "Here, -f denote the first and N denote the page number, and -l denotes the last and N to the page number. Here is one example of the above command:" }, { "code": null, "e": 26450, "s": 26409, "text": " pdftoppm -png -f 5 -l 10 gfg.pdf gfg_d" }, { "code": null, "e": 26557, "s": 26450, "text": "We can see in the above image output the specified section of PDF is converted into the .pdf format image." 
}, { "code": null, "e": 26737, "s": 26557, "text": "To convert the specific one page into the image we can modify the above command like we will keep the both -f and -l number same as of the page to be converted into the image like" }, { "code": null, "e": 26776, "s": 26737, "text": "pdftoppm -png -f 3 -l 3 gfg.pdf gfg_d" }, { "code": null, "e": 26861, "s": 26776, "text": "To convert the first page into the image we can modify the above command as follows:" }, { "code": null, "e": 26913, "s": 26861, "text": "pdftoppm -png -f 1 -l 1 pdf_name.pdf image_name.png" }, { "code": null, "e": 26976, "s": 26913, "text": "Then the only first page will be converted into an image like:" }, { "code": null, "e": 27068, "s": 26976, "text": "We can generate the images of pdf in grayscale and monochrome by using just simple command:" }, { "code": null, "e": 27089, "s": 27068, "text": "For grayscale image:" }, { "code": null, "e": 27142, "s": 27089, "text": "pdftoppm -png -gray pdf_name.pdf image_name " }, { "code": null, "e": 27164, "s": 27142, "text": "For monochrome image:" }, { "code": null, "e": 27209, "s": 27164, "text": "pdftoppm -png -mono pdf_name.pdf image_name " }, { "code": null, "e": 27250, "s": 27209, "text": "Here is an example of the above command:" }, { "code": null, "e": 27525, "s": 27250, "text": "Now let’s see how to adjust the DPI Quality of the output image. By default, the DPI quality of the output image is 150, but we can change it. To change the DPI quality we can use the -rx option to specify the X resolution and -ry option to specify the Y resolution of DPI." }, { "code": null, "e": 27556, "s": 27525, "text": "pdftoppm -png -rx 350 -ry 350 " }, { "code": null, "e": 27638, "s": 27556, "text": "To know more about the pdftoppm you can see the man page or use the help command " }, { "code": null, "e": 27655, "s": 27638, "text": "pdftoppm --help " }, { "code": null, "e": 27658, "s": 27655, "text": "or" }, { "code": null, "e": 27671, "s": 27658, "text": "man pdftoppm" }, { "code": null, "e": 27678, "s": 27671, "text": "How To" }, { "code": null, "e": 27689, "s": 27678, "text": "Linux-Unix" }, { "code": null, "e": 27787, "s": 27689, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27796, "s": 27787, "text": "Comments" }, { "code": null, "e": 27809, "s": 27796, "text": "Old Comments" }, { "code": null, "e": 27843, "s": 27809, "text": "How to Install FFmpeg on Windows?" }, { "code": null, "e": 27892, "s": 27843, "text": "How to Set Git Username and Password in GitBash?" }, { "code": null, "e": 27934, "s": 27892, "text": "How to Install Jupyter Notebook on MacOS?" }, { "code": null, "e": 27992, "s": 27934, "text": "How to Add External JAR File to an IntelliJ IDEA Project?" }, { "code": null, "e": 28052, "s": 27992, "text": "How to Create and Setup Spring Boot Project in Eclipse IDE?" }, { "code": null, "e": 28092, "s": 28052, "text": "Sed Command in Linux/Unix with examples" }, { "code": null, "e": 28119, "s": 28092, "text": "grep command in Unix/Linux" }, { "code": null, "e": 28157, "s": 28119, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 28197, "s": 28157, "text": "AWK command in Unix/Linux with examples" } ]
Auto-Forecasting in Python with ThymeBoost | by Tyler Blume | Towards Data Science
TLDR: When comparing ThymeBoost to a few other popular time-series methods, we find that it can generate very competitive forecasts. Spoiler warning: ThymeBoost wins. But beyond winning, we see that there are many benefits of the ThymeBoost framework, even in the cases where it loses. For more examples you can view the ThymeBoost Github. First post in the series: Time Series Forecasting with ThymeBoost. If you haven’t viewed it yet then check it out! The examples used in this competition are fairly popular, but some of the code used for data wrangling has been taken from this article (thanks Tomonori Masui!). You should check out the article to see how a few other models perform on these datasets. The first example is a fairly well known time-series: the Airline Passenger dataset. This data is widely available and one source is from Kaggle or from the Github. Make sure you have the latest ThymeBoost package which can be installed via pip: pip install ThymeBoost --upgrade Now that we are up-to-date, let’s take a look! import numpy as npimport pandas as pdfrom matplotlib import pyplot as pltfrom ThymeBoost import ThymeBoost as tbimport seaborn as snssns.set_style("darkgrid")#Airlines Data, if your csv is in a different filepath adjust thisdf = pd.read_csv('AirPassengers.csv')df.index = pd.to_datetime(df['Month'])y = df['#Passengers']plt.plot(y)plt.show() This time series is quite interesting! A clear trend along with multiplicative seasonality. Definitely a good benchmark for any forecasting methodology. In order to judge the forecasting methods, we will split the data into a standard train/test split where the last 30% of the data is held out. The outcome of the testing procedure can change depending on this split. In an attempt to remain unbiased, the train/test splits from the aforementioned article will be used. Any tuning or model selection will be done on the training set while the test set will be used to judge the methods. The goal (at least for me) is to see if ThymeBoost can be competitive against other vetted methods. If you were to implement a forecasting model in production, then you may want to use a more robust method to judge models such as time-series cross validation. test_len = int(len(y) * 0.3)al_train, al_test = y.iloc[:-test_len], y.iloc[-test_len:] First, let’s try out an Auto-Arima implementation: Pmdarima. import pmdarima as pm# Fit a simple auto_arima modelarima = pm.auto_arima(al_train, seasonal=True, m=12, trace=True, error_action='warn', n_fits=50)pmd_predictions = arima.predict(n_periods=len(al_test))arima_mae = np.mean(np.abs(al_test - pmd_predictions))arima_rmse = (np.mean((al_test - pmd_predictions)**2))**.5arima_mape = np.sum(np.abs(pmd_predictions - al_test)) / (np.sum((np.abs(al_test)))) Next, we will give Prophet a shot. from fbprophet import Prophetprophet_train_df = al_train.reset_index()prophet_train_df.columns = ['ds', 'y']prophet = Prophet(seasonality_mode='multiplicative')prophet.fit(prophet_train_df)future_df = prophet.make_future_dataframe(periods=len(al_test), freq='M')prophet_forecast = prophet.predict(future_df)prophet_predictions = prophet_forecast['yhat'].iloc[-len(al_test):]prophet_mae = np.mean(np.abs(al_test - prophet_predictions.values))prophet_rmse = (np.mean((al_test - prophet_predictions.values)**2))**.5prophet_mape = np.sum(np.abs(prophet_predictions.values - al_test)) / (np.sum((np.abs(al_test)))) Finally, implementing ThymeBoost. Here we are using a new method: ‘autofit’ from the latest version of the package. 
This method will try several simple implementations given a possible seasonality. It is an experimental feature and only works for traditional time-series. A current issue is that it tries several redundant parameter settings, this will be fixed in a future release, speeding up the process! Additionally, if you intend to pass exogenous factors we recommend using the optimize method found in the README instead. boosted_model = tb.ThymeBoost(verbose=0)output = boosted_model.autofit(al_train, seasonal_period=12)predicted_output = boosted_model.predict(output, len(al_test))tb_mae = np.mean(np.abs(al_test - predicted_output['predictions']))tb_rmse = (np.mean((al_test - predicted_output['predictions'])**2))**.5tb_mape = np.sum(np.abs(predicted_output['predictions'] - al_test)) / (np.sum((np.abs(al_test)))) By setting verbose=0 when building the class it silences the logging of each individual model. Instead, a progress bar will be displayed to denote progress through the different parameter settings as well as the ‘optimal’ settings found. By default, ThymeBoost’s autofit method will do 3 rounds of fitting and forecasting. This process rolls through the last 6 values of the training set to choose the ‘best’ settings. For this data, the optimal settings found were: Optimal model configuration: {'trend_estimator': 'linear', 'fit_type': 'local', 'seasonal_period': [12, 0], 'seasonal_estimator': 'fourier', 'connectivity_constraint': True, 'global_cost': 'maicc', 'additive': False, 'seasonality_weights': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5.]), 'exogenous': None}Params ensembled: False Some important things to note: ‘additive’ is set to false, meaning that the entire process is ‘multiplicative’ i.e. the log is taken of the input series. Normal ‘multiplicative’ seasonality is not yet implemented.There is an array of ‘seasonality_weights’ where the last 2 ‘seasonal_periods’ are set to 5, meaning those periods impact the seasonality component 5 times more than the other periods.‘seasonal_period’ is [12, 0] so it cycles back and forth between measuring seasonality and not measuring it. This is interesting behavior but it essentially adds some regularization to the seasonality component. ‘additive’ is set to false, meaning that the entire process is ‘multiplicative’ i.e. the log is taken of the input series. Normal ‘multiplicative’ seasonality is not yet implemented. There is an array of ‘seasonality_weights’ where the last 2 ‘seasonal_periods’ are set to 5, meaning those periods impact the seasonality component 5 times more than the other periods. ‘seasonal_period’ is [12, 0] so it cycles back and forth between measuring seasonality and not measuring it. This is interesting behavior but it essentially adds some regularization to the seasonality component. Speaking of components, let’s take a look: boosted_model.plot_components(output, predicted_output) All in all, ThymeBoost decided to take the log of the input series, fit a linear changepoint model, and add a ‘changepoint’ to the seasonality component. Were these correct decisions? 
Let’s take a look at the error metrics: These error metrics speak for themselves, ThymeBoost outperformed the other methods across the board. But what do the forecasts actually look like? plt.plot(pmd_predictions, label='Pmdarima')plt.plot(al_test.values, label='Actuals')plt.plot(prophet_predictions.values, label='Prophet')plt.plot(predicted_output['predictions'].values, label='ThymeBoost')plt.legend()plt.show() Clearly, all methods picked up very similar signals. It seems as though ThymeBoost was just slightly better in terms of the seasonal shape, potentially due to the seasonal weighting it decided to use. The next dataset is the U.S. Wholesale Price Index (WPI) from 1960 to 1990 and once again the example is taken from the aforementioned article. This dataset unfortunately does not come with ThymeBoost but we can access it via Statsmodels: from statsmodels.datasets import webusedta = webuse('wpi1')ts_wpi = dta['wpi']ts_wpi.index = pd.to_datetime(dta['t'])test_len = int(len(ts_wpi) * 0.25)ts_wpi = ts_wpi.astype(float)wpi_train, wpi_test = ts_wpi.iloc[:-test_len], ts_wpi.iloc[-test_len:]plt.plot(ts_wpi)plt.show() This time-series is fairly different than the previous. It appears to lack any seasonality so those settings will be disabled (although if we accidentally feed seasonality settings to ThymeBoost it does not change much). Let’s follow the same process as before and first fit Pmdarima: import pmdarima as pm# Fit a simple auto_arima modelarima = pm.auto_arima(wpi_train, seasonal=False, trace=True, error_action='warn', n_fits=50)pmd_predictions = arima.predict(n_periods=len(wpi_test))arima_mae = np.mean(np.abs(wpi_test - pmd_predictions))arima_rmse = (np.mean((wpi_test - pmd_predictions)**2))**.5arima_mape = np.sum(np.abs(pmd_predictions - wpi_test)) / (np.sum((np.abs(wpi_test)))) Next, Prophet: from fbprophet import Prophetprophet_train_df = wpi_train.reset_index()prophet_train_df.columns = ['ds', 'y']prophet = Prophet(yearly_seasonality=False)prophet.fit(prophet_train_df)future_df = prophet.make_future_dataframe(periods=len(wpi_test))prophet_forecast = prophet.predict(future_df)prophet_predictions = prophet_forecast['yhat'].iloc[-len(wpi_test):]prophet_mae = np.mean(np.abs(wpi_test - prophet_predictions.values))prophet_rmse = (np.mean((wpi_test - prophet_predictions.values)**2))**.5prophet_mape = np.sum(np.abs(prophet_predictions.values - wpi_test)) / (np.sum((np.abs(wpi_test)))) And finally, ThymeBoost: boosted_model = tb.ThymeBoost(verbose=0)output = boosted_model.autofit(wpi_train, seasonal_period=0)predicted_output = boosted_model.predict(output, forecast_horizon=len(wpi_test))tb_mae = np.mean(np.abs((wpi_test.values) - predicted_output['predictions']))tb_rmse = (np.mean((wpi_test.values - predicted_output['predictions'].values)**2))**.5tb_mape = np.sum(np.abs(predicted_output['predictions'].values - wpi_test.values)) / (np.sum((np.abs(wpi_test.values)))) And the optimal settings, which also can be accessed directly through: print(boosted_model.optimized_params) which returns: {'trend_estimator': 'linear', 'fit_type': 'local', 'seasonal_period': 0, 'seasonal_estimator': 'fourier', 'connectivity_constraint': False, 'global_cost': 'mse', 'additive': True, 'exogenous': None} Like last time, ThymeBoost chose a linear model with local fit a.k.a. a linear changepoint model for the trend component. Except this time ‘connectivity_constraint’ is set to False which relaxes that the trend lines at the changepoint connect AT the changepoint. 
Let’s take a look at the error metrics: Once again ThymeBoost outperforms the other two methods. And the forecasts: plt.plot(pmd_predictions, label='Pmdarima')plt.plot(wpi_test.values, label='Actuals')plt.plot(prophet_predictions.values, label='Prophet')plt.plot(predicted_output['predictions'].values, label='ThymeBoost')plt.legend()plt.show() The goal of this article was to see if ThymeBoost can compete against other popular methods, it clearly can in these instances. But don’t be fooled into thinking ThymeBoost is a panacea (although I wish it was!). There are many times where it is outperformed, however, one major benefit is that any method which outperforms ThymeBoost could potentially be added into the framework. Once a method is added, it gains access to some interesting features via the boosting process. An example of this can be found in the article already referenced. For the sunspots dataset, ThymeBoost’s autofit does not do any ARIMA modelling so it has poor results compared to the ARIMA(8, 0, 1) found from sktime. However, if we use the standard fit method and pass that ARIMA order with a local fit (so we allow changepoints) we outperform sktime’s Auto-ARIMA. I will leave that as an exercise for you to try, simply use these settings: output = boosted_model.fit(sun_train.values, trend_estimator='arima', arima_order=(8, 0, 1), global_cost='maicc', seasonal_period=0, fit_type='local' ) As mentioned before, this package is still in early development. The sheer number of possible configurations makes debugging an involved procedure, so use at your own risk. But please, play around and open any issues you encounter on GitHub!
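The single train/test split used in these comparisons is easy to reproduce, but as noted earlier a rolling time-series cross-validation is a more robust way to rank the models. Below is a minimal sketch of that idea with scikit-learn's TimeSeriesSplit; it reuses the ThymeBoost calls shown in the article, while the number of splits, the MAE-only scoring, and the helper name are arbitrary choices.

import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from ThymeBoost import ThymeBoost as tb

def rolling_mae(y, n_splits=3, seasonal_period=12):
    # Average MAE over expanding-window splits; later windows forecast later data.
    maes = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(y):
        train, test = y.iloc[train_idx], y.iloc[test_idx]
        model = tb.ThymeBoost(verbose=0)
        output = model.autofit(train, seasonal_period=seasonal_period)
        preds = model.predict(output, forecast_horizon=len(test))['predictions']
        maes.append(np.mean(np.abs(test.values - preds.values)))
    return np.mean(maes)

# e.g. rolling_mae(al_train) scores the airline model over several forecast origins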
[ { "code": null, "e": 458, "s": 172, "text": "TLDR: When comparing ThymeBoost to a few other popular time-series methods, we find that it can generate very competitive forecasts. Spoiler warning: ThymeBoost wins. But beyond winning, we see that there are many benefits of the ThymeBoost framework, even in the cases where it loses." }, { "code": null, "e": 512, "s": 458, "text": "For more examples you can view the ThymeBoost Github." }, { "code": null, "e": 627, "s": 512, "text": "First post in the series: Time Series Forecasting with ThymeBoost. If you haven’t viewed it yet then check it out!" }, { "code": null, "e": 879, "s": 627, "text": "The examples used in this competition are fairly popular, but some of the code used for data wrangling has been taken from this article (thanks Tomonori Masui!). You should check out the article to see how a few other models perform on these datasets." }, { "code": null, "e": 1125, "s": 879, "text": "The first example is a fairly well known time-series: the Airline Passenger dataset. This data is widely available and one source is from Kaggle or from the Github. Make sure you have the latest ThymeBoost package which can be installed via pip:" }, { "code": null, "e": 1158, "s": 1125, "text": "pip install ThymeBoost --upgrade" }, { "code": null, "e": 1205, "s": 1158, "text": "Now that we are up-to-date, let’s take a look!" }, { "code": null, "e": 1547, "s": 1205, "text": "import numpy as npimport pandas as pdfrom matplotlib import pyplot as pltfrom ThymeBoost import ThymeBoost as tbimport seaborn as snssns.set_style(\"darkgrid\")#Airlines Data, if your csv is in a different filepath adjust thisdf = pd.read_csv('AirPassengers.csv')df.index = pd.to_datetime(df['Month'])y = df['#Passengers']plt.plot(y)plt.show()" }, { "code": null, "e": 1700, "s": 1547, "text": "This time series is quite interesting! A clear trend along with multiplicative seasonality. Definitely a good benchmark for any forecasting methodology." }, { "code": null, "e": 2018, "s": 1700, "text": "In order to judge the forecasting methods, we will split the data into a standard train/test split where the last 30% of the data is held out. The outcome of the testing procedure can change depending on this split. In an attempt to remain unbiased, the train/test splits from the aforementioned article will be used." }, { "code": null, "e": 2395, "s": 2018, "text": "Any tuning or model selection will be done on the training set while the test set will be used to judge the methods. The goal (at least for me) is to see if ThymeBoost can be competitive against other vetted methods. If you were to implement a forecasting model in production, then you may want to use a more robust method to judge models such as time-series cross validation." }, { "code": null, "e": 2482, "s": 2395, "text": "test_len = int(len(y) * 0.3)al_train, al_test = y.iloc[:-test_len], y.iloc[-test_len:]" }, { "code": null, "e": 2543, "s": 2482, "text": "First, let’s try out an Auto-Arima implementation: Pmdarima." 
}, { "code": null, "e": 3048, "s": 2543, "text": "import pmdarima as pm# Fit a simple auto_arima modelarima = pm.auto_arima(al_train, seasonal=True, m=12, trace=True, error_action='warn', n_fits=50)pmd_predictions = arima.predict(n_periods=len(al_test))arima_mae = np.mean(np.abs(al_test - pmd_predictions))arima_rmse = (np.mean((al_test - pmd_predictions)**2))**.5arima_mape = np.sum(np.abs(pmd_predictions - al_test)) / (np.sum((np.abs(al_test))))" }, { "code": null, "e": 3083, "s": 3048, "text": "Next, we will give Prophet a shot." }, { "code": null, "e": 3693, "s": 3083, "text": "from fbprophet import Prophetprophet_train_df = al_train.reset_index()prophet_train_df.columns = ['ds', 'y']prophet = Prophet(seasonality_mode='multiplicative')prophet.fit(prophet_train_df)future_df = prophet.make_future_dataframe(periods=len(al_test), freq='M')prophet_forecast = prophet.predict(future_df)prophet_predictions = prophet_forecast['yhat'].iloc[-len(al_test):]prophet_mae = np.mean(np.abs(al_test - prophet_predictions.values))prophet_rmse = (np.mean((al_test - prophet_predictions.values)**2))**.5prophet_mape = np.sum(np.abs(prophet_predictions.values - al_test)) / (np.sum((np.abs(al_test))))" }, { "code": null, "e": 4223, "s": 3693, "text": "Finally, implementing ThymeBoost. Here we are using a new method: ‘autofit’ from the latest version of the package. This method will try several simple implementations given a possible seasonality. It is an experimental feature and only works for traditional time-series. A current issue is that it tries several redundant parameter settings, this will be fixed in a future release, speeding up the process! Additionally, if you intend to pass exogenous factors we recommend using the optimize method found in the README instead." }, { "code": null, "e": 4651, "s": 4223, "text": "boosted_model = tb.ThymeBoost(verbose=0)output = boosted_model.autofit(al_train, seasonal_period=12)predicted_output = boosted_model.predict(output, len(al_test))tb_mae = np.mean(np.abs(al_test - predicted_output['predictions']))tb_rmse = (np.mean((al_test - predicted_output['predictions'])**2))**.5tb_mape = np.sum(np.abs(predicted_output['predictions'] - al_test)) / (np.sum((np.abs(al_test))))" }, { "code": null, "e": 5118, "s": 4651, "text": "By setting verbose=0 when building the class it silences the logging of each individual model. Instead, a progress bar will be displayed to denote progress through the different parameter settings as well as the ‘optimal’ settings found. By default, ThymeBoost’s autofit method will do 3 rounds of fitting and forecasting. This process rolls through the last 6 values of the training set to choose the ‘best’ settings. 
For this data, the optimal settings found were:" }, { "code": null, "e": 5843, "s": 5118, "text": "Optimal model configuration: {'trend_estimator': 'linear', 'fit_type': 'local', 'seasonal_period': [12, 0], 'seasonal_estimator': 'fourier', 'connectivity_constraint': True, 'global_cost': 'maicc', 'additive': False, 'seasonality_weights': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5.]), 'exogenous': None}Params ensembled: False" }, { "code": null, "e": 5874, "s": 5843, "text": "Some important things to note:" }, { "code": null, "e": 6452, "s": 5874, "text": "‘additive’ is set to false, meaning that the entire process is ‘multiplicative’ i.e. the log is taken of the input series. Normal ‘multiplicative’ seasonality is not yet implemented.There is an array of ‘seasonality_weights’ where the last 2 ‘seasonal_periods’ are set to 5, meaning those periods impact the seasonality component 5 times more than the other periods.‘seasonal_period’ is [12, 0] so it cycles back and forth between measuring seasonality and not measuring it. This is interesting behavior but it essentially adds some regularization to the seasonality component." }, { "code": null, "e": 6635, "s": 6452, "text": "‘additive’ is set to false, meaning that the entire process is ‘multiplicative’ i.e. the log is taken of the input series. Normal ‘multiplicative’ seasonality is not yet implemented." }, { "code": null, "e": 6820, "s": 6635, "text": "There is an array of ‘seasonality_weights’ where the last 2 ‘seasonal_periods’ are set to 5, meaning those periods impact the seasonality component 5 times more than the other periods." }, { "code": null, "e": 7032, "s": 6820, "text": "‘seasonal_period’ is [12, 0] so it cycles back and forth between measuring seasonality and not measuring it. This is interesting behavior but it essentially adds some regularization to the seasonality component." }, { "code": null, "e": 7075, "s": 7032, "text": "Speaking of components, let’s take a look:" }, { "code": null, "e": 7131, "s": 7075, "text": "boosted_model.plot_components(output, predicted_output)" }, { "code": null, "e": 7285, "s": 7131, "text": "All in all, ThymeBoost decided to take the log of the input series, fit a linear changepoint model, and add a ‘changepoint’ to the seasonality component." }, { "code": null, "e": 7355, "s": 7285, "text": "Were these correct decisions? Let’s take a look at the error metrics:" }, { "code": null, "e": 7503, "s": 7355, "text": "These error metrics speak for themselves, ThymeBoost outperformed the other methods across the board. But what do the forecasts actually look like?" }, { "code": null, "e": 7731, "s": 7503, "text": "plt.plot(pmd_predictions, label='Pmdarima')plt.plot(al_test.values, label='Actuals')plt.plot(prophet_predictions.values, label='Prophet')plt.plot(predicted_output['predictions'].values, label='ThymeBoost')plt.legend()plt.show()" }, { "code": null, "e": 7932, "s": 7731, "text": "Clearly, all methods picked up very similar signals. It seems as though ThymeBoost was just slightly better in terms of the seasonal shape, potentially due to the seasonal weighting it decided to use." 
}, { "code": null, "e": 8076, "s": 7932, "text": "The next dataset is the U.S. Wholesale Price Index (WPI) from 1960 to 1990 and once again the example is taken from the aforementioned article." }, { "code": null, "e": 8171, "s": 8076, "text": "This dataset unfortunately does not come with ThymeBoost but we can access it via Statsmodels:" }, { "code": null, "e": 8449, "s": 8171, "text": "from statsmodels.datasets import webusedta = webuse('wpi1')ts_wpi = dta['wpi']ts_wpi.index = pd.to_datetime(dta['t'])test_len = int(len(ts_wpi) * 0.25)ts_wpi = ts_wpi.astype(float)wpi_train, wpi_test = ts_wpi.iloc[:-test_len], ts_wpi.iloc[-test_len:]plt.plot(ts_wpi)plt.show()" }, { "code": null, "e": 8670, "s": 8449, "text": "This time-series is fairly different than the previous. It appears to lack any seasonality so those settings will be disabled (although if we accidentally feed seasonality settings to ThymeBoost it does not change much)." }, { "code": null, "e": 8734, "s": 8670, "text": "Let’s follow the same process as before and first fit Pmdarima:" }, { "code": null, "e": 9219, "s": 8734, "text": "import pmdarima as pm# Fit a simple auto_arima modelarima = pm.auto_arima(wpi_train, seasonal=False, trace=True, error_action='warn', n_fits=50)pmd_predictions = arima.predict(n_periods=len(wpi_test))arima_mae = np.mean(np.abs(wpi_test - pmd_predictions))arima_rmse = (np.mean((wpi_test - pmd_predictions)**2))**.5arima_mape = np.sum(np.abs(pmd_predictions - wpi_test)) / (np.sum((np.abs(wpi_test))))" }, { "code": null, "e": 9234, "s": 9219, "text": "Next, Prophet:" }, { "code": null, "e": 9832, "s": 9234, "text": "from fbprophet import Prophetprophet_train_df = wpi_train.reset_index()prophet_train_df.columns = ['ds', 'y']prophet = Prophet(yearly_seasonality=False)prophet.fit(prophet_train_df)future_df = prophet.make_future_dataframe(periods=len(wpi_test))prophet_forecast = prophet.predict(future_df)prophet_predictions = prophet_forecast['yhat'].iloc[-len(wpi_test):]prophet_mae = np.mean(np.abs(wpi_test - prophet_predictions.values))prophet_rmse = (np.mean((wpi_test - prophet_predictions.values)**2))**.5prophet_mape = np.sum(np.abs(prophet_predictions.values - wpi_test)) / (np.sum((np.abs(wpi_test))))" }, { "code": null, "e": 9857, "s": 9832, "text": "And finally, ThymeBoost:" }, { "code": null, "e": 10351, "s": 9857, "text": "boosted_model = tb.ThymeBoost(verbose=0)output = boosted_model.autofit(wpi_train, seasonal_period=0)predicted_output = boosted_model.predict(output, forecast_horizon=len(wpi_test))tb_mae = np.mean(np.abs((wpi_test.values) - predicted_output['predictions']))tb_rmse = (np.mean((wpi_test.values - predicted_output['predictions'].values)**2))**.5tb_mape = np.sum(np.abs(predicted_output['predictions'].values - wpi_test.values)) / (np.sum((np.abs(wpi_test.values))))" }, { "code": null, "e": 10422, "s": 10351, "text": "And the optimal settings, which also can be accessed directly through:" }, { "code": null, "e": 10460, "s": 10422, "text": "print(boosted_model.optimized_params)" }, { "code": null, "e": 10475, "s": 10460, "text": "which returns:" }, { "code": null, "e": 10674, "s": 10475, "text": "{'trend_estimator': 'linear', 'fit_type': 'local', 'seasonal_period': 0, 'seasonal_estimator': 'fourier', 'connectivity_constraint': False, 'global_cost': 'mse', 'additive': True, 'exogenous': None}" }, { "code": null, "e": 10937, "s": 10674, "text": "Like last time, ThymeBoost chose a linear model with local fit a.k.a. a linear changepoint model for the trend component. 
Except this time ‘connectivity_constraint’ is set to False which relaxes that the trend lines at the changepoint connect AT the changepoint." }, { "code": null, "e": 10977, "s": 10937, "text": "Let’s take a look at the error metrics:" }, { "code": null, "e": 11034, "s": 10977, "text": "Once again ThymeBoost outperforms the other two methods." }, { "code": null, "e": 11053, "s": 11034, "text": "And the forecasts:" }, { "code": null, "e": 11282, "s": 11053, "text": "plt.plot(pmd_predictions, label='Pmdarima')plt.plot(wpi_test.values, label='Actuals')plt.plot(prophet_predictions.values, label='Prophet')plt.plot(predicted_output['predictions'].values, label='ThymeBoost')plt.legend()plt.show()" }, { "code": null, "e": 11759, "s": 11282, "text": "The goal of this article was to see if ThymeBoost can compete against other popular methods, it clearly can in these instances. But don’t be fooled into thinking ThymeBoost is a panacea (although I wish it was!). There are many times where it is outperformed, however, one major benefit is that any method which outperforms ThymeBoost could potentially be added into the framework. Once a method is added, it gains access to some interesting features via the boosting process." }, { "code": null, "e": 12126, "s": 11759, "text": "An example of this can be found in the article already referenced. For the sunspots dataset, ThymeBoost’s autofit does not do any ARIMA modelling so it has poor results compared to the ARIMA(8, 0, 1) found from sktime. However, if we use the standard fit method and pass that ARIMA order with a local fit (so we allow changepoints) we outperform sktime’s Auto-ARIMA." }, { "code": null, "e": 12202, "s": 12126, "text": "I will leave that as an exercise for you to try, simply use these settings:" }, { "code": null, "e": 12510, "s": 12202, "text": "output = boosted_model.fit(sun_train.values, trend_estimator='arima', arima_order=(8, 0, 1), global_cost='maicc', seasonal_period=0, fit_type='local' )" } ]
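To make that closing exercise concrete, a minimal sketch could look like the following. The data loading, the 70/30 train/test split, and the package import alias are assumptions (the article does not show how sun_train was built); the fit settings are exactly the ones listed above.

import numpy as np
import statsmodels.api as sm
from ThymeBoost import ThymeBoost as tb  # assumed import alias, matching the tb.ThymeBoost(...) calls above

# Yearly sunspot activity from statsmodels; the 70/30 split is an assumption.
sunspots = sm.datasets.sunspots.load_pandas().data['SUNACTIVITY']
test_len = int(len(sunspots) * 0.3)
sun_train, sun_test = sunspots.iloc[:-test_len], sunspots.iloc[-test_len:]

boosted_model = tb.ThymeBoost(verbose=0)
output = boosted_model.fit(sun_train.values,
                           trend_estimator='arima',
                           arima_order=(8, 0, 1),
                           global_cost='maicc',
                           seasonal_period=0,
                           fit_type='local')
predicted_output = boosted_model.predict(output, forecast_horizon=len(sun_test))
sun_mae = np.mean(np.abs(sun_test.values - predicted_output['predictions'].values))
print(f'ThymeBoost sunspots MAE: {sun_mae:.3f}')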
How to Easily Set Up M1 MacBooks for Data Science and Machine Learning | by Dario Radečić | Towards Data Science
Configuring M1 Macs for data science can be a pain in the bottom. You can either go with the simpler option and run everything under Rosetta, or install dependencies manually like a madman and face a never-ending log of error messages. The first option is fine, but Python won't run natively, so some performance is lost. The second is, well, tedious and nerve-racking. But there's a third option. Today you'll learn how to set up Python to run natively through Miniforge on any M1 chip. We'll also go through some examples to explore if Python really is running natively. The article is structured as follows: Installing and Configuring Miniforge Performance Testing Final Thoughts I've spent so much time configuring the M1 Mac for data science. It never worked without a flaw. Until I found this option. It will take you between 5 and 10 minutes to set up completely, depending on the Internet speed. To start, you'll need to install Homebrew. It's a package manager for Mac, and you can install it by executing the following line from the Terminal: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Keep in mind — if you're setting up a new M1 Mac, it's likely you won't have XCode build tools installed, which are required for Homebrew. The Terminal will inform you if these are missing and will ask if you want to install them. Once both XCode build tools and Homebrew are installed, you can restart the Terminal and install Miniforge: brew install miniforge It's a couple hundred MB download, so it will take some time to complete. Once done, restart the Terminal again. That's it! Miniforge is now installed and you're ready to create virtual environments and initialize conda. The following Terminal line will create a virtual environment called `base_env` based on Python 3.8: conda create --name base_env python=3.8 Finally, initialize conda for the Z shell (zsh): conda init zsh Just for fun, restart the Terminal once again before activating the environment. After the `init` was called, the `base` environment will be activated by default. You can change it by executing the following line: conda activate base_env You should see something like this: As the last step, let's install a couple of Python libraries through conda: conda install numpy pandas matplotlib plotly scikit-learn jupyter jupyterlab That's all. Let's make a couple of tests next. Open up a Jupyter Lab from the virtual environment if you're following along. To start, let's import the common data science suspects — Numpy, Pandas, and Scipy — just to verify everything works correctly: Next, let's make a simple loop without any libraries. Here's the code: As you can see, the cell took 7.5 seconds to complete. To verify the native Python version was used, and not the Intel version under Rosetta, we can check the Architecture values for Python3.8 in Activity Monitor: Let's do the next test with Numpy. The code on the following image generates a large array of random integers, calculates the logarithm and square root: And here's the activity monitor: As you can see, Numpy works like a charm. Finally, let's do the test with Pandas. We'll do the same operations as with Numpy, so there's no need for further explanations: Let's take a look at the activity monitor once again: And there you have it — proof that both Python and its data science libraries can be configured without breaking a sweat. Let's wrap things up next.
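(Before wrapping up, for reference: the timing code above appears only as screenshots, so the following is an approximate sketch rather than the article's exact code. The array size of 10 million elements is an assumption, and the platform.machine() call is an extra, programmatic way to confirm the architecture next to the Activity Monitor check described above: 'arm64' means native Apple Silicon, 'x86_64' means Rosetta 2.)

import platform
import time
import numpy as np
import pandas as pd

# Programmatic counterpart to the Activity Monitor check.
print(platform.machine())  # 'arm64' = native, 'x86_64' = Rosetta 2

# NumPy test: random integers, then logarithm and square root (size is an assumption).
start = time.time()
arr = np.random.randint(1, 100, size=10_000_000)
np.log(arr)
np.sqrt(arr)
print(f'NumPy took {time.time() - start:.2f} s')

# Pandas test: the same operations on a DataFrame column.
start = time.time()
df = pd.DataFrame({'x': arr})
df['log_x'] = np.log(df['x'])
df['sqrt_x'] = np.sqrt(df['x'])
print(f'Pandas took {time.time() - start:.2f} s')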
To conclude — there’s no need to bang your head against a wall when configuring a new M1 Mac for data science. Sure, the process isn’t the same as with Intel’s (unless you’re using Miniforge), but the process is still simple. Stay tuned for more M1 tests and detailed comparisons with its bigger brother — 16" Intel i9 from 2019. Thanks for reading. Loved the article? Become a Medium member to continue learning without limits. I’ll receive a portion of your membership fee if you use the following link, with no extra cost to you. medium.com What’s New In Python 3.10–4 Amazing Features You Should Try How to Schedule Python Scripts With Cron — The Only Guide You’ll Ever Need Dask Delayed — How to Parallelize Your Python Code With Ease How to Create PDF Reports With Python — The Essential Guide Become a Data Scientist in 2021 Even Without a College Degree Follow me on Medium for more stories like this Sign up for my newsletter Connect on LinkedIn
[ { "code": null, "e": 408, "s": 172, "text": "Configuring M1 Macs for data science can be a pain in the bottom. You can either go with the simpler option and run everything under Rosseta, or install dependencies manually like a madman and face a never-ending log of error messages." }, { "code": null, "e": 542, "s": 408, "text": "The first option is fine, but Python won’t run natively, so some performance is lost. The second is, well, tedious and nerve-racking." }, { "code": null, "e": 570, "s": 542, "text": "But there’s a third option." }, { "code": null, "e": 745, "s": 570, "text": "Today you’ll learn how to set up Python to run natively through Miniforge on any M1 chip. We’ll also go through some examples to explore if Python really is running natively." }, { "code": null, "e": 783, "s": 745, "text": "The article is structured as follows:" }, { "code": null, "e": 820, "s": 783, "text": "Installing and Configuring Miniforge" }, { "code": null, "e": 840, "s": 820, "text": "Performance Testing" }, { "code": null, "e": 855, "s": 840, "text": "Final Thoughts" }, { "code": null, "e": 1076, "s": 855, "text": "I’ve spent so much time configuring the M1 Mac for data science. It never worked without a flaw. Until I found this option. It will take you between 5 and 10 minutes to set up completely, depending on the Internet speed." }, { "code": null, "e": 1225, "s": 1076, "text": "To start, you’ll need to install Homebrew. It’s a package manager for Mac, and you can install it by executing the following line from the Terminal:" }, { "code": null, "e": 1321, "s": 1225, "text": "/bin/bash -c “$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"" }, { "code": null, "e": 1552, "s": 1321, "text": "Keep in mind — if you’re setting up a new M1 Mac, it’s likely you won’t have XCode build tools installed, which are required for Homebrew. The Terminal will inform you if these are missing and will ask if you want to install them." }, { "code": null, "e": 1660, "s": 1552, "text": "Once both XCode build tools and Homebrew are installed, you can restart the Terminal and install Miniforge:" }, { "code": null, "e": 1683, "s": 1660, "text": "brew install miniforge" }, { "code": null, "e": 1796, "s": 1683, "text": "It’s a couple hundred MB download, so it will take some time to complete. Once done, restart the Terminal again." }, { "code": null, "e": 2005, "s": 1796, "text": "That’s it! Miniforge is now installed and you’re ready to create virtual environments and initialize conda. The following Terminal line will create a virtual environment called `base_env` based on Python 3.8:" }, { "code": null, "e": 2045, "s": 2005, "text": "conda create — name base_env python=3.8" }, { "code": null, "e": 2098, "s": 2045, "text": "Finally, initialize the conda for the Z shell (zsh):" }, { "code": null, "e": 2113, "s": 2098, "text": "conda init zsh" }, { "code": null, "e": 2327, "s": 2113, "text": "Just for fun, restart the Terminal once again before activating the environment. After the `init` was called, the `base` environment will be activated by default. 
You can change it by executing the following line:" }, { "code": null, "e": 2351, "s": 2327, "text": "conda activate base_env" }, { "code": null, "e": 2387, "s": 2351, "text": "You should see something like this:" }, { "code": null, "e": 2463, "s": 2387, "text": "As the last step, let’s install a couple of Python libraries through conda:" }, { "code": null, "e": 2540, "s": 2463, "text": "conda install numpy pandas matplotlib plotly scikit-learn jupyter jupyterlab" }, { "code": null, "e": 2587, "s": 2540, "text": "That’s all. Let’s make a couple of tests next." }, { "code": null, "e": 2792, "s": 2587, "text": "Open up a Jupyter Lab from the virtual environment if you’re following along. To start, let’s import the common data science suspect — Numpy, Pandas, and Scipy — just to verify everything works correctly:" }, { "code": null, "e": 2863, "s": 2792, "text": "Next, let’s make a simple loop without any libraries. Here’s the code:" }, { "code": null, "e": 3077, "s": 2863, "text": "As you can see, the cell took 7.5 seconds to complete. To verify the native Python version was used, and not the Intel version under Rosetta, we can check the Architecture values for Python3.8 in Activity Monitor:" }, { "code": null, "e": 3230, "s": 3077, "text": "Let’s do the next test with Numpy. The code on the following image generates a large array of random integers, calculates the logarithm and square root:" }, { "code": null, "e": 3263, "s": 3230, "text": "And here’s the activity monitor:" }, { "code": null, "e": 3434, "s": 3263, "text": "As you can see, Numpy works like a charm. Finally, let’s do the test with Pandas. We’ll do the same operations as with Numpy, so there’s no need for further explanations:" }, { "code": null, "e": 3488, "s": 3434, "text": "Let’s take a look at the activity monitor once again:" }, { "code": null, "e": 3637, "s": 3488, "text": "And there you have it — proof that both Python and its data science libraries can be configured without breaking a sweat. Let’s wrap things up next." }, { "code": null, "e": 3863, "s": 3637, "text": "To conclude — there’s no need to bang your head against a wall when configuring a new M1 Mac for data science. Sure, the process isn’t the same as with Intel’s (unless you’re using Miniforge), but the process is still simple." }, { "code": null, "e": 3967, "s": 3863, "text": "Stay tuned for more M1 tests and detailed comparisons with its bigger brother — 16\" Intel i9 from 2019." }, { "code": null, "e": 3987, "s": 3967, "text": "Thanks for reading." }, { "code": null, "e": 4170, "s": 3987, "text": "Loved the article? Become a Medium member to continue learning without limits. I’ll receive a portion of your membership fee if you use the following link, with no extra cost to you." 
}, { "code": null, "e": 4181, "s": 4170, "text": "medium.com" }, { "code": null, "e": 4241, "s": 4181, "text": "What’s New In Python 3.10–4 Amazing Features You Should Try" }, { "code": null, "e": 4316, "s": 4241, "text": "How to Schedule Python Scripts With Cron — The Only Guide You’ll Ever Need" }, { "code": null, "e": 4377, "s": 4316, "text": "Dask Delayed — How to Parallelize Your Python Code With Ease" }, { "code": null, "e": 4437, "s": 4377, "text": "How to Create PDF Reports With Python — The Essential Guide" }, { "code": null, "e": 4499, "s": 4437, "text": "Become a Data Scientist in 2021 Even Without a College Degree" }, { "code": null, "e": 4546, "s": 4499, "text": "Follow me on Medium for more stories like this" }, { "code": null, "e": 4572, "s": 4546, "text": "Sign up for my newsletter" } ]
Hidden Cost of ERP Implementation - GeeksforGeeks
25 Nov, 2020 Hidden costs are a common pressure point for companies implementing an ERP system. The ERP selection team meets with various vendors, analyzes various factors, and completes the selection; after that, creating the ERP, analyzing flaws, implementing the system, removing flaws, and finalizing it are the important steps in delivering an ERP system. During this process, some costs remain hidden and are not included in the budget for creating the ERP system. Some common hidden costs in ERP implementation: 1. Labor Costs 2. Training Cost 3. Testing, Retesting and Testing Again 4. Customization 5. Customer Dissatisfaction 6. Re-engineered Processes 7. Ongoing Cost and Maintenance Cost 8. Resistance of Employees to Adopting a New ERP 9. Data Conversion These are explained below. Labor Costs – Labor is a major part of ERP implementation; these are the people who design the ERP. For example, if an organization orders an ERP, it is created by an ERP implementation company, but the employees' salaries do not depend on that one project: their salary is fixed and is independent of how much they contribute or how many ERP projects they work on. Training Cost – While an ERP is created by a commercial company for another organization, the newly created ERP is tested before it is handed over. Training is given to the end users so that they can work on the new system and help the company find flaws, so that an improved product reaches the buyers. Also, when a company adopts new technology, training on that technology is provided to the employees so that they can work with it easily. Testing, Retesting and Testing Again – When an ERP is created, it has to go through a number of tests so that an ideal ERP is produced as the final product. For example, a company creates an ERP, and the ERP initially goes through testing so that its flaws can be determined. After those flaws are corrected, the system goes through a retesting phase so that an ideal product is produced. If flaws are found again after retesting, the ERP system goes through further retesting phases until it becomes a quintessential product. After that, the company hands the product to the organization. If, while using it, the organization finds any fault or wants some settings changed, it is the vendor's responsibility to remove those flaws free of cost. Customization – It is difficult to find an ERP system that does not need any customization. Customization means an organization wants some desired module or desired changes on top of a company's default ERP system. It is easy for the company to customize the ERP system to the organization's satisfaction, but difficulties arise when the company upgrades the customized ERP system: a default ERP system can be upgraded easily, whereas a customized ERP system requires more effort and expense to upgrade, which is an extra hidden cost. Customer Dissatisfaction – When an ERP is handed to the customer (the organization), it sometimes happens that the provided ERP system does not match the customer's requirements. The vendor then has to make changes to the ERP system to meet those requirements, which is an extra cost. Re-engineered Processes – Re-engineering means a company recreates its business process or ERP with the goal of improvement, by removing flaws found in the previous process. This also carries hidden cost: when a company creates an ERP system, it first goes through testing, and then the final product (ERP) is provided to the other company. When that company actually implements the system, some flaws appear, and the ERP system has to be updated to remove them. Ongoing Cost and Maintenance Cost – When an organization buys an ERP from a company, the total price paid to the company includes a support cost. Support covers removing bugs from the system and continuously updating the base system. But if an organization that created its own ERP opts for support from another ERP manufacturing company, it has to pay more than a company that bought the ERP system with support included. Resistance of Employees to Adopting a New ERP – Change is difficult to adopt because everyone is comfortable with the previous situation, yet change also helps a lot to increase productivity. When buying an ERP system from an ERP manufacturing company, the organization that bought the system also provides training to its employees so that they can adopt the new system easily; this training is included in the cost of buying the ERP system. Data Conversion – When an organization buys an ERP system, data is not imported into the new system by itself. The organization has to pay for it so that the data is imported from the previous system to the new system without any difficulty.
[ { "code": null, "e": 24868, "s": 24840, "text": "\n25 Nov, 2020" }, { "code": null, "e": 24980, "s": 24868, "text": "Hidden Cost behind ERP Implementation is a common pressure point in Companies while implementing an ERP System." }, { "code": null, "e": 25340, "s": 24980, "text": "The ERP Selection team have met with various vendors, analyze various factors and then done with Selection Part and then ERP creating , analyzing flaws , Implementing , Removing Flaws and finalizing ERP System are the important steps for Creating ERP System . While in this process there are some cost hidden which are not included in the Creating ERP System." }, { "code": null, "e": 25388, "s": 25340, "text": "Some common Hidden Cost in ERP Implementation :" }, { "code": null, "e": 25636, "s": 25388, "text": "1. Labor Costs\n2. Training Cost\n3. Testing, Retesting and Testing Again\n4. Customization\n5. Customer Dissatisfaction\n6. Re-engineered Processes\n7. Ongoing Cost and maintenance cost\n8. Resistance of Employee for Adopting new ERP\n9. Data Conversion " }, { "code": null, "e": 25676, "s": 25636, "text": "These are explained as following below." }, { "code": null, "e": 29942, "s": 25676, "text": "Labor Costs –Labor is the major part of ERP implementation. These are the person’s who are major part of designing of the ERP. For example : In a company if some organization order to create ERP for them then it is created by ERP Implementation Company. But Employee’s salary doesn’t depend upon the only this project. Their Salary is fixed and is independent on how much they contributed on how many projects of developing ERP.Training Cost –While an ERP is created by a Commercial Company for other organization , before handed on to them a new created ERP is tested. So in that company training is given to the end user so that they can work on the new system and help the Company to find out flaws so that a new improved product is handed on the buyers. Also as we know when new technology is adopted by Companies the at that time training of that technology is provided to the employees so that they can work easily on it.Testing, Retesting and testing again –When an ERP is created so at that time for checking its performance it has to go through number of testing so that an ideal ERP is produced as a final Product.For example, A company has created an ERP so initially an ERP has go through Testing so that it’s flaws can be determined. So after correcting that flaws the system again has gone through Retesting phase so that an ideal product is produced. If after retesting again flaws are found that at that time the ERP System has gone through number of Retesting Phase until it become quintessential product. After that company handed the product to the organization. While using when an organization found any fault or they want some settings then it’s responsibility of Vendor of that Company to Remove that flaws free of cost.Customization –It is difficult to find ERP System which do not need any Customization. Customization means any Organization want some desired module or desired Changes on Default ERP System of a Company. 
It is easy for Company to customize the ERP System according to the satisfaction of an Organization but difficulties arises when Company upgrade the customized ERP system because Default ERP System can be easily upgraded but Customized ERP System Require more efforts and expense for upgrading which is an extra hidden cost.Customer Dissatisfaction –When an ERP is handed on to the Customer that is organization so at that time sometime happens that the provided ERP System doesn’t matched with the requirement of customers. So at that time the Vendor’s has to make some changes to the ERP System which meets the requirements of the Customer which is an extra cost to them.Re-engineered Processes –Re-engineered process means a company recreate it’s business process or ERP with the goal of improvement by removing flaws faced in previous process. So also consist of hidden cost like when a company created an ERP system so firstly it goes for testing after that final product(ERP) is provided to the other company. So when that company really implemented the system at that time some flaws are also caused so for removing this the ERP system is goes for updating.Ongoing cost and Maintenance cost –When an organization buy ERP from a company. So at that, in the total cost they paid to the Company consist of Support Cost. Support cost consist of removing bugs to the system and also consist of continue updating of basic system. But if some organization who created their own ERP are opting for Support Cost to other ERP manufacturing Company then at that time they have to pay more as compared to the company who buy ERP system with Support Cost.Resistance of Employee for Adopting new ERP –Change is difficult to Adopt because everyone is comfortable with previous situation but also change helps a lot to increase productivity. So while buying ERP system from ERP manufacturing Company, the organization who brought ERP system also provide training to their Employee so that they can adopt the ew system easily, so this training is included in the cost while buying ERP system.Data Conversion –When an organization buy an ERP system so, at that time data is not import itself to new System. An organization have to pay for it so that without any difficulty the data is imported from previous system to new system." }, { "code": null, "e": 30371, "s": 29942, "text": "Labor Costs –Labor is the major part of ERP implementation. These are the person’s who are major part of designing of the ERP. For example : In a company if some organization order to create ERP for them then it is created by ERP Implementation Company. But Employee’s salary doesn’t depend upon the only this project. Their Salary is fixed and is independent on how much they contributed on how many projects of developing ERP." }, { "code": null, "e": 30871, "s": 30371, "text": "Training Cost –While an ERP is created by a Commercial Company for other organization , before handed on to them a new created ERP is tested. So in that company training is given to the end user so that they can work on the new system and help the Company to find out flaws so that a new improved product is handed on the buyers. Also as we know when new technology is adopted by Companies the at that time training of that technology is provided to the employees so that they can work easily on it." 
}, { "code": null, "e": 31688, "s": 30871, "text": "Testing, Retesting and testing again –When an ERP is created so at that time for checking its performance it has to go through number of testing so that an ideal ERP is produced as a final Product.For example, A company has created an ERP so initially an ERP has go through Testing so that it’s flaws can be determined. So after correcting that flaws the system again has gone through Retesting phase so that an ideal product is produced. If after retesting again flaws are found that at that time the ERP System has gone through number of Retesting Phase until it become quintessential product. After that company handed the product to the organization. While using when an organization found any fault or they want some settings then it’s responsibility of Vendor of that Company to Remove that flaws free of cost." }, { "code": null, "e": 32308, "s": 31688, "text": "For example, A company has created an ERP so initially an ERP has go through Testing so that it’s flaws can be determined. So after correcting that flaws the system again has gone through Retesting phase so that an ideal product is produced. If after retesting again flaws are found that at that time the ERP System has gone through number of Retesting Phase until it become quintessential product. After that company handed the product to the organization. While using when an organization found any fault or they want some settings then it’s responsibility of Vendor of that Company to Remove that flaws free of cost." }, { "code": null, "e": 32837, "s": 32308, "text": "Customization –It is difficult to find ERP System which do not need any Customization. Customization means any Organization want some desired module or desired Changes on Default ERP System of a Company. It is easy for Company to customize the ERP System according to the satisfaction of an Organization but difficulties arises when Company upgrade the customized ERP system because Default ERP System can be easily upgraded but Customized ERP System Require more efforts and expense for upgrading which is an extra hidden cost." }, { "code": null, "e": 33187, "s": 32837, "text": "Customer Dissatisfaction –When an ERP is handed on to the Customer that is organization so at that time sometime happens that the provided ERP System doesn’t matched with the requirement of customers. So at that time the Vendor’s has to make some changes to the ERP System which meets the requirements of the Customer which is an extra cost to them." }, { "code": null, "e": 33679, "s": 33187, "text": "Re-engineered Processes –Re-engineered process means a company recreate it’s business process or ERP with the goal of improvement by removing flaws faced in previous process. So also consist of hidden cost like when a company created an ERP system so firstly it goes for testing after that final product(ERP) is provided to the other company. So when that company really implemented the system at that time some flaws are also caused so for removing this the ERP system is goes for updating." }, { "code": null, "e": 34165, "s": 33679, "text": "Ongoing cost and Maintenance cost –When an organization buy ERP from a company. So at that, in the total cost they paid to the Company consist of Support Cost. Support cost consist of removing bugs to the system and also consist of continue updating of basic system. 
But if some organization who created their own ERP are opting for Support Cost to other ERP manufacturing Company then at that time they have to pay more as compared to the company who buy ERP system with Support Cost." }, { "code": null, "e": 34599, "s": 34165, "text": "Resistance of Employee for Adopting new ERP –Change is difficult to Adopt because everyone is comfortable with previous situation but also change helps a lot to increase productivity. So while buying ERP system from ERP manufacturing Company, the organization who brought ERP system also provide training to their Employee so that they can adopt the ew system easily, so this training is included in the cost while buying ERP system." }, { "code": null, "e": 34836, "s": 34599, "text": "Data Conversion –When an organization buy an ERP system so, at that time data is not import itself to new System. An organization have to pay for it so that without any difficulty the data is imported from previous system to new system." }, { "code": null, "e": 34857, "s": 34836, "text": "Software Engineering" }, { "code": null, "e": 34955, "s": 34857, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 34964, "s": 34955, "text": "Comments" }, { "code": null, "e": 34977, "s": 34964, "text": "Old Comments" }, { "code": null, "e": 35011, "s": 34977, "text": "DFD for Library Management System" }, { "code": null, "e": 35043, "s": 35011, "text": "What is DFD(Data Flow Diagram)?" }, { "code": null, "e": 35090, "s": 35043, "text": "Use Case Diagram for Library Management System" }, { "code": null, "e": 35145, "s": 35090, "text": "Software Engineering | System configuration management" }, { "code": null, "e": 35186, "s": 35145, "text": "Software Engineering | Black box testing" }, { "code": null, "e": 35201, "s": 35186, "text": "System Testing" }, { "code": null, "e": 35240, "s": 35201, "text": "Software Development Life Cycle (SDLC)" }, { "code": null, "e": 35282, "s": 35240, "text": "Software Engineering | Evolutionary Model" }, { "code": null, "e": 35342, "s": 35282, "text": "Software Engineering | Seven Principles of software testing" } ]
C# Program to get the difference between two dates
Use DateTime.Subtract to get the difference between two dates in C#. Firstly, set two dates − DateTime date1 = new DateTime(2018, 8, 27); DateTime date2 = new DateTime(2018, 8, 28); Use the Subtract method to get the difference − TimeSpan t = date2.Subtract(date1); The following is the complete code − Live Demo using System; using System.Threading; using System.Diagnostics; public class Demo { public static void Main() { DateTime date1 = new DateTime(2018, 8, 27); DateTime date2 = new DateTime(2018, 8, 28); // getting the difference TimeSpan t = date2.Subtract(date1); Console.WriteLine(t); Console.WriteLine("Days (Difference) = {0} ", t.TotalDays); Console.WriteLine("Minutes (Difference) = {0}", t.TotalMinutes); } } 1.00:00:00 Days (Difference) = 1 Minutes (Difference) = 1440
[ { "code": null, "e": 1131, "s": 1062, "text": "Use DateTime.Subtract to get the difference between two dates in C#." }, { "code": null, "e": 1156, "s": 1131, "text": "Firstly, set two dates −" }, { "code": null, "e": 1244, "s": 1156, "text": "DateTime date1 = new DateTime(2018, 8, 27);\nDateTime date2 = new DateTime(2018, 8, 28);" }, { "code": null, "e": 1292, "s": 1244, "text": "Use the Subtract method to get the difference −" }, { "code": null, "e": 1328, "s": 1292, "text": "TimeSpan t = date2.Subtract(date1);" }, { "code": null, "e": 1365, "s": 1328, "text": "The following is the complete code −" }, { "code": null, "e": 1376, "s": 1365, "text": " Live Demo" }, { "code": null, "e": 1837, "s": 1376, "text": "using System;\nusing System.Threading;\nusing System.Diagnostics;\npublic class Demo {\n public static void Main() {\n DateTime date1 = new DateTime(2018, 8, 27);\n DateTime date2 = new DateTime(2018, 8, 28);\n // getting the difference\n TimeSpan t = date2.Subtract(date1);\n Console.WriteLine(t);\n Console.WriteLine(\"Days (Difference) = {0} \", t.TotalDays);\n Console.WriteLine(\"Minutes (Difference) = {0}\", t.TotalMinutes);\n }\n}" }, { "code": null, "e": 1898, "s": 1837, "text": "1.00:00:00\nDays (Difference) = 1\nMinutes (Difference) = 1440" } ]
Python - Tkinter Toplevel
Toplevel widgets work as windows that are directly managed by the window manager. They do not necessarily have a parent widget on top of them. Your application can use any number of top-level windows. Here is the simple syntax to create this widget − w = Toplevel ( option, ... ) options − Here is the list of most commonly used options for this widget. These options can be used as key-value pairs separated by commas. bg The background color of the window. bd Border width in pixels; default is 0. cursor The cursor that appears when the mouse is in this window. class_ The class name for this window, used by the window manager and for option-database lookups. font The default font for text inserted into the widget. fg The color used for text (and bitmaps) within the widget. You can change the color for tagged regions; this option is just the default. height Window height. relief Normally, a top-level window will have no 3-d borders around it. To get a shaded border, set the bd option larger than its default value of zero, and set the relief option to one of the constants. width The desired width of the window. Toplevel objects have these methods − deiconify() Displays the window, after using either the iconify or the withdraw methods. frame() Returns a system-specific window identifier. group(window) Adds the window to the window group administered by the given window. iconify() Turns the window into an icon, without destroying it. protocol(name, function) Registers a function as a callback which will be called for the given protocol. state() Returns the current state of the window. Possible values are normal, iconic, withdrawn and icon. transient([master]) Turns the window into a temporary (transient) window for the given master or to the window's parent, when no argument is given. withdraw() Removes the window from the screen, without destroying it. maxsize(width, height) Defines the maximum size for this window. minsize(width, height) Defines the minimum size for this window. positionfrom(who) Defines the position controller. resizable(width, height) Defines the resize flags, which control whether the window can be resized. sizefrom(who) Defines the size controller. title(string) Defines the window title. Try the following example yourself − from Tkinter import * root = Tk() top = Toplevel() top.mainloop() When the above code is executed, it produces the following result − an empty second top-level window appears next to the root window.
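For something more illustrative, the sketch below exercises a few of the methods listed above (title, transient, protocol, withdraw and deiconify). It uses the Python 3 module name tkinter; the widget text and padding values are only illustrative.

import tkinter as tk  # Python 3 name; under Python 2 it is Tkinter, as in the example above

root = tk.Tk()
root.title("Main window")

top = tk.Toplevel(root)
top.title("Secondary window")
top.transient(root)  # keep the Toplevel associated with (and above) its master
tk.Label(top, text="I am a Toplevel window").pack(padx=20, pady=20)

# Hide the window instead of destroying it when its close button is pressed.
top.protocol("WM_DELETE_WINDOW", top.withdraw)

# A button on the main window brings the hidden Toplevel back.
tk.Button(root, text="Show the Toplevel again", command=top.deiconify).pack(padx=20, pady=20)

root.mainloop()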
[ { "code": null, "e": 2387, "s": 2244, "text": "Toplevel widgets work as windows that are directly managed by the window manager. They do not necessarily have a parent widget on top of them." }, { "code": null, "e": 2445, "s": 2387, "text": "Your application can use any number of top-level windows." }, { "code": null, "e": 2495, "s": 2445, "text": "Here is the simple syntax to create this widget −" }, { "code": null, "e": 2525, "s": 2495, "text": "w = Toplevel ( option, ... )\n" }, { "code": null, "e": 2665, "s": 2525, "text": "options − Here is the list of most commonly used options for this widget. These options can be used as key-value pairs separated by commas." }, { "code": null, "e": 2805, "s": 2665, "text": "options − Here is the list of most commonly used options for this widget. These options can be used as key-value pairs separated by commas." }, { "code": null, "e": 2808, "s": 2805, "text": "bg" }, { "code": null, "e": 2844, "s": 2808, "text": "The background color of the window." }, { "code": null, "e": 2847, "s": 2844, "text": "bd" }, { "code": null, "e": 2885, "s": 2847, "text": "Border width in pixels; default is 0." }, { "code": null, "e": 2892, "s": 2885, "text": "cursor" }, { "code": null, "e": 2950, "s": 2892, "text": "The cursor that appears when the mouse is in this window." }, { "code": null, "e": 2957, "s": 2950, "text": "class_" }, { "code": null, "e": 3112, "s": 2957, "text": "Normally, text selected within a text widget is exported to be the selection in the window manager. Set exportselection=0 if you don't want that behavior." }, { "code": null, "e": 3117, "s": 3112, "text": "font" }, { "code": null, "e": 3169, "s": 3117, "text": "The default font for text inserted into the widget." }, { "code": null, "e": 3172, "s": 3169, "text": "fg" }, { "code": null, "e": 3307, "s": 3172, "text": "The color used for text (and bitmaps) within the widget. You can change the color for tagged regions; this option is just the default." }, { "code": null, "e": 3314, "s": 3307, "text": "height" }, { "code": null, "e": 3329, "s": 3314, "text": "Window height." }, { "code": null, "e": 3336, "s": 3329, "text": "relief" }, { "code": null, "e": 3533, "s": 3336, "text": "Normally, a top-level window will have no 3-d borders around it. To get a shaded border, set the bd option larger that its default value of zero, and set the relief option to one of the constants." }, { "code": null, "e": 3539, "s": 3533, "text": "width" }, { "code": null, "e": 3572, "s": 3539, "text": "The desired width of the window." }, { "code": null, "e": 3610, "s": 3572, "text": "Toplevel objects have these methods −" }, { "code": null, "e": 3622, "s": 3610, "text": "deiconify()" }, { "code": null, "e": 3699, "s": 3622, "text": "Displays the window, after using either the iconify or the withdraw methods." }, { "code": null, "e": 3707, "s": 3699, "text": "frame()" }, { "code": null, "e": 3752, "s": 3707, "text": "Returns a system-specific window identifier." }, { "code": null, "e": 3766, "s": 3752, "text": "group(window)" }, { "code": null, "e": 3836, "s": 3766, "text": "Adds the window to the window group administered by the given window." }, { "code": null, "e": 3846, "s": 3836, "text": "iconify()" }, { "code": null, "e": 3900, "s": 3846, "text": "Turns the window into an icon, without destroying it." }, { "code": null, "e": 3925, "s": 3900, "text": "protocol(name, function)" }, { "code": null, "e": 4005, "s": 3925, "text": "Registers a function as a callback which will be called for the given protocol." 
}, { "code": null, "e": 4015, "s": 4005, "text": "iconify()" }, { "code": null, "e": 4069, "s": 4015, "text": "Turns the window into an icon, without destroying it." }, { "code": null, "e": 4077, "s": 4069, "text": "state()" }, { "code": null, "e": 4174, "s": 4077, "text": "Returns the current state of the window. Possible values are normal, iconic, withdrawn and icon." }, { "code": null, "e": 4194, "s": 4174, "text": "transient([master])" }, { "code": null, "e": 4321, "s": 4194, "text": "Turns the window into a temporary(transient) window for the given master or to the window's parent, when no argument is given." }, { "code": null, "e": 4332, "s": 4321, "text": "withdraw()" }, { "code": null, "e": 4391, "s": 4332, "text": "Removes the window from the screen, without destroying it." }, { "code": null, "e": 4414, "s": 4391, "text": "maxsize(width, height)" }, { "code": null, "e": 4456, "s": 4414, "text": "Defines the maximum size for this window." }, { "code": null, "e": 4479, "s": 4456, "text": "minsize(width, height)" }, { "code": null, "e": 4521, "s": 4479, "text": "Defines the minimum size for this window." }, { "code": null, "e": 4539, "s": 4521, "text": "positionfrom(who)" }, { "code": null, "e": 4572, "s": 4539, "text": "Defines the position controller." }, { "code": null, "e": 4597, "s": 4572, "text": "resizable(width, height)" }, { "code": null, "e": 4672, "s": 4597, "text": "Defines the resize flags, which control whether the window can be resized." }, { "code": null, "e": 4686, "s": 4672, "text": "sizefrom(who)" }, { "code": null, "e": 4715, "s": 4686, "text": "Defines the size controller." }, { "code": null, "e": 4729, "s": 4715, "text": "title(string)" }, { "code": null, "e": 4755, "s": 4729, "text": "Defines the window title." }, { "code": null, "e": 4788, "s": 4755, "text": "Try following example yourself −" }, { "code": null, "e": 4855, "s": 4788, "text": "from Tkinter import *\n\nroot = Tk()\ntop = Toplevel()\ntop.mainloop()" }, { "code": null, "e": 4924, "s": 4855, "text": "When the above code is executed, it produces the following result −" }, { "code": null, "e": 4961, "s": 4924, "text": "\n 187 Lectures \n 17.5 hours \n" }, { "code": null, "e": 4977, "s": 4961, "text": " Malhar Lathkar" }, { "code": null, "e": 5010, "s": 4977, "text": "\n 55 Lectures \n 8 hours \n" }, { "code": null, "e": 5029, "s": 5010, "text": " Arnab Chakraborty" }, { "code": null, "e": 5064, "s": 5029, "text": "\n 136 Lectures \n 11 hours \n" }, { "code": null, "e": 5086, "s": 5064, "text": " In28Minutes Official" }, { "code": null, "e": 5120, "s": 5086, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 5148, "s": 5120, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 5183, "s": 5148, "text": "\n 70 Lectures \n 8.5 hours \n" }, { "code": null, "e": 5197, "s": 5183, "text": " Lets Kode It" }, { "code": null, "e": 5230, "s": 5197, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 5247, "s": 5230, "text": " Abhilash Nelson" }, { "code": null, "e": 5254, "s": 5247, "text": " Print" }, { "code": null, "e": 5265, "s": 5254, "text": " Add Notes" } ]
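A second short sketch for the Toplevel methods above, covering the size-related ones (minsize, maxsize, resizable) plus title and state, which the simple example does not touch; the pixel values are arbitrary.

import tkinter as tk

root = tk.Tk()
top = tk.Toplevel(root)
top.title("Constrained window")
top.minsize(200, 150)        # the window cannot be shrunk below 200x150 pixels
top.maxsize(600, 400)        # ...or grown beyond 600x400 pixels
top.resizable(True, False)   # the user may change the width, but not the height
print(top.state())           # 'normal' while the window is displayed

root.mainloop()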
Python, Memory, and Objects. The basics of memory management for... | by Naser Tamimi | Towards Data Science
As data scientists, normally, we don’t pay attention to how Python and the underlying operating system handle memory for our code. After all, Python is the most popular language among data scientists, partly because it automatically handles those details. As long as we are working on small datasets, ignoring how Python manages memory (i.e., memory allocation and deallocation) does not impact our code performance. But, as soon as we switch to large datasets (big data) or heavy processing projects, basic knowledge about memory management becomes crucial. As an example, I was working on a data science project regarding indexing human DNA. I used a python dictionary object to keep track of sequences (i.e., sequences of nucleotides) and store their location in a reference human DNA. About 10% into the process, the dictionary object took all my RAM and started swapping between disk and RAM. It made the process super slow (as the disk is much slower in data transmission). As a data scientist, if I knew the basics of Python and memory management, I could prevent it and make much more memory-efficient codes. In this article and an upcoming article, I explain some basic concepts around memory management in Python. At the end of this article, you have good basic knowledge of how Python handles memory allocation and deallocation. Let’s get started ... A python program is a collection of methodsreferencesobjects methods references objects Methods or operations are easy. When you add two numbers, you are basically applying the add (or sum) method to two values. References are a little bit tricky to explain. A reference is a name that we use to access a data value (i.e., an object). The most famous references in programming are variables. When you define x = 1 , x is the variable or reference and 1 is its value (more accurate an integer object). In addition to variables, attributes and items are two other popular references in programming. Now, let's get deeper and introduce objects. As a Python programmer, you must have heard that “Everything in Python is an object.” An integer number is an object. A string is an object. Lists, dictionaries, tuples, pandas data frames, NumPy arrays are objects. Even a function is an object. When we create an object, it will be stored in memory. When we defined references in the previous paragraph, I should have told you that a reference does not point to a value in Python but points to the memory address of an object. For example, in our simple example x = 1 the reference x is pointing to a memory address that the integer object 1 is stored. At the run time, computer memory gets divided into different parts. Three important memory sections are: CodeStackHeap Code Stack Heap Code (also called Text or Instructions) section of the memory stores code instructions in a form that the machine understands. The machine follows instructions in the code section. According to the instruction, the Python interpreter load functions and local variables in the Stack Memory (also called the Stack). Stack memory is static and temporary. Static means the size of values stored in the Stack cannot be changed. Temporary means, as soon as the called function returned its value, the function and the related variable will be removed from the Stack. As a data scientist and programmer, you don’t have access to Stack memory. Python interpreter and OS memory management together take care of this section of memory. 
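A tiny sketch of how short-lived the Stack is: a function's local names exist only inside that call's stack frame and disappear as soon as the call returns.

def add(a, b):
    tmp = a + b        # `tmp` lives only in this call's stack frame
    return tmp

print(add(2, 3))       # 5
print('tmp' in dir())  # False: the local name did not survive the call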
As you learned, variables (or references in general) only stores memory addresses of objects. So, where are the objects? Are they in Stack memory? No, they are in a different memory called “Heap Memory” (also called the Heap). To store objects, we need memory with dynamic memory allocation (i.e., size of memory and objects can change). Python interpreter actively allocates and deallocates the memory on the Heap (what C/C++ programmers should do manually!!! Thanks, Python!!!). Python uses a garbage collection algorithm (called Garbage Collector) that keeps the Heap memory clean and removes objects that are not needed anymore. You don’t need to mess with the Heap, but it is better to understand how Python manages the Heap since most of your data is stored in this section of the memory. Let’s find the memory address on the Heap that the variable x points to. To find it out, we can use a function called id(). >>> x = 1>>> id(x)140710407579424>>> hex(id(x))'0x7ff9b1dc2720' When we run the first line (x = 1), Python stores integer object 1 in a memory address 140710407579424 on my computer (different from yours). In computer science, we normally show memory addresses in hexadecimal numbers; therefore, I used the hex() function (note: the prefix 0x is used in computer science to indicate that the number is in hex). After storing the int object 1 in Heap memory, Python tells the reference (or variable) x to memorize this address (140710407579424 or 0x7ff9b1dc2720) as its value. Take a look at this example. >>> x = 1>>> y = 1>>> hex(id(x))'0x7ffdf176a190'>>> hex(id(y))'0x7ffdf176a190' Here, I defined two variables (x and y). I assigned them an integer object (i.e. 1) to both of them. Surprisingly, the memory addresses that both variables point to are the same. Look at another example. >>> str1 = "Python">>> str2 = "Python">>> hex(id(str1))'0x1e3adfe2830'>>> hex(id(str2))'0x1e3adfe2830' I defined two variables (str1 and str2) and assigned a string object (Python) to both of them. The memory addresses that both variables point to are the same. If you test the same thing for boolean objects, you will see a similar observation. Why? To optimize memory allocation. Python does a process called “interning.” For some objects (will be discussed later), Python only stores one object on Heap memory and ask different variables to point to this memory address if they use those objects. The objects that Python does interning on them are integer numbers [-5, 256], boolean, and some strings. Interning does not apply to other types of objects such as large integers, most strings, floats, lists, dictionaries, tuples. So far, we have shown examples of simple data structures such as integers, strings, or booleans. What about more complex data structures such as lists or dictionaries. >>> lst = [1, 2, 3, 257]>>> hex(id(lst))'0x236330edf88'>>> hex(id(lst[0]))'0x7ffdf176a190'>>> hex(id(lst[3]))'0x7ffdf176a1b0' The example clearly shows that the memory address of the list object is different from its items. It makes sense since a list is a collection of objects, each of its items has its own identity and is a separate object with a different memory address. If each item in a list is a single object, does interning (from the previous section) apply to each item in a list? It is easy to check. >>> a = 1>>> b = 257>>> hex(id(a))‘0x7ffdf176a190’>>> hex(id(b))‘0x236330dc450’ As you see, both a and lst[0] are pointing to the same memory address due to integer interning. 
Also, you can see that when the integer number goes beyond 256, b and lst[3] are pointing to different memory addresses. What happens when we add a new item to a list? Does the memory address change? Let's test it. >>> lst = [1, 2, 3]>>> hex(id(lst))'0x23633104888'>>> lst.append(4)>>> lst[1, 2, 3, 4]>>> hex(id(lst))'0x23633104888' Interestingly, the memory address for the list remains the same. The reason is that a list is a mutable object, and if you add items to it, the object is still the same object with one more item. Another important fact about mutable objects is that if you instantiate different variables from an object, all of them will change if you make any change to the object. Let me show it with a simple example. >>> lst1 = [1, 2, 3]>>> lst2 = lst1>>> lst1.append(4)>>> lst2[1, 2, 3, 4]>>> lst2.append(5)>>> lst1[1, 2, 3, 4, 5] In this example, both variables lst1 and lst2 are pointing to the same mutable object (i.e., [1, 2, 3]). If either of those variables changes the object (e.g., appends a new item), the value of the other variable (which is pointing to the same object) also changes. The only way to get a separate copy of a mutable object is to use the .copy() method. >>> lst1 = [1, 2, 3]>>> lst2 = lst1.copy()>>> lst1.append(4)>>> lst2[1, 2, 3]>>> hex(id(lst1))'0x236330dfe08'>>> hex(id(lst2))'0x236330e0c88' As you see, using the .copy() method, we create two separate list objects with different memory addresses, and changing one of them does not change the other. Almost everything we said about list objects also applies to dictionary objects. There are some subtle differences that are beyond this article. For example, the way their memory size grows after adding a new item is different. To check whether two or more variables are pointing to the same object, you don't need to compare their memory addresses. You can check if two variables are pointing to the same object using is. Here is an example. >>> lst1 = [1, 2, 3]>>> lst2 = lst1>>> lst3 = lst1.copy()>>> lst2 is lst1True>>> lst3 is lst1False Remember, is is different from ==. is tells you if two names point to the same object, but == tells you if their content or value is the same. For example, in the previous code, the contents of both lst3 and lst1 are the same (i.e., [1, 2, 3]), but they are two separate and different objects. The following code shows it clearly. >>> lst3 == lst1True>>> lst3 is lst1False This article gave you the basic knowledge of how Python (more accurately, the CPython implementation) allocates memory to objects. When working on big data in Python, you need to know these fundamental concepts to write more memory-efficient code. Follow me on Twitter for the latest stories: https://twitter.com/TamimiNas
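One follow-up note on the .copy() method discussed above: it makes a shallow copy, so nested mutable objects are still shared between the original and the copy. The standard library's copy.deepcopy() duplicates the nested objects as well, as this small sketch shows.

import copy

inner = [1, 2]
lst1 = [inner, 3]

shallow = lst1.copy()        # new outer list, but the inner list is shared
deep = copy.deepcopy(lst1)   # new outer list and a new inner list

inner.append(99)
print(shallow[0])             # [1, 2, 99]: still the same inner object
print(deep[0])                # [1, 2]: an independent copy
print(shallow[0] is lst1[0])  # True
print(deep[0] is lst1[0])     # False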
[ { "code": null, "e": 730, "s": 171, "text": "As data scientists, normally, we don’t pay attention to how Python and the underlying operating system handle memory for our code. After all, Python is the most popular language among data scientists, partly because it automatically handles those details. As long as we are working on small datasets, ignoring how Python manages memory (i.e., memory allocation and deallocation) does not impact our code performance. But, as soon as we switch to large datasets (big data) or heavy processing projects, basic knowledge about memory management becomes crucial." }, { "code": null, "e": 1288, "s": 730, "text": "As an example, I was working on a data science project regarding indexing human DNA. I used a python dictionary object to keep track of sequences (i.e., sequences of nucleotides) and store their location in a reference human DNA. About 10% into the process, the dictionary object took all my RAM and started swapping between disk and RAM. It made the process super slow (as the disk is much slower in data transmission). As a data scientist, if I knew the basics of Python and memory management, I could prevent it and make much more memory-efficient codes." }, { "code": null, "e": 1533, "s": 1288, "text": "In this article and an upcoming article, I explain some basic concepts around memory management in Python. At the end of this article, you have good basic knowledge of how Python handles memory allocation and deallocation. Let’s get started ..." }, { "code": null, "e": 1569, "s": 1533, "text": "A python program is a collection of" }, { "code": null, "e": 1594, "s": 1569, "text": "methodsreferencesobjects" }, { "code": null, "e": 1602, "s": 1594, "text": "methods" }, { "code": null, "e": 1613, "s": 1602, "text": "references" }, { "code": null, "e": 1621, "s": 1613, "text": "objects" }, { "code": null, "e": 2130, "s": 1621, "text": "Methods or operations are easy. When you add two numbers, you are basically applying the add (or sum) method to two values. References are a little bit tricky to explain. A reference is a name that we use to access a data value (i.e., an object). The most famous references in programming are variables. When you define x = 1 , x is the variable or reference and 1 is its value (more accurate an integer object). In addition to variables, attributes and items are two other popular references in programming." }, { "code": null, "e": 2779, "s": 2130, "text": "Now, let's get deeper and introduce objects. As a Python programmer, you must have heard that “Everything in Python is an object.” An integer number is an object. A string is an object. Lists, dictionaries, tuples, pandas data frames, NumPy arrays are objects. Even a function is an object. When we create an object, it will be stored in memory. When we defined references in the previous paragraph, I should have told you that a reference does not point to a value in Python but points to the memory address of an object. For example, in our simple example x = 1 the reference x is pointing to a memory address that the integer object 1 is stored." }, { "code": null, "e": 2884, "s": 2779, "text": "At the run time, computer memory gets divided into different parts. 
Three important memory sections are:" }, { "code": null, "e": 2898, "s": 2884, "text": "CodeStackHeap" }, { "code": null, "e": 2903, "s": 2898, "text": "Code" }, { "code": null, "e": 2909, "s": 2903, "text": "Stack" }, { "code": null, "e": 2914, "s": 2909, "text": "Heap" }, { "code": null, "e": 3640, "s": 2914, "text": "Code (also called Text or Instructions) section of the memory stores code instructions in a form that the machine understands. The machine follows instructions in the code section. According to the instruction, the Python interpreter load functions and local variables in the Stack Memory (also called the Stack). Stack memory is static and temporary. Static means the size of values stored in the Stack cannot be changed. Temporary means, as soon as the called function returned its value, the function and the related variable will be removed from the Stack. As a data scientist and programmer, you don’t have access to Stack memory. Python interpreter and OS memory management together take care of this section of memory." }, { "code": null, "e": 4273, "s": 3640, "text": "As you learned, variables (or references in general) only stores memory addresses of objects. So, where are the objects? Are they in Stack memory? No, they are in a different memory called “Heap Memory” (also called the Heap). To store objects, we need memory with dynamic memory allocation (i.e., size of memory and objects can change). Python interpreter actively allocates and deallocates the memory on the Heap (what C/C++ programmers should do manually!!! Thanks, Python!!!). Python uses a garbage collection algorithm (called Garbage Collector) that keeps the Heap memory clean and removes objects that are not needed anymore." }, { "code": null, "e": 4435, "s": 4273, "text": "You don’t need to mess with the Heap, but it is better to understand how Python manages the Heap since most of your data is stored in this section of the memory." }, { "code": null, "e": 4559, "s": 4435, "text": "Let’s find the memory address on the Heap that the variable x points to. To find it out, we can use a function called id()." }, { "code": null, "e": 4623, "s": 4559, "text": ">>> x = 1>>> id(x)140710407579424>>> hex(id(x))'0x7ff9b1dc2720'" }, { "code": null, "e": 5135, "s": 4623, "text": "When we run the first line (x = 1), Python stores integer object 1 in a memory address 140710407579424 on my computer (different from yours). In computer science, we normally show memory addresses in hexadecimal numbers; therefore, I used the hex() function (note: the prefix 0x is used in computer science to indicate that the number is in hex). After storing the int object 1 in Heap memory, Python tells the reference (or variable) x to memorize this address (140710407579424 or 0x7ff9b1dc2720) as its value." }, { "code": null, "e": 5164, "s": 5135, "text": "Take a look at this example." }, { "code": null, "e": 5243, "s": 5164, "text": ">>> x = 1>>> y = 1>>> hex(id(x))'0x7ffdf176a190'>>> hex(id(y))'0x7ffdf176a190'" }, { "code": null, "e": 5447, "s": 5243, "text": "Here, I defined two variables (x and y). I assigned them an integer object (i.e. 1) to both of them. Surprisingly, the memory addresses that both variables point to are the same. Look at another example." 
}, { "code": null, "e": 5550, "s": 5447, "text": ">>> str1 = \"Python\">>> str2 = \"Python\">>> hex(id(str1))'0x1e3adfe2830'>>> hex(id(str2))'0x1e3adfe2830'" }, { "code": null, "e": 5798, "s": 5550, "text": "I defined two variables (str1 and str2) and assigned a string object (Python) to both of them. The memory addresses that both variables point to are the same. If you test the same thing for boolean objects, you will see a similar observation. Why?" }, { "code": null, "e": 6278, "s": 5798, "text": "To optimize memory allocation. Python does a process called “interning.” For some objects (will be discussed later), Python only stores one object on Heap memory and ask different variables to point to this memory address if they use those objects. The objects that Python does interning on them are integer numbers [-5, 256], boolean, and some strings. Interning does not apply to other types of objects such as large integers, most strings, floats, lists, dictionaries, tuples." }, { "code": null, "e": 6446, "s": 6278, "text": "So far, we have shown examples of simple data structures such as integers, strings, or booleans. What about more complex data structures such as lists or dictionaries." }, { "code": null, "e": 6572, "s": 6446, "text": ">>> lst = [1, 2, 3, 257]>>> hex(id(lst))'0x236330edf88'>>> hex(id(lst[0]))'0x7ffdf176a190'>>> hex(id(lst[3]))'0x7ffdf176a1b0'" }, { "code": null, "e": 6823, "s": 6572, "text": "The example clearly shows that the memory address of the list object is different from its items. It makes sense since a list is a collection of objects, each of its items has its own identity and is a separate object with a different memory address." }, { "code": null, "e": 6960, "s": 6823, "text": "If each item in a list is a single object, does interning (from the previous section) apply to each item in a list? It is easy to check." }, { "code": null, "e": 7040, "s": 6960, "text": ">>> a = 1>>> b = 257>>> hex(id(a))‘0x7ffdf176a190’>>> hex(id(b))‘0x236330dc450’" }, { "code": null, "e": 7258, "s": 7040, "text": "As you see, both a and lst[0] are pointing to the same memory address due to integer interning. Also, you can see, when the integer number goes beyond 256, both b and lst[1] are pointing to different memory addresses." }, { "code": null, "e": 7352, "s": 7258, "text": "What happens when we add a new item to a list. Does the memory address change? Let’s test it." }, { "code": null, "e": 7470, "s": 7352, "text": ">>> lst = [1, 2, 3]>>> hex(id(lst))'0x23633104888'>>> lst.append(4)>>> lst[1, 2, 3, 4]>>> hex(id(lst))'0x23633104888'" }, { "code": null, "e": 7664, "s": 7470, "text": "Interesting, the memory address for the list remains the same. The reason is that a list is a mutable object, and if you add items to it, the object is still the same object with one more item." }, { "code": null, "e": 7869, "s": 7664, "text": "Another important fact about mutable objects is that if you instantiate different variables from an object, all of them will change if you make any change to the object. Let me show it with a simple code." }, { "code": null, "e": 7984, "s": 7869, "text": ">>> lst1 = [1, 2, 3]>>> lst2 = lst1>>> lst1.append(4)>>> lst2[1, 2, 3, 4]>>> lst2.append(5)>>> lst1[1, 2, 3, 4, 5]" }, { "code": null, "e": 8328, "s": 7984, "text": "In this example, both variables lst1 and lst2 are pointing to the same mutable object (i.e. [1, 2, 3]). 
If any of those variables change the object (e.g., append a new item), the value of another variable (which is pointing to the same object) will also change. The only way to get a separate copy of a mutable object is to use .copy() method." }, { "code": null, "e": 8470, "s": 8328, "text": ">>> lst1 = [1, 2, 3]>>> lst2 = lst1.copy()>>> lst1.append(4)>>> lst2[1, 2, 3]>>> hex(id(lst1))'0x236330dfe08'>>> hex(id(lst2))'0x236330e0c88'" }, { "code": null, "e": 8639, "s": 8470, "text": "As you see, using the .copy() method, we are creating two separate list objects with different memory addresses that changing one of them does not change the other one." }, { "code": null, "e": 8878, "s": 8639, "text": "Almost anything we said about the list objects also applies to the dictionary objects. There are some subtle differences that are beyond this article. For example, the way their memory size grows after adding a new item will be different." }, { "code": null, "e": 9084, "s": 8878, "text": "To check if two or more variables are pointing to the same object, you don't need to check their memory address. You can check if two variables are pointing to the same object using is. Here is an example." }, { "code": null, "e": 9183, "s": 9084, "text": ">>> lst1 = [1, 2, 3]>>> lst2 = lst1>>> lst3 = lst1.copy()>>> lst2 is lst1True>>> lst3 is lst1False" }, { "code": null, "e": 9503, "s": 9183, "text": "Remember, is is different from == . is tells you if two objects are the same but == tells you if their content or value is the same. For example, in the previous code, the contents of both lst3 and lst1 are the same (i.e. [1, 2, 3]), but they are two separate and different objects. The following code shows it clearly." }, { "code": null, "e": 9545, "s": 9503, "text": ">>> lst3 == lst1True>>> lst3 is lst1False" }, { "code": null, "e": 9794, "s": 9545, "text": "This article gave you the basic knowledge about how Python (more accurate the CPython implementation) allocates memory to objects. When working on big data in Python, you need to know these fundamental concepts to write more memory-efficient codes." } ]
K-Palindrome | Practice | GeeksforGeeks
Given a string str of length n, find if the string is K-Palindrome or not. A k-palindrome string transforms into a palindrome on removing at most k characters from it. Example 1: Input: str = "abcdecba" n = 8, k = 1 Output: 1 Explaination: By removing 'd' or 'e' we can make it a palindrome. Example 2: Input: str = "abcdefcba" n = 9, k = 1 Output: 0 Explaination: By removing a single character we cannot make it a palindrome. Your Task: You do not need to read input or print anything. Your task is to complete the function kPalindrome() which takes string str, n and k as input parameters and returns 1 if str is a K-palindrome else returns 0. Expected Time Complexity: O(n*n) Expected Auxiliary Space: O(n*n) Constraints: 1 ≤ n, k ≤ 103 0 varshil3 weeks ago memo={} def ispalindrome(l,r,deletes): if (l,r,deletes) in memo: return memo[(l,r,deletes)] if deletes>c: return False if l>r: return True best=ispalindrome(l+1,r-1,deletes) if s[l]==s[r] else (ispalindrome(l,r-1,deletes+1) or ispalindrome(l+1,r,deletes+1)) memo[(l,r,deletes)]=best return best N=len(s) if ispalindrome(0,N-1,0): return 1 return 0 0 geminicode1 month ago Please edit the expected time complexity as O(n*n) is giving TLE in python. 0 ankitparashxr4 months ago time-0.1sec(java) class Solution{ static int kPalindrome(String str, int n, int k) { if(n==1 || k>n) { return 1; } int count = 0; if(n%2==0) { k = k+1; } for(int i =0;i<n;i++) { if(str.charAt(i)!=str.charAt(n-(i+1))) { count++; } } if(count==k) { return 1; } return 0; }} 0 ghoshghoshbishal4 months ago No DP→ static int kPalindrome(String str, int n, int k) { if(str.length() == 1) return 1; int start = 0; int end = str.length()-1; while(start <= end){ if(str.charAt(start) != str.charAt(end)){ if(k > 0){ k--; if(str.charAt(start) == str.charAt(end-1)) start--; else end++; } else return 0; } start++; end--; } return 1; } 0 angadstyle984 months ago java o(n) and constant space solution. Easy to understand public static int kPallindromeHelper(String s, int k) { int count = 0, n = s.length(); for (int i = 0, j = n - 1; i <= j; i++, j--) { if (s.charAt(i) != s.charAt(j)) count += 2; } // even length then remove count - 1 number of chars // odd length then remove count number of chars int ans = 0; if ((n & 1) == 0) ans = count - 1; else ans = count; if (ans <= k) return 1; else return 0; } 0 samsasuke76 months ago class Solution{public: int kPalindrome(string s1, int n, int k) { // code here int t[n+1][n+1]; string s2= s1; if(s1.length()==1) return 1; if(s1.length()<=k) return 1; reverse(s2.begin(), s2.end()); for(int i=0;i<n+1;i++){ for(int j=0;j<n+1;j++){ if(i==0 || j==0) t[i][j] = 0; } } for(int i=1;i<n+1;i++){ for(int j=1;j<n+1;j++){ if(s1[i-1] == s2[j-1]) t[i][j] = 1+ t[i-1][j-1]; else t[i][j] = max(t[i-1][j], t[i][j-1]); } } //cout<<t[n][n]<<endl; int size = n-t[n][n]; //cout<<size<<endl; if(size-k == 0) return 1; else return 0; }}; 0 ransomsumit16 months ago I don't know why people have used dp approach? int kPalindrome(string str, int n, int k) { int i = 0; int j=n-1; while(i<j) { if(str[i]==str[j]) { i++;j--; } else { if(i+1<j && str[i+1]==str[j]) { i++; k--; } else if(j-1>i && str[i]==str[j-1]) { j--; k--; } else { if(i+1 == j) { i++; k--; } else { i++; j--; k-=2; } } } } if(k<0) return 0; return 1; } 0 Saurabh Sharma8 months ago Saurabh Sharma https://uploads.disquscdn.c... Simple Java Solution DP Using LCS 0 Saurabh Sharma This comment was deleted. 0 Samundar Singh9 months ago Samundar Singh https://uploads.disquscdn.c... 
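For reference, the expected O(n*n) bound can be met with a longest-common-subsequence formulation: a string is a k-palindrome exactly when n minus the length of the LCS of str and its reverse is at most k, since that difference is the minimum number of characters that must be removed to reach a palindrome. A minimal Python sketch of this idea (function and variable names are illustrative, not part of the GeeksforGeeks driver code):

def k_palindrome(s, n, k):
    # LCS of s and its reverse is the longest palindromic subsequence of s
    r = s[::-1]
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if s[i - 1] == r[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # characters that must be removed to obtain a palindrome
    return 1 if n - dp[n][n] <= k else 0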
[ { "code": null, "e": 406, "s": 238, "text": "Given a string str of length n, find if the string is K-Palindrome or not. A k-palindrome string transforms into a palindrome on removing at most k characters from it." }, { "code": null, "e": 418, "s": 406, "text": "\nExample 1:" }, { "code": null, "e": 532, "s": 418, "text": "Input: str = \"abcdecba\"\nn = 8, k = 1\nOutput: 1\nExplaination: By removing 'd' or 'e' \nwe can make it a palindrome." }, { "code": null, "e": 544, "s": 532, "text": "\nExample 2:" }, { "code": null, "e": 670, "s": 544, "text": "Input: str = \"abcdefcba\"\nn = 9, k = 1\nOutput: 0\nExplaination: By removing a single \ncharacter we cannot make it a palindrome." }, { "code": null, "e": 890, "s": 670, "text": "\nYour Task:\nYou do not need to read input or print anything. Your task is to complete the function kPalindrome() which takes string str, n and k as input parameters and returns 1 if str is a K-palindrome else returns 0." }, { "code": null, "e": 957, "s": 890, "text": "\nExpected Time Complexity: O(n*n)\nExpected Auxiliary Space: O(n*n)" }, { "code": null, "e": 986, "s": 957, "text": "\nConstraints:\n1 ≤ n, k ≤ 103" }, { "code": null, "e": 988, "s": 986, "text": "0" }, { "code": null, "e": 1007, "s": 988, "text": "varshil3 weeks ago" }, { "code": null, "e": 1513, "s": 1007, "text": " memo={}\n def ispalindrome(l,r,deletes):\n if (l,r,deletes) in memo:\n return memo[(l,r,deletes)]\n if deletes>c:\n return False\n if l>r:\n return True\n best=ispalindrome(l+1,r-1,deletes) if s[l]==s[r] else (ispalindrome(l,r-1,deletes+1) or ispalindrome(l+1,r,deletes+1))\n memo[(l,r,deletes)]=best\n return best\n N=len(s)\n if ispalindrome(0,N-1,0):\n return 1\n return 0" }, { "code": null, "e": 1515, "s": 1513, "text": "0" }, { "code": null, "e": 1537, "s": 1515, "text": "geminicode1 month ago" }, { "code": null, "e": 1613, "s": 1537, "text": "Please edit the expected time complexity as O(n*n) is giving TLE in python." }, { "code": null, "e": 1617, "s": 1615, "text": "0" }, { "code": null, "e": 1643, "s": 1617, "text": "ankitparashxr4 months ago" }, { "code": null, "e": 1661, "s": 1643, "text": "time-0.1sec(java)" }, { "code": null, "e": 2061, "s": 1661, "text": "class Solution{ static int kPalindrome(String str, int n, int k) { if(n==1 || k>n) { return 1; } int count = 0; if(n%2==0) { k = k+1; } for(int i =0;i<n;i++) { if(str.charAt(i)!=str.charAt(n-(i+1))) { count++; } } if(count==k) { return 1; } return 0; }}" }, { "code": null, "e": 2063, "s": 2061, "text": "0" }, { "code": null, "e": 2092, "s": 2063, "text": "ghoshghoshbishal4 months ago" }, { "code": null, "e": 2100, "s": 2092, "text": "No DP→ " }, { "code": null, "e": 2636, "s": 2100, "text": "static int kPalindrome(String str, int n, int k) { if(str.length() == 1) return 1; int start = 0; int end = str.length()-1; while(start <= end){ if(str.charAt(start) != str.charAt(end)){ if(k > 0){ k--; if(str.charAt(start) == str.charAt(end-1)) start--; else end++; } else return 0; } start++; end--; } return 1; }" }, { "code": null, "e": 2640, "s": 2638, "text": "0" }, { "code": null, "e": 2665, "s": 2640, "text": "angadstyle984 months ago" }, { "code": null, "e": 3129, "s": 2665, "text": "java o(n) and constant space solution. 
Easy to understand public static int kPallindromeHelper(String s, int k) { int count = 0, n = s.length(); for (int i = 0, j = n - 1; i <= j; i++, j--) { if (s.charAt(i) != s.charAt(j)) count += 2; } // even length then remove count - 1 number of chars // odd length then remove count number of chars int ans = 0; if ((n & 1) == 0) ans = count - 1; else ans = count;" }, { "code": null, "e": 3186, "s": 3129, "text": " if (ans <= k) return 1; else return 0; } " }, { "code": null, "e": 3188, "s": 3186, "text": "0" }, { "code": null, "e": 3211, "s": 3188, "text": "samsasuke76 months ago" }, { "code": null, "e": 4158, "s": 3211, "text": "class Solution{public: int kPalindrome(string s1, int n, int k) { // code here int t[n+1][n+1]; string s2= s1; if(s1.length()==1) return 1; if(s1.length()<=k) return 1; reverse(s2.begin(), s2.end()); for(int i=0;i<n+1;i++){ for(int j=0;j<n+1;j++){ if(i==0 || j==0) t[i][j] = 0; } } for(int i=1;i<n+1;i++){ for(int j=1;j<n+1;j++){ if(s1[i-1] == s2[j-1]) t[i][j] = 1+ t[i-1][j-1]; else t[i][j] = max(t[i-1][j], t[i][j-1]); } } //cout<<t[n][n]<<endl; int size = n-t[n][n]; //cout<<size<<endl; if(size-k == 0) return 1; else return 0; }}; " }, { "code": null, "e": 4160, "s": 4158, "text": "0" }, { "code": null, "e": 4185, "s": 4160, "text": "ransomsumit16 months ago" }, { "code": null, "e": 4232, "s": 4185, "text": "I don't know why people have used dp approach?" }, { "code": null, "e": 4994, "s": 4232, "text": "int kPalindrome(string str, int n, int k)\n{\n int i = 0;\n int j=n-1;\n while(i<j)\n {\n if(str[i]==str[j])\n {\n i++;j--;\n }\n else\n {\n if(i+1<j && str[i+1]==str[j])\n {\n i++;\n k--;\n }\n else if(j-1>i && str[i]==str[j-1])\n {\n j--;\n k--;\n }\n else\n {\n if(i+1 == j)\n {\n i++;\n k--;\n }\n else\n {\n i++;\n j--;\n k-=2;\n }\n }\n }\n }\n if(k<0) return 0;\n return 1;\n}" }, { "code": null, "e": 4996, "s": 4994, "text": "0" }, { "code": null, "e": 5023, "s": 4996, "text": "Saurabh Sharma8 months ago" }, { "code": null, "e": 5038, "s": 5023, "text": "Saurabh Sharma" }, { "code": null, "e": 5103, "s": 5038, "text": "https://uploads.disquscdn.c... Simple Java Solution DP Using LCS" }, { "code": null, "e": 5105, "s": 5103, "text": "0" }, { "code": null, "e": 5120, "s": 5105, "text": "Saurabh Sharma" }, { "code": null, "e": 5146, "s": 5120, "text": "This comment was deleted." }, { "code": null, "e": 5148, "s": 5146, "text": "0" }, { "code": null, "e": 5175, "s": 5148, "text": "Samundar Singh9 months ago" }, { "code": null, "e": 5190, "s": 5175, "text": "Samundar Singh" }, { "code": null, "e": 5221, "s": 5190, "text": "https://uploads.disquscdn.c..." }, { "code": null, "e": 5367, "s": 5221, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 5403, "s": 5367, "text": " Login to access your submissions. " }, { "code": null, "e": 5413, "s": 5403, "text": "\nProblem\n" }, { "code": null, "e": 5423, "s": 5413, "text": "\nContest\n" }, { "code": null, "e": 5486, "s": 5423, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 5634, "s": 5486, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 5842, "s": 5634, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. 
On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5948, "s": 5842, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
Convert PyMongo Cursor to JSON - GeeksforGeeks
26 May, 2020 Prerequisites: MongoDB Python Basics This article is about converting the PyMongo Cursor to JSON. Functions like find() and find_one() returns the Cursor instance. Let’s begin: Importing Required Modules: Import the required module using the command:from pymongo import MongoClient from bson.json_util import dumpsIf MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with PythonCreating a Connection: Now we had already imported the module, its time to establish a connection to the MongoDB server, presumably which is running on localhost (host name) at port 27017 (port number).client = MongoClient(‘localhost’, 27017)Accessing the Database: Since the connection to the MongoDB server is established. We can now create or use the existing database.mydatabase = client.name_of_the_databaseAccessing the Collection: We now select the collection from the database using the following syntax:collection_name = mydatabase.name_of_collectionGetting the documents: Getting all the documents from the collection using find() method. It returns the instance of the Cursor.cursor = collection_name.find() Converting the Cursor to JSON: Converting the Cursor to the JSON.First, we will convert the Cursor to the list of dictionary.list_cur = list(cursor)Now, converting the list_cur to the JSON using the method dumps() from bson.json_utiljson_data = dumps(list_cur) You can now save it to the file or can use it in the program using loads() function. Importing Required Modules: Import the required module using the command:from pymongo import MongoClient from bson.json_util import dumpsIf MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with Python from pymongo import MongoClient from bson.json_util import dumps If MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with Python Creating a Connection: Now we had already imported the module, its time to establish a connection to the MongoDB server, presumably which is running on localhost (host name) at port 27017 (port number).client = MongoClient(‘localhost’, 27017) client = MongoClient(‘localhost’, 27017) Accessing the Database: Since the connection to the MongoDB server is established. We can now create or use the existing database.mydatabase = client.name_of_the_database mydatabase = client.name_of_the_database Accessing the Collection: We now select the collection from the database using the following syntax:collection_name = mydatabase.name_of_collection collection_name = mydatabase.name_of_collection Getting the documents: Getting all the documents from the collection using find() method. It returns the instance of the Cursor.cursor = collection_name.find() cursor = collection_name.find() Converting the Cursor to JSON: Converting the Cursor to the JSON.First, we will convert the Cursor to the list of dictionary.list_cur = list(cursor)Now, converting the list_cur to the JSON using the method dumps() from bson.json_utiljson_data = dumps(list_cur) You can now save it to the file or can use it in the program using loads() function. list_cur = list(cursor) Now, converting the list_cur to the JSON using the method dumps() from bson.json_util json_data = dumps(list_cur) You can now save it to the file or can use it in the program using loads() function. Below is the implementation. 
# Python Program for
# demonstrating the
# PyMongo Cursor to JSON

# Importing required modules
from pymongo import MongoClient
from bson.json_util import dumps, loads

# Connecting to MongoDB server
# client = MongoClient('host_name',
# 'port_number')
client = MongoClient('localhost', 27017)

# Connecting to the database named
# GFG
mydatabase = client.GFG

# Accessing the collection named
# gfg_collection
mycollection = mydatabase.College

# Now creating a Cursor instance
# using find() function
cursor = mycollection.find()

# Converting cursor to the list
# of dictionaries
list_cur = list(cursor)

# Converting to the JSON
json_data = dumps(list_cur, indent = 2)

# Writing data to file data.json
with open('data.json', 'w') as file:
    file.write(json_data)

Output:
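The dumped file can be read back into Python objects with loads() from bson.json_util, which also understands MongoDB-specific types such as ObjectId. A minimal sketch, assuming the data.json file produced above:

# Reading the JSON back into a list of dictionaries
with open('data.json', 'r') as file:
    data = loads(file.read())

print(type(data))   # <class 'list'>
print(data[0])      # first document as a dictionary (assuming the collection was not empty)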
[ { "code": null, "e": 24901, "s": 24873, "text": "\n26 May, 2020" }, { "code": null, "e": 24938, "s": 24901, "text": "Prerequisites: MongoDB Python Basics" }, { "code": null, "e": 25065, "s": 24938, "text": "This article is about converting the PyMongo Cursor to JSON. Functions like find() and find_one() returns the Cursor instance." }, { "code": null, "e": 25078, "s": 25065, "text": "Let’s begin:" }, { "code": null, "e": 26396, "s": 25078, "text": "Importing Required Modules: Import the required module using the command:from pymongo import MongoClient\nfrom bson.json_util import dumpsIf MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with PythonCreating a Connection: Now we had already imported the module, its time to establish a connection to the MongoDB server, presumably which is running on localhost (host name) at port 27017 (port number).client = MongoClient(‘localhost’, 27017)Accessing the Database: Since the connection to the MongoDB server is established. We can now create or use the existing database.mydatabase = client.name_of_the_databaseAccessing the Collection: We now select the collection from the database using the following syntax:collection_name = mydatabase.name_of_collectionGetting the documents: Getting all the documents from the collection using find() method. It returns the instance of the Cursor.cursor = collection_name.find()\nConverting the Cursor to JSON: Converting the Cursor to the JSON.First, we will convert the Cursor to the list of dictionary.list_cur = list(cursor)Now, converting the list_cur to the JSON using the method dumps() from bson.json_utiljson_data = dumps(list_cur)\nYou can now save it to the file or can use it in the program using loads() function." }, { "code": null, "e": 26650, "s": 26396, "text": "Importing Required Modules: Import the required module using the command:from pymongo import MongoClient\nfrom bson.json_util import dumpsIf MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with Python" }, { "code": null, "e": 26715, "s": 26650, "text": "from pymongo import MongoClient\nfrom bson.json_util import dumps" }, { "code": null, "e": 26832, "s": 26715, "text": "If MongoDB is already not installed on your machine you can refer to the guide: Guide to Install MongoDB with Python" }, { "code": null, "e": 27075, "s": 26832, "text": "Creating a Connection: Now we had already imported the module, its time to establish a connection to the MongoDB server, presumably which is running on localhost (host name) at port 27017 (port number).client = MongoClient(‘localhost’, 27017)" }, { "code": null, "e": 27116, "s": 27075, "text": "client = MongoClient(‘localhost’, 27017)" }, { "code": null, "e": 27287, "s": 27116, "text": "Accessing the Database: Since the connection to the MongoDB server is established. We can now create or use the existing database.mydatabase = client.name_of_the_database" }, { "code": null, "e": 27328, "s": 27287, "text": "mydatabase = client.name_of_the_database" }, { "code": null, "e": 27476, "s": 27328, "text": "Accessing the Collection: We now select the collection from the database using the following syntax:collection_name = mydatabase.name_of_collection" }, { "code": null, "e": 27524, "s": 27476, "text": "collection_name = mydatabase.name_of_collection" }, { "code": null, "e": 27685, "s": 27524, "text": "Getting the documents: Getting all the documents from the collection using find() method. 
It returns the instance of the Cursor.cursor = collection_name.find()\n" }, { "code": null, "e": 27718, "s": 27685, "text": "cursor = collection_name.find()\n" }, { "code": null, "e": 28064, "s": 27718, "text": "Converting the Cursor to JSON: Converting the Cursor to the JSON.First, we will convert the Cursor to the list of dictionary.list_cur = list(cursor)Now, converting the list_cur to the JSON using the method dumps() from bson.json_utiljson_data = dumps(list_cur)\nYou can now save it to the file or can use it in the program using loads() function." }, { "code": null, "e": 28088, "s": 28064, "text": "list_cur = list(cursor)" }, { "code": null, "e": 28174, "s": 28088, "text": "Now, converting the list_cur to the JSON using the method dumps() from bson.json_util" }, { "code": null, "e": 28203, "s": 28174, "text": "json_data = dumps(list_cur)\n" }, { "code": null, "e": 28288, "s": 28203, "text": "You can now save it to the file or can use it in the program using loads() function." }, { "code": null, "e": 28317, "s": 28288, "text": "Below is the implementation." }, { "code": "# Python Program for# demonstrating the # PyMongo Cursor to JSON # Importing required modulesfrom pymongo import MongoClientfrom bson.json_util import dumps, loads # Connecting to MongoDB server# client = MongoClient('host_name',# 'port_number')client = MongoClient('localhost', 27017) # Connecting to the database named# GFGmydatabase = client.GFG # Accessing the collection named# gfg_collectionmycollection = mydatabase.College # Now creating a Cursor instance# using find() functioncursor = mycollection.find() # Converting cursor to the list # of dictionarieslist_cur = list(cursor) # Converting to the JSONjson_data = dumps(list_cur, indent = 2) # Writing data to file data.jsonwith open('data.json', 'w') as file: file.write(json_data)", "e": 29079, "s": 28317, "text": null }, { "code": null, "e": 29087, "s": 29079, "text": "Output:" }, { "code": null, "e": 29102, "s": 29087, "text": "Python-mongoDB" }, { "code": null, "e": 29109, "s": 29102, "text": "Python" }, { "code": null, "e": 29207, "s": 29109, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29216, "s": 29207, "text": "Comments" }, { "code": null, "e": 29229, "s": 29216, "text": "Old Comments" }, { "code": null, "e": 29247, "s": 29229, "text": "Python Dictionary" }, { "code": null, "e": 29282, "s": 29247, "text": "Read a file line by line in Python" }, { "code": null, "e": 29304, "s": 29282, "text": "Enumerate() in Python" }, { "code": null, "e": 29336, "s": 29304, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 29366, "s": 29336, "text": "Iterate over a list in Python" }, { "code": null, "e": 29408, "s": 29366, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 29434, "s": 29408, "text": "Python String | replace()" }, { "code": null, "e": 29477, "s": 29434, "text": "Python program to convert a list to string" }, { "code": null, "e": 29521, "s": 29477, "text": "Reading and Writing to text files in Python" } ]
Deep count of elements of an array using JavaScript
We are required to write a JavaScript function that takes in a nested array of elements and returns the deep count of elements present in the array.

Input

const arr = [1, 2, [3, 4, [5]]];

Output

const output = 7;

Because there are 3 elements at level 1 (including the nested array), 3 elements at level 2 and 1 element at level 3, the deep count is 7.

Following is the code −

const arr = [1, 2, [3, 4, [5]]];
const deepCount = (arr = []) => {
   return arr
   .reduce((acc, val) => {
      return acc + (Array.isArray(val) ? deepCount(val) : 0);
   }, arr.length);
};
console.log(deepCount(arr));

We used the Array.prototype.reduce() method to iterate over the array, starting the accumulator at arr.length, and whenever we encountered a nested array we recursively called our function and added its deep count.

7
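The same count can also be computed iteratively with an explicit stack, which avoids deep recursion for heavily nested input. A possible variant (the helper name is hypothetical):

const deepCountIterative = (arr = []) => {
   let count = 0;
   const stack = [...arr];
   while (stack.length) {
      const val = stack.pop();
      count += 1;                 // count the element itself
      if (Array.isArray(val)) {
         stack.push(...val);      // and queue its children for counting
      }
   }
   return count;
};
console.log(deepCountIterative([1, 2, [3, 4, [5]]])); // 7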
[ { "code": null, "e": 1209, "s": 1062, "text": "We are required to write a JavaScript function that takes in a nested array of element and return the deep count of elements present in the array." }, { "code": null, "e": 1215, "s": 1209, "text": "Input" }, { "code": null, "e": 1248, "s": 1215, "text": "const arr = [1, 2, [3, 4, [5]]];" }, { "code": null, "e": 1255, "s": 1248, "text": "Output" }, { "code": null, "e": 1273, "s": 1255, "text": "const output = 7;" }, { "code": null, "e": 1396, "s": 1273, "text": "Because the elements at level 1 are 2, elements at level 2 are 2 and elements at level 3 are 1, Hence the deep count is 7." }, { "code": null, "e": 1420, "s": 1396, "text": "Following is the code −" }, { "code": null, "e": 1431, "s": 1420, "text": " Live Demo" }, { "code": null, "e": 1652, "s": 1431, "text": "const arr = [1, 2, [3, 4, [5]]];\nconst deepCount = (arr = []) => {\n return arr\n .reduce((acc, val) => {\n return acc + (Array.isArray(val) ? deepCount(val) : 0);\n }, arr.length);\n};\nconsole.log(deepCount(arr));" }, { "code": null, "e": 1813, "s": 1652, "text": "We used the Array.prototype.reduce() method to iterate over the array and if at any iteration we encountered a nested array, we recursively called our function." }, { "code": null, "e": 1815, "s": 1813, "text": "7" } ]
How to produce group subtotals and a grand total in Oracle?
Problem Statement:You want to find out totals, subtotals and a grand total in Oracle. Solution:Oracle ROLLUP function performs grouping at multiple levels, using a right to left method of rolling up through intermediate levels to any grand total. To demonstrate the ROLLUP function we will create a table to hold tennis player along with the ATP tour titles and Grandslam titles acheived by the player. We will begin by create the necessary data for this requirement. -- Drop table DROP TABLE atp_titles; -- Create table CREATE TABLE atp_titles ( player VARCHAR2(100) NOT NULL, title_type VARCHAR2(100) NOT NULL, titles NUMBER NOT NULL); -- Drop table DROP TABLE atp_titles; -- Create table CREATE TABLE atp_titles ( player VARCHAR2(100) NOT NULL, title_type VARCHAR2(100) NOT NULL, titles NUMBER NOT NULL); -- insert ATP tour titles won by the player INSERT INTO atp_titles VALUES('Roger Federer','ATP Tour Titles',103); INSERT INTO atp_titles VALUES('Rafael Nadal','ATP Tour Titles',86); INSERT INTO atp_titles VALUES('Novak Djokovic','ATP Tour Titles',81); INSERT INTO atp_titles VALUES('Pete Sampras','ATP Tour Titles',64); INSERT INTO atp_titles VALUES('Andre Agassi','ATP Tour Titles',52); INSERT INTO atp_titles VALUES('Andy Murray','ATP Tour Titles',46); INSERT INTO atp_titles VALUES('Thomas Muster','ATP Tour Titles',39); INSERT INTO atp_titles VALUES('Andy Roddick','ATP Tour Titles',32); -- insert grandslam titles won by the player INSERT INTO atp_titles VALUES('Roger Federer','Grandslams',20); INSERT INTO atp_titles VALUES('Rafael Nadal','Grandslams',20); INSERT INTO atp_titles VALUES('Novak Djokovic','Grandslams',17); INSERT INTO atp_titles VALUES('Pete Sampras','Grandslams',14); INSERT INTO atp_titles VALUES('Andre Agassi','Grandslams',8); INSERT INTO atp_titles VALUES('Andy Murray','Grandslams',3); INSERT INTO atp_titles VALUES('Thomas Muster','Grandslams',1); INSERT INTO atp_titles VALUES('Andy Roddick','Grandslams',0); COMMIT; Now we will look at few records inserted into atp_titles table. SELECT * FROM atp_titles ORDER BY 1; SELECT * FROM atp_titles ORDER BY 1; Andre Agassi ATP Tour Titles 52 Andre Agassi Grandslams 8 Andy Murray Grandslams 3 Andy Murray ATP Tour Titles 46 Andy Roddick ATP Tour Titles 32 Andy Roddick Grandslams 0 ............................ ............................ Oracle ROLLUP expression produces group subtotals from right to left along with grand total. With above data, let’s say we wanted to identify the total titles (i.e. ATP tour titles + Grandslam Titles) acheived by player “Roger Federer”. SELECT player,title_type, SUM(titles) AS total_titles FROM atp_titles WHERE player = 'Roger Federer' GROUP BY ROLLUP (player,title_type) ORDER BY player,title_type ; SELECT player,title_type, SUM(titles) AS total_titles FROM atp_titles WHERE player = 'Roger Federer' GROUP BY ROLLUP (player,title_type) ORDER BY player,title_type ; ROLLUP produces n+1 levels of subtotals for “n” number of columns listed in the ROLLUP. In above example after performing normal grouping by player and title_type, the ROLLUP function rolls up all title_type values so that we see sum for the Grandslams level for the player “Roger Federer”. You can see the rolled up rows in bold in the output. 
Now we will apply the ROLLUP function for all the players in the table as below:

SELECT player, title_type, SUM(titles) AS total FROM atp_titles GROUP BY ROLLUP (player, title_type) ORDER BY player, title_type;

The ROLLUP function also allows us to perform a partial rollup to reduce the number of subtotals calculated. Listing player before the ROLLUP keeps it as a normal grouping column, so we get a subtotal per player but no grand total. The output from the following partial rollup is shown below:

SELECT player, title_type, SUM(titles) AS total FROM atp_titles GROUP BY player, ROLLUP (title_type) ORDER BY player, title_type;
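In the rolled-up output it is not always obvious which rows are the generated subtotals. Oracle's GROUPING() function returns 1 for rows in which its argument has been rolled up and 0 for regular rows, so it can be used to label them. For example:

SELECT player, title_type, SUM(titles) AS total,
       GROUPING(title_type) AS is_subtotal
  FROM atp_titles
GROUP BY ROLLUP (player, title_type)
ORDER BY player, title_type;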
[ { "code": null, "e": 1148, "s": 1062, "text": "Problem Statement:You want to find out totals, subtotals and a grand total in Oracle." }, { "code": null, "e": 1465, "s": 1148, "text": "Solution:Oracle ROLLUP function performs grouping at multiple levels, using a right to left method of rolling up through intermediate levels to any grand total. To demonstrate the ROLLUP function we will create a table to hold tennis player along with the ATP tour titles and Grandslam titles acheived by the player." }, { "code": null, "e": 1530, "s": 1465, "text": "We will begin by create the necessary data for this requirement." }, { "code": null, "e": 1739, "s": 1530, "text": "-- Drop table\nDROP TABLE atp_titles;\n\n-- Create table\nCREATE TABLE atp_titles (\n player VARCHAR2(100) NOT NULL,\n title_type VARCHAR2(100) NOT NULL,\n titles NUMBER NOT NULL);" }, { "code": null, "e": 1948, "s": 1739, "text": "-- Drop table\nDROP TABLE atp_titles;\n\n-- Create table\nCREATE TABLE atp_titles (\n player VARCHAR2(100) NOT NULL,\n title_type VARCHAR2(100) NOT NULL,\n titles NUMBER NOT NULL);" }, { "code": null, "e": 2541, "s": 1948, "text": "-- insert ATP tour titles won by the player\nINSERT INTO atp_titles VALUES('Roger Federer','ATP Tour Titles',103);\nINSERT INTO atp_titles VALUES('Rafael Nadal','ATP Tour Titles',86);\nINSERT INTO atp_titles VALUES('Novak Djokovic','ATP Tour Titles',81);\nINSERT INTO atp_titles VALUES('Pete Sampras','ATP Tour Titles',64);\nINSERT INTO atp_titles VALUES('Andre Agassi','ATP Tour Titles',52);\nINSERT INTO atp_titles VALUES('Andy Murray','ATP Tour Titles',46);\nINSERT INTO atp_titles VALUES('Thomas Muster','ATP Tour Titles',39);\nINSERT INTO atp_titles VALUES('Andy Roddick','ATP Tour Titles',32);" }, { "code": null, "e": 3099, "s": 2541, "text": "-- insert grandslam titles won by the player\nINSERT INTO atp_titles VALUES('Roger Federer','Grandslams',20);\nINSERT INTO atp_titles VALUES('Rafael Nadal','Grandslams',20);\nINSERT INTO atp_titles VALUES('Novak Djokovic','Grandslams',17);\nINSERT INTO atp_titles VALUES('Pete Sampras','Grandslams',14);\nINSERT INTO atp_titles VALUES('Andre Agassi','Grandslams',8);\nINSERT INTO atp_titles VALUES('Andy Murray','Grandslams',3);\nINSERT INTO atp_titles VALUES('Thomas Muster','Grandslams',1);\nINSERT INTO atp_titles VALUES('Andy Roddick','Grandslams',0);\n\nCOMMIT;" }, { "code": null, "e": 3163, "s": 3099, "text": "Now we will look at few records inserted into atp_titles table." }, { "code": null, "e": 3200, "s": 3163, "text": "SELECT * FROM atp_titles ORDER BY 1;" }, { "code": null, "e": 3237, "s": 3200, "text": "SELECT * FROM atp_titles ORDER BY 1;" }, { "code": null, "e": 3518, "s": 3237, "text": "Andre Agassi ATP Tour Titles 52\nAndre Agassi Grandslams 8\nAndy Murray Grandslams 3\nAndy Murray ATP Tour Titles 46\nAndy Roddick ATP Tour Titles 32\nAndy Roddick Grandslams 0\n............................\n............................" }, { "code": null, "e": 3755, "s": 3518, "text": "Oracle ROLLUP expression produces group subtotals from right to left along with grand total. With above data, let’s say we wanted to identify the total titles (i.e. ATP tour titles + Grandslam Titles) acheived by player “Roger Federer”." 
}, { "code": null, "e": 3924, "s": 3755, "text": "SELECT player,title_type, SUM(titles) AS total_titles\n FROM atp_titles\n WHERE player = 'Roger Federer'\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type ;" }, { "code": null, "e": 4093, "s": 3924, "text": "SELECT player,title_type, SUM(titles) AS total_titles\n FROM atp_titles\n WHERE player = 'Roger Federer'\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type ;" }, { "code": null, "e": 4438, "s": 4093, "text": "ROLLUP produces n+1 levels of subtotals for “n” number of columns listed in the ROLLUP. In above example after performing normal grouping by player and title_type, the ROLLUP function rolls up all title_type values so that we see sum for the Grandslams level for the player “Roger Federer”. You can see the rolled up rows in bold in the output." }, { "code": null, "e": 4519, "s": 4438, "text": "Now we will apply the ROLLUP function for all the players in the table as below:" }, { "code": null, "e": 4649, "s": 4519, "text": "SELECT player, title_type, SUM(titles) As total\n FROM atp_titles\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type;" }, { "code": null, "e": 4779, "s": 4649, "text": "SELECT player, title_type, SUM(titles) As total\n FROM atp_titles\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type;" }, { "code": null, "e": 4939, "s": 4779, "text": "ROLLUP functions allows us to perform partial rollup to reduce the number of subtotals calculated. The output from the following partial rollup is shown below:" }, { "code": null, "e": 5069, "s": 4939, "text": "SELECT player, title_type, SUM(titles) As total\n FROM atp_titles\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type;" }, { "code": null, "e": 5199, "s": 5069, "text": "SELECT player, title_type, SUM(titles) As total\n FROM atp_titles\nGROUP BY ROLLUP (player,title_type)\nORDER BY player,title_type;" } ]
How to convert a Map to JSON object using JSON-lib API in Java?
A JSONObject is an unordered collection of name/value pairs whereas Map is an object that maps keys to values. A Map cannot contain duplicate keys and each key can map to at most one value. We need to use the JSON-lib library for serializing and de-serializing a Map in JSON format. Initially, we can create a POJO class and pass this instance as an argument to the put() method of Map class and finally add this map instance to the accumulateAll() method of JSONObject. public void accumulateAll(Map map) In the below example, we can convert Map to a JSON object. import java.util.*; import net.sf.json.JSONObject; public class ConvertMapToJSONObjectTest { public static void main(String[] args)throws Exception { JSONObject jsonObject = new JSONObject(); Map<Integer, Employee> employees = new HashMap<Integer, Employee>(); employees.put(1, new Employee("Adithya", "Jai", 30)); employees.put(2, new Employee("Vamsi", "Krishna", 28)); employees.put(3, new Employee("Chaitanya", "Sai", 30)); jsonObject.accumulateAll(employees); System.out.println(jsonObject.toString(3)); // pretty print JSON } public static class Employee { private String firstName, lastName; private int age; public Employee(String firstName, String lastName, int age) { super(); this.firstName = firstName; this.lastName = lastName; this.age = age; } public String getFirstName() { return firstName; } public String getLastName() { return lastName; } public int getAge() { return age; } } } { "1": { "firstName": "Adithya", "lastName": "Jai", "age": 30 }, "2": { "firstName": "Vamsi", "lastName": "Krishna", "age": 28 }, "3": { "firstName": "Chaitanya", "lastName": "Sai", "age": 30 } }
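As a side note, json-lib can also build the JSONObject in a single call with JSONObject.fromObject(), which accepts a Map directly. A minimal sketch using a plain Map<String, Object> (assuming the same json-lib jar on the classpath):

import java.util.*;
import net.sf.json.JSONObject;

public class ConvertMapWithFromObjectTest {
   public static void main(String[] args) {
      Map<String, Object> employee = new HashMap<String, Object>();
      employee.put("firstName", "Adithya");
      employee.put("lastName", "Jai");
      employee.put("age", 30);

      // fromObject() converts the Map into a JSONObject in one step
      JSONObject jsonObject = JSONObject.fromObject(employee);
      System.out.println(jsonObject.toString(3)); // pretty print JSON
   }
}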
[ { "code": null, "e": 1533, "s": 1062, "text": "A JSONObject is an unordered collection of name/value pairs whereas Map is an object that maps keys to values. A Map cannot contain duplicate keys and each key can map to at most one value. We need to use the JSON-lib library for serializing and de-serializing a Map in JSON format. Initially, we can create a POJO class and pass this instance as an argument to the put() method of Map class and finally add this map instance to the accumulateAll() method of JSONObject." }, { "code": null, "e": 1568, "s": 1533, "text": "public void accumulateAll(Map map)" }, { "code": null, "e": 1627, "s": 1568, "text": "In the below example, we can convert Map to a JSON object." }, { "code": null, "e": 2705, "s": 1627, "text": "import java.util.*;\nimport net.sf.json.JSONObject;\n\npublic class ConvertMapToJSONObjectTest {\n public static void main(String[] args)throws Exception {\n JSONObject jsonObject = new JSONObject();\n Map<Integer, Employee> employees = new HashMap<Integer, Employee>();\n\n employees.put(1, new Employee(\"Adithya\", \"Jai\", 30));\n employees.put(2, new Employee(\"Vamsi\", \"Krishna\", 28));\n employees.put(3, new Employee(\"Chaitanya\", \"Sai\", 30));\n\n jsonObject.accumulateAll(employees);\n System.out.println(jsonObject.toString(3)); // pretty print JSON\n }\n public static class Employee {\n private String firstName, lastName;\n private int age;\n public Employee(String firstName, String lastName, int age) {\n super();\n this.firstName = firstName;\n this.lastName = lastName;\n this.age = age;\n }\n public String getFirstName() {\n return firstName;\n }\n public String getLastName() {\n return lastName;\n }\n public int getAge() {\n return age;\n }\n }\n}" }, { "code": null, "e": 2916, "s": 2705, "text": "{\n \"1\": {\n \"firstName\": \"Adithya\",\n \"lastName\": \"Jai\",\n \"age\": 30\n },\n \"2\": {\n \"firstName\": \"Vamsi\",\n \"lastName\": \"Krishna\",\n \"age\": 28\n },\n \"3\": {\n \"firstName\": \"Chaitanya\",\n \"lastName\": \"Sai\",\n \"age\": 30\n }\n}" } ]
SAP Web Dynpro - Creating an Application
To create a Web Dynpro application, we will create a Web Dynpro component that consists of one view. We will create a view context → linked to a table element on the view layout and contains the data from the table. The table will be shown in the browser at runtime. A Web Dynpro application for this simple Web Dynpro component, which can be run in the browser will be created. Step 1 − Go to T-Code − SE80 and select Web Dynpro component/intf from the list. Step 2 − Create a new component as the following. Step 3 − Enter the name of the new component and click on display. Step 4 − In the next window, enter the following details − You can enter a description of this component. In type, select a Web Dynpro component. You can also maintain the name of the default window. Step 5 − Assign this component to Package $TMP and click the Save button. When you click Save, you can see this new component under the object tree and it contains − Component Controller Component Interface View Windows When you expand the component interface, you can see the interface controller and interface views. Step 1 − Click on the Web Dynpro component and go to the context menu (right click) → Create → View Step 2 − Create a view MAINVIEW as the following and click on the tick mark. This will open view editor in ABAP workbench under the name − MAINVIEW Step 3 − If you want to open the layout tab and view designer, you may need to enter the application server user name and password. Step 4 − Click the save icon at the top. When you save, it comes under the object tree and you can check by expanding the view tab. Step 5 − To assign the window to this view, select the window ZZ_00_TEST under the window tab and click on Change mode at the top of the screen. Step 6 − You can right-click → Display → In Same Window. Step 7 − Now open the view structure and move the view MAINVIEW inside the window structure on the right hand side by Drag and Drop. Step 8 − Open the window structure on the right hand side and you will see the embedded MAINVIEW. Step 9 − Save by clicking the Save icon on top of the screen. Step 1 − Open the View Editor to view MAINVIEW and switch to tab Context. Create a context node in the View Controller by opening the corresponding context menu. Step 2 − Select the View in the object tree and click Display. Step 3 − Maintain the Properties in the next window. Select the cardinality and dictionary structure (table). Select Add Attribute from Structure and select the components of the structure. Step 4 − To select all the components, click Select all option at the top and then click the tick mark at the bottom of the screen. A context node TEST_NODE has been created, which refers to the data structure of the table and which can contain 0 → n entries at runtime. The context node has been created in the view context, since no data exchange with other views is planned hence component controller context usage is not necessary. Step 5 − Save the changes to MAINVIEW by clicking the Save icon. Step 6 − Go to the Layout tab of MAINVIEW. Insert a new UI element of the type table under ROOTUIELEMENT CONTAINER and assign the properties in the given table. Step 7 − Enter the name of the element and type. Step 8 − Create the binding of TEST_TABLE with context node TEST_NODE. Select Text View as Standard Cell Editors and activate bindings for all cells. Step 9 − Click the Context button. Select the context node as TEST_NODE from the list. Step 10 − You can see all the attributes by selecting it. 
Step 11 − Activate all the checkboxes under Binding for all context attributes by selecting them. Confirm the entry by pressing the Enter key. The result should look like this −

Step 12 − Save the changes.

Step 13 − To supply data to the TEST table, go to the Methods tab and double-click the method WDDOINIT. Enter the following code −

method WDDOINIT .
* data declaration
data:
Node_TEST type REF TO IF_WD_CONTEXT_NODE,
Itab_TEST type standard table of TEST.
* get data from table TEST
select * from TEST into table Itab_TEST.
* navigate from <CONTEXT> to <TEST> via lead selection
Node_TEST = wd_Context->get_Child_Node( Name = `TEST_NODE` ).
* bind internal table to context node <TEST>
Node_TEST->Bind_Table( Itab_TEST ).
endmethod.

In Web Dynpro applications, you should not access database tables directly from Web Dynpro methods; instead, you should use supply functions or BAPI calls for data access.

Step 14 − Save the changes by clicking the save icon on top of the screen.

Step 1 − Select the ZZ_00_TEST component in the object tree → right-click and create a new application.

Step 2 − Enter the application name and click continue.

Step 3 − Save the changes. Save as a local object.

Next is activating objects in the Web Dynpro component −

Step 4 − Double-click on the component ZZ_00_TEST and click Activate.

Step 5 − Select all the objects and click continue.

Step 6 − To run the application, select the Web Dynpro application → right-click and Test.

A browser will be started and the Web Dynpro application will run.

In a Web Dynpro application, the component window has an inbound plug. This inbound plug can have parameters, which have to be specified as URL parameters.

Default values that are overwritten by the URL parameters can be set in the application for these parameters. If neither a default value nor a URL parameter is specified, a runtime error is triggered.

To create a new inbound plug, specify the plug as a startup plug, and its data type should be a string. Activate the component.

Next is to specify the component to be called, parameters, window, and start-up plug.

Call the application; the URL parameters overwrite the application parameters.
[ { "code": null, "e": 2395, "s": 2179, "text": "To create a Web Dynpro application, we will create a Web Dynpro component that consists of one view. We will create a view context → linked to a table element on the view layout and contains the data from the table." }, { "code": null, "e": 2558, "s": 2395, "text": "The table will be shown in the browser at runtime. A Web Dynpro application for this simple Web Dynpro component, which can be run in the browser will be created." }, { "code": null, "e": 2639, "s": 2558, "text": "Step 1 − Go to T-Code − SE80 and select Web Dynpro component/intf from the list." }, { "code": null, "e": 2689, "s": 2639, "text": "Step 2 − Create a new component as the following." }, { "code": null, "e": 2756, "s": 2689, "text": "Step 3 − Enter the name of the new component and click on display." }, { "code": null, "e": 2815, "s": 2756, "text": "Step 4 − In the next window, enter the following details −" }, { "code": null, "e": 2862, "s": 2815, "text": "You can enter a description of this component." }, { "code": null, "e": 2902, "s": 2862, "text": "In type, select a Web Dynpro component." }, { "code": null, "e": 2956, "s": 2902, "text": "You can also maintain the name of the default window." }, { "code": null, "e": 3030, "s": 2956, "text": "Step 5 − Assign this component to Package $TMP and click the Save button." }, { "code": null, "e": 3122, "s": 3030, "text": "When you click Save, you can see this new component under the object tree and it contains −" }, { "code": null, "e": 3143, "s": 3122, "text": "Component Controller" }, { "code": null, "e": 3163, "s": 3143, "text": "Component Interface" }, { "code": null, "e": 3168, "s": 3163, "text": "View" }, { "code": null, "e": 3176, "s": 3168, "text": "Windows" }, { "code": null, "e": 3275, "s": 3176, "text": "When you expand the component interface, you can see the interface controller and interface views." }, { "code": null, "e": 3375, "s": 3275, "text": "Step 1 − Click on the Web Dynpro component and go to the context menu (right click) → Create → View" }, { "code": null, "e": 3452, "s": 3375, "text": "Step 2 − Create a view MAINVIEW as the following and click on the tick mark." }, { "code": null, "e": 3523, "s": 3452, "text": "This will open view editor in ABAP workbench under the name − MAINVIEW" }, { "code": null, "e": 3655, "s": 3523, "text": "Step 3 − If you want to open the layout tab and view designer, you may need to enter the application server user name and password." }, { "code": null, "e": 3696, "s": 3655, "text": "Step 4 − Click the save icon at the top." }, { "code": null, "e": 3787, "s": 3696, "text": "When you save, it comes under the object tree and you can check by expanding the view tab." }, { "code": null, "e": 3932, "s": 3787, "text": "Step 5 − To assign the window to this view, select the window ZZ_00_TEST under the window tab and click on Change mode at the top of the screen." }, { "code": null, "e": 3989, "s": 3932, "text": "Step 6 − You can right-click → Display → In Same Window." }, { "code": null, "e": 4122, "s": 3989, "text": "Step 7 − Now open the view structure and move the view MAINVIEW inside the window structure on the right hand side by Drag and Drop." }, { "code": null, "e": 4220, "s": 4122, "text": "Step 8 − Open the window structure on the right hand side and you will see the embedded MAINVIEW." }, { "code": null, "e": 4282, "s": 4220, "text": "Step 9 − Save by clicking the Save icon on top of the screen." 
}, { "code": null, "e": 4444, "s": 4282, "text": "Step 1 − Open the View Editor to view MAINVIEW and switch to tab Context. Create a context node in the View Controller by opening the corresponding context menu." }, { "code": null, "e": 4507, "s": 4444, "text": "Step 2 − Select the View in the object tree and click Display." }, { "code": null, "e": 4697, "s": 4507, "text": "Step 3 − Maintain the Properties in the next window. Select the cardinality and dictionary structure (table). Select Add Attribute from Structure and select the components of the structure." }, { "code": null, "e": 4829, "s": 4697, "text": "Step 4 − To select all the components, click Select all option at the top and then click the tick mark at the bottom of the screen." }, { "code": null, "e": 5133, "s": 4829, "text": "A context node TEST_NODE has been created, which refers to the data structure of the table and which can contain 0 → n entries at runtime. The context node has been created in the view context, since no data exchange with other views is planned hence component controller context usage is not necessary." }, { "code": null, "e": 5198, "s": 5133, "text": "Step 5 − Save the changes to MAINVIEW by clicking the Save icon." }, { "code": null, "e": 5359, "s": 5198, "text": "Step 6 − Go to the Layout tab of MAINVIEW. Insert a new UI element of the type table under ROOTUIELEMENT CONTAINER and assign the properties in the given table." }, { "code": null, "e": 5408, "s": 5359, "text": "Step 7 − Enter the name of the element and type." }, { "code": null, "e": 5558, "s": 5408, "text": "Step 8 − Create the binding of TEST_TABLE with context node TEST_NODE. Select Text View as Standard Cell Editors and activate bindings for all cells." }, { "code": null, "e": 5645, "s": 5558, "text": "Step 9 − Click the Context button. Select the context node as TEST_NODE from the list." }, { "code": null, "e": 5703, "s": 5645, "text": "Step 10 − You can see all the attributes by selecting it." }, { "code": null, "e": 5842, "s": 5703, "text": "Step 11 − Activate all the checkboxes under Binding for all context attributes by selecting them. Confirm Entry by pressing the Enter key." }, { "code": null, "e": 5877, "s": 5842, "text": "The result should look like this −" }, { "code": null, "e": 5905, "s": 5877, "text": "Step 12 − Save the changes." }, { "code": null, "e": 6024, "s": 5905, "text": "Step 13 − To supply data to TEST table, go to Methods tab and double-click method WDDOINIT. Enter the following code −" }, { "code": null, "e": 6426, "s": 6024, "text": "method WDDOINIT .\n* data declaration\ndata:\nNode_TEST type REF TO IF_WD_CONTEXT_NODE,\nItab_TEST type standard table of TEST.\n* get data from table TEST\nselect * from TEST into table Itab_TEST.\n* navigate from <CONTEXT> to <TEST> via lead selection\nNode_TEST = wd_Context->get_Child_Node( Name = `TEST_NODE` ).\n* bind internal table to context node <TEST>\nNode_TEST->Bind_Table( Itab_TEST ).\nendmethod.\n" }, { "code": null, "e": 6595, "s": 6426, "text": "Web Dynpro applications, you should not access database tables directly from Web Dynpro methods, however, you should use supply functions or BAPI calls for data access." }, { "code": null, "e": 6670, "s": 6595, "text": "Step 14 − Save the changes by clicking the save icon on top of the screen." }, { "code": null, "e": 6774, "s": 6670, "text": "Step 1 − Select the ZZ_00_TEST component in the object tree → right-click and create a new application." 
}, { "code": null, "e": 6830, "s": 6774, "text": "Step 2 − Enter the application name and click continue." }, { "code": null, "e": 6881, "s": 6830, "text": "Step 3 − Save the changes. Save as a local object." }, { "code": null, "e": 6934, "s": 6881, "text": "Next is activating objects in Web Dynpro component −" }, { "code": null, "e": 7004, "s": 6934, "text": "Step 4 − Double-click on the component ZZ_00_TEST and click Activate." }, { "code": null, "e": 7056, "s": 7004, "text": "Step 5 − Select all the objects and click continue." }, { "code": null, "e": 7143, "s": 7056, "text": "Step 6 − To run the application, select Web Dynpro application → Right-click and Test." }, { "code": null, "e": 7208, "s": 7143, "text": "A browser will be started and Web Dypro application will be run." }, { "code": null, "e": 7364, "s": 7208, "text": "In a Web Dynpro application, the component window has an inbound plug. This inbound plug can have parameters, which have to be specified as URL parameters." }, { "code": null, "e": 7565, "s": 7364, "text": "Default values that are overwritten by the URL parameters can be set in the application for these parameters. If neither a default value nor a URL parameter is specified, a runtime error is triggered." }, { "code": null, "e": 7679, "s": 7565, "text": "To create a new inbound plug, specify plug as a startup and data type should be a string. Activate the component." }, { "code": null, "e": 7765, "s": 7679, "text": "Next is to specify the component to be called, parameters, window, and start-up plug." }, { "code": null, "e": 7839, "s": 7765, "text": "Call the application and URL parameters overwrite application parameters." }, { "code": null, "e": 7872, "s": 7839, "text": "\n 25 Lectures \n 6 hours \n" }, { "code": null, "e": 7886, "s": 7872, "text": " Sanjo Thomas" }, { "code": null, "e": 7919, "s": 7886, "text": "\n 26 Lectures \n 2 hours \n" }, { "code": null, "e": 7931, "s": 7919, "text": " Neha Gupta" }, { "code": null, "e": 7966, "s": 7931, "text": "\n 30 Lectures \n 2.5 hours \n" }, { "code": null, "e": 7981, "s": 7966, "text": " Sumit Agarwal" }, { "code": null, "e": 8014, "s": 7981, "text": "\n 30 Lectures \n 4 hours \n" }, { "code": null, "e": 8029, "s": 8014, "text": " Sumit Agarwal" }, { "code": null, "e": 8064, "s": 8029, "text": "\n 14 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8076, "s": 8064, "text": " Neha Malik" }, { "code": null, "e": 8111, "s": 8076, "text": "\n 13 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8123, "s": 8111, "text": " Neha Malik" }, { "code": null, "e": 8130, "s": 8123, "text": " Print" }, { "code": null, "e": 8141, "s": 8130, "text": " Add Notes" } ]
Redis - String Getrange Command
Redis GETRANGE command is used to get the substring of the string value stored at the key, determined by the offsets start and end (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. The function handles out of range requests by limiting the resulting range to the actual length of the string. Return value: Bulk string reply, the requested substring. Following is the basic syntax of Redis GETRANGE command. redis 127.0.0.1:6379> GETRANGE KEY_NAME start end First, set a key in Redis and then get some part of it. redis 127.0.0.1:6379> SET mykey "This is my test key" OK redis 127.0.0.1:6379> GETRANGE mykey 0 3 "This" redis 127.0.0.1:6379> GETRANGE mykey 0 -1 "This is my test key"
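The same semantics can be exercised from application code as well. The short sketch below uses the Python redis-py client against a local Redis server; the client choice and connection details are my own assumptions for illustration and are not part of the original example.

import redis

# Connect to a local Redis server; decode_responses=True returns str instead of bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("mykey", "This is my test key")

print(r.getrange("mykey", 0, 3))     # This                -> offsets 0..3, both inclusive
print(r.getrange("mykey", -3, -1))   # key                 -> negative offsets count from the end
print(r.getrange("mykey", 0, -1))    # This is my test key -> the whole string
print(r.getrange("mykey", 0, 1000))  # This is my test key -> an out-of-range end is clamped to the string length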
[ { "code": null, "e": 2294, "s": 2045, "text": "Redis GETRANGE command is used to get the substring of the string value stored at the key, determined by the offsets start and end (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string." }, { "code": null, "e": 2405, "s": 2294, "text": "The function handles out of range requests by limiting the resulting range to the actual length of the string." }, { "code": null, "e": 2426, "s": 2405, "text": "Simple string reply." }, { "code": null, "e": 2483, "s": 2426, "text": "Following is the basic syntax of Redis GETRANGE command." }, { "code": null, "e": 2534, "s": 2483, "text": "redis 127.0.0.1:6379> GETRANGE KEY_NAME start end\n" }, { "code": null, "e": 2590, "s": 2534, "text": "First, set a key in Redis and then get some part of it." }, { "code": null, "e": 2765, "s": 2590, "text": "redis 127.0.0.1:6379> SET mykey \"This is my test key\" \nOK \nredis 127.0.0.1:6379> GETRANGE mykey 0 3 \n\"This\" \nredis 127.0.0.1:6379> GETRANGE mykey 0 -1 \n\"This is my test key\"\n" }, { "code": null, "e": 2797, "s": 2765, "text": "\n 22 Lectures \n 40 mins\n" }, { "code": null, "e": 2817, "s": 2797, "text": " Skillbakerystudios" }, { "code": null, "e": 2824, "s": 2817, "text": " Print" }, { "code": null, "e": 2835, "s": 2824, "text": " Add Notes" } ]
BigDecimal add() Method in Java with Examples - GeeksforGeeks
27 May, 2019 The java.math.BigDecimal.add(BigDecimal val) is used to calculate the arithmetic sum of two BigDecimals. This method is used to find the arithmetic addition of large numbers whose range is much greater than the range of the largest data type double of Java, without compromising the precision of the result. This method performs an operation upon the current BigDecimal by which this method is called and the BigDecimal passed as the parameter. There are two overloads of the add method available in Java, which are listed below: add(BigDecimal val) add(BigDecimal val, MathContext mc) Syntax: public BigDecimal add(BigDecimal val) Parameters: This method accepts a parameter val which is the value to be added to this BigDecimal. Return value: This method returns a BigDecimal which holds the sum (this + val), and whose scale is max(this.scale(), val.scale()). The below program is used to illustrate the add() method of BigDecimal. // Java program to demonstrate// add() method of BigDecimal import java.math.BigDecimal; public class GFG { public static void main(String[] args) { // BigDecimal object to store the result BigDecimal sum; // For user input // Use Scanner or BufferedReader // Two objects of String created // Holds the values to calculate the sum String input1 = "545456468445645468464645"; String input2 = "4256456484464684864864"; // Convert the string input to BigDecimal BigDecimal a = new BigDecimal(input1); BigDecimal b = new BigDecimal(input2); // Using add() method sum = a.add(b); // Display the result in BigDecimal System.out.println("The sum of\n" + a + " \nand\n" + b + " " + "\nis\n" + sum + "\n"); }} Output: The sum of 545456468445645468464645 and 4256456484464684864864 is 549712924930110153329509 Syntax: public BigDecimal add(BigDecimal val, MathContext mc) Parameters: This method accepts two parameters: val, which is the value to be added to this BigDecimal, and mc of type MathContext. Return value: This method returns a BigDecimal which holds the sum (this + val), with rounding according to the context settings. If either number is zero and the precision setting is nonzero, then the other number, rounded if necessary, is used as the result. The below program is used to illustrate the add() method of BigDecimal. // Java program to demonstrate// add() method of BigDecimal import java.math.*; public class GFG { public static void main(String[] args) { // BigDecimal object to store the result BigDecimal sum; // For user input // Use Scanner or BufferedReader // Two objects of String created // Holds the values to calculate the sum String input1 = "9854228445645468464645"; String input2 = "4252145764464684864864"; // Convert the string input to BigDecimal BigDecimal a = new BigDecimal(input1); BigDecimal b = new BigDecimal(input2); // Set precision to 10 MathContext mc = new MathContext(10); // Using add() method sum = a.add(b, mc); // Display the result in BigDecimal System.out.println("The sum of\n" + a + " \nand\n" + b + " " + "\nis\n" + sum + "\n"); }} Output: The sum of 9854228445645468464645 and 4252145764464684864864 is 1.410637421E+22 References: https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#add(java.math.BigDecimal)
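As a quick cross-check of the two outputs above (my own addition, not part of the original article), Python's arbitrary-precision integers and its decimal module reproduce the same results; decimal.Context(prec=10) plays the role of Java's MathContext(10) here.

# Exact addition, mirroring add(BigDecimal val)
print(545456468445645468464645 + 4256456484464684864864)
# 549712924930110153329509

# Addition under a 10-significant-digit context,
# mirroring add(BigDecimal val, MathContext mc) with new MathContext(10)
from decimal import Decimal, Context
ctx = Context(prec=10)
print(ctx.add(Decimal("9854228445645468464645"), Decimal("4252145764464684864864")))
# 1.410637421E+22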
[ { "code": null, "e": 24418, "s": 24390, "text": "\n27 May, 2019" }, { "code": null, "e": 24853, "s": 24418, "text": "The java.math.BigDecimal.add(BigDecimal val) is used to calculate the Arithmetic sum of two BigDecimals. This method is used to find arithmetic addition of large numbers of range much greater than the range of largest data type double of Java without compromising with the precision of the result. This method performs an operation upon the current BigDecimal by which this method is called and the BigDecimal passed as the parameter." }, { "code": null, "e": 24932, "s": 24853, "text": "There are two overloads of add method available in Java which is listed below:" }, { "code": null, "e": 24952, "s": 24932, "text": "add(BigDecimal val)" }, { "code": null, "e": 24988, "s": 24952, "text": "add(BigDecimal val, MathContext mc)" }, { "code": null, "e": 24996, "s": 24988, "text": "Syntax:" }, { "code": null, "e": 25035, "s": 24996, "text": "public BigDecimal add(BigDecimal val)\n" }, { "code": null, "e": 25134, "s": 25035, "text": "Parameters: This method accepts a parameter val which is the value to be added to this BigDecimal." }, { "code": null, "e": 25262, "s": 25134, "text": "Return value: This method returns a BigDecimal which holds sum (this + val), and whose scale is max(this.scale(), val.scale())." }, { "code": null, "e": 25331, "s": 25262, "text": "Below programs is used to illustrate the add() method of BigDecimal." }, { "code": "// Java program to demonstrate// add() method of BigDecimal import java.math.BigDecimal; public class GFG { public static void main(String[] args) { // BigDecimal object to store the result BigDecimal sum; // For user input // Use Scanner or BufferedReader // Two objects of String created // Holds the values to calculate the sum String input1 = \"545456468445645468464645\"; String input2 = \"4256456484464684864864\"; // Convert the string input to BigDecimal BigDecimal a = new BigDecimal(input1); BigDecimal b = new BigDecimal(input2); // Using add() method sum = a.add(b); // Display the result in BigDecimal System.out.println(\"The sum of\\n\" + a + \" \\nand\\n\" + b + \" \" + \"\\nis\\n\" + sum + \"\\n\"); }}", "e": 26254, "s": 25331, "text": null }, { "code": null, "e": 26262, "s": 26254, "text": "Output:" }, { "code": null, "e": 26348, "s": 26262, "text": "The sum of545456468445645468464645and4256456484464684864864is549712924930110153329509" }, { "code": null, "e": 26356, "s": 26348, "text": "Syntax:" }, { "code": null, "e": 26411, "s": 26356, "text": "public BigDecimal add(BigDecimal val, MathContext mc)\n" }, { "code": null, "e": 26549, "s": 26411, "text": "Parameters: This method accepts two parameter, one is val which is the value to be added to this BigDecimal and a mc of type MathContext." }, { "code": null, "e": 26805, "s": 26549, "text": "Return value: This method returns a BigDecimal which holds sum (this + val), with rounding according to the context settings. If either number is zero and the precision setting is nonzero then the other number, rounded if necessary, is used as the result." }, { "code": null, "e": 26874, "s": 26805, "text": "Below programs is used to illustrate the add() method of BigDecimal." 
}, { "code": "// Java program to demonstrate// add() method of BigDecimal import java.math.*; public class GFG { public static void main(String[] args) { // BigDecimal object to store the result BigDecimal sum; // For user input // Use Scanner or BufferedReader // Two objects of String created // Holds the values to calculate the sum String input1 = \"9854228445645468464645\"; String input2 = \"4252145764464684864864\"; // Convert the string input to BigDecimal BigDecimal a = new BigDecimal(input1); BigDecimal b = new BigDecimal(input2); // Set precision to 10 MathContext mc = new MathContext(10); // Using add() method sum = a.add(b, mc); // Display the result in BigDecimal System.out.println(\"The sum of\\n\" + a + \" \\nand\\n\" + b + \" \" + \"\\nis\\n\" + sum + \"\\n\"); }}", "e": 27876, "s": 26874, "text": null }, { "code": null, "e": 27884, "s": 27876, "text": "Output:" }, { "code": null, "e": 27959, "s": 27884, "text": "The sum of9854228445645468464645and4252145764464684864864is1.410637421E+22" }, { "code": null, "e": 28065, "s": 27959, "text": "References: https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#add(java.math.BigDecimal)" }, { "code": null, "e": 28081, "s": 28065, "text": "Java-BigDecimal" }, { "code": null, "e": 28096, "s": 28081, "text": "Java-Functions" }, { "code": null, "e": 28114, "s": 28096, "text": "Java-math-package" }, { "code": null, "e": 28119, "s": 28114, "text": "Java" }, { "code": null, "e": 28124, "s": 28119, "text": "Java" }, { "code": null, "e": 28222, "s": 28124, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28241, "s": 28222, "text": "Interfaces in Java" }, { "code": null, "e": 28256, "s": 28241, "text": "Stream In Java" }, { "code": null, "e": 28280, "s": 28256, "text": "Singleton Class in Java" }, { "code": null, "e": 28292, "s": 28280, "text": "Set in Java" }, { "code": null, "e": 28311, "s": 28292, "text": "Overriding in Java" }, { "code": null, "e": 28330, "s": 28311, "text": "LinkedList in Java" }, { "code": null, "e": 28350, "s": 28330, "text": "Collections in Java" }, { "code": null, "e": 28378, "s": 28350, "text": "Initializing a List in Java" }, { "code": null, "e": 28402, "s": 28378, "text": "Queue Interface In Java" } ]
GATE | GATE-CS-2015 (Set 1) | Question 65 - GeeksforGeeks
28 Jun, 2021 Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file stored in the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek, and the average rotational latency for accessing each sector is half of the time for one complete rotation. The total time (in milliseconds) needed to read the entire file is _________. (A) 14020 (B) 14000 (C) 25030 (D) 15000 Answer: (A) Explanation: Seek time (given) = 4 ms. RPM = 10000 rotations per minute [60 sec], so one rotation takes 60/10000 s = 6 ms. Average rotational latency = 1/2 * 6 ms = 3 ms. To read a sector, the total time includes seek time + rotational latency + transfer time. To calculate the transfer time, first find the transfer rate: Transfer rate = bytes on a track / rotation time = 600*512/6 ms = 51200 B/ms. Transfer time = total bytes to be transferred / transfer rate = 2000*512/51200 = 20 ms. Each sector requires seek time + rotational latency = 4 ms + 3 ms = 7 ms, so 2000 sectors take 2000*7 ms = 14000 ms. To read the entire file, total time = 14000 + 20 (transfer time) = 14020 ms.
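The arithmetic in the explanation can be double-checked with a short Python snippet (my own addition, not part of the original solution):

# Disk parameters from the question
seek_time_ms = 4
rpm = 10000
sectors_per_track = 600
sector_size_bytes = 512
file_sectors = 2000

rotation_time_ms = 60_000 / rpm                                                     # 6 ms per rotation
rotational_latency_ms = rotation_time_ms / 2                                        # 3 ms on average
transfer_rate_b_per_ms = sectors_per_track * sector_size_bytes / rotation_time_ms   # 51200 B/ms

seek_and_latency_ms = file_sectors * (seek_time_ms + rotational_latency_ms)         # 14000 ms
transfer_time_ms = file_sectors * sector_size_bytes / transfer_rate_b_per_ms        # 20 ms

print(seek_and_latency_ms + transfer_time_ms)                                       # 14020.0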
[ { "code": null, "e": 24604, "s": 24576, "text": "\n28 Jun, 2021" }, { "code": null, "e": 25158, "s": 24604, "text": "Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file stored in the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek, and the average rotational latency for accessing each sector is half of the time for one complete rotation. The total time (in milliseconds) needed to read the entire file is _________.(A) 14020(B) 14000(C) 25030(D) 15000Answer: (A)Explanation:" }, { "code": null, "e": 25851, "s": 25158, "text": "Seek time (given) = 4ms\n\nRPM = 10000 rotation in 1 min [60 sec]\nSo, 1 rotation will be =60/10000 =6ms [rotation speed]\nRotation latency= 1/2 * 6ms=3ms\n# To access a file, \n total time includes =seek time + rot. latency +transfer time\nTO calc. transfer time, find transfer rate\n\nTransfer rate = bytes on track /rotation speed\nso, transfer rate = 600*512/6ms =51200 B/ms\n\ntransfer time= total bytes to be transferred/ transfer rate\nso, Transfer time =2000*512/51200 = 20ms\n\nGiven as each sector requires seek tim + rot. latency\n= 4ms+3ms =7ms\n\nTotal 2000 sector takes = 2000*7 ms =14000 ms\nTo read entire file ,total time = 14000 + 20(transfer time)\n = 14020 ms\n" }, { "code": null, "e": 25873, "s": 25851, "text": "Quiz of this Question" }, { "code": null, "e": 25894, "s": 25873, "text": "GATE-CS-2015 (Set 1)" }, { "code": null, "e": 25920, "s": 25894, "text": "GATE-GATE-CS-2015 (Set 1)" }, { "code": null, "e": 25925, "s": 25920, "text": "GATE" }, { "code": null, "e": 26023, "s": 25925, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26032, "s": 26023, "text": "Comments" }, { "code": null, "e": 26045, "s": 26032, "text": "Old Comments" }, { "code": null, "e": 26079, "s": 26045, "text": "GATE | GATE-IT-2004 | Question 71" }, { "code": null, "e": 26112, "s": 26079, "text": "GATE | GATE CS 2011 | Question 7" }, { "code": null, "e": 26154, "s": 26112, "text": "GATE | GATE-CS-2016 (Set 2) | Question 48" }, { "code": null, "e": 26188, "s": 26154, "text": "GATE | GATE CS 2010 | Question 24" }, { "code": null, "e": 26230, "s": 26188, "text": "GATE | GATE-CS-2015 (Set 3) | Question 65" }, { "code": null, "e": 26272, "s": 26230, "text": "GATE | GATE-CS-2016 (Set 1) | Question 65" }, { "code": null, "e": 26314, "s": 26272, "text": "GATE | GATE-CS-2014-(Set-3) | Question 38" }, { "code": null, "e": 26348, "s": 26314, "text": "GATE | GATE CS 2018 | Question 37" }, { "code": null, "e": 26382, "s": 26348, "text": "GATE | GATE-IT-2004 | Question 83" } ]
The Hitchhikers guide to handle Big Data using Spark | by Rahul Agarwal | Towards Data Science
Big Data has become synonymous with Data engineering. But the line between Data Engineering and Data scientists is blurring day by day. At this point in time, I think that Big Data must be in the repertoire of all data scientists. Reason: Too much data is getting generated day by day And that brings us to Spark. Now most of the Spark documentation, while good, did not explain it from the perspective of a data scientist. So I thought of giving it a shot. This post is going to be about — “How to make Spark work?” This post is going to be quite long. Actually my longest post on medium, so go pick up a Coffee. Suppose you are tasked with cutting all the trees in the forest. Perhaps not a good business with all the global warming, but here it serves our purpose and we are talking hypothetically, so I will continue. You have two options: Get Batista with an electric powered chainsaw to do your work and make him cut each tree one by one. Get 500 normal guys with normal axes and make them work on different trees. Which would you prefer? Although Option 1 is still the way some people would go, the need for option 2 led to the emergence of MapReduce. In Bigdata speak, we call the Batista solution as scaling vertically/scaling-up as in we add/stuff a lot of RAM and hard disk in a single worker. And the second solution is called scaling horizontally/scaling-sideways. As in you connect a lot of ordinary machines(with less RAM) together and use them in parallel. Now, vertical scaling has certain benefits over Horizontal scaling: It is fast if the size of the problem is small: Think 2 trees. Batista would be through with both of them with his awesome chainsaw while our two guys would be still hacking with their axes. It is easy to understand. This is how we have always done things. We normally think about things in a sequential pattern and that is how our whole computer architecture and design has evolved. But, Horizontal Scaling is Less Expensive: Getting 50 normal guys itself is much cheaper than getting a single guy like Batista. Apart from that Batista needs a lot of care and maintenance to keep him cool and he is very sensitive to even small things just like machines with a high amount of RAM. Faster when the size of the problem is big: Now imagine 1000 trees and 1000 workers vs a single Batista. With Horizontal Scaling, if we face a very large problem we will just hire 100 or maybe 1000 more cheap workers. It doesn’t work like that with Batista. You have to increase RAM and that means more cooling infrastructure and more maintenance costs. MapReduce is what makes the second option possible by letting us use a cluster of computers for parallelization. Now, MapReduce looks like a fairly technical term. But let us break it a little. MapReduce is made up of two terms: It is basically the apply/map function. We split our data into n chunks and send each chunk to a different worker(Mapper). If there is any function we would like to apply over the rows of Data our worker does that. Aggregate the data using some function based on a groupby key. It is basically a groupby. Of course, there is a lot going in the background to make the system work as intended. Don’t worry, if you don’t understand it yet. Just keep reading. Maybe you will understand it when we use MapReduce ourselves in the examples I am going to provide. Hadoop was the first open source system that introduced us to the MapReduce paradigm of programming and Spark is the system that made it faster, much much faster(100x). 
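Before getting into Hadoop and Spark themselves, here is a toy, single-machine illustration of the Map and Reduce steps in plain Python (my own sketch, not part of the original post): the map step turns each chunk of data into (key, value) pairs, and the reduce step aggregates the values per key.

from collections import defaultdict

sentences = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map step: each worker would turn its chunk of sentences into (word, 1) pairs.
mapped = [(word, 1) for sentence in sentences for word in sentence.split()]

# Reduce step: group by key (the word) and aggregate the values.
counts = defaultdict(int)
for word, one in mapped:
    counts[word] += one

print(dict(counts))  # {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 2}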
There used to be a lot of data movement in Hadoop as it used to write intermediate results to the file system. This affected the speed at which you could do analysis. Spark provided us with an in-memory model, so Spark doesn’t write too much to the disk while working. Simply, Spark is faster than Hadoop and a lot of people use Spark now. So without further ado let us get started. Installing Spark is actually a headache of its own. Since we want to understand how it works and really work with it, I would suggest that you use Sparks on Databricks here online with the community edition. Don’t worry it is free. Once you register and login will be presented with the following screen. You can start a new notebook here. Select the Python notebook and give any name to your notebook. Once you start a new notebook and try to execute any command, the notebook will ask you if you want to start a new cluster. Do it. The next step will be to check if the sparkcontext is present. To check if the sparkcontext is present you just have to run this command: sc This means that we are set up with a notebook where we can run Spark. The next step is to upload some data we will use to learn Spark. Just click on ‘Import and Explore Data’ on the home tab. I will end up using multiple datasets by the end of this post but let us start with something very simple. Let us add the file shakespeare.txt which you can download from here. You can see that the file is loaded to /FileStore/tables/shakespeare.txt location. I like to learn by examples so let’s get done with the “Hello World” of Distributed computing: The WordCount Program. # Distribute the data - Create a RDD lines = sc.textFile("/FileStore/tables/shakespeare.txt")# Create a list with all words, Create tuple (word,1), reduce by key i.e. the wordcounts = (lines.flatMap(lambda x: x.split(' ')) .map(lambda x: (x, 1)) .reduceByKey(lambda x,y : x + y))# get the output on localoutput = counts.take(10) # print outputfor (word, count) in output: print("%s: %i" % (word, count)) So that is a small example which counts the number of words in the document and prints 10 of them. And most of the work gets done in the second command. Don’t worry if you are not able to follow this yet as I still need to tell you about the things that make Spark work. But before we get into Spark basics, Let us refresh some of our Python Basics. Understanding Spark becomes a lot easier if you have used functional programming with Python. For those of you who haven’t used it, below is a brief intro. map is used to map a function to an array or a list. Say you want to apply some function to every element in a list. You can do this by simply using a for loop but python lambda functions let you do this in a single line in Python. my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want to square each term in my_list.squared_list = map(lambda x:x**2,my_list)print(list(squared_list))------------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100] In the above example, you could think of map as a function which takes two arguments — A function and a list. It then applies the function to every element of the list. What lambda allows you to do is write an inline function. In here the part lambda x:x**2 defines a function that takes x as input and returns x2. You could have also provided a proper function in place of lambda. 
For example: def squared(x): return x**2my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want to square each term in my_list.squared_list = map(squared,my_list)print(list(squared_list))------------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100] The same result, but the lambda expressions make the code compact and a lot more readable. The other function that is used extensively is the filter function. This function takes two arguments — A condition and the list to filter. If you want to filter your list using some condition you use filter. my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want only the even numbers in my list.filtered_list = filter(lambda x:x%2==0,my_list)print(list(filtered_list))---------------------------------------------------------------[2, 4, 6, 8, 10] The next function I want to talk about is the reduce function. This function will be the workhorse in Spark. This function takes two arguments — a function to reduce that takes two arguments, and a list over which the reduce function is to be applied. import functoolsmy_list = [1,2,3,4,5]# Lets say I want to sum all elements in my list.sum_list = functools.reduce(lambda x,y:x+y,my_list)print(sum_list) In python2 reduce used to be a part of Python, now we have to use reduce as a part of functools. Here the lambda function takes in two values x, y and returns their sum. Intuitively you can think that the reduce function works as: Reduce function first sends 1,2 ; the lambda function returns 3Reduce function then sends 3,3 ; the lambda function returns 6Reduce function then sends 6,4 ; the lambda function returns 10Reduce function finally sends 10,5 ; the lambda function returns 15 A condition on the lambda function we use in reduce is that it must be: commutative that is a + b = b + a and associative that is (a + b) + c == a + (b + c). In the above case, we used sum which is commutative as well as associative. Other functions that we could have used: max, min, * etc. As we have now got the fundamentals of Python Functional Programming out of the way, lets again head to Spark. But first, let us delve a little bit into how spark works. Spark actually consists of two things a driver and workers. Workers normally do all the work and the driver makes them do that work. An RDD(Resilient Distributed Dataset) is a parallelized data structure that gets distributed across the worker nodes. They are the basic units of Spark programming. In our wordcount example, in the first line lines = sc.textFile("/FileStore/tables/shakespeare.txt") We took a text file and distributed it across worker nodes so that they can work on it in parallel. We could also parallelize lists using the function sc.parallelize For example: data = [1,2,3,4,5,6,7,8,9,10]new_rdd = sc.parallelize(data,4)new_rdd---------------------------------------------------------------ParallelCollectionRDD[22] at parallelize at PythonRDD.scala:267 In Spark, we can do two different types of operations on RDD: Transformations and Actions. Transformations: Create new datasets from existing RDDsActions: Mechanism to get results out of Spark Transformations: Create new datasets from existing RDDs Actions: Mechanism to get results out of Spark So let us say you have got your data in the form of an RDD. To requote your data is now accessible to the worker machines. You want to do some transformations on the data now. You may want to filter, apply some function, etc. In Spark, this is done using Transformation functions. 
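Before going through the individual transformations, it can help to see how a parallelized RDD is actually laid out across partitions. The short sketch below is my own addition (not from the original post); glom() groups the elements of each partition into a list so you can inspect the split.

data = [1,2,3,4,5,6,7,8,9,10]
rdd = sc.parallelize(data, 4)

print(rdd.getNumPartitions())  # 4
print(rdd.glom().collect())    # something like [[1, 2], [3, 4, 5], [6, 7], [8, 9, 10]]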
Spark provides many transformation functions. You can see a comprehensive list here. Some of the main ones that I use frequently are: Applies a given function to an RDD. Note that the syntax is a little bit different from Python, but it necessarily does the same thing. Don’t worry about collect yet. For now, just think of it as a function that collects the data in squared_rdd back to a list. data = [1,2,3,4,5,6,7,8,9,10]rdd = sc.parallelize(data,4)squared_rdd = rdd.map(lambda x:x**2)squared_rdd.collect()------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100] Again no surprises here. Takes as input a condition and keeps only those elements that fulfill that condition. data = [1,2,3,4,5,6,7,8,9,10]rdd = sc.parallelize(data,4)filtered_rdd = rdd.filter(lambda x:x%2==0)filtered_rdd.collect()------------------------------------------------------[2, 4, 6, 8, 10] Returns only distinct elements in an RDD. data = [1,2,2,2,2,3,3,3,3,4,5,6,7,7,7,8,8,8,9,10]rdd = sc.parallelize(data,4)distinct_rdd = rdd.distinct()distinct_rdd.collect()------------------------------------------------------[8, 4, 1, 5, 9, 2, 10, 6, 3, 7] Similar to map, but each input item can be mapped to 0 or more output items. data = [1,2,3,4]rdd = sc.parallelize(data,4)flat_rdd = rdd.flatMap(lambda x:[x,x**3])flat_rdd.collect()------------------------------------------------------[1, 1, 2, 8, 3, 27, 4, 64] The parallel to the reduce in Hadoop MapReduce. Now Spark cannot provide the value if it just worked with Lists. In Spark, there is a concept of pair RDDs that makes it a lot more flexible. Let's assume we have a data in which we have a product, its category, and its selling price. We can still parallelize the data. data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]rdd = sc.parallelize(data,4) Right now our RDD rdd holds tuples. Now we want to find out the total sum of revenue that we got from each category. To do that we have to transform our rdd to a pair rdd so that it only contains key-value pairs/tuples. category_price_rdd = rdd.map(lambda x: (x[1],x[2]))category_price_rdd.collect()-----------------------------------------------------------------[(‘Fruit’, 200), (‘Fruit’, 24), (‘Fruit’, 56), (‘Vegetable’, 103), (‘Vegetable’, 34)] Here we used the map function to get it in the format we wanted. When working with textfile, the RDD that gets formed has got a lot of strings. We use map to convert it into a format that we want. So now our category_price_rdd contains the product category and the price at which the product sold. Now we want to reduce on the key category and sum the prices. We can do this by: category_total_price_rdd = category_price_rdd.reduceByKey(lambda x,y:x+y)category_total_price_rdd.collect()---------------------------------------------------------[(‘Vegetable’, 137), (‘Fruit’, 280)] Similar to reduceByKey but does not reduces just puts all the elements in an iterator. For example, if we wanted to keep as key the category and as the value all the products we would use this function. Let us again use map to get data in the required form. 
data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]rdd = sc.parallelize(data,4)category_product_rdd = rdd.map(lambda x: (x[1],x[0]))category_product_rdd.collect()------------------------------------------------------------[('Fruit', 'Apple'), ('Fruit', 'Banana'), ('Fruit', 'Tomato'), ('Vegetable', 'Potato'), ('Vegetable', 'Carrot')] We then use groupByKey as: grouped_products_by_category_rdd = category_product_rdd.groupByKey()findata = grouped_products_by_category_rdd.collect()for data in findata: print(data[0],list(data[1]))------------------------------------------------------------Vegetable ['Potato', 'Carrot'] Fruit ['Apple', 'Banana', 'Tomato'] Here the groupByKey function worked and it returned the category and the list of products in that category. You have filtered your data, mapped some functions on it. Done your computation. Now you want to get the data on your local machine or save it to a file or show the results in the form of some graphs in excel or any visualization tool. You will need actions for that. A comprehensive list of actions is provided here. Some of the most common actions that I tend to use are: We have already used this action many times. It takes the whole RDD and brings it back to the driver program. Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel. rdd = sc.parallelize([1,2,3,4,5])rdd.reduce(lambda x,y : x+y)---------------------------------15 Sometimes you will need to see what your RDD contains without getting all the elements in memory itself. take returns a list with the first n elements of the RDD. rdd = sc.parallelize([1,2,3,4,5])rdd.take(3)---------------------------------[1, 2, 3] takeOrdered returns the first n elements of the RDD using either their natural order or a custom comparator. rdd = sc.parallelize([5,3,12,23])# descending orderrdd.takeOrdered(3,lambda s:-1*s)----[23, 12, 5]rdd = sc.parallelize([(5,23),(3,34),(12,344),(23,29)])# descending orderrdd.takeOrdered(3,lambda s:-1*s[1])---[(12, 344), (3, 34), (23, 29)] We have our basics covered finally. Let us get back to our wordcount example Now we sort of understand the transformations and the actions provided to us by Spark. It should not be difficult to understand the wordcount program now. Let us go through the program line by line. The first line creates an RDD and distributes it to the workers. lines = sc.textFile("/FileStore/tables/shakespeare.txt") This RDD lines contains a list of sentences in the file. You can see the rdd content using take lines.take(5)--------------------------------------------['The Project Gutenberg EBook of The Complete Works of William Shakespeare, by ', 'William Shakespeare', '', 'This eBook is for the use of anyone anywhere at no cost and with', 'almost no restrictions whatsoever. You may copy it, give it away or'] This RDD is of the form: ['word1 word2 word3','word4 word3 word2'] This next line is actually the workhorse function in the whole script. counts = (lines.flatMap(lambda x: x.split(' ')) .map(lambda x: (x, 1)) .reduceByKey(lambda x,y : x + y)) It contains a series of transformations that we do to the lines RDD. First of all, we do a flatmap transformation. The flatmap transformation takes as input the lines and gives words as output. 
So after the flatmap transformation, the RDD is of the form: ['word1','word2','word3','word4','word3','word2'] Next, we do a map transformation on the flatmap output which converts the RDD to : [('word1',1),('word2',1),('word3',1),('word4',1),('word3',1),('word2',1)] Finally, we do a reduceByKey transformation which counts the number of time each word appeared. After which the RDD approaches the final desirable form. [('word1',1),('word2',2),('word3',2),('word4',1)] This next line is an action that takes the first 10 elements of the resulting RDD locally. output = counts.take(10) This line just prints the output for (word, count) in output: print("%s: %i" % (word, count)) And that is it for the wordcount program. Hope you understand it now. So till now, we talked about the Wordcount example and the basic transformations and actions that you could use in Spark. But we don’t do wordcount in real life. We have to work on bigger problems which are much more complex. Worry not! Whatever we have learned till now will let us do that and more. Let us work with a concrete example which takes care of some usual transformations. We will work on Movielens ml-100k.zip dataset which is a stable benchmark dataset. 100,000 ratings from 1000 users on 1700 movies. Released 4/1998. The Movielens dataset contains a lot of files but we are going to be working with 3 files only: 1) Users: This file name is kept as “u.user”, The columns in this file are: ['user_id', 'age', 'sex', 'occupation', 'zip_code'] 2) Ratings: This file name is kept as “u.data”, The columns in this file are: ['user_id', 'movie_id', 'rating', 'unix_timestamp'] 3) Movies: This file name is kept as “u.item”, The columns in this file are: ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url', and 18 more columns.....] Let us start by importing these 3 files into our spark instance using ‘Import and Explore Data’ on the home tab. Our business partner now comes to us and asks us to find out the 25 most rated movie titles from this data. How many times a movie has been rated? Let us load the data in different RDDs and see what the data contains. userRDD = sc.textFile("/FileStore/tables/u.user") ratingRDD = sc.textFile("/FileStore/tables/u.data") movieRDD = sc.textFile("/FileStore/tables/u.item") print("userRDD:",userRDD.take(1))print("ratingRDD:",ratingRDD.take(1))print("movieRDD:",movieRDD.take(1))-----------------------------------------------------------userRDD: ['1|24|M|technician|85711'] ratingRDD: ['196\t242\t3\t881250949'] movieRDD: ['1|Toy Story (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)|0|0|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0'] We note that to answer this question we will need to use the ratingRDD. But the ratingRDD does not have the movie name. So we would have to merge movieRDD and ratingRDD using movie_id. How we would do that in Spark? Below is the code. We also use a new transformation leftOuterJoin. Do read the docs and comments in the below code. 
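(The original post pulls this code in from an embedded gist that is not reproduced in this text. The sketch below is my reconstruction, written to be consistent with the variable names and the OUTPUT that follows; treat it as an approximation rather than the author's exact code.)

# Create a pair RDD of (movie_id, rating) from the tab-separated ratings file.
RDD_movid_rating = ratingRDD.map(lambda x: (x.split("\t")[1], x.split("\t")[2]))

# Create a pair RDD of (movie_id, movie_title) from the pipe-separated movies file.
RDD_movid_title = movieRDD.map(lambda x: (x.split("|")[0], x.split("|")[1]))

# Merge the two pair RDDs on movie_id using the leftOuterJoin transformation.
rdd_movid_title_rating = RDD_movid_rating.leftOuterJoin(RDD_movid_title)

# Keep only the title and emit a 1 for every rating of that title.
rdd_title_rating = rdd_movid_title_rating.map(lambda x: (x[1][1], 1))

# Count the ratings per title with reduceByKey.
rdd_title_ratingcnt = rdd_title_rating.reduceByKey(lambda x, y: x + y)

print("RDD_movid_rating:", RDD_movid_rating.take(4))
print("RDD_movid_title:", RDD_movid_title.take(2))
print("rdd_movid_title_rating:", rdd_movid_title_rating.take(1))
print("rdd_title_rating:", rdd_title_rating.take(2))
print("rdd_title_ratingcnt:", rdd_title_ratingcnt.take(2))
print("#####################################")
print("25 most rated movies:", rdd_title_ratingcnt.takeOrdered(25, lambda x: -x[1]))
print("#####################################")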
OUTPUT:--------------------------------------------------------------------RDD_movid_rating: [('242', '3'), ('302', '3'), ('377', '1'), ('51', '2')] RDD_movid_title: [('1', 'Toy Story (1995)'), ('2', 'GoldenEye (1995)')] rdd_movid_title_rating: [('1440', ('3', 'Above the Rim (1994)'))] rdd_title_rating: [('Above the Rim (1994)', 1), ('Above the Rim (1994)', 1)] rdd_title_ratingcnt: [('Mallrats (1995)', 54), ('Michael Collins (1996)', 92)] ##################################### 25 most rated movies: [('Star Wars (1977)', 583), ('Contact (1997)', 509), ('Fargo (1996)', 508), ('Return of the Jedi (1983)', 507), ('Liar Liar (1997)', 485), ('English Patient, The (1996)', 481), ('Scream (1996)', 478), ('Toy Story (1995)', 452), ('Air Force One (1997)', 431), ('Independence Day (ID4) (1996)', 429), ('Raiders of the Lost Ark (1981)', 420), ('Godfather, The (1972)', 413), ('Pulp Fiction (1994)', 394), ('Twelve Monkeys (1995)', 392), ('Silence of the Lambs, The (1991)', 390), ('Jerry Maguire (1996)', 384), ('Chasing Amy (1997)', 379), ('Rock, The (1996)', 378), ('Empire Strikes Back, The (1980)', 367), ('Star Trek: First Contact (1996)', 365), ('Back to the Future (1985)', 350), ('Titanic (1997)', 350), ('Mission: Impossible (1996)', 344), ('Fugitive, The (1993)', 336), ('Indiana Jones and the Last Crusade (1989)', 331)] ##################################### Star Wars is the most rated movie in the Movielens Dataset. Now we could have done all this in a single command using the below command but the code is a little messy now. I did this to show that you can use chaining functions with Spark and you could bypass the process of variable creation. Let us do one more. For practice: Now we want to find the most highly rated 25 movies using the same dataset. We actually want only those movies which have been rated at least 100 times. OUTPUT:------------------------------------------------------------rdd_title_ratingsum: [('Mallrats (1995)', 186), ('Michael Collins (1996)', 318)] rdd_title_ratingmean_rating_count: [('Mallrats (1995)', (3.4444444444444446, 54))] rdd_title_rating_rating_count_gt_100: [('Butch Cassidy and the Sundance Kid (1969)', (3.949074074074074, 216))]##################################### 25 highly rated movies: [('Close Shave, A (1995)', (4.491071428571429, 112)), ("Schindler's List (1993)", (4.466442953020135, 298)), ('Wrong Trousers, The (1993)', (4.466101694915254, 118)), ('Casablanca (1942)', (4.45679012345679, 243)), ('Shawshank Redemption, The (1994)', (4.445229681978798, 283)), ('Rear Window (1954)', (4.3875598086124405, 209)), ('Usual Suspects, The (1995)', (4.385767790262173, 267)), ('Star Wars (1977)', (4.3584905660377355, 583)), ('12 Angry Men (1957)', (4.344, 125)), ('Citizen Kane (1941)', (4.292929292929293, 198)), ('To Kill a Mockingbird (1962)', (4.292237442922374, 219)), ("One Flew Over the Cuckoo's Nest (1975)", (4.291666666666667, 264)), ('Silence of the Lambs, The (1991)', (4.28974358974359, 390)), ('North by Northwest (1959)', (4.284916201117318, 179)), ('Godfather, The (1972)', (4.283292978208232, 413)), ('Secrets & Lies (1996)', (4.265432098765432, 162)), ('Good Will Hunting (1997)', (4.262626262626263, 198)), ('Manchurian Candidate, The (1962)', (4.259541984732825, 131)), ('Dr. 
Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', (4.252577319587629, 194)), ('Raiders of the Lost Ark (1981)', (4.252380952380952, 420)), ('Vertigo (1958)', (4.251396648044692, 179)), ('Titanic (1997)', (4.2457142857142856, 350)), ('Lawrence of Arabia (1962)', (4.23121387283237, 173)), ('Maltese Falcon, The (1941)', (4.2101449275362315, 138)), ('Empire Strikes Back, The (1980)', (4.204359673024523, 367))] ##################################### We have talked about RDDs till now as they are very powerful. You can use RDDs to work with non-relational databases too. They let you do a lot of things that you couldn’t do with SparkSQL? Yes, you can use SQL with Spark too which I am going to talk about now. Spark has provided DataFrame API for us Data Scientists to work with relational data. Here is the documentation for the adventurous folks. Remember that in the background it still is all RDDs and that is why the starting part of this post focussed on RDDs. I will start with some common functionalities you will need to work with Spark DataFrames. Would look a lot like Pandas with some syntax changes. ratings = spark.read.load("/FileStore/tables/u.data",format="csv", sep="\t", inferSchema="true", header="false") We have two ways to show files using Spark Dataframes. ratings.show() display(ratings) I prefer display as it looks a lot nicer and clean. Good functionality. Always required. Don’t forget the * in front of the list. ratings = ratings.toDF(*['user_id', 'movie_id', 'rating', 'unix_timestamp'])display(ratings) print(ratings.count()) #Row Countprint(len(ratings.columns)) #Column Count---------------------------------------------------------1000004 We can also see the dataframe statistics using: display(ratings.describe()) display(ratings.select('user_id','movie_id')) Filter a dataframe using multiple conditions: display(ratings.filter((ratings.rating==5) & (ratings.user_id==253))) We can use groupby function with a spark dataframe too. Pretty much same as a pandas groupby with the exception that you will need to import pyspark.sql.functions from pyspark.sql import functions as Fdisplay(ratings.groupBy("user_id").agg(F.count("user_id"),F.mean("rating"))) Here we have found the count of ratings and average rating from each user_id display(ratings.sort("user_id")) We can also do a descending sort using F.desc function as below. # descending Sortfrom pyspark.sql import functions as Fdisplay(ratings.sort(F.desc("user_id"))) I was not able to find a pandas equivalent of merge with Spark DataFrames but we can use SQL with dataframes and thus we can merge dataframes using SQL. Let us try to run some SQL on Ratings. We first register the ratings df to a temporary table ratings_table on which we can run sql operations. As you can see the result of the SQL select statement is again a Spark Dataframe. 
ratings.registerTempTable('ratings_table')newDF = sqlContext.sql('select * from ratings_table where rating>4')display(newDF) Let us now add one more Spark Dataframe to the mix to see if we can use join using the SQL queries: #get one more dataframe to joinmovies = spark.read.load("/FileStore/tables/u.item",format="csv", sep="|", inferSchema="true", header="false")# change column namesmovies = movies.toDF(*["movie_id","movie_title","release_date","video_release_date","IMDb_URL","unknown","Action","Adventure","Animation ","Children","Comedy","Crime","Documentary","Drama","Fantasy","Film_Noir","Horror","Musical","Mystery","Romance","Sci_Fi","Thriller","War","Western"])display(movies) Now let us try joining the tables on movie_id to get the name of the movie in the ratings table. movies.registerTempTable('movies_table')display(sqlContext.sql('select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id')) Let us try to do what we were doing earlier with the RDDs. Finding the top 25 most rated movies: mostrateddf = sqlContext.sql('select movie_id,movie_title, count(user_id) as num_ratings from (select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id)A group by movie_id,movie_title order by num_ratings desc ')display(mostrateddf) And finding the top 25 highest rated movies having more than 100 votes: highrateddf = sqlContext.sql('select movie_id,movie_title, avg(rating) as avg_rating,count(movie_id) as num_ratings from (select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id)A group by movie_id,movie_title having num_ratings>100 order by avg_rating desc ')display(highrateddf) I have used GROUP BY, HAVING, AND ORDER BY clauses as well as aliases in the above query. That shows that you can do pretty much complex stuff using sqlContext.sql You can also use display command to display charts in your notebooks. You can see more options when you select Plot Options. Sometimes you may want to convert to RDD from a spark Dataframe or vice versa so that you can have the best of both worlds. To convert from DF to RDD, you can simply do : highratedrdd =highrateddf.rddhighratedrdd.take(2) To go from an RDD to a dataframe: from pyspark.sql import Row# creating a RDD firstdata = [('A',1),('B',2),('C',3),('D',4)]rdd = sc.parallelize(data)# map the schema using Row.rdd_new = rdd.map(lambda x: Row(key=x[0], value=int(x[1])))# Convert the rdd to Dataframerdd_as_df = sqlContext.createDataFrame(rdd_new)display(rdd_as_df) RDD provides you with more control at the cost of time and coding effort. While Dataframes provide you with familiar coding platform. And now you can move back and forth between these two. This was a big post and congratulations if you reached the end. Spark has provided us with an interface where we could use transformations and actions on our data. Spark also has the Dataframe API to ease the transition of Data scientists to Big Data. Hopefully, I’ve covered the basics well enough to pique your interest and help you get started with Spark. You can find all the code at the GitHub repository. Also, if you want to learn more about Spark and Spark DataFrames, I would like to call out these excellent courses on Big Data Essentials: HDFS, MapReduce and Spark RDD on Coursera. I am going to be writing more of such posts in the future too. 
Let me know what you think about the series. Follow me up at Medium or Subscribe to my blog to be informed about them. As always, I welcome feedback and constructive criticism and can be reached on Twitter @mlwhiz.
[ { "code": null, "e": 225, "s": 171, "text": "Big Data has become synonymous with Data engineering." }, { "code": null, "e": 307, "s": 225, "text": "But the line between Data Engineering and Data scientists is blurring day by day." }, { "code": null, "e": 402, "s": 307, "text": "At this point in time, I think that Big Data must be in the repertoire of all data scientists." }, { "code": null, "e": 456, "s": 402, "text": "Reason: Too much data is getting generated day by day" }, { "code": null, "e": 485, "s": 456, "text": "And that brings us to Spark." }, { "code": null, "e": 595, "s": 485, "text": "Now most of the Spark documentation, while good, did not explain it from the perspective of a data scientist." }, { "code": null, "e": 629, "s": 595, "text": "So I thought of giving it a shot." }, { "code": null, "e": 688, "s": 629, "text": "This post is going to be about — “How to make Spark work?”" }, { "code": null, "e": 785, "s": 688, "text": "This post is going to be quite long. Actually my longest post on medium, so go pick up a Coffee." }, { "code": null, "e": 1015, "s": 785, "text": "Suppose you are tasked with cutting all the trees in the forest. Perhaps not a good business with all the global warming, but here it serves our purpose and we are talking hypothetically, so I will continue. You have two options:" }, { "code": null, "e": 1116, "s": 1015, "text": "Get Batista with an electric powered chainsaw to do your work and make him cut each tree one by one." }, { "code": null, "e": 1192, "s": 1116, "text": "Get 500 normal guys with normal axes and make them work on different trees." }, { "code": null, "e": 1216, "s": 1192, "text": "Which would you prefer?" }, { "code": null, "e": 1330, "s": 1216, "text": "Although Option 1 is still the way some people would go, the need for option 2 led to the emergence of MapReduce." }, { "code": null, "e": 1476, "s": 1330, "text": "In Bigdata speak, we call the Batista solution as scaling vertically/scaling-up as in we add/stuff a lot of RAM and hard disk in a single worker." }, { "code": null, "e": 1644, "s": 1476, "text": "And the second solution is called scaling horizontally/scaling-sideways. As in you connect a lot of ordinary machines(with less RAM) together and use them in parallel." }, { "code": null, "e": 1712, "s": 1644, "text": "Now, vertical scaling has certain benefits over Horizontal scaling:" }, { "code": null, "e": 1903, "s": 1712, "text": "It is fast if the size of the problem is small: Think 2 trees. Batista would be through with both of them with his awesome chainsaw while our two guys would be still hacking with their axes." }, { "code": null, "e": 2096, "s": 1903, "text": "It is easy to understand. This is how we have always done things. We normally think about things in a sequential pattern and that is how our whole computer architecture and design has evolved." }, { "code": null, "e": 2123, "s": 2096, "text": "But, Horizontal Scaling is" }, { "code": null, "e": 2394, "s": 2123, "text": "Less Expensive: Getting 50 normal guys itself is much cheaper than getting a single guy like Batista. Apart from that Batista needs a lot of care and maintenance to keep him cool and he is very sensitive to even small things just like machines with a high amount of RAM." }, { "code": null, "e": 2748, "s": 2394, "text": "Faster when the size of the problem is big: Now imagine 1000 trees and 1000 workers vs a single Batista. With Horizontal Scaling, if we face a very large problem we will just hire 100 or maybe 1000 more cheap workers. 
It doesn’t work like that with Batista. You have to increase RAM and that means more cooling infrastructure and more maintenance costs." }, { "code": null, "e": 2861, "s": 2748, "text": "MapReduce is what makes the second option possible by letting us use a cluster of computers for parallelization." }, { "code": null, "e": 2977, "s": 2861, "text": "Now, MapReduce looks like a fairly technical term. But let us break it a little. MapReduce is made up of two terms:" }, { "code": null, "e": 3192, "s": 2977, "text": "It is basically the apply/map function. We split our data into n chunks and send each chunk to a different worker(Mapper). If there is any function we would like to apply over the rows of Data our worker does that." }, { "code": null, "e": 3282, "s": 3192, "text": "Aggregate the data using some function based on a groupby key. It is basically a groupby." }, { "code": null, "e": 3369, "s": 3282, "text": "Of course, there is a lot going in the background to make the system work as intended." }, { "code": null, "e": 3533, "s": 3369, "text": "Don’t worry, if you don’t understand it yet. Just keep reading. Maybe you will understand it when we use MapReduce ourselves in the examples I am going to provide." }, { "code": null, "e": 3702, "s": 3533, "text": "Hadoop was the first open source system that introduced us to the MapReduce paradigm of programming and Spark is the system that made it faster, much much faster(100x)." }, { "code": null, "e": 3813, "s": 3702, "text": "There used to be a lot of data movement in Hadoop as it used to write intermediate results to the file system." }, { "code": null, "e": 3869, "s": 3813, "text": "This affected the speed at which you could do analysis." }, { "code": null, "e": 3971, "s": 3869, "text": "Spark provided us with an in-memory model, so Spark doesn’t write too much to the disk while working." }, { "code": null, "e": 4042, "s": 3971, "text": "Simply, Spark is faster than Hadoop and a lot of people use Spark now." }, { "code": null, "e": 4085, "s": 4042, "text": "So without further ado let us get started." }, { "code": null, "e": 4137, "s": 4085, "text": "Installing Spark is actually a headache of its own." }, { "code": null, "e": 4317, "s": 4137, "text": "Since we want to understand how it works and really work with it, I would suggest that you use Sparks on Databricks here online with the community edition. Don’t worry it is free." }, { "code": null, "e": 4390, "s": 4317, "text": "Once you register and login will be presented with the following screen." }, { "code": null, "e": 4425, "s": 4390, "text": "You can start a new notebook here." }, { "code": null, "e": 4488, "s": 4425, "text": "Select the Python notebook and give any name to your notebook." }, { "code": null, "e": 4619, "s": 4488, "text": "Once you start a new notebook and try to execute any command, the notebook will ask you if you want to start a new cluster. Do it." }, { "code": null, "e": 4757, "s": 4619, "text": "The next step will be to check if the sparkcontext is present. To check if the sparkcontext is present you just have to run this command:" }, { "code": null, "e": 4760, "s": 4757, "text": "sc" }, { "code": null, "e": 4830, "s": 4760, "text": "This means that we are set up with a notebook where we can run Spark." }, { "code": null, "e": 4952, "s": 4830, "text": "The next step is to upload some data we will use to learn Spark. Just click on ‘Import and Explore Data’ on the home tab." 
}, { "code": null, "e": 5059, "s": 4952, "text": "I will end up using multiple datasets by the end of this post but let us start with something very simple." }, { "code": null, "e": 5129, "s": 5059, "text": "Let us add the file shakespeare.txt which you can download from here." }, { "code": null, "e": 5212, "s": 5129, "text": "You can see that the file is loaded to /FileStore/tables/shakespeare.txt location." }, { "code": null, "e": 5330, "s": 5212, "text": "I like to learn by examples so let’s get done with the “Hello World” of Distributed computing: The WordCount Program." }, { "code": null, "e": 5859, "s": 5330, "text": "# Distribute the data - Create a RDD lines = sc.textFile(\"/FileStore/tables/shakespeare.txt\")# Create a list with all words, Create tuple (word,1), reduce by key i.e. the wordcounts = (lines.flatMap(lambda x: x.split(' ')) .map(lambda x: (x, 1)) .reduceByKey(lambda x,y : x + y))# get the output on localoutput = counts.take(10) # print outputfor (word, count) in output: print(\"%s: %i\" % (word, count))" }, { "code": null, "e": 5958, "s": 5859, "text": "So that is a small example which counts the number of words in the document and prints 10 of them." }, { "code": null, "e": 6012, "s": 5958, "text": "And most of the work gets done in the second command." }, { "code": null, "e": 6130, "s": 6012, "text": "Don’t worry if you are not able to follow this yet as I still need to tell you about the things that make Spark work." }, { "code": null, "e": 6303, "s": 6130, "text": "But before we get into Spark basics, Let us refresh some of our Python Basics. Understanding Spark becomes a lot easier if you have used functional programming with Python." }, { "code": null, "e": 6365, "s": 6303, "text": "For those of you who haven’t used it, below is a brief intro." }, { "code": null, "e": 6482, "s": 6365, "text": "map is used to map a function to an array or a list. Say you want to apply some function to every element in a list." }, { "code": null, "e": 6597, "s": 6482, "text": "You can do this by simply using a for loop but python lambda functions let you do this in a single line in Python." }, { "code": null, "e": 6843, "s": 6597, "text": "my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want to square each term in my_list.squared_list = map(lambda x:x**2,my_list)print(list(squared_list))------------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]" }, { "code": null, "e": 6953, "s": 6843, "text": "In the above example, you could think of map as a function which takes two arguments — A function and a list." }, { "code": null, "e": 7012, "s": 6953, "text": "It then applies the function to every element of the list." }, { "code": null, "e": 7158, "s": 7012, "text": "What lambda allows you to do is write an inline function. In here the part lambda x:x**2 defines a function that takes x as input and returns x2." }, { "code": null, "e": 7238, "s": 7158, "text": "You could have also provided a proper function in place of lambda. For example:" }, { "code": null, "e": 7508, "s": 7238, "text": "def squared(x): return x**2my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want to square each term in my_list.squared_list = map(squared,my_list)print(list(squared_list))------------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]" }, { "code": null, "e": 7599, "s": 7508, "text": "The same result, but the lambda expressions make the code compact and a lot more readable." 
}, { "code": null, "e": 7739, "s": 7599, "text": "The other function that is used extensively is the filter function. This function takes two arguments — A condition and the list to filter." }, { "code": null, "e": 7808, "s": 7739, "text": "If you want to filter your list using some condition you use filter." }, { "code": null, "e": 8044, "s": 7808, "text": "my_list = [1,2,3,4,5,6,7,8,9,10]# Lets say I want only the even numbers in my list.filtered_list = filter(lambda x:x%2==0,my_list)print(list(filtered_list))---------------------------------------------------------------[2, 4, 6, 8, 10]" }, { "code": null, "e": 8153, "s": 8044, "text": "The next function I want to talk about is the reduce function. This function will be the workhorse in Spark." }, { "code": null, "e": 8296, "s": 8153, "text": "This function takes two arguments — a function to reduce that takes two arguments, and a list over which the reduce function is to be applied." }, { "code": null, "e": 8449, "s": 8296, "text": "import functoolsmy_list = [1,2,3,4,5]# Lets say I want to sum all elements in my list.sum_list = functools.reduce(lambda x,y:x+y,my_list)print(sum_list)" }, { "code": null, "e": 8546, "s": 8449, "text": "In python2 reduce used to be a part of Python, now we have to use reduce as a part of functools." }, { "code": null, "e": 8680, "s": 8546, "text": "Here the lambda function takes in two values x, y and returns their sum. Intuitively you can think that the reduce function works as:" }, { "code": null, "e": 8947, "s": 8680, "text": "Reduce function first sends 1,2 ; the lambda function returns 3Reduce function then sends 3,3 ; the lambda function returns 6Reduce function then sends 6,4 ; the lambda function returns 10Reduce function finally sends 10,5 ; the lambda function returns 15" }, { "code": null, "e": 9019, "s": 8947, "text": "A condition on the lambda function we use in reduce is that it must be:" }, { "code": null, "e": 9057, "s": 9019, "text": "commutative that is a + b = b + a and" }, { "code": null, "e": 9105, "s": 9057, "text": "associative that is (a + b) + c == a + (b + c)." }, { "code": null, "e": 9239, "s": 9105, "text": "In the above case, we used sum which is commutative as well as associative. Other functions that we could have used: max, min, * etc." }, { "code": null, "e": 9350, "s": 9239, "text": "As we have now got the fundamentals of Python Functional Programming out of the way, lets again head to Spark." }, { "code": null, "e": 9469, "s": 9350, "text": "But first, let us delve a little bit into how spark works. Spark actually consists of two things a driver and workers." }, { "code": null, "e": 9542, "s": 9469, "text": "Workers normally do all the work and the driver makes them do that work." }, { "code": null, "e": 9707, "s": 9542, "text": "An RDD(Resilient Distributed Dataset) is a parallelized data structure that gets distributed across the worker nodes. They are the basic units of Spark programming." }, { "code": null, "e": 9751, "s": 9707, "text": "In our wordcount example, in the first line" }, { "code": null, "e": 9808, "s": 9751, "text": "lines = sc.textFile(\"/FileStore/tables/shakespeare.txt\")" }, { "code": null, "e": 9974, "s": 9808, "text": "We took a text file and distributed it across worker nodes so that they can work on it in parallel. 
We could also parallelize lists using the function sc.parallelize" }, { "code": null, "e": 9987, "s": 9974, "text": "For example:" }, { "code": null, "e": 10182, "s": 9987, "text": "data = [1,2,3,4,5,6,7,8,9,10]new_rdd = sc.parallelize(data,4)new_rdd---------------------------------------------------------------ParallelCollectionRDD[22] at parallelize at PythonRDD.scala:267" }, { "code": null, "e": 10273, "s": 10182, "text": "In Spark, we can do two different types of operations on RDD: Transformations and Actions." }, { "code": null, "e": 10375, "s": 10273, "text": "Transformations: Create new datasets from existing RDDsActions: Mechanism to get results out of Spark" }, { "code": null, "e": 10431, "s": 10375, "text": "Transformations: Create new datasets from existing RDDs" }, { "code": null, "e": 10478, "s": 10431, "text": "Actions: Mechanism to get results out of Spark" }, { "code": null, "e": 10538, "s": 10478, "text": "So let us say you have got your data in the form of an RDD." }, { "code": null, "e": 10654, "s": 10538, "text": "To requote your data is now accessible to the worker machines. You want to do some transformations on the data now." }, { "code": null, "e": 10704, "s": 10654, "text": "You may want to filter, apply some function, etc." }, { "code": null, "e": 10759, "s": 10704, "text": "In Spark, this is done using Transformation functions." }, { "code": null, "e": 10893, "s": 10759, "text": "Spark provides many transformation functions. You can see a comprehensive list here. Some of the main ones that I use frequently are:" }, { "code": null, "e": 10929, "s": 10893, "text": "Applies a given function to an RDD." }, { "code": null, "e": 11154, "s": 10929, "text": "Note that the syntax is a little bit different from Python, but it necessarily does the same thing. Don’t worry about collect yet. For now, just think of it as a function that collects the data in squared_rdd back to a list." }, { "code": null, "e": 11361, "s": 11154, "text": "data = [1,2,3,4,5,6,7,8,9,10]rdd = sc.parallelize(data,4)squared_rdd = rdd.map(lambda x:x**2)squared_rdd.collect()------------------------------------------------------[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]" }, { "code": null, "e": 11472, "s": 11361, "text": "Again no surprises here. Takes as input a condition and keeps only those elements that fulfill that condition." }, { "code": null, "e": 11664, "s": 11472, "text": "data = [1,2,3,4,5,6,7,8,9,10]rdd = sc.parallelize(data,4)filtered_rdd = rdd.filter(lambda x:x%2==0)filtered_rdd.collect()------------------------------------------------------[2, 4, 6, 8, 10]" }, { "code": null, "e": 11706, "s": 11664, "text": "Returns only distinct elements in an RDD." }, { "code": null, "e": 11920, "s": 11706, "text": "data = [1,2,2,2,2,3,3,3,3,4,5,6,7,7,7,8,8,8,9,10]rdd = sc.parallelize(data,4)distinct_rdd = rdd.distinct()distinct_rdd.collect()------------------------------------------------------[8, 4, 1, 5, 9, 2, 10, 6, 3, 7]" }, { "code": null, "e": 11997, "s": 11920, "text": "Similar to map, but each input item can be mapped to 0 or more output items." }, { "code": null, "e": 12181, "s": 11997, "text": "data = [1,2,3,4]rdd = sc.parallelize(data,4)flat_rdd = rdd.flatMap(lambda x:[x,x**3])flat_rdd.collect()------------------------------------------------------[1, 1, 2, 8, 3, 27, 4, 64]" }, { "code": null, "e": 12229, "s": 12181, "text": "The parallel to the reduce in Hadoop MapReduce." }, { "code": null, "e": 12294, "s": 12229, "text": "Now Spark cannot provide the value if it just worked with Lists." 
}, { "code": null, "e": 12499, "s": 12294, "text": "In Spark, there is a concept of pair RDDs that makes it a lot more flexible. Let's assume we have a data in which we have a product, its category, and its selling price. We can still parallelize the data." }, { "code": null, "e": 12655, "s": 12499, "text": "data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]rdd = sc.parallelize(data,4)" }, { "code": null, "e": 12691, "s": 12655, "text": "Right now our RDD rdd holds tuples." }, { "code": null, "e": 12772, "s": 12691, "text": "Now we want to find out the total sum of revenue that we got from each category." }, { "code": null, "e": 12875, "s": 12772, "text": "To do that we have to transform our rdd to a pair rdd so that it only contains key-value pairs/tuples." }, { "code": null, "e": 13105, "s": 12875, "text": "category_price_rdd = rdd.map(lambda x: (x[1],x[2]))category_price_rdd.collect()-----------------------------------------------------------------[(‘Fruit’, 200), (‘Fruit’, 24), (‘Fruit’, 56), (‘Vegetable’, 103), (‘Vegetable’, 34)]" }, { "code": null, "e": 13302, "s": 13105, "text": "Here we used the map function to get it in the format we wanted. When working with textfile, the RDD that gets formed has got a lot of strings. We use map to convert it into a format that we want." }, { "code": null, "e": 13403, "s": 13302, "text": "So now our category_price_rdd contains the product category and the price at which the product sold." }, { "code": null, "e": 13484, "s": 13403, "text": "Now we want to reduce on the key category and sum the prices. We can do this by:" }, { "code": null, "e": 13685, "s": 13484, "text": "category_total_price_rdd = category_price_rdd.reduceByKey(lambda x,y:x+y)category_total_price_rdd.collect()---------------------------------------------------------[(‘Vegetable’, 137), (‘Fruit’, 280)]" }, { "code": null, "e": 13888, "s": 13685, "text": "Similar to reduceByKey but does not reduces just puts all the elements in an iterator. For example, if we wanted to keep as key the category and as the value all the products we would use this function." }, { "code": null, "e": 13943, "s": 13888, "text": "Let us again use map to get data in the required form." }, { "code": null, "e": 14358, "s": 13943, "text": "data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]rdd = sc.parallelize(data,4)category_product_rdd = rdd.map(lambda x: (x[1],x[0]))category_product_rdd.collect()------------------------------------------------------------[('Fruit', 'Apple'), ('Fruit', 'Banana'), ('Fruit', 'Tomato'), ('Vegetable', 'Potato'), ('Vegetable', 'Carrot')]" }, { "code": null, "e": 14385, "s": 14358, "text": "We then use groupByKey as:" }, { "code": null, "e": 14684, "s": 14385, "text": "grouped_products_by_category_rdd = category_product_rdd.groupByKey()findata = grouped_products_by_category_rdd.collect()for data in findata: print(data[0],list(data[1]))------------------------------------------------------------Vegetable ['Potato', 'Carrot'] Fruit ['Apple', 'Banana', 'Tomato']" }, { "code": null, "e": 14792, "s": 14684, "text": "Here the groupByKey function worked and it returned the category and the list of products in that category." }, { "code": null, "e": 14873, "s": 14792, "text": "You have filtered your data, mapped some functions on it. Done your computation." 
}, { "code": null, "e": 15028, "s": 14873, "text": "Now you want to get the data on your local machine or save it to a file or show the results in the form of some graphs in excel or any visualization tool." }, { "code": null, "e": 15110, "s": 15028, "text": "You will need actions for that. A comprehensive list of actions is provided here." }, { "code": null, "e": 15166, "s": 15110, "text": "Some of the most common actions that I tend to use are:" }, { "code": null, "e": 15276, "s": 15166, "text": "We have already used this action many times. It takes the whole RDD and brings it back to the driver program." }, { "code": null, "e": 15482, "s": 15276, "text": "Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel." }, { "code": null, "e": 15579, "s": 15482, "text": "rdd = sc.parallelize([1,2,3,4,5])rdd.reduce(lambda x,y : x+y)---------------------------------15" }, { "code": null, "e": 15742, "s": 15579, "text": "Sometimes you will need to see what your RDD contains without getting all the elements in memory itself. take returns a list with the first n elements of the RDD." }, { "code": null, "e": 15829, "s": 15742, "text": "rdd = sc.parallelize([1,2,3,4,5])rdd.take(3)---------------------------------[1, 2, 3]" }, { "code": null, "e": 15938, "s": 15829, "text": "takeOrdered returns the first n elements of the RDD using either their natural order or a custom comparator." }, { "code": null, "e": 16177, "s": 15938, "text": "rdd = sc.parallelize([5,3,12,23])# descending orderrdd.takeOrdered(3,lambda s:-1*s)----[23, 12, 5]rdd = sc.parallelize([(5,23),(3,34),(12,344),(23,29)])# descending orderrdd.takeOrdered(3,lambda s:-1*s[1])---[(12, 344), (3, 34), (23, 29)]" }, { "code": null, "e": 16254, "s": 16177, "text": "We have our basics covered finally. Let us get back to our wordcount example" }, { "code": null, "e": 16341, "s": 16254, "text": "Now we sort of understand the transformations and the actions provided to us by Spark." }, { "code": null, "e": 16453, "s": 16341, "text": "It should not be difficult to understand the wordcount program now. Let us go through the program line by line." }, { "code": null, "e": 16518, "s": 16453, "text": "The first line creates an RDD and distributes it to the workers." }, { "code": null, "e": 16575, "s": 16518, "text": "lines = sc.textFile(\"/FileStore/tables/shakespeare.txt\")" }, { "code": null, "e": 16671, "s": 16575, "text": "This RDD lines contains a list of sentences in the file. You can see the rdd content using take" }, { "code": null, "e": 16981, "s": 16671, "text": "lines.take(5)--------------------------------------------['The Project Gutenberg EBook of The Complete Works of William Shakespeare, by ', 'William Shakespeare', '', 'This eBook is for the use of anyone anywhere at no cost and with', 'almost no restrictions whatsoever. You may copy it, give it away or']" }, { "code": null, "e": 17006, "s": 16981, "text": "This RDD is of the form:" }, { "code": null, "e": 17048, "s": 17006, "text": "['word1 word2 word3','word4 word3 word2']" }, { "code": null, "e": 17119, "s": 17048, "text": "This next line is actually the workhorse function in the whole script." 
}, { "code": null, "e": 17285, "s": 17119, "text": "counts = (lines.flatMap(lambda x: x.split(' ')) .map(lambda x: (x, 1)) .reduceByKey(lambda x,y : x + y))" }, { "code": null, "e": 17400, "s": 17285, "text": "It contains a series of transformations that we do to the lines RDD. First of all, we do a flatmap transformation." }, { "code": null, "e": 17540, "s": 17400, "text": "The flatmap transformation takes as input the lines and gives words as output. So after the flatmap transformation, the RDD is of the form:" }, { "code": null, "e": 17590, "s": 17540, "text": "['word1','word2','word3','word4','word3','word2']" }, { "code": null, "e": 17673, "s": 17590, "text": "Next, we do a map transformation on the flatmap output which converts the RDD to :" }, { "code": null, "e": 17747, "s": 17673, "text": "[('word1',1),('word2',1),('word3',1),('word4',1),('word3',1),('word2',1)]" }, { "code": null, "e": 17843, "s": 17747, "text": "Finally, we do a reduceByKey transformation which counts the number of time each word appeared." }, { "code": null, "e": 17900, "s": 17843, "text": "After which the RDD approaches the final desirable form." }, { "code": null, "e": 17950, "s": 17900, "text": "[('word1',1),('word2',2),('word3',2),('word4',1)]" }, { "code": null, "e": 18041, "s": 17950, "text": "This next line is an action that takes the first 10 elements of the resulting RDD locally." }, { "code": null, "e": 18066, "s": 18041, "text": "output = counts.take(10)" }, { "code": null, "e": 18099, "s": 18066, "text": "This line just prints the output" }, { "code": null, "e": 18180, "s": 18099, "text": "for (word, count) in output: print(\"%s: %i\" % (word, count))" }, { "code": null, "e": 18250, "s": 18180, "text": "And that is it for the wordcount program. Hope you understand it now." }, { "code": null, "e": 18412, "s": 18250, "text": "So till now, we talked about the Wordcount example and the basic transformations and actions that you could use in Spark. But we don’t do wordcount in real life." }, { "code": null, "e": 18551, "s": 18412, "text": "We have to work on bigger problems which are much more complex. Worry not! Whatever we have learned till now will let us do that and more." }, { "code": null, "e": 18635, "s": 18551, "text": "Let us work with a concrete example which takes care of some usual transformations." }, { "code": null, "e": 18783, "s": 18635, "text": "We will work on Movielens ml-100k.zip dataset which is a stable benchmark dataset. 100,000 ratings from 1000 users on 1700 movies. Released 4/1998." 
}, { "code": null, "e": 18879, "s": 18783, "text": "The Movielens dataset contains a lot of files but we are going to be working with 3 files only:" }, { "code": null, "e": 18955, "s": 18879, "text": "1) Users: This file name is kept as “u.user”, The columns in this file are:" }, { "code": null, "e": 19007, "s": 18955, "text": "['user_id', 'age', 'sex', 'occupation', 'zip_code']" }, { "code": null, "e": 19085, "s": 19007, "text": "2) Ratings: This file name is kept as “u.data”, The columns in this file are:" }, { "code": null, "e": 19137, "s": 19085, "text": "['user_id', 'movie_id', 'rating', 'unix_timestamp']" }, { "code": null, "e": 19214, "s": 19137, "text": "3) Movies: This file name is kept as “u.item”, The columns in this file are:" }, { "code": null, "e": 19312, "s": 19214, "text": "['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url', and 18 more columns.....]" }, { "code": null, "e": 19425, "s": 19312, "text": "Let us start by importing these 3 files into our spark instance using ‘Import and Explore Data’ on the home tab." }, { "code": null, "e": 19572, "s": 19425, "text": "Our business partner now comes to us and asks us to find out the 25 most rated movie titles from this data. How many times a movie has been rated?" }, { "code": null, "e": 19643, "s": 19572, "text": "Let us load the data in different RDDs and see what the data contains." }, { "code": null, "e": 20173, "s": 19643, "text": "userRDD = sc.textFile(\"/FileStore/tables/u.user\") ratingRDD = sc.textFile(\"/FileStore/tables/u.data\") movieRDD = sc.textFile(\"/FileStore/tables/u.item\") print(\"userRDD:\",userRDD.take(1))print(\"ratingRDD:\",ratingRDD.take(1))print(\"movieRDD:\",movieRDD.take(1))-----------------------------------------------------------userRDD: ['1|24|M|technician|85711'] ratingRDD: ['196\\t242\\t3\\t881250949'] movieRDD: ['1|Toy Story (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)|0|0|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0']" }, { "code": null, "e": 20293, "s": 20173, "text": "We note that to answer this question we will need to use the ratingRDD. But the ratingRDD does not have the movie name." }, { "code": null, "e": 20358, "s": 20293, "text": "So we would have to merge movieRDD and ratingRDD using movie_id." }, { "code": null, "e": 20389, "s": 20358, "text": "How we would do that in Spark?" }, { "code": null, "e": 20505, "s": 20389, "text": "Below is the code. We also use a new transformation leftOuterJoin. Do read the docs and comments in the below code." 
}, { "code": null, "e": 21875, "s": 20505, "text": "OUTPUT:--------------------------------------------------------------------RDD_movid_rating: [('242', '3'), ('302', '3'), ('377', '1'), ('51', '2')] RDD_movid_title: [('1', 'Toy Story (1995)'), ('2', 'GoldenEye (1995)')] rdd_movid_title_rating: [('1440', ('3', 'Above the Rim (1994)'))] rdd_title_rating: [('Above the Rim (1994)', 1), ('Above the Rim (1994)', 1)] rdd_title_ratingcnt: [('Mallrats (1995)', 54), ('Michael Collins (1996)', 92)] ##################################### 25 most rated movies: [('Star Wars (1977)', 583), ('Contact (1997)', 509), ('Fargo (1996)', 508), ('Return of the Jedi (1983)', 507), ('Liar Liar (1997)', 485), ('English Patient, The (1996)', 481), ('Scream (1996)', 478), ('Toy Story (1995)', 452), ('Air Force One (1997)', 431), ('Independence Day (ID4) (1996)', 429), ('Raiders of the Lost Ark (1981)', 420), ('Godfather, The (1972)', 413), ('Pulp Fiction (1994)', 394), ('Twelve Monkeys (1995)', 392), ('Silence of the Lambs, The (1991)', 390), ('Jerry Maguire (1996)', 384), ('Chasing Amy (1997)', 379), ('Rock, The (1996)', 378), ('Empire Strikes Back, The (1980)', 367), ('Star Trek: First Contact (1996)', 365), ('Back to the Future (1985)', 350), ('Titanic (1997)', 350), ('Mission: Impossible (1996)', 344), ('Fugitive, The (1993)', 336), ('Indiana Jones and the Last Crusade (1989)', 331)] #####################################" }, { "code": null, "e": 21935, "s": 21875, "text": "Star Wars is the most rated movie in the Movielens Dataset." }, { "code": null, "e": 22047, "s": 21935, "text": "Now we could have done all this in a single command using the below command but the code is a little messy now." }, { "code": null, "e": 22168, "s": 22047, "text": "I did this to show that you can use chaining functions with Spark and you could bypass the process of variable creation." }, { "code": null, "e": 22202, "s": 22168, "text": "Let us do one more. For practice:" }, { "code": null, "e": 22355, "s": 22202, "text": "Now we want to find the most highly rated 25 movies using the same dataset. We actually want only those movies which have been rated at least 100 times." 
}, { "code": null, "e": 24240, "s": 22355, "text": "OUTPUT:------------------------------------------------------------rdd_title_ratingsum: [('Mallrats (1995)', 186), ('Michael Collins (1996)', 318)] rdd_title_ratingmean_rating_count: [('Mallrats (1995)', (3.4444444444444446, 54))] rdd_title_rating_rating_count_gt_100: [('Butch Cassidy and the Sundance Kid (1969)', (3.949074074074074, 216))]##################################### 25 highly rated movies: [('Close Shave, A (1995)', (4.491071428571429, 112)), (\"Schindler's List (1993)\", (4.466442953020135, 298)), ('Wrong Trousers, The (1993)', (4.466101694915254, 118)), ('Casablanca (1942)', (4.45679012345679, 243)), ('Shawshank Redemption, The (1994)', (4.445229681978798, 283)), ('Rear Window (1954)', (4.3875598086124405, 209)), ('Usual Suspects, The (1995)', (4.385767790262173, 267)), ('Star Wars (1977)', (4.3584905660377355, 583)), ('12 Angry Men (1957)', (4.344, 125)), ('Citizen Kane (1941)', (4.292929292929293, 198)), ('To Kill a Mockingbird (1962)', (4.292237442922374, 219)), (\"One Flew Over the Cuckoo's Nest (1975)\", (4.291666666666667, 264)), ('Silence of the Lambs, The (1991)', (4.28974358974359, 390)), ('North by Northwest (1959)', (4.284916201117318, 179)), ('Godfather, The (1972)', (4.283292978208232, 413)), ('Secrets & Lies (1996)', (4.265432098765432, 162)), ('Good Will Hunting (1997)', (4.262626262626263, 198)), ('Manchurian Candidate, The (1962)', (4.259541984732825, 131)), ('Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', (4.252577319587629, 194)), ('Raiders of the Lost Ark (1981)', (4.252380952380952, 420)), ('Vertigo (1958)', (4.251396648044692, 179)), ('Titanic (1997)', (4.2457142857142856, 350)), ('Lawrence of Arabia (1962)', (4.23121387283237, 173)), ('Maltese Falcon, The (1941)', (4.2101449275362315, 138)), ('Empire Strikes Back, The (1980)', (4.204359673024523, 367))] #####################################" }, { "code": null, "e": 24302, "s": 24240, "text": "We have talked about RDDs till now as they are very powerful." }, { "code": null, "e": 24362, "s": 24302, "text": "You can use RDDs to work with non-relational databases too." }, { "code": null, "e": 24430, "s": 24362, "text": "They let you do a lot of things that you couldn’t do with SparkSQL?" }, { "code": null, "e": 24502, "s": 24430, "text": "Yes, you can use SQL with Spark too which I am going to talk about now." }, { "code": null, "e": 24641, "s": 24502, "text": "Spark has provided DataFrame API for us Data Scientists to work with relational data. Here is the documentation for the adventurous folks." }, { "code": null, "e": 24759, "s": 24641, "text": "Remember that in the background it still is all RDDs and that is why the starting part of this post focussed on RDDs." }, { "code": null, "e": 24905, "s": 24759, "text": "I will start with some common functionalities you will need to work with Spark DataFrames. Would look a lot like Pandas with some syntax changes." }, { "code": null, "e": 25018, "s": 24905, "text": "ratings = spark.read.load(\"/FileStore/tables/u.data\",format=\"csv\", sep=\"\\t\", inferSchema=\"true\", header=\"false\")" }, { "code": null, "e": 25073, "s": 25018, "text": "We have two ways to show files using Spark Dataframes." }, { "code": null, "e": 25088, "s": 25073, "text": "ratings.show()" }, { "code": null, "e": 25105, "s": 25088, "text": "display(ratings)" }, { "code": null, "e": 25157, "s": 25105, "text": "I prefer display as it looks a lot nicer and clean." 
}, { "code": null, "e": 25235, "s": 25157, "text": "Good functionality. Always required. Don’t forget the * in front of the list." }, { "code": null, "e": 25328, "s": 25235, "text": "ratings = ratings.toDF(*['user_id', 'movie_id', 'rating', 'unix_timestamp'])display(ratings)" }, { "code": null, "e": 25467, "s": 25328, "text": "print(ratings.count()) #Row Countprint(len(ratings.columns)) #Column Count---------------------------------------------------------1000004" }, { "code": null, "e": 25515, "s": 25467, "text": "We can also see the dataframe statistics using:" }, { "code": null, "e": 25543, "s": 25515, "text": "display(ratings.describe())" }, { "code": null, "e": 25589, "s": 25543, "text": "display(ratings.select('user_id','movie_id'))" }, { "code": null, "e": 25635, "s": 25589, "text": "Filter a dataframe using multiple conditions:" }, { "code": null, "e": 25705, "s": 25635, "text": "display(ratings.filter((ratings.rating==5) & (ratings.user_id==253)))" }, { "code": null, "e": 25868, "s": 25705, "text": "We can use groupby function with a spark dataframe too. Pretty much same as a pandas groupby with the exception that you will need to import pyspark.sql.functions" }, { "code": null, "e": 25983, "s": 25868, "text": "from pyspark.sql import functions as Fdisplay(ratings.groupBy(\"user_id\").agg(F.count(\"user_id\"),F.mean(\"rating\")))" }, { "code": null, "e": 26060, "s": 25983, "text": "Here we have found the count of ratings and average rating from each user_id" }, { "code": null, "e": 26093, "s": 26060, "text": "display(ratings.sort(\"user_id\"))" }, { "code": null, "e": 26158, "s": 26093, "text": "We can also do a descending sort using F.desc function as below." }, { "code": null, "e": 26254, "s": 26158, "text": "# descending Sortfrom pyspark.sql import functions as Fdisplay(ratings.sort(F.desc(\"user_id\")))" }, { "code": null, "e": 26407, "s": 26254, "text": "I was not able to find a pandas equivalent of merge with Spark DataFrames but we can use SQL with dataframes and thus we can merge dataframes using SQL." }, { "code": null, "e": 26446, "s": 26407, "text": "Let us try to run some SQL on Ratings." }, { "code": null, "e": 26550, "s": 26446, "text": "We first register the ratings df to a temporary table ratings_table on which we can run sql operations." }, { "code": null, "e": 26632, "s": 26550, "text": "As you can see the result of the SQL select statement is again a Spark Dataframe." }, { "code": null, "e": 26757, "s": 26632, "text": "ratings.registerTempTable('ratings_table')newDF = sqlContext.sql('select * from ratings_table where rating>4')display(newDF)" }, { "code": null, "e": 26857, "s": 26757, "text": "Let us now add one more Spark Dataframe to the mix to see if we can use join using the SQL queries:" }, { "code": null, "e": 27322, "s": 26857, "text": "#get one more dataframe to joinmovies = spark.read.load(\"/FileStore/tables/u.item\",format=\"csv\", sep=\"|\", inferSchema=\"true\", header=\"false\")# change column namesmovies = movies.toDF(*[\"movie_id\",\"movie_title\",\"release_date\",\"video_release_date\",\"IMDb_URL\",\"unknown\",\"Action\",\"Adventure\",\"Animation \",\"Children\",\"Comedy\",\"Crime\",\"Documentary\",\"Drama\",\"Fantasy\",\"Film_Noir\",\"Horror\",\"Musical\",\"Mystery\",\"Romance\",\"Sci_Fi\",\"Thriller\",\"War\",\"Western\"])display(movies)" }, { "code": null, "e": 27419, "s": 27322, "text": "Now let us try joining the tables on movie_id to get the name of the movie in the ratings table." 
}, { "code": null, "e": 27626, "s": 27419, "text": "movies.registerTempTable('movies_table')display(sqlContext.sql('select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id'))" }, { "code": null, "e": 27723, "s": 27626, "text": "Let us try to do what we were doing earlier with the RDDs. Finding the top 25 most rated movies:" }, { "code": null, "e": 28039, "s": 27723, "text": "mostrateddf = sqlContext.sql('select movie_id,movie_title, count(user_id) as num_ratings from (select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id)A group by movie_id,movie_title order by num_ratings desc ')display(mostrateddf)" }, { "code": null, "e": 28111, "s": 28039, "text": "And finding the top 25 highest rated movies having more than 100 votes:" }, { "code": null, "e": 28476, "s": 28111, "text": "highrateddf = sqlContext.sql('select movie_id,movie_title, avg(rating) as avg_rating,count(movie_id) as num_ratings from (select ratings_table.*,movies_table.movie_title from ratings_table left join movies_table on movies_table.movie_id = ratings_table.movie_id)A group by movie_id,movie_title having num_ratings>100 order by avg_rating desc ')display(highrateddf)" }, { "code": null, "e": 28640, "s": 28476, "text": "I have used GROUP BY, HAVING, AND ORDER BY clauses as well as aliases in the above query. That shows that you can do pretty much complex stuff using sqlContext.sql" }, { "code": null, "e": 28710, "s": 28640, "text": "You can also use display command to display charts in your notebooks." }, { "code": null, "e": 28765, "s": 28710, "text": "You can see more options when you select Plot Options." }, { "code": null, "e": 28889, "s": 28765, "text": "Sometimes you may want to convert to RDD from a spark Dataframe or vice versa so that you can have the best of both worlds." }, { "code": null, "e": 28936, "s": 28889, "text": "To convert from DF to RDD, you can simply do :" }, { "code": null, "e": 28986, "s": 28936, "text": "highratedrdd =highrateddf.rddhighratedrdd.take(2)" }, { "code": null, "e": 29020, "s": 28986, "text": "To go from an RDD to a dataframe:" }, { "code": null, "e": 29317, "s": 29020, "text": "from pyspark.sql import Row# creating a RDD firstdata = [('A',1),('B',2),('C',3),('D',4)]rdd = sc.parallelize(data)# map the schema using Row.rdd_new = rdd.map(lambda x: Row(key=x[0], value=int(x[1])))# Convert the rdd to Dataframerdd_as_df = sqlContext.createDataFrame(rdd_new)display(rdd_as_df)" }, { "code": null, "e": 29506, "s": 29317, "text": "RDD provides you with more control at the cost of time and coding effort. While Dataframes provide you with familiar coding platform. And now you can move back and forth between these two." }, { "code": null, "e": 29570, "s": 29506, "text": "This was a big post and congratulations if you reached the end." }, { "code": null, "e": 29758, "s": 29570, "text": "Spark has provided us with an interface where we could use transformations and actions on our data. Spark also has the Dataframe API to ease the transition of Data scientists to Big Data." }, { "code": null, "e": 29865, "s": 29758, "text": "Hopefully, I’ve covered the basics well enough to pique your interest and help you get started with Spark." }, { "code": null, "e": 29917, "s": 29865, "text": "You can find all the code at the GitHub repository." 
}, { "code": null, "e": 30099, "s": 29917, "text": "Also, if you want to learn more about Spark and Spark DataFrames, I would like to call out these excellent courses on Big Data Essentials: HDFS, MapReduce and Spark RDD on Coursera." } ]
The 5 discrete distributions every Data Scientist should know | by Rahul Agarwal | Towards Data Science
Distributions play an essential role in the life of every Data Scientist. Now coming from a non-statistical background, distributions always come across as something mystical to me. And the fact is that there are a lot of them. So which ones should I know? And how do I know and understand them? This post is about some of the most used discrete distributions that you need to know along with some intuition and proofs. This one is perhaps the most simple discrete distribution of all and maybe the most useful as well. Story: A Coin is tossed with probability p of heads. Where to Use?: We can think of binary classification target as a Bernoulli RV. PMF of Bernoulli Distribution is given by: CDF of Bernoulli Distribution is given by: Expected Value: Variance: Bernoulli Distribution is closely associated with a lot of distributions as we will see below. One of the most basic distributions in the Statistician toolkit. The parameters of this distribution are n(number of trials) and p(probability of success). Story: Probability of getting exactly k successes in n trials Where to Use?: Suppose we have n eggs in a casket. The probability of having a broken egg is p. The number of broken eggs in the casket is then Binomially distributed. As per our story, This is the Probability that k bulbs are broken. First Solution: A better way to solve this: X is the sum of n indicator Random Variables where each I is a Bernoulli Random Variable. We can use the Indicator Random variable using Variance too as each Indicator random variable is independent. The parameter of this distribution is p(probability of success). Story: The number of failures before the first success(Heads) when a coin with probability p is tossed. Where to Use: Suppose you are giving an exam, and the probability of your getting pass is given by p. The number of failures you will have before clearing the exam is distributed Geometrically. Thus, A doctor is seeking an anti-depressant for a newly diagnosed patient. Suppose that, of the available anti-depressant drugs, the probability that any particular drug will be effective for a particular patient is p=0.6. What is the probability that the first drug found to be effective for this patient is the first drug tried, the second drug tried, and so on? What is the expected number of drugs that will be tried to find one that is effective? Expected number of drugs that will be tried to find one that is effective = q/p = .4/.6 =.67 The parameters of this distribution are p(probability of success) and r(number of success). Story: The number of failures of independent Bernoulli(p) trials before the rth success. Where to Use: You need to sell r candy bars to different houses. The probability that you will sell a candy bar is given by p. The number of failures you will have to endure before getting r successes is distributed as Negative Binomial. r successes, k failures, last attempt needs to be a success: The negative binomial RV could be stated as the sum of r Geometric RVs since Geometric Distribution is just the number of failures before the first success. Thus, Since the r geometric RVs are independent. Pat is required to sell candy bars to raise money for the 6th-grade field trip. There are thirty houses in the neighborhood, and Pat is not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.4 probability of selling one candy bar and a 0.6 probability of selling nothing. 
What’s the probability of selling the last candy bar at the nth house? Here, r = 5 ; k = n — r Probability of selling the last candy bar at the nth house = The parameter of this distribution is λ, the rate parameter. Motivation: There is as such no story to this distribution but motivation for using this distribution. The Poisson distribution is often used for applications where we count the successes of a large number of trials where the per-trial success rate is low. For example, the Poisson distribution is a good starting point for counting the number of people who will email you over an hour. You have a large number of people in your address book, and the probability that any of them will send you a mail is pretty small. PMF of Poisson Distribution is given by: If electricity power failures occur according to a Poisson distribution with an average of 3 failures every twenty weeks, calculate the probability that there will not be more than one failure during a particular week? Probability = P(X=0)+P(X=1) = And here I will generate the PMFs of the discrete distributions we just discussed above using Pythons built-in functions. For more details on the upper function, please see my previous post — Create basic graph visualizations with SeaBorn. Also, take a look at the documentation guide for the below functions # Binomial :from scipy.stats import binomn=30p=0.5k = range(0,n)pmf = binom.pmf(k, n, p)chart_creator(k,pmf,"Binomial PMF") # Geometric :from scipy.stats import geomn=30p=0.5k = range(0,n)# -1 here is the location parameter for generating the PMF we want.pmf = geom.pmf(k, p,-1)chart_creator(k,pmf,"Geometric PMF") # Negative Binomial :from scipy.stats import nbinomr=5 # number of successesp=0.5 # probability of Successk = range(0,25) # number of failures# -1 here is the location parameter for generating the PMF we want.pmf = nbinom.pmf(k, r, p)chart_creator(k,pmf,"Nbinom PMF") #Poissonfrom scipy.stats import poissonlamb = .3 # Ratek = range(0,5)pmf = poisson.pmf(k, lamb)chart_creator(k,pmf,"Poisson PMF") You can also try to visualize distributions with different parameters than I have used. Understanding distributions is vital for any Data scientist. They occur very frequently in life, and understanding them makes life easier for you as you can get to a solution pretty fast just by using a simple equation. In this article, I talked about some of the essential discrete distributions along with a story to support them. The formatting for this post might look a little annoying, but Medium doesn’t support latex so can’t do much here. I still hope this helps you to get a better understanding. One of the most helpful ways to learn more about them is the Stat110 course by Joe Blitzstein and his book. You can check out this Coursera course too. Thanks for the read. I am going to be writing more beginner-friendly posts in the future too. Follow me up at Medium or Subscribe to my blog to be informed about them. As always, I welcome feedback and constructive criticism and can be reached on Twitter @mlwhiz.
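The PMF, expected value and variance formulas in this post were images, so only their lead-ins ("PMF of Bernoulli Distribution is given by:", "Expected Value:", "Variance:") remain in the text. For reference, the standard formulas for the parameterizations used here (Geometric and Negative Binomial counting failures, not trials) are:

\text{Bernoulli}(p):\quad P(X=1)=p,\ \ P(X=0)=1-p, \qquad E[X]=p,\ \ \operatorname{Var}(X)=p(1-p)

\text{Binomial}(n,p):\quad P(X=k)=\binom{n}{k}p^{k}(1-p)^{n-k}, \qquad E[X]=np,\ \ \operatorname{Var}(X)=np(1-p)

\text{Geometric}(p):\quad P(X=k)=(1-p)^{k}\,p,\ \ k=0,1,2,\dots, \qquad E[X]=\frac{1-p}{p},\ \ \operatorname{Var}(X)=\frac{1-p}{p^{2}}

\text{Negative Binomial}(r,p):\quad P(X=k)=\binom{k+r-1}{k}p^{r}(1-p)^{k}, \qquad E[X]=\frac{r(1-p)}{p},\ \ \operatorname{Var}(X)=\frac{r(1-p)}{p^{2}}

\text{Poisson}(\lambda):\quad P(X=k)=\frac{e^{-\lambda}\lambda^{k}}{k!},\ \ k=0,1,2,\dots, \qquad E[X]=\operatorname{Var}(X)=\lambda

These match the worked examples above: the Geometric mean q/p gives 0.4/0.6 ≈ 0.67 expected failures before the first effective drug, and the Negative Binomial PMF with r = 5 and k = n − 5 gives the probability of selling the last candy bar at the nth house.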
[ { "code": null, "e": 246, "s": 172, "text": "Distributions play an essential role in the life of every Data Scientist." }, { "code": null, "e": 354, "s": 246, "text": "Now coming from a non-statistical background, distributions always come across as something mystical to me." }, { "code": null, "e": 400, "s": 354, "text": "And the fact is that there are a lot of them." }, { "code": null, "e": 468, "s": 400, "text": "So which ones should I know? And how do I know and understand them?" }, { "code": null, "e": 592, "s": 468, "text": "This post is about some of the most used discrete distributions that you need to know along with some intuition and proofs." }, { "code": null, "e": 692, "s": 592, "text": "This one is perhaps the most simple discrete distribution of all and maybe the most useful as well." }, { "code": null, "e": 745, "s": 692, "text": "Story: A Coin is tossed with probability p of heads." }, { "code": null, "e": 824, "s": 745, "text": "Where to Use?: We can think of binary classification target as a Bernoulli RV." }, { "code": null, "e": 867, "s": 824, "text": "PMF of Bernoulli Distribution is given by:" }, { "code": null, "e": 910, "s": 867, "text": "CDF of Bernoulli Distribution is given by:" }, { "code": null, "e": 926, "s": 910, "text": "Expected Value:" }, { "code": null, "e": 936, "s": 926, "text": "Variance:" }, { "code": null, "e": 1031, "s": 936, "text": "Bernoulli Distribution is closely associated with a lot of distributions as we will see below." }, { "code": null, "e": 1187, "s": 1031, "text": "One of the most basic distributions in the Statistician toolkit. The parameters of this distribution are n(number of trials) and p(probability of success)." }, { "code": null, "e": 1249, "s": 1187, "text": "Story: Probability of getting exactly k successes in n trials" }, { "code": null, "e": 1417, "s": 1249, "text": "Where to Use?: Suppose we have n eggs in a casket. The probability of having a broken egg is p. The number of broken eggs in the casket is then Binomially distributed." }, { "code": null, "e": 1484, "s": 1417, "text": "As per our story, This is the Probability that k bulbs are broken." }, { "code": null, "e": 1500, "s": 1484, "text": "First Solution:" }, { "code": null, "e": 1528, "s": 1500, "text": "A better way to solve this:" }, { "code": null, "e": 1618, "s": 1528, "text": "X is the sum of n indicator Random Variables where each I is a Bernoulli Random Variable." }, { "code": null, "e": 1728, "s": 1618, "text": "We can use the Indicator Random variable using Variance too as each Indicator random variable is independent." }, { "code": null, "e": 1793, "s": 1728, "text": "The parameter of this distribution is p(probability of success)." }, { "code": null, "e": 1897, "s": 1793, "text": "Story: The number of failures before the first success(Heads) when a coin with probability p is tossed." }, { "code": null, "e": 2091, "s": 1897, "text": "Where to Use: Suppose you are giving an exam, and the probability of your getting pass is given by p. The number of failures you will have before clearing the exam is distributed Geometrically." }, { "code": null, "e": 2097, "s": 2091, "text": "Thus," }, { "code": null, "e": 2544, "s": 2097, "text": "A doctor is seeking an anti-depressant for a newly diagnosed patient. Suppose that, of the available anti-depressant drugs, the probability that any particular drug will be effective for a particular patient is p=0.6. 
What is the probability that the first drug found to be effective for this patient is the first drug tried, the second drug tried, and so on? What is the expected number of drugs that will be tried to find one that is effective?" }, { "code": null, "e": 2620, "s": 2544, "text": "Expected number of drugs that will be tried to find one that is effective =" }, { "code": null, "e": 2637, "s": 2620, "text": "q/p = .4/.6 =.67" }, { "code": null, "e": 2729, "s": 2637, "text": "The parameters of this distribution are p(probability of success) and r(number of success)." }, { "code": null, "e": 2818, "s": 2729, "text": "Story: The number of failures of independent Bernoulli(p) trials before the rth success." }, { "code": null, "e": 3056, "s": 2818, "text": "Where to Use: You need to sell r candy bars to different houses. The probability that you will sell a candy bar is given by p. The number of failures you will have to endure before getting r successes is distributed as Negative Binomial." }, { "code": null, "e": 3117, "s": 3056, "text": "r successes, k failures, last attempt needs to be a success:" }, { "code": null, "e": 3274, "s": 3117, "text": "The negative binomial RV could be stated as the sum of r Geometric RVs since Geometric Distribution is just the number of failures before the first success." }, { "code": null, "e": 3280, "s": 3274, "text": "Thus," }, { "code": null, "e": 3323, "s": 3280, "text": "Since the r geometric RVs are independent." }, { "code": null, "e": 3757, "s": 3323, "text": "Pat is required to sell candy bars to raise money for the 6th-grade field trip. There are thirty houses in the neighborhood, and Pat is not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.4 probability of selling one candy bar and a 0.6 probability of selling nothing. What’s the probability of selling the last candy bar at the nth house?" }, { "code": null, "e": 3781, "s": 3757, "text": "Here, r = 5 ; k = n — r" }, { "code": null, "e": 3842, "s": 3781, "text": "Probability of selling the last candy bar at the nth house =" }, { "code": null, "e": 3903, "s": 3842, "text": "The parameter of this distribution is λ, the rate parameter." }, { "code": null, "e": 4160, "s": 3903, "text": "Motivation: There is as such no story to this distribution but motivation for using this distribution. The Poisson distribution is often used for applications where we count the successes of a large number of trials where the per-trial success rate is low." }, { "code": null, "e": 4421, "s": 4160, "text": "For example, the Poisson distribution is a good starting point for counting the number of people who will email you over an hour. You have a large number of people in your address book, and the probability that any of them will send you a mail is pretty small." }, { "code": null, "e": 4462, "s": 4421, "text": "PMF of Poisson Distribution is given by:" }, { "code": null, "e": 4681, "s": 4462, "text": "If electricity power failures occur according to a Poisson distribution with an average of 3 failures every twenty weeks, calculate the probability that there will not be more than one failure during a particular week?" }, { "code": null, "e": 4711, "s": 4681, "text": "Probability = P(X=0)+P(X=1) =" }, { "code": null, "e": 5020, "s": 4711, "text": "And here I will generate the PMFs of the discrete distributions we just discussed above using Pythons built-in functions. 
For more details on the upper function, please see my previous post — Create basic graph visualizations with SeaBorn. Also, take a look at the documentation guide for the below functions" }, { "code": null, "e": 5144, "s": 5020, "text": "# Binomial :from scipy.stats import binomn=30p=0.5k = range(0,n)pmf = binom.pmf(k, n, p)chart_creator(k,pmf,\"Binomial PMF\")" }, { "code": null, "e": 5335, "s": 5144, "text": "# Geometric :from scipy.stats import geomn=30p=0.5k = range(0,n)# -1 here is the location parameter for generating the PMF we want.pmf = geom.pmf(k, p,-1)chart_creator(k,pmf,\"Geometric PMF\")" }, { "code": null, "e": 5603, "s": 5335, "text": "# Negative Binomial :from scipy.stats import nbinomr=5 # number of successesp=0.5 # probability of Successk = range(0,25) # number of failures# -1 here is the location parameter for generating the PMF we want.pmf = nbinom.pmf(k, r, p)chart_creator(k,pmf,\"Nbinom PMF\")" }, { "code": null, "e": 5733, "s": 5603, "text": "#Poissonfrom scipy.stats import poissonlamb = .3 # Ratek = range(0,5)pmf = poisson.pmf(k, lamb)chart_creator(k,pmf,\"Poisson PMF\")" }, { "code": null, "e": 5821, "s": 5733, "text": "You can also try to visualize distributions with different parameters than I have used." }, { "code": null, "e": 5882, "s": 5821, "text": "Understanding distributions is vital for any Data scientist." }, { "code": null, "e": 6041, "s": 5882, "text": "They occur very frequently in life, and understanding them makes life easier for you as you can get to a solution pretty fast just by using a simple equation." }, { "code": null, "e": 6154, "s": 6041, "text": "In this article, I talked about some of the essential discrete distributions along with a story to support them." }, { "code": null, "e": 6269, "s": 6154, "text": "The formatting for this post might look a little annoying, but Medium doesn’t support latex so can’t do much here." }, { "code": null, "e": 6328, "s": 6269, "text": "I still hope this helps you to get a better understanding." }, { "code": null, "e": 6436, "s": 6328, "text": "One of the most helpful ways to learn more about them is the Stat110 course by Joe Blitzstein and his book." }, { "code": null, "e": 6480, "s": 6436, "text": "You can check out this Coursera course too." } ]
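The PMF snippets above call a chart_creator helper that is defined in the author's earlier seaborn post and is not included here. A minimal stand-in so the snippets run, assuming the helper simply draws a bar chart of the PMF against k (the real helper may differ):

import matplotlib.pyplot as plt
import seaborn as sns

def chart_creator(k, pmf, title):
    # Bar chart of the probability mass at each support point k
    sns.barplot(x=list(k), y=list(pmf), color="steelblue")
    plt.title(title)
    plt.xlabel("k")
    plt.ylabel("P(X = k)")
    plt.show()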
Find the prime numbers which can be written as sum of most consecutive primes - GeeksforGeeks
11 Oct, 2021

Given an array of limits. For every limit, find the prime number which can be written as the sum of the most consecutive primes smaller than or equal to that limit. The maximum possible value of a limit is 10^4.

Example:

Input : arr[] = {10, 30}
Output : 5, 17
Explanation : There are two limit values, 10 and 30.
Below limit 10, 5 is the sum of two consecutive primes, 2 and 3.
5 is the prime number which is the sum of the largest chain of consecutive primes below limit 10.
Below limit 30, 17 is the sum of four consecutive primes: 2 + 3 + 5 + 7 = 17.

Below are the steps.

1. Find all prime numbers below the maximum limit (10^4) using the Sieve of Sundaram and store them in primes[].
2. Construct a prefix sum array prime_sum[] for all prime numbers in primes[]: prime_sum[i+1] = prime_sum[i] + primes[i]. The difference between two values prime_sum[i] and prime_sum[j] represents the sum of consecutive primes from index i to index j.
3. Traverse two loops: the outer loop over i (0 to limit) and the inner loop over j (0 to i).
4. For every i, the inner loop traverses j from 0 to i, and we check whether the current sum of consecutive primes (consSum = prime_sum[i] - prime_sum[j]) is a prime number or not (we search for consSum in primes[] using binary search).
5. If consSum is a prime number, we update the result if the current length is greater than the length of the current result.

Below is the implementation of the above steps in C++, Java and C#; a short illustrative sketch of the same idea follows the implementations.

C++

// C++ program to find Longest Sum of consecutive
// primes
#include <bits/stdc++.h>
using namespace std;
const int MAX = 10000;

// utility function for sieve of sundaram
void sieveSundaram(vector<int> &primes)
{
    // In general Sieve of Sundaram produces primes smaller
    // than (2*x + 2) for a given number x. Since
    // we want primes smaller than MAX, we reduce MAX to half.
    // This array is used to separate numbers of the form
    // i+j+2ij from others where 1 <= i <= j
    bool marked[MAX/2 + 1] = {0};

    // Main logic of Sundaram. Mark all numbers which
    // do not generate prime number by doing 2*i+1
    for (int i = 1; i <= (sqrt(MAX) - 1) / 2; i++)
        for (int j = (i * (i + 1)) << 1; j <= MAX/2; j = j + 2*i + 1)
            marked[j] = true;

    // Since 2 is a prime number
    primes.push_back(2);

    // Print other primes. Remaining primes are of the
    // form 2*i + 1 such that marked[i] is false.
    for (int i = 1; i <= MAX/2; i++)
        if (marked[i] == false)
            primes.push_back(2*i + 1);
}

// function find the prime number which can be written
// as the sum of the most consecutive primes
int LSCPUtil(int limit, vector<int> &prime, long long int sum_prime[])
{
    // To store maximum length of consecutive primes that can
    // sum to a limit
    int max_length = -1;

    // The prime number (or result) that can be represented as
    // sum of maximum number of primes.
    int prime_number = -1;

    // Consider all lengths of consecutive primes below limit.
    for (int i = 0; prime[i] <= limit; i++)
    {
        for (int j = 0; j < i; j++)
        {
            // if we cross the limit, then break the loop
            if (sum_prime[i] - sum_prime[j] > limit)
                break;

            // sum_prime[i]-sum_prime[j] is prime number or not
            long long int consSum = sum_prime[i] - sum_prime[j];

            // Check if sum of current length of consecutives is
            // prime or not.
            if (binary_search(prime.begin(), prime.end(), consSum))
            {
                // update the length and prime number
                if (max_length < i - j + 1)
                {
                    max_length = i - j + 1;
                    prime_number = consSum;
                }
            }
        }
    }
    return prime_number;
}

// Returns the prime number that can be written as sum
// of longest chain of consecutive primes.
void LSCP(int arr[], int n)
{
    // Store prime numbers in a vector
    vector<int> primes;
    sieveSundaram(primes);

    long long int sum_prime[primes.size() + 1];

    // Calculate sum of prime numbers and store them
    // in sum_prime array. sum_prime[i] stores sum of
    // prime numbers from primes[0] to primes[i-1]
    sum_prime[0] = 0;
    for (int i = 1; i <= primes.size(); i++)
        sum_prime[i] = primes[i-1] + sum_prime[i-1];

    // Process all queries one by one
    for (int i = 0; i < n; i++)
        cout << LSCPUtil(arr[i], primes, sum_prime) << " ";
}

// Driver program
int main()
{
    int arr[] = {10, 30, 40, 50, 1000};
    int n = sizeof(arr)/sizeof(arr[0]);
    LSCP(arr, n);
    return 0;
}

Java

// Java program to find longest sum
// of consecutive primes
import java.util.*;

class GFG{

static int MAX = 10000;

// Store prime number in vector
static ArrayList<Object> primes = new ArrayList<Object>();

// Utility function for sieve of sundaram
static void sieveSundaram()
{
    // In general Sieve of Sundaram
    // produces primes smaller than
    // (2*x + 2) for a given
    // number x. Since we want primes
    // smaller than MAX, we reduce MAX
    // to half. This array is used to
    // separate numbers of the form
    // i+j+2ij from others where 1 <= i <= j
    boolean []marked = new boolean[MAX / 2 + 1];
    Arrays.fill(marked, false);

    // Main logic of Sundaram. Mark
    // all numbers which do not
    // generate prime number by
    // doing 2*i+1
    for(int i = 1; i <= (Math.sqrt(MAX) - 1) / 2; i++)
        for(int j = (i * (i + 1)) << 1; j <= MAX / 2; j = j + 2 * i + 1)
            marked[j] = true;

    // Since 2 is a prime number
    primes.add(2);

    // Print other primes. Remaining
    // primes are of the form 2*i + 1
    // such that marked[i] is false.
    for(int i = 1; i <= MAX / 2; i++)
        if (marked[i] == false)
            primes.add(2 * i + 1);
}

// Function find the prime number
// which can be written as the
// sum of the most consecutive primes
static int LSCPUtil(int limit, long []sum_prime)
{
    // To store maximum length of
    // consecutive primes that can
    // sum to a limit
    int max_length = -1;

    // The prime number (or result)
    // that can be represented as
    // sum of maximum number of primes.
    int prime_number = -1;

    // Consider all lengths of
    // consecutive primes below limit.
    for(int i = 0; (int)primes.get(i) <= limit; i++)
    {
        for(int j = 0; j < i; j++)
        {
            // If we cross the limit, then
            // break the loop
            if (sum_prime[i] - sum_prime[j] > limit)
                break;

            // sum_prime[i]-sum_prime[j] is
            // prime number or not
            long consSum = sum_prime[i] - sum_prime[j];

            Object[] prime = primes.toArray();

            // Check if sum of current length
            // of consecutives is prime or not.
            if (Arrays.binarySearch(prime, (int)consSum) >= 0)
            {
                // Update the length and prime number
                if (max_length < i - j + 1)
                {
                    max_length = i - j + 1;
                    prime_number = (int)consSum;
                }
            }
        }
    }
    return prime_number;
}

// Returns the prime number that
// can be written as sum of longest
// chain of consecutive primes.
static void LSCP(int []arr, int n)
{
    sieveSundaram();

    long []sum_prime = new long[primes.size() + 1];

    // Calculate sum of prime numbers
    // and store them in sum_prime
    // array. sum_prime[i] stores sum
    // of prime numbers from
    // primes[0] to primes[i-1]
    sum_prime[0] = 0;
    for(int i = 1; i <= primes.size(); i++)
        sum_prime[i] = (int)primes.get(i - 1) + sum_prime[i - 1];

    // Process all queries one by one
    for(int i = 0; i < n; i++)
        System.out.print(LSCPUtil(arr[i], sum_prime) + " ");
}

// Driver code
public static void main(String []arg)
{
    int []arr = { 10, 30, 40, 50, 1000 };
    int n = arr.length;

    LSCP(arr, n);
}
}

// This code is contributed by pratham76

C#

// C# program to find longest sum
// of consecutive primes
using System;
using System.Collections;

class GFG{

static int MAX = 10000;

// Store prime number in vector
static ArrayList primes = new ArrayList();

// Utility function for sieve of sundaram
static void sieveSundaram()
{
    // In general Sieve of Sundaram
    // produces primes smaller than
    // (2*x + 2) for a given
    // number x. Since we want primes
    // smaller than MAX, we reduce MAX
    // to half. This array is used to
    // separate numbers of the form
    // i+j+2ij from others where 1 <= i <= j
    bool []marked = new bool[MAX / 2 + 1];
    Array.Fill(marked, false);

    // Main logic of Sundaram. Mark
    // all numbers which do not
    // generate prime number by
    // doing 2*i+1
    for(int i = 1; i <= (Math.Sqrt(MAX) - 1) / 2; i++)
        for(int j = (i * (i + 1)) << 1; j <= MAX / 2; j = j + 2 * i + 1)
            marked[j] = true;

    // Since 2 is a prime number
    primes.Add(2);

    // Print other primes. Remaining
    // primes are of the form
    // 2*i + 1 such that marked[i] is false.
    for(int i = 1; i <= MAX / 2; i++)
        if (marked[i] == false)
            primes.Add(2 * i + 1);
}

// Function find the prime number
// which can be written as the
// sum of the most consecutive primes
static int LSCPUtil(int limit, long []sum_prime)
{
    // To store maximum length of
    // consecutive primes that can
    // sum to a limit
    int max_length = -1;

    // The prime number (or result)
    // that can be represented as
    // sum of maximum number of primes.
    int prime_number = -1;

    // Consider all lengths of
    // consecutive primes below limit.
    for(int i = 0; (int)primes[i] <= limit; i++)
    {
        for(int j = 0; j < i; j++)
        {
            // If we cross the limit, then
            // break the loop
            if (sum_prime[i] - sum_prime[j] > limit)
                break;

            // sum_prime[i]-sum_prime[j] is
            // prime number or not
            long consSum = sum_prime[i] - sum_prime[j];

            int[] prime = (int[])primes.ToArray(typeof(int));

            // Check if sum of current length
            // of consecutives is prime or not.
            if (Array.BinarySearch(prime, (int)consSum) >= 0)
            {
                // Update the length and prime number
                if (max_length < i - j + 1)
                {
                    max_length = i - j + 1;
                    prime_number = (int)consSum;
                }
            }
        }
    }
    return prime_number;
}

// Returns the prime number that
// can be written as sum of longest
// chain of consecutive primes.
static void LSCP(int []arr, int n)
{
    sieveSundaram();

    long []sum_prime = new long[primes.Count + 1];

    // Calculate sum of prime numbers
    // and store them in sum_prime
    // array. sum_prime[i] stores sum
    // of prime numbers from
    // primes[0] to primes[i-1]
    sum_prime[0] = 0;
    for(int i = 1; i <= primes.Count; i++)
        sum_prime[i] = (int)primes[i - 1] + sum_prime[i - 1];

    // Process all queries one by one
    for(int i = 0; i < n; i++)
        Console.Write(LSCPUtil(arr[i], sum_prime) + " ");
}

// Driver code
public static void Main(string []arg)
{
    int []arr = { 10, 30, 40, 50, 1000 };
    int n = arr.Length;

    LSCP(arr, n);
}
}

// This code is contributed by rutvik_56

Output:

5 17 17 41 953

This article is contributed by Nishant_singh (pintu).
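For readers who want a compact cross-check of the prefix-sum idea described above, here is a short Python sketch. This is an editorial illustration rather than part of the original article, and the helper names (primes_upto, longest_consecutive_prime_sum) are made up for the example.

# Illustrative Python sketch of the same prefix-sum + membership-check idea.
# Helper names are illustrative, not from the original article.
def primes_upto(limit):
    # Simple Sieve of Eratosthenes, adequate for limits up to 10^4.
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def longest_consecutive_prime_sum(limit):
    primes = primes_upto(limit)
    prime_set = set(primes)
    prefix = [0]
    for p in primes:
        prefix.append(prefix[-1] + p)
    best_len, best_prime = -1, -1
    for i in range(len(prefix)):
        for j in range(i):
            s = prefix[i] - prefix[j]   # sum of primes[j..i-1]
            if s <= limit and s in prime_set and i - j > best_len:
                best_len, best_prime = i - j, s
    return best_prime

print([longest_consecutive_prime_sum(x) for x in (10, 30, 40, 50, 1000)])
# Expected output, matching the article: [5, 17, 17, 41, 953]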
apply() vs map() vs applymap() in Pandas | Towards Data Science
Usually, we need to apply certain functions over DataFrame columns or rows in order to either update values or even create new columns. The most commonly used operations for doing so in pandas are the apply, map and applymap methods.

In today's guide we are going to explore all three methods and understand how each of them works. Additionally, we will discuss when to use one over the other.

First, let's create an example DataFrame that we will use throughout this article in order to demonstrate a few concepts that will help us highlight the difference between apply(), map() and applymap().

import pandas as pd

df = pd.DataFrame(
    [
        (1, 521, True, 10.1, 'Hello'),
        (2, 723, False, 54.2, 'Hey'),
        (3, 123, False, 33.2, 'Howdy'),
        (4, 641, True, 48.6, 'Hi'),
        (5, 467, False, 98.1, 'Hey'),
    ],
    columns=['colA', 'colB', 'colC', 'colD', 'colE']
)
print(df)

   colA  colB   colC  colD   colE
0     1   521   True  10.1  Hello
1     2   723  False  54.2    Hey
2     3   123  False  33.2  Howdy
3     4   641   True  48.6     Hi
4     5   467  False  98.1    Hey

pandas.DataFrame.apply method is used to apply a function along the specified axis of the pandas DataFrame. The apply() method operates on entire rows or columns at a time and is mostly suitable when it comes to applying functions that cannot be vectorised. Note that the input to the method must be a callable.

import numpy as np

df[['colA', 'colB']] = df[['colA', 'colB']].apply(np.sqrt)
print(df)

       colA       colB   colC  colD   colE
0  1.000000  22.825424   True  10.1  Hello
1  1.414214  26.888659  False  54.2    Hey
2  1.732051  11.090537  False  33.2  Howdy
3  2.000000  25.317978   True  48.6     Hi
4  2.236068  21.610183  False  98.1    Hey

Additionally, the method can also be applied over pandas Series (see pandas.Series.apply).

pandas.Series.map method can be applied only over pandas Series objects and is used to map the values of the Series based on the input, which is used to substitute each value with the specified value that is derived from a dictionary, a function or even another Series object. Note that the method operates over one element at a time and missing values will be denoted as NaN in the output.

df['colE'] = df['colE'].map({'Hello': 'Good Bye', 'Hey': 'Bye'})
print(df)

   colA  colB   colC  colD      colE
0     1   521   True  10.1  Good Bye
1     2   723  False  54.2       Bye
2     3   123  False  33.2       NaN
3     4   641   True  48.6       NaN
4     5   467  False  98.1       Bye

Lastly, pandas.DataFrame.applymap method can only be applied over pandas DataFrame objects and is used to apply a specified function elementwise. The method accepts only callables and is mostly suitable when it comes to transforming values in multiple rows or columns.

df[['colA', 'colD']] = df[['colA', 'colD']].applymap(lambda x: x**2)
print(df)

   colA  colB   colC     colD   colE
0     1   521   True   102.01  Hello
1     4   723  False  2937.64    Hey
2     9   123  False  1102.24  Howdy
3    16   641   True  2361.96     Hi
4    25   467  False  9623.61    Hey

In today's short guide we discussed how apply(), map() and applymap() methods work in pandas. Additionally, we showcased how to use each of these methods and explored their main differences.

Sometimes, it may not be clear whether you need to use the apply() or the applymap() method and therefore, the best way forward is to simply test both methods and pick the one which is more efficient/faster.
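Since the closing advice is to time both methods on your own data, a minimal benchmarking sketch along the lines below may help. This is an editorial addition rather than part of the original guide; the DataFrame size is arbitrary and the timings will vary by machine, data and pandas version.

# Minimal timing sketch: compare applymap() and apply() on the same
# element-wise transformation before deciding which one to use.
import time

import numpy as np
import pandas as pd

big = pd.DataFrame(np.random.rand(100_000, 4), columns=list('ABCD'))

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f'{label}: {time.perf_counter() - start:.4f} s')

timed('applymap (element-wise)', lambda: big.applymap(lambda x: x ** 2))
timed('apply (column-wise)', lambda: big.apply(lambda col: col ** 2))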
String Data Structure - GeeksforGeeks
16 Jul, 2021

Strings are defined as an array of characters. The difference between a character array and a string is that the string is terminated with a special character '\0'. Declaring a string is as simple as declaring a one-dimensional array. Below is the basic syntax for declaring a string in the C programming language.

char str_name[size];

Topics :
Basics
String in C & C++
Strings in Java
String in Python
Arithmetic Operation in String
Character Counting Based Problems
Subsequence & Substring
Reverse & Rotation
Sorting & Searching
Case Sensitive String
Occurrence Based String
Spacing
Anagram
Palindrome
Binary String
Lexicographic pattern
Pattern Searching
Split String
Balance Parentheses & Bracket Evaluation
Conversion
Misc

Quick Links

Basics :
Function to copy string
Pangram Checking
Missing characters to make a string Pangram
Check if a string is Pangrammatic Lipogram
Removing punctuations from a given string
Rearrange characters in a string such that no two adjacent are same
Program to check if input is an integer or a string
Quick way to check if all the characters of a string are same
Program to find the initials of a name
Check Whether a number is Duck Number or not
Round the given number to nearest multiple of 10
Change string to a new character set
Find one extra character in a string

Strings in C & C++ :
Array of Strings in C++ (3 Different Ways to Create)
Strings in C
Storage for Strings in C
sprintf() in C
C program to find second most frequent character
C Program to Sort an array of names or strings
C++ Program to remove spaces from a string
String Class in C++
C++ program to concatenate a string given number of times
std::string::append vs std::string::push_back() vs Operator += in C++
Comparing two strings in C++
Convert string to char array in C++
Extract all integers from string in C++
std::regex_match, std::regex_replace() | Regex (Regular Expression) In C++
C program to Replace a word in a text by another given word
stringstream in C++ and its applications
C++ string class and its applications

Strings in Java :
String Class in Java
String in Switch Case in Java
Java program to swap first and last characters of words in a sentence
Java program to expand a String if range is given?
Check if a given string is a valid number (Integer or Floating Point) in Java |
SET 2 (Regular Expression approach)Get the first letter of each word in a string using regex in JavaReverse words in a given String in JavaReverse a string in Java (5 Different Ways)Compare two strings lexicographically in JavaSearching characters and substring in a String in JavaPossible Words using given characters in PythonUsing Set() in Python Pangram CheckingUsing OrderedDict() in Python to check order of characters in stringPrint anagrams together in Python using List and DictionaryK’th Non-repeating Character in Python using List Comprehension and OrderedDictPrefix matching in Python using pytrie modulePrint number with commas as 1000 separators in PythonPattern Occurrences : Stack Implementation Java String Class in Java String in Switch Case in Java Java program to swap first and last characters of words in a sentence Java program to expand a String if range is given? Check if a given string is a valid number (Integer or Floating Point) in Java | SET 2 (Regular Expression approach) Get the first letter of each word in a string using regex in Java Reverse words in a given String in Java Reverse a string in Java (5 Different Ways) Compare two strings lexicographically in Java Searching characters and substring in a String in Java Possible Words using given characters in Python Using Set() in Python Pangram Checking Using OrderedDict() in Python to check order of characters in string Print anagrams together in Python using List and Dictionary K’th Non-repeating Character in Python using List Comprehension and OrderedDict Prefix matching in Python using pytrie module Print number with commas as 1000 separators in Python Pattern Occurrences : Stack Implementation Java Strings in Python : String Methods in Python : Set 1 , Set 2 , Set 3Dictionary and counter in Python to find winner of electionMaximum length of consecutive 1’s in a binary string in Python using Map functionPython code to print common characters of two Strings in alphabetical orderUsing Counter() in Python to find minimum character removal to make two strings anagramReverse string in PythonPython groupby method to remove all consecutive duplicatesGenerate two output strings depending upon occurrence of character in input string in PythonPython Dictionary to find mirror characters in a stringPython | Convert a list of characters into a stringMap function and Lambda expression in Python to replace charactersZip function in Python to change to a new character setSequenceMatcher in Python for Longest Common SubstringPython | Print the initials of a name with last name in fullPython counter and dictionary intersection example (Make a string using deletion and rearrangement)Python program to count number of vowels using sets in given stringPython set to check if string is panagramPython | Check if a Substring is Present in a Given StringPython sorted() to check if two strings are anagram or notPython | Remove leading zeros from an IP addressPython | Count all prefixes in given string with greatest frequencyCheck if both halves of the string have same set of characters in PythonConcatenated string with uncommon characters in PythonSecond most repeated word in a sequence in PythonRegex in Python to put spaces between words starting with capital lettersPython code to move spaces to front of string in single traversalString slicing in Python to rotate a stringString slicing in Python to check if a string can become empty by recursive deletionReverse words in a given String in PythonRun Length Encoding in PythonAnagram checking in 
Python using collections.Counter()Remove all duplicates from a given string in PythonRemove all consecutive duplicates from the stringPython program to check if a string is palindrome or not String Methods in Python : Set 1 , Set 2 , Set 3 Dictionary and counter in Python to find winner of election Maximum length of consecutive 1’s in a binary string in Python using Map function Python code to print common characters of two Strings in alphabetical order Using Counter() in Python to find minimum character removal to make two strings anagram Reverse string in Python Python groupby method to remove all consecutive duplicates Generate two output strings depending upon occurrence of character in input string in Python Python Dictionary to find mirror characters in a string Python | Convert a list of characters into a string Map function and Lambda expression in Python to replace characters Zip function in Python to change to a new character set SequenceMatcher in Python for Longest Common Substring Python | Print the initials of a name with last name in full Python counter and dictionary intersection example (Make a string using deletion and rearrangement) Python program to count number of vowels using sets in given string Python set to check if string is panagram Python | Check if a Substring is Present in a Given String Python sorted() to check if two strings are anagram or not Python | Remove leading zeros from an IP address Python | Count all prefixes in given string with greatest frequency Check if both halves of the string have same set of characters in Python Concatenated string with uncommon characters in Python Second most repeated word in a sequence in Python Regex in Python to put spaces between words starting with capital letters Python code to move spaces to front of string in single traversal String slicing in Python to rotate a string String slicing in Python to check if a string can become empty by recursive deletion Reverse words in a given String in Python Run Length Encoding in Python Anagram checking in Python using collections.Counter() Remove all duplicates from a given string in Python Remove all consecutive duplicates from the string Python program to check if a string is palindrome or not Arthimetic Operation in String : Smallest number with sum of digits as N and divisible by 10^NMinimum sum of squares of character counts in a given string after removing k charactersMaximum and minimum sums from two numbers with digit replacementsCheck if a given string is sum-stringSum of two large numbersCalculate sum of all numbers present in a stringExtract maximum numeric value from a given stringCalculate maximum value using ‘+’ or ‘*’ sign between two numbers in a stringMaximum segment value after putting k breakpoints in a numberDifference of two large numbersCheck if a large number is divisible by 4 or notCheck if a large number is divisible by 11 or notNumber of substrings divisible by 6 in a string of integersDecimal representation of given binary string is divisible by 5 or notNumber of substrings divisible by 8 but not by 3To check divisibility of any large number by 999Multiply Large Numbers represented as StringsDivide large number represented as stringRemainder with 7 for large numbersGiven two numbers as strings, find if one is a power of otherCheck whether a given number is even or oddProduct of nodes at k-th level in a tree represented as stringProgram to find remainder when large number is divided by 11Ways to remove one element from a binary string so that XOR 
becomes zeroFind the maximum subarray XOR in a given arrayCalculate the difficulty of a sentenceMinimum Index Sum for Common Elements of Two Lists Smallest number with sum of digits as N and divisible by 10^N Minimum sum of squares of character counts in a given string after removing k characters Maximum and minimum sums from two numbers with digit replacements Check if a given string is sum-string Sum of two large numbers Calculate sum of all numbers present in a string Extract maximum numeric value from a given string Calculate maximum value using ‘+’ or ‘*’ sign between two numbers in a string Maximum segment value after putting k breakpoints in a number Difference of two large numbers Check if a large number is divisible by 4 or not Check if a large number is divisible by 11 or not Number of substrings divisible by 6 in a string of integers Decimal representation of given binary string is divisible by 5 or not Number of substrings divisible by 8 but not by 3 To check divisibility of any large number by 999 Multiply Large Numbers represented as Strings Divide large number represented as string Remainder with 7 for large numbers Given two numbers as strings, find if one is a power of other Check whether a given number is even or odd Product of nodes at k-th level in a tree represented as string Program to find remainder when large number is divided by 11 Ways to remove one element from a binary string so that XOR becomes zero Find the maximum subarray XOR in a given array Calculate the difficulty of a sentence Minimum Index Sum for Common Elements of Two Lists Character Counting Based Problems : Count Uppercase, Lowercase, special character and numeric valuesFind the smallest window in a string containing all characters of another stringSmallest window that contains all characters of string itselfCount number of substrings with exactly k distinct charactersNumber of substrings with count of each character as kString with k distinct characters and no same characters adjacentNumber of substrings of a stringDistinct strings with odd and even changes allowedFind k’th character of decrypted stringCount characters at same position as in English alphabetsCount words in a given stringCount words present in a stringCount of words whose i-th letter is either (i-1)-th, i-th, or (i+1)-th letter of given wordProgram to find Smallest and Largest Word in a StringCount substrings with same first and last charactersRecursive solution to count substrings with same first and last charactersCount of distinct substrings of a string using Suffix ArrayCount of distinct substrings of a string using Suffix TrieCount number of strings (made of R, G and B) using given combinationCount of strings that can be formed using a, b and c under given constraintsCount of substrings of a binary string containing K onesGroup words with same set of charactersPrint all distinct characters of a string in order (3 Methods)Print common characters of two Strings in alphabetical orderCommon characters in n stringsFind uncommon characters of the two stringsConcatenated string with uncommon characters of two stringsProgram to remove vowels from a StringRemove consecutive vowels from stringProgram to count vowels in a string (Iterative and Recursive)Count consonants in a string (Iterative and recursive methods)Alternate vowel and consonant stringGiven a binary string, count number of substrings that start and end with 1Number of distinct permutation a String can haveTime complexity of all permutations of a stringPermutations of a 
given string using STLCheck if both halves of the string have same set of charactersCount words that appear exactly two times in an array of wordsCheck if frequency of all characters can become same by one removalCheck if a string has all characters with same frequency with one variation allowedCount ways to increase LCS length of two strings by oneFind the character in first string that is present at minimum index in second stringRemove characters from the first string which are present in the second stringLength of Longest sub-string that can be removedCount of character pairs at same distance as in English alphabetsCount number of equal pairs in a stringCount of strings where adjacent characters are of difference onePrint number of words, vowels and frequency of each character Count Uppercase, Lowercase, special character and numeric values Find the smallest window in a string containing all characters of another string Smallest window that contains all characters of string itself Count number of substrings with exactly k distinct characters Number of substrings with count of each character as k String with k distinct characters and no same characters adjacent Number of substrings of a string Distinct strings with odd and even changes allowed Find k’th character of decrypted string Count characters at same position as in English alphabets Count words in a given string Count words present in a string Count of words whose i-th letter is either (i-1)-th, i-th, or (i+1)-th letter of given word Program to find Smallest and Largest Word in a String Count substrings with same first and last characters Recursive solution to count substrings with same first and last characters Count of distinct substrings of a string using Suffix Array Count of distinct substrings of a string using Suffix Trie Count number of strings (made of R, G and B) using given combination Count of strings that can be formed using a, b and c under given constraints Count of substrings of a binary string containing K ones Group words with same set of characters Print all distinct characters of a string in order (3 Methods) Print common characters of two Strings in alphabetical order Common characters in n strings Find uncommon characters of the two strings Concatenated string with uncommon characters of two strings Program to remove vowels from a String Remove consecutive vowels from string Program to count vowels in a string (Iterative and Recursive) Count consonants in a string (Iterative and recursive methods) Alternate vowel and consonant string Given a binary string, count number of substrings that start and end with 1 Number of distinct permutation a String can have Time complexity of all permutations of a string Permutations of a given string using STL Check if both halves of the string have same set of characters Count words that appear exactly two times in an array of words Check if frequency of all characters can become same by one removal Check if a string has all characters with same frequency with one variation allowed Count ways to increase LCS length of two strings by one Find the character in first string that is present at minimum index in second string Remove characters from the first string which are present in the second string Length of Longest sub-string that can be removed Count of character pairs at same distance as in English alphabets Count number of equal pairs in a string Count of strings where adjacent characters are of difference one Print number of words, vowels and frequency of each character 
Subsequence & Substring : Longest subsequence where every character appears at-least k timesGiven two strings, find if first string is a subsequence of secondNumber of subsequences of the form a^i b^j c^kNumber of subsequences in a string divisible by nFind number of times a string occurs as a subsequence in given stringNumber of subsequences as “ab” in a string repeated K timesCount of ‘GFG’ Subsequences in the given stringCount Distinct SubsequencesCount distinct occurrences as a subsequenceLongest common subsequence with permutations allowedRepeated subsequence of length 2 or morePrint all longest common sub-sequences in lexicographical orderPrinting Longest Common Subsequence | Set 2Given number as string, find number of contiguous subsequences which recursively add up to 9 | Set 2Shortest Uncommon SubsequenceShortest Superstring ProblemPrinting Shortest Common SupersequenceShortest possible combination of two stringsA Space Optimized Solution of LCSSort a string according to the order defined by another stringShortest Common SupersequenceLongest Repeating SubsequenceFind largest word in dictionary by deleting some characters of given stringDynamic Programming | Set 12 (Longest Palindromic Subsequence) Longest subsequence where every character appears at-least k times Given two strings, find if first string is a subsequence of second Number of subsequences of the form a^i b^j c^k Number of subsequences in a string divisible by n Find number of times a string occurs as a subsequence in given string Number of subsequences as “ab” in a string repeated K times Count of ‘GFG’ Subsequences in the given string Count Distinct Subsequences Count distinct occurrences as a subsequence Longest common subsequence with permutations allowed Repeated subsequence of length 2 or more Print all longest common sub-sequences in lexicographical order Printing Longest Common Subsequence | Set 2 Given number as string, find number of contiguous subsequences which recursively add up to 9 | Set 2 Shortest Uncommon Subsequence Shortest Superstring Problem Printing Shortest Common Supersequence Shortest possible combination of two strings A Space Optimized Solution of LCS Sort a string according to the order defined by another string Shortest Common Supersequence Longest Repeating Subsequence Find largest word in dictionary by deleting some characters of given string Dynamic Programming | Set 12 (Longest Palindromic Subsequence) More >> Reverse & Rotation : Perfect reversible stringReversing an EquationLeft Rotation and Right Rotation of a StringGenerate all rotations of a given stringMinimum rotations required to get the same stringCheck if strings are rotations of each other or notCheck if a string can be obtained by rotating another string 2 placesCount rotations divisible by 4Check if all rows of a matrix are circular rotations of each otherPrint reverse of a string using recursionPrint words of a string in reverse orderProgram to reverse a string (Iterative and Recursive)Write a program to reverse an array or stringReverse an array without affecting special charactersReverse words in a given stringReverse individual wordsReverse a string preserving space positionsReverse string without using any temporary variablePrint reverse string after removing vowelsReverse vowels in a given stringReverse String according to the number of wordsReverse each word in a linked list nodeFind if an array of strings can be chained to form a circle Perfect reversible string Reversing an Equation Left Rotation and Right Rotation 
of a String Generate all rotations of a given string Minimum rotations required to get the same string Check if strings are rotations of each other or not Check if a string can be obtained by rotating another string 2 places Count rotations divisible by 4 Check if all rows of a matrix are circular rotations of each other Print reverse of a string using recursion Print words of a string in reverse order Program to reverse a string (Iterative and Recursive) Write a program to reverse an array or string Reverse an array without affecting special characters Reverse words in a given string Reverse individual words Reverse a string preserving space positions Reverse string without using any temporary variable Print reverse string after removing vowels Reverse vowels in a given string Reverse String according to the number of words Reverse each word in a linked list node Find if an array of strings can be chained to form a circle Sorting & Searching : Sort an array of strings according to string lengthsSorting array of strings (or words) using TrieSort string of charactersAlternate Lower Upper String SortProgram to sort string in descending orderPrint array of strings in sorted order without copying one string into anotherSort the given string using character searchGiven a sorted dictionary of an alien language, find order of charactersRearrange a string in sorted order followed by the integer sumPrint distinct sorted permutations with duplicates allowed in inputMinimum cost to sort strings using reversal operations of different costsPrint number in ascending order which contains 1, 2 and 3 in their digits.Search in an array of strings where non-empty strings are sortedSparse Search Sort an array of strings according to string lengths Sorting array of strings (or words) using Trie Sort string of characters Alternate Lower Upper String Sort Program to sort string in descending order Print array of strings in sorted order without copying one string into another Sort the given string using character search Given a sorted dictionary of an alien language, find order of characters Rearrange a string in sorted order followed by the integer sum Print distinct sorted permutations with duplicates allowed in input Minimum cost to sort strings using reversal operations of different costs Print number in ascending order which contains 1, 2 and 3 in their digits. Search in an array of strings where non-empty strings are sorted Sparse Search Case Sensitive String : Lower case to upper case – An interesting factisupper() and islower() and their application in C++Case conversion (Lower to Upper and Vice Versa) of a string using BitWise operators in C/C++Maximum distinct lowercase alphabets between two uppercaseFirst uppercase letter in a string (Iterative and Recursive)Convert characters of a string to opposite casegOOGLE cASE of a given sentencePrint all words matching a pattern in CamelCase Notation DictonaryCamel case of a given sentencePermute a string by changing caseToggle case of a string using Bitwise OperatorsHow to design a tiny URL or URL shortener? 
Lower case to upper case – An interesting fact isupper() and islower() and their application in C++ Case conversion (Lower to Upper and Vice Versa) of a string using BitWise operators in C/C++ Maximum distinct lowercase alphabets between two uppercase First uppercase letter in a string (Iterative and Recursive) Convert characters of a string to opposite case gOOGLE cASE of a given sentence Print all words matching a pattern in CamelCase Notation Dictonary Camel case of a given sentence Permute a string by changing case Toggle case of a string using Bitwise Operators How to design a tiny URL or URL shortener? Occurrence Based String : Given a string, find its first non-repeating characterPrint all permutations with repetition of charactersFind the first non-repeating character from a stream of charactersConvert to a string that is repetition of a substring of k lengthSmallest length string with repeated replacement of two distinct adjacentDistributing all balls without repetitionMaximum consecutive repeating character in stringMinimum number of deletions so that no two consecutive are sameK’th Non-repeating CharacterFind repeated character present first in a stringFind the first repeated word in a stringFind the first repeated character in a stringSecond most repeated word in a sequenceMost frequent word in an array of stringsEfficiently find first repeated character in a string without using any additional data structure in one traversalQueries for characters in a repeated stringReturn maximum occurring character in an input stringGenerate two output strings depending upon occurrence of character in input string.Print characters and their frequencies in order of occurrenceProgram to count occurrence of a given character in a stringCheck if all occurrences of a character appear togetherGroup all occurrences of characters according to first appearancePrint the string by ignoring alternate occurrences of any characterPrint the string after the specified character has occurred given no. of timesFind all occurrences of a given word in a matrixReplace all occurrences of string AB with C without using extra spaceRearrange a binary string as alternate x and y occurrencesRemove recurring digits in a given numberFind the most frequent digit without using array/string Given a string, find its first non-repeating character Print all permutations with repetition of characters Find the first non-repeating character from a stream of characters Convert to a string that is repetition of a substring of k length Smallest length string with repeated replacement of two distinct adjacent Distributing all balls without repetition Maximum consecutive repeating character in string Minimum number of deletions so that no two consecutive are same K’th Non-repeating Character Find repeated character present first in a string Find the first repeated word in a string Find the first repeated character in a string Second most repeated word in a sequence Most frequent word in an array of strings Efficiently find first repeated character in a string without using any additional data structure in one traversal Queries for characters in a repeated string Return maximum occurring character in an input string Generate two output strings depending upon occurrence of character in input string. 
Print characters and their frequencies in order of occurrence Program to count occurrence of a given character in a string Check if all occurrences of a character appear together Group all occurrences of characters according to first appearance Print the string by ignoring alternate occurrences of any character Print the string after the specified character has occurred given no. of times Find all occurrences of a given word in a matrix Replace all occurrences of string AB with C without using extra space Rearrange a binary string as alternate x and y occurrences Remove recurring digits in a given number Find the most frequent digit without using array/string Spacing : Remove spaces from a given stringMove spaces to front of string in single traversalPut spaces between words starting with capital lettersRemoving spaces from a string using StringstreamRemove extra spaces from a stringURLify a given string (Replace spaces is %20)String containing first letter of every word in a given string with spacesPrint all possible strings that can be made by placing spacesPrint all possible strings that can be made by placing spaces Remove spaces from a given string Move spaces to front of string in single traversal Put spaces between words starting with capital letters Removing spaces from a string using Stringstream Remove extra spaces from a string URLify a given string (Replace spaces is %20) String containing first letter of every word in a given string with spaces Print all possible strings that can be made by placing spaces Print all possible strings that can be made by placing spaces Anagram : Check whether two strings are anagram of each otherGiven a sequence of words, print all anagrams together | Set 2Anagram Substring SearchPrint all pairs of anagrams in a given array of stringsRemove minimum number of characters so that two strings become anagramCheck if two strings are k-anagrams or notCheck if binary representations of two numbers are anagramGiven a sequence of words, print all anagrams together using STLCheck if all levels of two trees are anagrams or notCount of total anagram substringsMinimum Number of Manipulations required to make two Strings Anagram Without Deletion of Character Check whether two strings are anagram of each other Given a sequence of words, print all anagrams together | Set 2 Anagram Substring Search Print all pairs of anagrams in a given array of strings Remove minimum number of characters so that two strings become anagram Check if two strings are k-anagrams or not Check if binary representations of two numbers are anagram Given a sequence of words, print all anagrams together using STL Check if all levels of two trees are anagrams or not Count of total anagram substrings Minimum Number of Manipulations required to make two Strings Anagram Without Deletion of Character More >> Palindrome : C Program to Check if a Given String is PalindromeCheck if a given string is a rotation of a palindromeC++ Program to print all palindromes in a given rangeCheck if characters of a given string can be rearranged to form a palindromeDynamic Programming | Set 28 (Minimum insertions to form a palindrome)Longest Palindromic Substring | Set 2Find all palindromic sub-strings of a given stringOnline algorithm for checking palindrome in a streamGiven a string, print all possible palindromic partitionsPrint all palindromic partitions of a stringDynamic Programming | Set 17 (Palindrome Partitioning)Count All Palindromic Subsequence in a given StringMinimum characters to be added at front 
Palindrome :
C Program to Check if a Given String is Palindrome
Check if a given string is a rotation of a palindrome
C++ Program to print all palindromes in a given range
Check if characters of a given string can be rearranged to form a palindrome
Dynamic Programming | Set 28 (Minimum insertions to form a palindrome)
Longest Palindromic Substring | Set 2
Find all palindromic sub-strings of a given string
Online algorithm for checking palindrome in a stream
Given a string, print all possible palindromic partitions
Print all palindromic partitions of a string
Dynamic Programming | Set 17 (Palindrome Partitioning)
Count All Palindromic Subsequence in a given String
Minimum characters to be added at front to make string palindrome
Palindrome Substring Queries
Suffix Tree Application 6 – Longest Palindromic Substring
Palindrome pair in an array of words (or strings)
Make largest palindrome by changing at most K-digits
Lexicographically first palindromic string
Recursive function to check if a string is palindrome
Minimum number of Appends needed to make a string palindrome
Longest Non-palindromic substring
Minimum number of deletions to make a string palindrome
Minimum steps to delete a string after repeated deletion of palindrome substrings
Count of Palindromic substrings in an Index range
Minimum insertions to form a palindrome with permutations allowed
Nth Even length Palindrome
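The most basic of the palindrome problems above is the plain check itself. A minimal two-pointer sketch in Python (illustrative; it compares the string as-is, without ignoring case or punctuation):

def is_palindrome(s):
    # Compare characters from both ends, moving inwards.
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True

print(is_palindrome("madam"))  # True
print(is_palindrome("geeks"))  # False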
Binary String :
Count of operations to make a binary string “ab” free
Check if all bits can be made same by single flip
Length of Longest sub-string that can be removed
Number of flips to make binary string alternate
1’s and 2’s complement of a Binary Number
Efficient method for 2’s complement of a binary string
Count binary strings with k times appearing adjacent two set bits
Count strings with consecutive 1’s
Generate all binary strings from given pattern
Add two bit strings
Count number of binary strings without consecutive 1’s
Generate all binary permutations such that there are more or equal 1’s than 0’s before every point in all permutations
Check if a string follows a^nb^n pattern or not
Binary representation of next number
Binary representation of next greater number with same number of 1’s and 0’s
Maximum difference of zeros and ones in binary string
Check if a binary string has a 0 between 1s or not | Set 2
Min flips of continuous characters to make all characters same in a string
Concatenation of two strings in PHP
Program to add two binary strings
Convert String into Binary Sequence
Generate all binary strings without consecutive 1’s
Minimum number of characters to be removed to make a binary string alternate
Check divisibility of binary string by 2^k
Removing elements between the two zeros
Find i’th Index character in a binary string obtained after n iterations
Number of substrings with odd decimal value in a binary string
Generate n-bit Gray Codes
Print N-bit binary numbers having more 1’s than 0’s in all prefixes
Add n binary strings
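As one small, concrete example from the binary string list above, two binary strings can be added bit by bit from the right while tracking a carry. A minimal Python sketch (the function name add_binary is illustrative):

def add_binary(a, b):
    # Add two binary strings bit by bit, carrying as needed.
    result = []
    i, j, carry = len(a) - 1, len(b) - 1, 0
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += int(a[i])
            i -= 1
        if j >= 0:
            total += int(b[j])
            j -= 1
        result.append(str(total % 2))
        carry = total // 2
    return "".join(reversed(result))

print(add_binary("1011", "110"))  # 10001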
Lexicographic pattern :
Power Set in Lexicographic order
Lexicographically n-th permutation of string
Lexicographic rank of string using STL
Lexicographically minimum string rotation | Set 1
Generating distinct subsequences of a given string in lexicographic order
Lexicographically smallest string obtained after concatenating array
Lexicographical Maximum substring of string
Lexicographical concatenation of all substrings of a string
Construct lexicographically smallest palindrome
Lexicographically smallest string whose hamming distance from given string is exactly K
Lexicographically next string
Lexicographically largest subsequence such that every character occurs at least k times
Lexicographically first alternate vowel and consonant string
Find a string in lexicographic order which is in between given two strings
Print all permutations in sorted (lexicographic) order
How to find Lexicographically previous permutation?
Find n-th lexicographically permutation of a string | Set 2
Lexicographic rank of a string

Pattern Searching :
Searching for Patterns | Set 5 (Finite Automata)
Pattern Searching | Set 7 (Boyer Moore Algorithm – Bad Character Heuristic)
Manacher’s Algorithm – Linear Time Longest Palindromic Substring – Part 4
Z algorithm
Search a Word in a 2D Grid of characters
Printing string in plus ‘+’ pattern in the matrix
Wildcard Pattern Matching
Dynamic Programming | Wildcard Pattern Matching | Linear Time and Constant Space
Replace a character c1 with c2 and c2 with c1 in a string S
Aho-Corasick Algorithm
Count of occurrences of a “1(0+)1” pattern in a string
Find all the patterns of “1(0+)1” in a given string | SET 2
In-place replace multiple occurrences of a pattern
Find all strings that match specific pattern in a dictionary
Check if string follows order of characters defined by a pattern or not
Find nth term of the Dragon Curve Sequence
Count of number of given string in 2D character array
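Every pattern searching article above improves on the naive baseline, which simply tries the pattern at each position of the text in O(n*m) time. A minimal Python sketch of that baseline (illustrative only; KMP, Boyer-Moore, the Z algorithm and Aho-Corasick give better worst-case behaviour):

def naive_search(text, pattern):
    # Return the starting index of every occurrence of pattern in text.
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            hits.append(i)
    return hits

print(naive_search("AABAACAADAABAABA", "AABA"))  # [0, 9, 12]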
Split String :
Tokenizing a string in C++
Split a sentence into words in C++
How to split a string in C/C++, Python and Java?
Check if given string can be split into four distinct strings
Split numeric, alphabetic and special symbols from a String
Splitting a Numeric String
Ways to split string such that each partition starts with distinct character
Partition a number into two divisible parts
Partition given string in such manner that i’th substring is sum of (i-1)’th and (i-2)’th substring
Breaking a number such that first part is integral division of second by a power of 10
Divide a string in N equal parts
Minimum Word Break
Word Break Problem
Word Break Problem using Backtracking

Balance Parentheses & Bracket Evaluation :
Identify and mark unmatched parenthesis in an expression
Cost to Balance the parentheses
Check for balanced parentheses in an expression | O(1) space
Check for balanced parentheses in an expression
Length of Longest Balanced Subsequence
Balanced expression with replacement
Evaluate a boolean expression represented as string
Find maximum depth of nested parenthesis in a string
Print all ways to break a string in bracket form
Find an equal point in a string of brackets
Minimum Swaps for Bracket Balancing
Check if two expressions with brackets are same
Expression contains redundant bracket or not
Range Queries for Longest Correct Bracket Subsequence
Evaluate an array expression with numbers, + and –
Print Bracket Number
Find index of closing bracket for a given opening bracket in an expression
Binary tree to string with brackets
Construct Binary Tree from String with bracket representation
Minimum number of bracket reversals needed to make an expression balanced
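The classic balanced-parentheses check from the list above pushes every opening bracket on a stack and pops on every closing bracket. A minimal Python sketch (illustrative; it covers only the three common bracket types and ignores all other characters):

def is_balanced(expr):
    # Each closing bracket must match the most recent unmatched opening bracket.
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in expr:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack

print(is_balanced("{[()()]}"))  # True
print(is_balanced("([)]"))      # False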
Conversion :
Convert all substrings of length ‘k’ from base ‘b’ to decimal
Convert Binary fraction to Decimal
Convert decimal fraction to binary number
Convert a sentence into its equivalent mobile numeric keypad sequence
Check if it is possible to convert one string into another with given constraints
Converting one string to other using append and delete last operations
Converting Decimal Number lying between 1 to 3999 to Roman Numerals
Converting Roman Numerals to Decimal lying between 1 to 3999
Inverting the Move to Front Transform
Burrows – Wheeler Data Transform Algorithm
Check if it is possible to transform one string to another
Transform the string
An in-place algorithm for String Transformation
Ways of transforming one string to other by removing 0 or more characters
Transform One String to Another using Minimum Number of Given Operation
Convert Ternary Expression to a Binary Tree
Prefix to Infix Conversion
Prefix to Postfix Conversion
Postfix to Prefix Conversion
Postfix to Infix

Misc :
Word Wrap problem (Space optimized solution)
Form minimum number from given sequence
Maximum number of characters between any two same character in a string
Print shortest path to print a string on screen
Minimum number of stops from given path
Check whether second string can be formed from characters of first string
Mirror characters of a string
Find words which are greater than given length k
Find last index of a character in a string
Find position of the given number among the numbers made of 4 and 7
Find winner of an election where votes are represented as candidate names
Compare Version Numbers with large inputs allowed
Possibility of moving out of maze
Possibility of a word from a given set of characters
Find the arrangement of queue at given time
Program to generate all possible valid IP addresses from given string
Program to validate an IP address
Program to check for a Valid IMEI Number
Decode a median string to the original string
Decode a string recursively encoded as count followed by substring
Minimal operations to make a number magical
Program to check for ISBN
Program for credit card number validation
Maximize a number considering permutations with values smaller than limit
Find if a string starts and ends with another given string
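As a closing example from the Misc list above, validating an IPv4 address reduces to checking four dot-separated numeric parts. A minimal Python sketch (illustrative; this version treats leading zeros as invalid, which some variants of the problem allow):

def is_valid_ipv4(address):
    # Illustrative sketch: four dot-separated parts, each 0..255,
    # digits only, no leading zeros except "0" itself.
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        if not part.isdigit():
            return False
        if part != "0" and part.startswith("0"):
            return False
        if int(part) > 255:
            return False
    return True

print(is_valid_ipv4("192.168.0.1"))  # True
print(is_valid_ipv4("256.1.1.1"))    # False
print(is_valid_ipv4("01.2.3.4"))     # False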
null, "e": 57591, "s": 57526, "text": "Given a sequence of words, print all anagrams together using STL" }, { "code": null, "e": 57644, "s": 57591, "text": "Check if all levels of two trees are anagrams or not" }, { "code": null, "e": 57678, "s": 57644, "text": "Count of total anagram substrings" }, { "code": null, "e": 57777, "s": 57678, "text": "Minimum Number of Manipulations required to make two Strings Anagram Without Deletion of Character" }, { "code": null, "e": 57785, "s": 57777, "text": "More >>" }, { "code": null, "e": 57798, "s": 57785, "text": "Palindrome :" }, { "code": null, "e": 59161, "s": 57798, "text": "C Program to Check if a Given String is PalindromeCheck if a given string is a rotation of a palindromeC++ Program to print all palindromes in a given rangeCheck if characters of a given string can be rearranged to form a palindromeDynamic Programming | Set 28 (Minimum insertions to form a palindrome)Longest Palindromic Substring | Set 2Find all palindromic sub-strings of a given stringOnline algorithm for checking palindrome in a streamGiven a string, print all possible palindromic partitionsPrint all palindromic partitions of a stringDynamic Programming | Set 17 (Palindrome Partitioning)Count All Palindromic Subsequence in a given StringMinimum characters to be added at front to make string palindromePalindrome Substring QueriesSuffix Tree Application 6 – Longest Palindromic SubstringPalindrome pair in an array of words (or strings)Make largest palindrome by changing at most K-digitsLexicographically first palindromic stringRecursive function to check if a string is palindromeMinimum number of Appends needed to make a string palindromeLongest Non-palindromic substringMinimum number of deletions to make a string palindromeMinimum steps to delete a string after repeated deletion of palindrome substringsCount of Palindromic substrings in an Index rangeMinimum insertions to form a palindrome with permutations allowedNth Even length Palindrome" }, { "code": null, "e": 59212, "s": 59161, "text": "C Program to Check if a Given String is Palindrome" }, { "code": null, "e": 59266, "s": 59212, "text": "Check if a given string is a rotation of a palindrome" }, { "code": null, "e": 59320, "s": 59266, "text": "C++ Program to print all palindromes in a given range" }, { "code": null, "e": 59397, "s": 59320, "text": "Check if characters of a given string can be rearranged to form a palindrome" }, { "code": null, "e": 59468, "s": 59397, "text": "Dynamic Programming | Set 28 (Minimum insertions to form a palindrome)" }, { "code": null, "e": 59506, "s": 59468, "text": "Longest Palindromic Substring | Set 2" }, { "code": null, "e": 59557, "s": 59506, "text": "Find all palindromic sub-strings of a given string" }, { "code": null, "e": 59610, "s": 59557, "text": "Online algorithm for checking palindrome in a stream" }, { "code": null, "e": 59668, "s": 59610, "text": "Given a string, print all possible palindromic partitions" }, { "code": null, "e": 59713, "s": 59668, "text": "Print all palindromic partitions of a string" }, { "code": null, "e": 59768, "s": 59713, "text": "Dynamic Programming | Set 17 (Palindrome Partitioning)" }, { "code": null, "e": 59820, "s": 59768, "text": "Count All Palindromic Subsequence in a given String" }, { "code": null, "e": 59886, "s": 59820, "text": "Minimum characters to be added at front to make string palindrome" }, { "code": null, "e": 59915, "s": 59886, "text": "Palindrome Substring Queries" }, { "code": null, "e": 59973, "s": 59915, "text": "Suffix Tree 
Application 6 – Longest Palindromic Substring" }, { "code": null, "e": 60023, "s": 59973, "text": "Palindrome pair in an array of words (or strings)" }, { "code": null, "e": 60076, "s": 60023, "text": "Make largest palindrome by changing at most K-digits" }, { "code": null, "e": 60119, "s": 60076, "text": "Lexicographically first palindromic string" }, { "code": null, "e": 60173, "s": 60119, "text": "Recursive function to check if a string is palindrome" }, { "code": null, "e": 60234, "s": 60173, "text": "Minimum number of Appends needed to make a string palindrome" }, { "code": null, "e": 60268, "s": 60234, "text": "Longest Non-palindromic substring" }, { "code": null, "e": 60324, "s": 60268, "text": "Minimum number of deletions to make a string palindrome" }, { "code": null, "e": 60406, "s": 60324, "text": "Minimum steps to delete a string after repeated deletion of palindrome substrings" }, { "code": null, "e": 60456, "s": 60406, "text": "Count of Palindromic substrings in an Index range" }, { "code": null, "e": 60522, "s": 60456, "text": "Minimum insertions to form a palindrome with permutations allowed" }, { "code": null, "e": 60549, "s": 60522, "text": "Nth Even length Palindrome" }, { "code": null, "e": 60557, "s": 60549, "text": "More >>" }, { "code": null, "e": 60573, "s": 60557, "text": "Binary String :" }, { "code": null, "e": 62103, "s": 60573, "text": "Count of operations to make a binary string”ab” freeChange if all bits can be made same by single flipLength of Longest sub-string that can be removedNumber of flips to make binary string alternate1’s and 2’s complement of a Binary NumberEfficient method for 2’s complement of a binary stringCount binary strings with k times appearing adjacent two set bitsCount strings with consecutive 1’sGenerate all binary strings from given patternAdd two bit stringsCount number of binary strings without consecutive 1’sGenerate all binary permutations such that there are more or equal 1’s than 0’s before every point in all permutationsCheck if a string follows a^nb^n pattern or notBinary representation of next numberBinary representation of next greater number with same number of 1’s and 0’sMaximum difference of zeros and ones in binary stringCheck if a binary string has a 0 between 1s or not | Set 2Min flips of continuous characters to make all characters same in a stringConcatenation of two strings in PHPProgram to add two binary stringsConvert String into Binary SequenceGenerate all binary strings without consecutive 1’sMinimum number of characters to be removed to make a binary string alternateCheck divisibility of binary string by 2^kRemoving elements between the two zerosFind i’th Index character in a binary string obtained after n iterationsNumber of substrings with odd decimal value in a binary stringGenerate n-bit Gray CodesPrint N-bit binary numbers having more 1’s than 0’s in all prefixesAdd n binary strings" }, { "code": null, "e": 62156, "s": 62103, "text": "Count of operations to make a binary string”ab” free" }, { "code": null, "e": 62207, "s": 62156, "text": "Change if all bits can be made same by single flip" }, { "code": null, "e": 62256, "s": 62207, "text": "Length of Longest sub-string that can be removed" }, { "code": null, "e": 62304, "s": 62256, "text": "Number of flips to make binary string alternate" }, { "code": null, "e": 62346, "s": 62304, "text": "1’s and 2’s complement of a Binary Number" }, { "code": null, "e": 62401, "s": 62346, "text": "Efficient method for 2’s complement of a binary string" }, { "code": null, 
"e": 62467, "s": 62401, "text": "Count binary strings with k times appearing adjacent two set bits" }, { "code": null, "e": 62502, "s": 62467, "text": "Count strings with consecutive 1’s" }, { "code": null, "e": 62549, "s": 62502, "text": "Generate all binary strings from given pattern" }, { "code": null, "e": 62569, "s": 62549, "text": "Add two bit strings" }, { "code": null, "e": 62624, "s": 62569, "text": "Count number of binary strings without consecutive 1’s" }, { "code": null, "e": 62743, "s": 62624, "text": "Generate all binary permutations such that there are more or equal 1’s than 0’s before every point in all permutations" }, { "code": null, "e": 62791, "s": 62743, "text": "Check if a string follows a^nb^n pattern or not" }, { "code": null, "e": 62828, "s": 62791, "text": "Binary representation of next number" }, { "code": null, "e": 62905, "s": 62828, "text": "Binary representation of next greater number with same number of 1’s and 0’s" }, { "code": null, "e": 62959, "s": 62905, "text": "Maximum difference of zeros and ones in binary string" }, { "code": null, "e": 63018, "s": 62959, "text": "Check if a binary string has a 0 between 1s or not | Set 2" }, { "code": null, "e": 63093, "s": 63018, "text": "Min flips of continuous characters to make all characters same in a string" }, { "code": null, "e": 63129, "s": 63093, "text": "Concatenation of two strings in PHP" }, { "code": null, "e": 63163, "s": 63129, "text": "Program to add two binary strings" }, { "code": null, "e": 63199, "s": 63163, "text": "Convert String into Binary Sequence" }, { "code": null, "e": 63251, "s": 63199, "text": "Generate all binary strings without consecutive 1’s" }, { "code": null, "e": 63328, "s": 63251, "text": "Minimum number of characters to be removed to make a binary string alternate" }, { "code": null, "e": 63371, "s": 63328, "text": "Check divisibility of binary string by 2^k" }, { "code": null, "e": 63411, "s": 63371, "text": "Removing elements between the two zeros" }, { "code": null, "e": 63484, "s": 63411, "text": "Find i’th Index character in a binary string obtained after n iterations" }, { "code": null, "e": 63547, "s": 63484, "text": "Number of substrings with odd decimal value in a binary string" }, { "code": null, "e": 63573, "s": 63547, "text": "Generate n-bit Gray Codes" }, { "code": null, "e": 63641, "s": 63573, "text": "Print N-bit binary numbers having more 1’s than 0’s in all prefixes" }, { "code": null, "e": 63662, "s": 63641, "text": "Add n binary strings" }, { "code": null, "e": 63670, "s": 63662, "text": "More >>" }, { "code": null, "e": 63694, "s": 63670, "text": "Lexicographic pattern :" }, { "code": null, "e": 64679, "s": 63694, "text": "Powet Set in Lexicographic orderLexicographically n-th permutation of stringLexicographic rank of string using stlLexicographically minimum string rotation | Set 1Generating distinct subsequences of a given string in lexicographic orderLexicographically smallest string obtained after concatenating arrayLexicographical Maximum substring of stringLexicographical concatenation of all substrings of a stringConstruct lexicographically smallest palindromeLexicographically smallest string whose hamming distance from given string is exactly KLexicographically next stringLexicographically largest subsequence such that every character occurs at least k timesLexicographically first alternate vowel and consonant stringFind a string in lexicographic order which is in between given two stringsPrint all permutations in sorted (lexicographic) orderHow to 
find Lexicographically previous permutation?Find n-th lexicographically permutation of a string | Set 2Lexicographic rank of a string" }, { "code": null, "e": 64712, "s": 64679, "text": "Powet Set in Lexicographic order" }, { "code": null, "e": 64757, "s": 64712, "text": "Lexicographically n-th permutation of string" }, { "code": null, "e": 64796, "s": 64757, "text": "Lexicographic rank of string using stl" }, { "code": null, "e": 64846, "s": 64796, "text": "Lexicographically minimum string rotation | Set 1" }, { "code": null, "e": 64920, "s": 64846, "text": "Generating distinct subsequences of a given string in lexicographic order" }, { "code": null, "e": 64989, "s": 64920, "text": "Lexicographically smallest string obtained after concatenating array" }, { "code": null, "e": 65033, "s": 64989, "text": "Lexicographical Maximum substring of string" }, { "code": null, "e": 65093, "s": 65033, "text": "Lexicographical concatenation of all substrings of a string" }, { "code": null, "e": 65141, "s": 65093, "text": "Construct lexicographically smallest palindrome" }, { "code": null, "e": 65229, "s": 65141, "text": "Lexicographically smallest string whose hamming distance from given string is exactly K" }, { "code": null, "e": 65259, "s": 65229, "text": "Lexicographically next string" }, { "code": null, "e": 65347, "s": 65259, "text": "Lexicographically largest subsequence such that every character occurs at least k times" }, { "code": null, "e": 65408, "s": 65347, "text": "Lexicographically first alternate vowel and consonant string" }, { "code": null, "e": 65483, "s": 65408, "text": "Find a string in lexicographic order which is in between given two strings" }, { "code": null, "e": 65538, "s": 65483, "text": "Print all permutations in sorted (lexicographic) order" }, { "code": null, "e": 65590, "s": 65538, "text": "How to find Lexicographically previous permutation?" 
}, { "code": null, "e": 65650, "s": 65590, "text": "Find n-th lexicographically permutation of a string | Set 2" }, { "code": null, "e": 65681, "s": 65650, "text": "Lexicographic rank of a string" }, { "code": null, "e": 65701, "s": 65681, "text": "Pattern Searching :" }, { "code": null, "e": 66573, "s": 65701, "text": "Searching for Patterns | Set 5 (Finite Automata)Pattern Searching | Set 7 (Boyer Moore Algorithm – Bad Character Heuristic)Manacher’s Algorithm – Linear Time Longest Palindromic Substring – Part 4Z algorithmSearch a Word in a 2D Grid of charactersPrinting string in plus ‘+’ pattern in the matrixWildcard Pattern MatchingDynamic Programming | Wildcard Pattern Matching | Linear Time and Constant SpaceReplace a character c1 with c2 and c2 with c1 in a string SAho-Corasick AlgorithmCount of occurrences of a “1(0+)1” pattern in a stringFind all the patterns of “1(0+)1” in a given string | SET 2In-place replace multiple occurrences of a patternFind all strings that match specific pattern in a dictionaryCheck if string follows order of characters defined by a pattern or notFind nth term of the Dragon Curve SequenceCount of number of given string in 2D character array" }, { "code": null, "e": 66622, "s": 66573, "text": "Searching for Patterns | Set 5 (Finite Automata)" }, { "code": null, "e": 66698, "s": 66622, "text": "Pattern Searching | Set 7 (Boyer Moore Algorithm – Bad Character Heuristic)" }, { "code": null, "e": 66772, "s": 66698, "text": "Manacher’s Algorithm – Linear Time Longest Palindromic Substring – Part 4" }, { "code": null, "e": 66784, "s": 66772, "text": "Z algorithm" }, { "code": null, "e": 66825, "s": 66784, "text": "Search a Word in a 2D Grid of characters" }, { "code": null, "e": 66875, "s": 66825, "text": "Printing string in plus ‘+’ pattern in the matrix" }, { "code": null, "e": 66901, "s": 66875, "text": "Wildcard Pattern Matching" }, { "code": null, "e": 66982, "s": 66901, "text": "Dynamic Programming | Wildcard Pattern Matching | Linear Time and Constant Space" }, { "code": null, "e": 67042, "s": 66982, "text": "Replace a character c1 with c2 and c2 with c1 in a string S" }, { "code": null, "e": 67065, "s": 67042, "text": "Aho-Corasick Algorithm" }, { "code": null, "e": 67120, "s": 67065, "text": "Count of occurrences of a “1(0+)1” pattern in a string" }, { "code": null, "e": 67180, "s": 67120, "text": "Find all the patterns of “1(0+)1” in a given string | SET 2" }, { "code": null, "e": 67231, "s": 67180, "text": "In-place replace multiple occurrences of a pattern" }, { "code": null, "e": 67292, "s": 67231, "text": "Find all strings that match specific pattern in a dictionary" }, { "code": null, "e": 67364, "s": 67292, "text": "Check if string follows order of characters defined by a pattern or not" }, { "code": null, "e": 67407, "s": 67364, "text": "Find nth term of the Dragon Curve Sequence" }, { "code": null, "e": 67461, "s": 67407, "text": "Count of number of given string in 2D character array" }, { "code": null, "e": 67469, "s": 67461, "text": "More >>" }, { "code": null, "e": 67484, "s": 67469, "text": "Split String :" }, { "code": null, "e": 68147, "s": 67484, "text": "Tokenizing a string in C++Split a sentence into words in C++How to split a string in C/C++, Python and Java?Check if given string can be split into four distinct stringsSplit numeric, alphabetic and special symbols from a StringSplitting a Numeric StringWays to split string such that each partition starts with distinct characterPartition a number into two divisble partsPartition given 
string in such manner that i’th substring is sum of (i-1)’th and (i-2)’th substringBreaking a number such that first part is integral division of second by a power of 10Divide a string in N equal partsMinimum Word BreakWord Break ProblemWord Break Problem using Backtracking" }, { "code": null, "e": 68174, "s": 68147, "text": "Tokenizing a string in C++" }, { "code": null, "e": 68209, "s": 68174, "text": "Split a sentence into words in C++" }, { "code": null, "e": 68258, "s": 68209, "text": "How to split a string in C/C++, Python and Java?" }, { "code": null, "e": 68320, "s": 68258, "text": "Check if given string can be split into four distinct strings" }, { "code": null, "e": 68380, "s": 68320, "text": "Split numeric, alphabetic and special symbols from a String" }, { "code": null, "e": 68407, "s": 68380, "text": "Splitting a Numeric String" }, { "code": null, "e": 68484, "s": 68407, "text": "Ways to split string such that each partition starts with distinct character" }, { "code": null, "e": 68527, "s": 68484, "text": "Partition a number into two divisble parts" }, { "code": null, "e": 68627, "s": 68527, "text": "Partition given string in such manner that i’th substring is sum of (i-1)’th and (i-2)’th substring" }, { "code": null, "e": 68714, "s": 68627, "text": "Breaking a number such that first part is integral division of second by a power of 10" }, { "code": null, "e": 68747, "s": 68714, "text": "Divide a string in N equal parts" }, { "code": null, "e": 68766, "s": 68747, "text": "Minimum Word Break" }, { "code": null, "e": 68785, "s": 68766, "text": "Word Break Problem" }, { "code": null, "e": 68823, "s": 68785, "text": "Word Break Problem using Backtracking" }, { "code": null, "e": 68866, "s": 68823, "text": "Balance Parentheses & Bracket Evaluation :" }, { "code": null, "e": 69821, "s": 68866, "text": "Identify and mark unmatched parenthesis in an expressionCost to Balance the parenthesesCheck for balanced parentheses in an expression | O(1) spaceCheck for balanced parentheses in an expressionLength of Longest Balanced SubsequenceBalanced expression with replacementEvaluate a boolean expression represented as stringFind maximum depth of nested parenthesis in a stringPrint all ways to break a string in bracket formFind an equal point in a string of bracketsMinimum Swaps for Bracket BalancingCheck if two expressions with brackets are sameExpression contains redundant bracket or notRange Queries for Longest Correct Bracket SubsequenceEvaluate an array expression with numbers, + and –Print Bracket NumberFind index of closing bracket for a given opening bracket in an expressionBinary tree to string with bracketsConstruct Binary Tree from String with bracket representationMinimum number of bracket reversals needed to make an expression balanced" }, { "code": null, "e": 69878, "s": 69821, "text": "Identify and mark unmatched parenthesis in an expression" }, { "code": null, "e": 69910, "s": 69878, "text": "Cost to Balance the parentheses" }, { "code": null, "e": 69971, "s": 69910, "text": "Check for balanced parentheses in an expression | O(1) space" }, { "code": null, "e": 70019, "s": 69971, "text": "Check for balanced parentheses in an expression" }, { "code": null, "e": 70058, "s": 70019, "text": "Length of Longest Balanced Subsequence" }, { "code": null, "e": 70095, "s": 70058, "text": "Balanced expression with replacement" }, { "code": null, "e": 70147, "s": 70095, "text": "Evaluate a boolean expression represented as string" }, { "code": null, "e": 70200, "s": 70147, "text": "Find 
maximum depth of nested parenthesis in a string" }, { "code": null, "e": 70249, "s": 70200, "text": "Print all ways to break a string in bracket form" }, { "code": null, "e": 70293, "s": 70249, "text": "Find an equal point in a string of brackets" }, { "code": null, "e": 70329, "s": 70293, "text": "Minimum Swaps for Bracket Balancing" }, { "code": null, "e": 70377, "s": 70329, "text": "Check if two expressions with brackets are same" }, { "code": null, "e": 70422, "s": 70377, "text": "Expression contains redundant bracket or not" }, { "code": null, "e": 70476, "s": 70422, "text": "Range Queries for Longest Correct Bracket Subsequence" }, { "code": null, "e": 70527, "s": 70476, "text": "Evaluate an array expression with numbers, + and –" }, { "code": null, "e": 70548, "s": 70527, "text": "Print Bracket Number" }, { "code": null, "e": 70623, "s": 70548, "text": "Find index of closing bracket for a given opening bracket in an expression" }, { "code": null, "e": 70659, "s": 70623, "text": "Binary tree to string with brackets" }, { "code": null, "e": 70721, "s": 70659, "text": "Construct Binary Tree from String with bracket representation" }, { "code": null, "e": 70795, "s": 70721, "text": "Minimum number of bracket reversals needed to make an expression balanced" }, { "code": null, "e": 70808, "s": 70795, "text": "Conversion :" }, { "code": null, "e": 71781, "s": 70808, "text": "Convert all substrings of length ‘k’ from base ‘b’ to decimalConvert Binary fraction to DecimalConvert decimal fraction to binary numberConvert a sentence into its equivalent mobile numeric keypad sequenceCheck if it is possible to convert one string into another with given constraintsConverting one string to other using append and delete last operationsConverting Decimal Number lying between 1 to 3999 to Roman NumeralsConverting Roman Numerals to Decimal lying between 1 to 3999Inverting the Move to Front TransformBurrows – Wheeler Data Transform AlgorithmCheck if it is possible to transform one string to anotherTransform the stringAn in-place algorithm for String TransformationWays of transforming one string to other by removing 0 or more charactersTransform One String to Another using Minimum Number of Given OperationConvert Ternary Expression to a Binary TreePrefix to Infix ConversionPrefix to Postfix ConversionPostfix to Prefix ConversionPostfix to Infix" }, { "code": null, "e": 71843, "s": 71781, "text": "Convert all substrings of length ‘k’ from base ‘b’ to decimal" }, { "code": null, "e": 71878, "s": 71843, "text": "Convert Binary fraction to Decimal" }, { "code": null, "e": 71920, "s": 71878, "text": "Convert decimal fraction to binary number" }, { "code": null, "e": 71990, "s": 71920, "text": "Convert a sentence into its equivalent mobile numeric keypad sequence" }, { "code": null, "e": 72072, "s": 71990, "text": "Check if it is possible to convert one string into another with given constraints" }, { "code": null, "e": 72143, "s": 72072, "text": "Converting one string to other using append and delete last operations" }, { "code": null, "e": 72211, "s": 72143, "text": "Converting Decimal Number lying between 1 to 3999 to Roman Numerals" }, { "code": null, "e": 72272, "s": 72211, "text": "Converting Roman Numerals to Decimal lying between 1 to 3999" }, { "code": null, "e": 72310, "s": 72272, "text": "Inverting the Move to Front Transform" }, { "code": null, "e": 72353, "s": 72310, "text": "Burrows – Wheeler Data Transform Algorithm" }, { "code": null, "e": 72412, "s": 72353, "text": "Check if it is possible to 
transform one string to another" }, { "code": null, "e": 72433, "s": 72412, "text": "Transform the string" }, { "code": null, "e": 72481, "s": 72433, "text": "An in-place algorithm for String Transformation" }, { "code": null, "e": 72555, "s": 72481, "text": "Ways of transforming one string to other by removing 0 or more characters" }, { "code": null, "e": 72627, "s": 72555, "text": "Transform One String to Another using Minimum Number of Given Operation" }, { "code": null, "e": 72671, "s": 72627, "text": "Convert Ternary Expression to a Binary Tree" }, { "code": null, "e": 72698, "s": 72671, "text": "Prefix to Infix Conversion" }, { "code": null, "e": 72727, "s": 72698, "text": "Prefix to Postfix Conversion" }, { "code": null, "e": 72756, "s": 72727, "text": "Postfix to Prefix Conversion" }, { "code": null, "e": 72773, "s": 72756, "text": "Postfix to Infix" }, { "code": null, "e": 72780, "s": 72773, "text": "Misc :" }, { "code": null, "e": 74025, "s": 72780, "text": "Word Wrap problem ( Space optimized solution )Form minimum number from given sequenceMaximum number of characters between any two same character in a stringPrint shortest path to print a string on screenMinimum number of stops from given pathCheck whether second string can be formed from characters of first stringMirror characters of a stringFind words which are greater than given length kFind last index of a character in a stringFind position of the given number among the numbers made of 4 and 7Find winner of an election where votes are represented as candidate namesCompare Version Numbers with large inputs allowedPossibility of moving out of mazePossibility of a word from a given set of charactersFind the arrangement of queue at given timeProgram to generate all possible valid IP addresses from given stringProgram to validate an IP addressProgram to check for a Valid IMEI NumberDecode a median string to the original stringDecode a string recursively encoded as count followed by substringMinimal operations to make a number magicalProgram to check for ISBNProgram for credit card number validationMaximize a number considering permutations with values smaller than limitFind if a string starts and ends with another given string" }, { "code": null, "e": 74072, "s": 74025, "text": "Word Wrap problem ( Space optimized solution )" }, { "code": null, "e": 74112, "s": 74072, "text": "Form minimum number from given sequence" }, { "code": null, "e": 74184, "s": 74112, "text": "Maximum number of characters between any two same character in a string" }, { "code": null, "e": 74232, "s": 74184, "text": "Print shortest path to print a string on screen" }, { "code": null, "e": 74272, "s": 74232, "text": "Minimum number of stops from given path" }, { "code": null, "e": 74346, "s": 74272, "text": "Check whether second string can be formed from characters of first string" }, { "code": null, "e": 74376, "s": 74346, "text": "Mirror characters of a string" }, { "code": null, "e": 74425, "s": 74376, "text": "Find words which are greater than given length k" }, { "code": null, "e": 74468, "s": 74425, "text": "Find last index of a character in a string" }, { "code": null, "e": 74536, "s": 74468, "text": "Find position of the given number among the numbers made of 4 and 7" }, { "code": null, "e": 74610, "s": 74536, "text": "Find winner of an election where votes are represented as candidate names" }, { "code": null, "e": 74660, "s": 74610, "text": "Compare Version Numbers with large inputs allowed" }, { "code": null, "e": 74694, "s": 74660, "text": 
"Possibility of moving out of maze" }, { "code": null, "e": 74747, "s": 74694, "text": "Possibility of a word from a given set of characters" }, { "code": null, "e": 74791, "s": 74747, "text": "Find the arrangement of queue at given time" }, { "code": null, "e": 74861, "s": 74791, "text": "Program to generate all possible valid IP addresses from given string" }, { "code": null, "e": 74895, "s": 74861, "text": "Program to validate an IP address" }, { "code": null, "e": 74936, "s": 74895, "text": "Program to check for a Valid IMEI Number" }, { "code": null, "e": 74982, "s": 74936, "text": "Decode a median string to the original string" }, { "code": null, "e": 75049, "s": 74982, "text": "Decode a string recursively encoded as count followed by substring" }, { "code": null, "e": 75093, "s": 75049, "text": "Minimal operations to make a number magical" }, { "code": null, "e": 75119, "s": 75093, "text": "Program to check for ISBN" }, { "code": null, "e": 75161, "s": 75119, "text": "Program for credit card number validation" }, { "code": null, "e": 75235, "s": 75161, "text": "Maximize a number considering permutations with values smaller than limit" }, { "code": null, "e": 75294, "s": 75235, "text": "Find if a string starts and ends with another given string" }, { "code": null, "e": 75302, "s": 75294, "text": "More >>" }, { "code": null, "e": 75316, "s": 75302, "text": "Quick Links :" }, { "code": null, "e": 75347, "s": 75316, "text": "‘Practice Problems’ on Strings" }, { "code": null, "e": 75368, "s": 75347, "text": "‘Quizzes’ on Strings" }, { "code": null, "e": 75590, "s": 75368, "text": "If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 75715, "s": 75590, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 75813, "s": 75715, "text": "Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here." }, { "code": null, "e": 75822, "s": 75813, "text": "Comments" }, { "code": null, "e": 75835, "s": 75822, "text": "Old Comments" }, { "code": null, "e": 75888, "s": 75835, "text": "Must Do Coding Questions for Product Based Companies" }, { "code": null, "e": 75947, "s": 75888, "text": "Microsoft Interview Experience for Internship (Via Engage)" }, { "code": null, "e": 76008, "s": 75947, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 76085, "s": 76008, "text": "Find number of rectangles that can be formed from a given set of coordinates" }, { "code": null, "e": 76123, "s": 76085, "text": "Array of Objects in C++ with Examples" }, { "code": null, "e": 76185, "s": 76123, "text": "How to Replace Values in Column Based on Condition in Pandas?" }, { "code": null, "e": 76226, "s": 76185, "text": "C Program to read contents of Whole File" }, { "code": null, "e": 76269, "s": 76226, "text": "How to Replace Values in a List in Python?" }, { "code": null, "e": 76305, "s": 76269, "text": "How to Read Text Files with Pandas?" } ]
CSS - Fade In Down Effect
The fade-in-down effect brings an element gradually into view while it slides down from slightly above its final position. The effect is defined with a single @keyframes rule:

@keyframes fadeInDown {
   0% {
      opacity: 0;
      transform: translateY(-20px);
   }
   100% {
      opacity: 1;
      transform: translateY(0);
   }
}

Transform − applies a 2D or 3D transformation to the element; here translateY moves the element vertically.

Opacity − controls how translucent the element is; animating it from 0 to 1 fades the element in.

<html>
   <head>
      <style>
         .animated {
            background-image: url(/css/images/logo.png);
            background-repeat: no-repeat;
            background-position: left top;
            padding-top:95px;
            margin-bottom:60px;
            -webkit-animation-duration: 10s;
            animation-duration: 10s;
            -webkit-animation-fill-mode: both;
            animation-fill-mode: both;
         }

         @-webkit-keyframes fadeInDown {
            0% {
               opacity: 0;
               -webkit-transform: translateY(-20px);
            }
            100% {
               opacity: 1;
               -webkit-transform: translateY(0);
            }
         }

         @keyframes fadeInDown {
            0% {
               opacity: 0;
               transform: translateY(-20px);
            }
            100% {
               opacity: 1;
               transform: translateY(0);
            }
         }

         .fadeInDown {
            -webkit-animation-name: fadeInDown;
            animation-name: fadeInDown;
         }
      </style>
   </head>

   <body>
      <div id = "animated-example" class = "animated fadeInDown"></div>
      <button onclick = "myFunction()">Reload page</button>

      <script>
         function myFunction() {
            location.reload();
         }
      </script>
   </body>
</html>

It will produce the following result −
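As a side note, the same keyframes can be attached to any element with the animation shorthand instead of the separate .animated and .fadeInDown helper classes used above. The sketch below is only an illustrative variation; the .banner selector and the 2-second duration are assumptions, not part of the original example:

.banner {
   /* name | duration | timing-function | fill-mode */
   animation: fadeInDown 2s ease-out both;
}

With this rule, any element carrying the banner class plays the fade-in-down keyframes once over two seconds and keeps its final state because of the both fill mode.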
[ { "code": null, "e": 2720, "s": 2626, "text": "The image come or cause to come gradually into or out of view, or to merge into another shot." }, { "code": null, "e": 2879, "s": 2720, "text": "@keyframes fadeInDown {\n 0% {\n opacity: 0;\n transform: translateY(-20px);\n }\n 100% {\n opacity: 1;\n transform: translateY(0);\n }\n} " }, { "code": null, "e": 2952, "s": 2879, "text": "Transform − Transform applies to 2d and 3d transformation to an element." }, { "code": null, "e": 3025, "s": 2952, "text": "Transform − Transform applies to 2d and 3d transformation to an element." }, { "code": null, "e": 3087, "s": 3025, "text": "Opacity − Opacity applies to an element to make translucence." }, { "code": null, "e": 3149, "s": 3087, "text": "Opacity − Opacity applies to an element to make translucence." }, { "code": null, "e": 4571, "s": 3149, "text": "<html>\n <head>\n <style>\n .animated {\n background-image: url(/css/images/logo.png);\n background-repeat: no-repeat;\n background-position: left top;\n padding-top:95px;\n margin-bottom:60px;\n -webkit-animation-duration: 10s;\n animation-duration: 10s;\n -webkit-animation-fill-mode: both;\n animation-fill-mode: both;\n }\n \n @-webkit-keyframes fadeInDown {\n 0% {\n opacity: 0;\n -webkit-transform: translateY(-20px);\n }\n 100% {\n opacity: 1;\n -webkit-transform: translateY(0);\n }\n }\n \n @keyframes fadeInDown {\n 0% {\n opacity: 0;\n transform: translateY(-20px);\n }\n 100% {\n opacity: 1;\n transform: translateY(0);\n }\n }\n \n .fadeInDown {\n -webkit-animation-name: fadeInDown;\n animation-name: fadeInDown;\n }\n </style>\n </head>\n\n <body>\n \n <div id = \"animated-example\" class = \"animated fadeInDown\"></div>\n <button onclick = \"myFunction()\">Reload page</button>\n \n <script>\n function myFunction() {\n location.reload();\n }\n </script>\n \n </body>\n</html>" }, { "code": null, "e": 4610, "s": 4571, "text": "It will produce the following result −" }, { "code": null, "e": 5257, "s": 4610, "text": "\n\n Academic Tutorials\n Big Data & Analytics \n Computer Programming \n Computer Science \n Databases \n DevOps \n Digital Marketing \n Engineering Tutorials \n Exams Syllabus \n Famous Monuments \n GATE Exams Tutorials\n Latest Technologies \n Machine Learning \n Mainframe Development \n Management Tutorials \n Mathematics Tutorials\n Microsoft Technologies \n Misc tutorials \n Mobile Development \n Java Technologies \n Python Technologies \n SAP Tutorials \nProgramming Scripts \n Selected Reading \n Software Quality \n Soft Skills \n Telecom Tutorials \n UPSC IAS Exams \n Web Development \n Sports Tutorials \n XML Technologies \n Multi-Language\n Interview Questions\n\n" }, { "code": null, "e": 5277, "s": 5257, "text": " Academic Tutorials" }, { "code": null, "e": 5300, "s": 5277, "text": " Big Data & Analytics " }, { "code": null, "e": 5323, "s": 5300, "text": " Computer Programming " }, { "code": null, "e": 5342, "s": 5323, "text": " Computer Science " }, { "code": null, "e": 5354, "s": 5342, "text": " Databases " }, { "code": null, "e": 5363, "s": 5354, "text": " DevOps " }, { "code": null, "e": 5383, "s": 5363, "text": " Digital Marketing " }, { "code": null, "e": 5407, "s": 5383, "text": " Engineering Tutorials " }, { "code": null, "e": 5424, "s": 5407, "text": " Exams Syllabus " }, { "code": null, "e": 5443, "s": 5424, "text": " Famous Monuments " }, { "code": null, "e": 5465, "s": 5443, "text": " GATE Exams Tutorials" }, { "code": null, "e": 5487, "s": 5465, "text": " Latest Technologies " }, { "code": null, "e": 5506, 
"s": 5487, "text": " Machine Learning " }, { "code": null, "e": 5530, "s": 5506, "text": " Mainframe Development " }, { "code": null, "e": 5553, "s": 5530, "text": " Management Tutorials " }, { "code": null, "e": 5576, "s": 5553, "text": " Mathematics Tutorials" }, { "code": null, "e": 5601, "s": 5576, "text": " Microsoft Technologies " }, { "code": null, "e": 5618, "s": 5601, "text": " Misc tutorials " }, { "code": null, "e": 5639, "s": 5618, "text": " Mobile Development " }, { "code": null, "e": 5659, "s": 5639, "text": " Java Technologies " }, { "code": null, "e": 5681, "s": 5659, "text": " Python Technologies " }, { "code": null, "e": 5697, "s": 5681, "text": " SAP Tutorials " }, { "code": null, "e": 5718, "s": 5697, "text": "Programming Scripts " }, { "code": null, "e": 5737, "s": 5718, "text": " Selected Reading " }, { "code": null, "e": 5756, "s": 5737, "text": " Software Quality " }, { "code": null, "e": 5770, "s": 5756, "text": " Soft Skills " }, { "code": null, "e": 5790, "s": 5770, "text": " Telecom Tutorials " }, { "code": null, "e": 5807, "s": 5790, "text": " UPSC IAS Exams " }, { "code": null, "e": 5825, "s": 5807, "text": " Web Development " }, { "code": null, "e": 5844, "s": 5825, "text": " Sports Tutorials " }, { "code": null, "e": 5863, "s": 5844, "text": " XML Technologies " }, { "code": null, "e": 5879, "s": 5863, "text": " Multi-Language" }, { "code": null, "e": 5900, "s": 5879, "text": " Interview Questions" }, { "code": null, "e": 5917, "s": 5900, "text": "Selected Reading" }, { "code": null, "e": 5938, "s": 5917, "text": "UPSC IAS Exams Notes" }, { "code": null, "e": 5965, "s": 5938, "text": "Developer's Best Practices" }, { "code": null, "e": 5987, "s": 5965, "text": "Questions and Answers" }, { "code": null, "e": 6012, "s": 5987, "text": "Effective Resume Writing" }, { "code": null, "e": 6035, "s": 6012, "text": "HR Interview Questions" }, { "code": null, "e": 6053, "s": 6035, "text": "Computer Glossary" }, { "code": null, "e": 6064, "s": 6053, "text": "Who is Who" }, { "code": null, "e": 6071, "s": 6064, "text": " Print" }, { "code": null, "e": 6082, "s": 6071, "text": " Add Notes" } ]
Can we call a method on "this" keyword from a constructor in java?
The "this" keyword in Java is a reference to the current object within an instance method or a constructor. Using this you can refer to the members of a class, such as its constructors, variables and methods.

Yes, as mentioned, we can call the members of a class (methods, variables, and constructors) from instance methods or constructors.

In the following Java program, the Student class contains two private variables, name and age, with setter methods and a parameterized constructor that accepts these two values. From the constructor, we invoke the setName() and setAge() methods through the "this" keyword, passing the obtained name and age values to them respectively. From the main method, we read the name and age values from the user and call the constructor with them; the display() method then prints the values of the instance variables.

import java.util.Scanner;

public class Student {
   private String name;
   private int age;
   public Student(String name, int age){
      this.setName(name);
      this.setAge(age);
   }
   public void setName(String name) {
      this.name = name;
   }
   public void setAge(int age) {
      this.age = age;
   }
   public void display(){
      System.out.println("Name of the Student: "+this.name );
      System.out.println("Age of the Student: "+this.age );
   }
   public static void main(String args[]) {
      //Reading values from user
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the name of the student: ");
      String name = sc.nextLine();
      System.out.println("Enter the age of the student: ");
      int age = sc.nextInt();
      //Calling the constructor that accepts both values
      new Student(name, age).display();
   }
}

Enter the name of the student:
Rohan
Enter the age of the student:
18
Name of the Student: Rohan
Age of the Student: 18
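Since "this" can also refer to constructors, a constructor may delegate to another constructor of the same class with a this(...) call and still invoke instance methods afterwards. The following is only an illustrative sketch; the Person class, its no-argument constructor and the default values are assumptions, not part of the original program:

public class Person {
   private String name;
   private int age;

   //No-argument constructor delegating to the parameterized one
   public Person() {
      this("Unknown", 0);   //constructor call via this(...) must be the first statement
   }

   public Person(String name, int age) {
      this.name = name;     //refer to instance variables on the current object
      this.age = age;
      this.describe();      //call an instance method on the current object
   }

   public void describe() {
      System.out.println("Creating person: " + this.name);
   }

   public static void main(String[] args) {
      new Person();         //prints "Creating person: Unknown"
   }
}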
[ { "code": null, "e": 1274, "s": 1062, "text": "The “this ” keyword in Java is used as a reference to the current object, with in an instance method or a constructor. Using this you can refer the members of a class such as constructors, variables and methods." }, { "code": null, "e": 1410, "s": 1274, "text": "Yes, as mentioned we can call all the members of a class (methods, variables, and constructors) from instance methods or, constructors." }, { "code": null, "e": 1588, "s": 1410, "text": "In the following Java program, the Student class contains two private variables name and age with setter methods and, a parameterized constructor which accepts these two values." }, { "code": null, "e": 1748, "s": 1588, "text": "From the constructor, we are invoking the setName() and setAge() methods using \"this\" keyword by passing the obtained name and age values to them respectively." }, { "code": null, "e": 1964, "s": 1748, "text": "From the main method, we are reading name and, age values from user and calling the constructor by passing them. Then we are displaying the values of the instance variables name and class using the display() method." }, { "code": null, "e": 2817, "s": 1964, "text": "public class Student {\n private String name;\n private int age;\n public Student(String name, int age){\n this.setName(name);\n this.setAge(age);\n }\n public void setName(String name) {\n this.name = name;\n }\n public void setAge(int age) {\n this.age = age;\n }\n public void display(){\n System.out.println(\"Name of the Student: \"+this.name );\n System.out.println(\"Age of the Student: \"+this.age );\n }\n public static void main(String args[]) {\n //Reading values from user\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter the name of the student: \");\n String name = sc.nextLine();\n System.out.println(\"Enter the age of the student: \");\n int age = sc.nextInt();\n //Calling the constructor that accepts both values\n new Student(name, age).display();\n }\n}" }, { "code": null, "e": 2906, "s": 2817, "text": "Rohan\nEnter the age of the student:\n18\nName of the Student: Rohan\nAge of the Student: 18" } ]
Demystifying ‘Confusion Matrix’ Confusion | by SalRite | Towards Data Science
If you are confused about the Confusion Matrix, then I hope this post helps you understand it! Happy Reading.

We will use the UCI Bank Note Authentication Dataset for demystifying the confusion behind the Confusion Matrix. We will predict and evaluate our model, and along the way develop our conceptual understanding. Links for further reading are provided wherever required.

The dataset contains properties of the wavelet-transformed image (400x400 pixels) of a bank note and can be found here. It is recommended that the reader download the dataset and follow along. For reference, you can find the Kaggle Notebook here.

#Skipping the necessary library imports
#Reading the Data File
df = pd.read_csv('../input/BankNote_Authentication.csv')
df.head(5)

#To check if the data is equally balanced between the target classes
df['class'].value_counts()

Splitting the data into a training and a test set: we train the model on the training set and evaluate it on the test set. We skip the validation set here for simplicity and for lack of sufficient data. In general, the data is divided into three sets, Train, Test and Validation; read more here.

#Defining features and target variable
y = df['class'] #target variable we want to predict
X = df.drop(columns = ['class']) #set of required features, in this case all

#Splitting the data into train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

Next, we will make a simple Logistic Regression model for our prediction.

#Predicting using Logistic Regression for Binary classification
from sklearn.linear_model import LogisticRegression
LR = LogisticRegression()
LR.fit(X_train,y_train) #fitting the model
y_pred = LR.predict(X_test) #prediction

Let's plot the most confusing Confusion Matrix? Just kidding, let's have a simple Confusion Matrix (the Scikit-learn documentation was used for the code below).

#Evaluation of Model - Confusion Matrix Plot
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Forged','Authorized'],
                      title='Confusion matrix, without normalization')

#extracting true_positives, false_positives, true_negatives, false_negatives
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("True Negatives: ",tn)
print("False Positives: ",fp)
print("False Negatives: ",fn)
print("True Positives: ",tp)

How Accurate is our Model?

#Accuracy
Accuracy = (tn+tp)*100/(tp+tn+fp+fn)
print("Accuracy {:0.2f}%".format(Accuracy))

Does Accuracy matter?
Not always; it may not be the right measure at times, especially if your target class is not balanced (the data is skewed). Then you may consider additional metrics like Precision, Recall and the F score (a combined metric), but before diving in let's take a step back and understand the terms that form the basis for these.

Some Basic Terms

True Positive: a label which was predicted Positive (in our scenario an Authenticated bank note) and is actually Positive (i.e. belongs to the Positive 'Authorized' class).

True Negative: a label which was predicted Negative (in our scenario a Forged bank note) and is actually Negative (i.e. belongs to the Negative 'Forged' class).

False Positive: a label which was predicted as Positive but is actually Negative; in simple words, a note wrongly predicted as Authentic by our model that is actually Forged. In hypothesis testing it is also known as a Type 1 error, the incorrect rejection of the Null Hypothesis; refer to this to read more about hypothesis testing.

False Negative: a label which was predicted as Negative but is actually Positive (an Authentic note predicted as Forged). It is also known as a Type 2 error, which leads to the failure to reject the Null Hypothesis.

Now let's look at the most common evaluation metrics every Machine Learning practitioner should know!

Precision

It is the 'Exactness', the ability of the model to return only relevant instances. If your use case/problem statement involves minimizing the False Positives, i.e. in the current scenario you don't want Forged notes to be labelled as Authentic by the model, then Precision is something you need.

#Precision
Precision = tp/(tp+fp)
print("Precision {:0.2f}".format(Precision))

Recall

It is the 'Completeness', the ability of the model to identify all relevant instances; it is the True Positive Rate, aka Sensitivity. In the current scenario, if your focus is to have the least False Negatives, i.e. you don't want Authentic notes to be wrongly classified as Forged, then Recall can come to your rescue.

#Recall
Recall = tp/(tp+fn)
print("Recall {:0.2f}".format(Recall))

F1 Measure

The harmonic mean of Precision & Recall, used to indicate a balance between Precision & Recall, giving each equal weightage; it ranges from 0 to 1. The F1 Score reaches its best value at 1 (perfect precision & recall) and its worst at 0; read more here.

#F1 Score
f1 = (2*Precision*Recall)/(Precision + Recall)
print("F1 Score {:0.2f}".format(f1))

F-beta Measure

It is the general form of the F measure. Beta values of 0.5 & 2 are usually used: 0.5 indicates an inclination towards Precision, whereas 2 favors Recall, giving it twice the weightage compared to Precision.

#F-beta score calculation
def fbeta(precision, recall, beta):
    return ((1+pow(beta,2))*precision*recall)/(pow(beta,2)*precision + recall)

f2 = fbeta(Precision, Recall, 2)
f0_5 = fbeta(Precision, Recall, 0.5)
print("F2 {:0.2f}".format(f2))
print("\nF0.5 {:0.2f}".format(f0_5))

Specificity

It is also referred to as the 'True Negative Rate' (the proportion of actual negatives that are correctly identified); i.e. the more True Negatives the data holds, the higher its Specificity.

#Specificity
Specificity = tn/(tn+fp)
print("Specificity {:0.2f}".format(Specificity))
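As a quick sanity check (not part of the original post), the same numbers can be obtained directly from scikit-learn's metrics module instead of computing them by hand from tn, fp, fn and tp. This is only a sketch and assumes the y_test and y_pred variables defined earlier:

#Cross-checking the hand-computed metrics with scikit-learn's built-in functions
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, fbeta_score

print("Accuracy {:0.2f}".format(accuracy_score(y_test, y_pred)))
print("Precision {:0.2f}".format(precision_score(y_test, y_pred)))
print("Recall {:0.2f}".format(recall_score(y_test, y_pred)))
print("F1 Score {:0.2f}".format(f1_score(y_test, y_pred)))
print("F0.5 Score {:0.2f}".format(fbeta_score(y_test, y_pred, beta=0.5)))

These should agree with the values printed above (up to the percentage scaling used for accuracy).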
ROC (Receiver Operating Characteristic curve)

The plot of the 'True Positive Rate' (Sensitivity/Recall) against the 'False Positive Rate' (1-Specificity) at different classification thresholds. The area under the ROC curve (AUC) measures the entire two-dimensional area underneath the curve. It indicates how well a parameter can distinguish between two diagnostic groups and is often used as a measure of the quality of classification models. A random classifier has an area under the curve of 0.5, while the AUC for a perfect classifier is equal to 1.

#ROC
import scikitplot as skplt #to make things easy
y_pred_proba = LR.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred_proba)
plt.show()

Since the problem selected to illustrate the use of the Confusion Matrix and related metrics was simple, you found every value at a high level (98% or above), be it Precision, Recall or Accuracy; usually that will not be the case, and you will need domain knowledge about the data to choose between one metric or another (often a combination of metrics).

For example, if the task is finding spam in your mailbox, high Precision will be of much importance (as you don't want ham to be labelled as spam): it tells us what proportion of the messages we classified as spam actually were spam, i.e. the ratio of true positives (messages classified as spam that are actually spam) to all positives (all messages classified as spam, irrespective of whether that classification was correct).

In fraud detection, on the other hand, you may wish your Recall to be higher, so that you correctly identify the frauds even if you misclassify some non-fraudulent activity as fraud; that won't cause any significant damage.
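If you prefer to stay within scikit-learn, a hedged sketch of the same evaluation is shown below: classification_report summarizes precision, recall and F1 per class, and roc_auc_score gives the area under the ROC curve from the predicted probabilities. It assumes the y_test, y_pred and y_pred_proba variables from the code above and follows the article's labelling of class 1 as 'Authorized':

#Summary of per-class precision/recall/F1 and the ROC AUC using scikit-learn only
from sklearn.metrics import classification_report, roc_auc_score

print(classification_report(y_test, y_pred, target_names=['Forged', 'Authorized']))
print("ROC AUC {:0.3f}".format(roc_auc_score(y_test, y_pred_proba[:, 1])))

Here y_pred_proba[:, 1] is the predicted probability of the positive ('Authorized') class, which is what roc_auc_score expects for a binary problem.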
[ { "code": null, "e": 281, "s": 172, "text": "If you are Confused about Confusion Matrix, then I hope this post may help you understand it! Happy Reading." }, { "code": null, "e": 557, "s": 281, "text": "We will use the UCI Bank Note Authentication Dataset for demystifying the confusion behind Confusion Matrix. We will predict and evaluate our model, and along the way develop our conceptual understanding. Also will be providing the links to further reading wherever required." }, { "code": null, "e": 811, "s": 557, "text": "The Dataset contains properties of the wavelet transformed image of 400x400 pixels of a BankNote, and can be found here. It is recommended for reader to download the dataset and follow along. Further for reference, you can find the Kaggle Notebook here." }, { "code": null, "e": 940, "s": 811, "text": "#Skipping the necessary Libraries import#Reading the Data Filedf = pd.read_csv('../input/BankNote_Authentication.csv')df.head(5)" }, { "code": null, "e": 1035, "s": 940, "text": "#To check if the data is equally balanced between the target classesdf['class'].value_counts()" }, { "code": null, "e": 1359, "s": 1035, "text": "Splitting the Data into Training and Test Set, Train is on which we will be training our model and the evaluation will be performed on the Test set, we are skipping the Validation set here for simplicity and lack of sufficient data. In general the data is divided into three sets Train, Test and Validation, read more here." }, { "code": null, "e": 1660, "s": 1359, "text": "#Defining features and target variabley = df['class'] #target variable we want to predict X = df.drop(columns = ['class']) #set of required features, in this case all#Splitting the data into train and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)" }, { "code": null, "e": 1734, "s": 1660, "text": "Next, we will make a simple Logistic Regression Model for our Prediction." }, { "code": null, "e": 1957, "s": 1734, "text": "#Predicting using Logistic Regression for Binary classification from sklearn.linear_model import LogisticRegressionLR = LogisticRegression()LR.fit(X_train,y_train) #fitting the model y_pred = LR.predict(X_test) #prediction" }, { "code": null, "e": 2109, "s": 1957, "text": "Let’s plot the most confusing Confusion Matrix? Just Kidding, Lets have a simple Confusion Matrix (Scikit-learn documentation used for the below code)." }, { "code": null, "e": 3543, "s": 2109, "text": "#Evaluation of Model - Confusion Matrix Plotdef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): \"\"\" This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. \"\"\" if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print(\"Normalized confusion matrix\") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment=\"center\", color=\"white\" if cm[i, j] > thresh else \"black\") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout()# Compute confusion matrixcnf_matrix = confusion_matrix(y_test, y_pred)np.set_printoptions(precision=2)# Plot non-normalized confusion matrixplt.figure()plot_confusion_matrix(cnf_matrix, classes=['Forged','Authorized'], title='Confusion matrix, without normalization')" }, { "code": null, "e": 3791, "s": 3543, "text": "#extracting true_positives, false_positives, true_negatives, false_negativestn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()print(\"True Negatives: \",tn)print(\"False Positives: \",fp)print(\"False Negatives: \",fn)print(\"True Positives: \",tp)" }, { "code": null, "e": 3818, "s": 3791, "text": "How Accurate is our Model?" }, { "code": null, "e": 3910, "s": 3818, "text": "#AccuracyAccuracy = (tn+tp)*100/(tp+tn+fp+fn) print(\"Accuracy {:0.2f}%:\",.format(Accuracy))" }, { "code": null, "e": 3932, "s": 3910, "text": "Does Accuracy matter?" }, { "code": null, "e": 4243, "s": 3932, "text": "Not always, it may not be the right measure at times, especially if your Target class is not balanced (data is skewed). Then you may consider additional metrics like Precision, Recall, F score (combined metric), but before diving in lets take a step back and understand the terms that form the basis for these." }, { "code": null, "e": 4260, "s": 4243, "text": "Some Basic Terms" }, { "code": null, "e": 4425, "s": 4260, "text": "True Positive — Label which was predicted Positive (in our scenario Authenticated Bank Notes) and is actually Positive (i.e. belong to Positive ‘Authorized’ Class)." }, { "code": null, "e": 4579, "s": 4425, "text": "True Negative — Label which was predicted Negative (in our scenario Forged Bank Notes) and is actually Negative (i.e. belong to Negative ‘Forged’ Class)." }, { "code": null, "e": 4910, "s": 4579, "text": "False Positive — Label which was predicted as Positive, but is actually Negative, or in simple words the Note wrongly predicted as Authentic by our Model, but is actually Forged. In Hypothesis Testing it is also known as Type 1 error or the incorrect rejection of Null Hypothesis, refer this to read more about Hypothesis testing." }, { "code": null, "e": 5125, "s": 4910, "text": "False Negatives — Labels which was predicted as Negative, but is actually Positive (Authentic Note predicted as Forged). It is also known as Type 2 error, which leads to the failure in rejection of Null Hypothesis." }, { "code": null, "e": 5222, "s": 5125, "text": "Now lets look at most common evaluation metrics every Machine Learning Practitioner should know!" }, { "code": null, "e": 5232, "s": 5222, "text": "Precision" }, { "code": null, "e": 5526, "s": 5232, "text": "It is the ‘Exactness’, ability of the model to return only relevant instances. If your use case/problem statement involves minimizing the False Positives, i.e. in current scenario if you don’t want the Forged Notes to be labelled as Authentic by the Model then Precision is something you need." }, { "code": null, "e": 5605, "s": 5526, "text": "#Precision Precision = tp/(tp+fp) print(\"Precision {:0.2f}\".format(Precision))" }, { "code": null, "e": 5612, "s": 5605, "text": "Recall" }, { "code": null, "e": 5910, "s": 5612, "text": "It is the ‘Completeness’, ability of the model to identify all relevant instances, True Positive Rate, aka Sensitivity. 
In the current scenario if your focus is to have the least False Negatives i.e. you don’t Authentic Notes to be wrongly classified as Forged then Recall can come to your rescue." }, { "code": null, "e": 5977, "s": 5910, "text": "#Recall Recall = tp/(tp+fn) print(\"Recall {:0.2f}\".format(Recall))" }, { "code": null, "e": 5988, "s": 5977, "text": "F1 Measure" }, { "code": null, "e": 6232, "s": 5988, "text": "Harmonic mean of Precision & Recall, used to indicate a balance between Precision & Recall providing each equal weightage, it ranges from 0 to 1. F1 Score reaches its best value at 1 (perfect precision & recall) and worst at 0, read more here." }, { "code": null, "e": 6324, "s": 6232, "text": "#F1 Scoref1 = (2*Precision*Recall)/(Precision + Recall)print(\"F1 Score {:0.2f}\".format(f1))" }, { "code": null, "e": 6339, "s": 6324, "text": "F-beta Measure" }, { "code": null, "e": 6545, "s": 6339, "text": "It is the general form of F measure — Beta 0.5 & 2 are usually used as measures, 0.5 indicates the Inclination towards Precision whereas 2 favors Recall giving it twice the weightage compared to precision." }, { "code": null, "e": 6830, "s": 6545, "text": "#F-beta score calculationdef fbeta(precision, recall, beta): return ((1+pow(beta,2))*precision*recall)/(pow(beta,2)*precision + recall) f2 = fbeta(Precision, Recall, 2)f0_5 = fbeta(Precision, Recall, 0.5)print(\"F2 {:0.2f}\".format(f2))print(\"\\nF0.5 {:0.2f}\".format(f0_5))" }, { "code": null, "e": 6842, "s": 6830, "text": "Specificity" }, { "code": null, "e": 7020, "s": 6842, "text": "It is also referred to as ‘True Negative Rate’ (Proportion of actual negatives that are correctly identified), i.e. more True Negatives the data hold the higher its Specificity." }, { "code": null, "e": 7106, "s": 7020, "text": "#Specificity Specificity = tn/(tn+fp)print(\"Specificity {:0.2f}\".format(Specificity))" }, { "code": null, "e": 7152, "s": 7106, "text": "ROC (Receiver Operating Characteristic curve)" }, { "code": null, "e": 7296, "s": 7152, "text": "The plot of ‘True Positive Rate’ (Sensitivity/Recall) against the ‘False Positive Rate’ (1-Specificity) at different classification thresholds." }, { "code": null, "e": 7547, "s": 7296, "text": "The area under the ROC curve (AUC ) measures the entire two-dimensional area underneath the curve. It is a measure of how well a parameter can distinguish between two diagnostic groups. Often used as a measure of quality of the classification models." }, { "code": null, "e": 7653, "s": 7547, "text": "A random classifier has an area under the curve of 0.5, while AUC for a perfect classifier is equal to 1." }, { "code": null, "e": 7804, "s": 7653, "text": "#ROCimport scikitplot as skplt #to make things easyy_pred_proba = LR.predict_proba(X_test)skplt.metrics.plot_roc_curve(y_test, y_pred_proba)plt.show()" }, { "code": null, "e": 8164, "s": 7804, "text": "Since the problem selected to illustrate the use of Confusion Matrix and related Metrics was simple, you found every value on higher level (98% or above) be it Precision, Recall or Accuracy; usually that will not be the case and you will require the domain knowledge about data to choose between the one metric or other (often times a combination of metrics)." } ]
How to include CDN Based Version of jQuery in my HTML file?
You can easily include the jQuery library in your HTML code directly from a Content Delivery Network (CDN). Google and Microsoft provide content delivery for the latest version. You can try to run the following code to learn how to use the Google CDN for jQuery: <html> <head> <title>jQuery CDN</title> <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <script> $(document).ready(function(){ document.write("Tutorialspoint!"); }); </script> </head> <body> <h2>Hello</h2> </body> </html>
[ { "code": null, "e": 1228, "s": 1062, "text": "Easily include jQuery library into your HTML code directly from Content Delivery Network (CDN). Google and Microsoft provide content delivery for the latest version." }, { "code": null, "e": 1309, "s": 1228, "text": "You can try to run the following code to learn how to use Google CDN for jQuery:" }, { "code": null, "e": 1319, "s": 1309, "text": "Live Demo" }, { "code": null, "e": 1672, "s": 1319, "text": "<html>\n <head>\n <title>jQuery CDN</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\"></script>\n \n <script>\n $(document).ready(function(){\n document.write(\"Tutorialspoint!\");\n });\n </script>\n </head>\n \n <body>\n <h2>Hello</h2>\n </body>\n \n</html>" } ]
Nth catalan number | Practice | GeeksforGeeks
Given a number N. The task is to find the Nth catalan number. The first few Catalan numbers for N = 0, 1, 2, 3, ... are 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ... Note: Positions start from 0 as shown above. Example 1: Input: N = 5 Output: 42 Example 2: Input: N = 4 Output: 14 Your Task: Complete findCatalan() function that takes n as an argument and returns the Nth Catalan number. The output is printed by the driver code. Expected Time Complexity: O(N). Expected Auxiliary Space: O(N). Constraints: 1 <= N <= 100 +1 shekharankur44 days ago class Solution { public: //Function to find the nth catalan number. cpp_int factorial(int n){ if(n==0) return 1; return n*factorial(n-1); } cpp_int findCatalan(int n) { //code here return factorial(2*n)/(factorial(n) * factorial(n+1)); } }; +1 vivekjmodiya6 days ago class Solution{ public: //Function to find the nth catalan number. cpp_int findCatalan(int n) { cpp_int ans = 1; for(cpp_int i=1; i<=n; i++){ ans = (ans*(4*i - 2))/(i+1); } return ans; }}; 0 triloki352 weeks ago cpp_int findCatalan(int n) { cpp_int dp[n+1]={0}; dp[0]=1; dp[1]=1; for(int i=2;i<=n;i++) { int start = 0; int end = i-1; while(start<=i-1 && end>=0) { dp[i]+= dp[start]*dp[end]; start++; end--; } } return dp[n]; } 0 makwanajoy20032 weeks ago cpp_int findCatalan(int n) { if (n==0){ return 1; } return (4*n - 2)*findCatalan(n-1)/(n+1); } 0 user_u8gi2 weeks ago class Solution{ public: //Function to find the nth catalan number. cpp_int findCatalan(int n) { //code here cpp_int a[n+1]; a[0]=1;a[1]=1; for(int i=2;i<n+1;i++) { a[i]=a[i-1]*(2*(2*i-1))/(i+1); } return a[n]; }}; +1 rituprasad96542 weeks ago JAVA CODE: Dynamic Programming class Solution { //Function to find the nth catalan number. public static BigInteger findCatalan(int n) { //Your code here BigInteger []dp=new BigInteger[n+1]; Arrays.fill(dp, BigInteger.valueOf(0)); dp[0]=BigInteger.valueOf(1); dp[1]=BigInteger.valueOf(1); for(int i=2;i<=n;i++) { for(int j=0;j<i;j++) { dp[i]=dp[i].add(dp[j].multiply(dp[i-j-1])); } } return dp[n]; } } 0 abhinavashish1999 This comment was deleted. 0 vikasingh3 weeks ago Python3:- class Solution: #Function to find the nth catalan number. def findCatalan(self,n): #return the nth Catalan number. if (n == 0): return 1 lst = [0] * (n+1) lst[0] = 1 for i in range(0, n): x = 0 for j in range(0, n): x = x + lst[j]*lst[i-j] lst[i+1] = x return lst[n] +1 siddhant073 weeks ago cpp_int findCatalan(int n) { cpp_int dp[n + 1]; dp[0] = 1; dp[1] = 1; for(int i = 2; i <= n; i++){ dp[i] = 0; for(int j = 0; j < i; j++){ dp[i] += dp[j]*dp[i - 1 - j]; } } return dp[n]; } +2 vi5hnu1 month ago cpp_int findCatalan(int n) { vector<cpp_int> res(n+1,0); res[0]=1; res[1]=1; for(int i=2;i<n+1;i++){ for(int j=0;j<i;j++){ res[i]+=res[j]*res[i-j-1]; } } return res[n]; } We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial? Login to access your submissions. Problem Contest Reset the IDE using the second button on the top right corner. Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values. Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints. You can access the hints to get an idea about what is expected of you as well as the final solution code. 
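In addition to the user-submitted solutions above, here is a small Python sketch of the O(N)-time approach the problem statement hints at. It relies on the recurrence C(0) = 1, C(i) = C(i-1) * 2 * (2i - 1) / (i + 1); Python's arbitrary-precision integers keep the N <= 100 constraint painless. The driver below is illustrative only (the judge supplies its own).

# Nth Catalan number in O(N) time and O(1) extra space, using exact integer arithmetic
def findCatalan(n):
    c = 1                                   # C(0)
    for i in range(1, n + 1):
        c = c * 2 * (2 * i - 1) // (i + 1)  # C(i) from C(i-1); the division is always exact
    return c

if __name__ == "__main__":
    print(findCatalan(5))   # 42
    print(findCatalan(4))   # 14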
[ { "code": null, "e": 449, "s": 238, "text": "Given a number N. The task is to find the Nth catalan number.\nThe first few Catalan numbers for N = 0, 1, 2, 3, ... are 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ...\nNote: Positions start from 0 as shown above." }, { "code": null, "e": 460, "s": 449, "text": "Example 1:" }, { "code": null, "e": 485, "s": 460, "text": "Input:\nN = 5\nOutput: 42\n" }, { "code": null, "e": 496, "s": 485, "text": "Example 2:" }, { "code": null, "e": 520, "s": 496, "text": "Input:\nN = 4\nOutput: 14" }, { "code": null, "e": 669, "s": 520, "text": "Your Task:\nComplete findCatalan() function that takes n as an argument and returns the Nth Catalan number. The output is printed by the driver code." }, { "code": null, "e": 733, "s": 669, "text": "Expected Time Complexity: O(N).\nExpected Auxiliary Space: O(N)." }, { "code": null, "e": 760, "s": 733, "text": "Constraints:\n1 <= N <= 100" }, { "code": null, "e": 763, "s": 760, "text": "+1" }, { "code": null, "e": 787, "s": 763, "text": "shekharankur44 days ago" }, { "code": null, "e": 1101, "s": 787, "text": "class Solution\n{\n public:\n //Function to find the nth catalan number.\n cpp_int factorial(int n){\n if(n==0)\n return 1;\n return n*factorial(n-1);\n }\n cpp_int findCatalan(int n) \n {\n //code here\n return factorial(2*n)/(factorial(n) * factorial(n+1));\n }\n};" }, { "code": null, "e": 1104, "s": 1101, "text": "+1" }, { "code": null, "e": 1127, "s": 1104, "text": "vivekjmodiya6 days ago" }, { "code": null, "e": 1363, "s": 1129, "text": "class Solution{ public: //Function to find the nth catalan number. cpp_int findCatalan(int n) { cpp_int ans = 1; for(cpp_int i=1; i<=n; i++){ ans = (ans*(4*i - 2))/(i+1); } return ans; }};" }, { "code": null, "e": 1367, "s": 1365, "text": "0" }, { "code": null, "e": 1388, "s": 1367, "text": "triloki352 weeks ago" }, { "code": null, "e": 1743, "s": 1388, "text": "cpp_int findCatalan(int n) { cpp_int dp[n+1]={0}; dp[0]=1; dp[1]=1; for(int i=2;i<=n;i++) { int start = 0; int end = i-1; while(start<=i-1 && end>=0) { dp[i]+= dp[start]*dp[end]; start++; end--; } } return dp[n]; }" }, { "code": null, "e": 1745, "s": 1743, "text": "0" }, { "code": null, "e": 1771, "s": 1745, "text": "makwanajoy20032 weeks ago" }, { "code": null, "e": 1899, "s": 1771, "text": "cpp_int findCatalan(int n) { if (n==0){ return 1; } return (4*n - 2)*findCatalan(n-1)/(n+1); }" }, { "code": null, "e": 1901, "s": 1899, "text": "0" }, { "code": null, "e": 1922, "s": 1901, "text": "user_u8gi2 weeks ago" }, { "code": null, "e": 2199, "s": 1922, "text": "class Solution{ public: //Function to find the nth catalan number. 
cpp_int findCatalan(int n) { //code here cpp_int a[n+1]; a[0]=1;a[1]=1; for(int i=2;i<n+1;i++) { a[i]=a[i-1]*(2*(2*i-1))/(i+1); } return a[n]; }};" }, { "code": null, "e": 2202, "s": 2199, "text": "+1" }, { "code": null, "e": 2228, "s": 2202, "text": "rituprasad96542 weeks ago" }, { "code": null, "e": 2829, "s": 2228, "text": "JAVA CODE: Dynamic Programming\n\nclass Solution\n{\n //Function to find the nth catalan number.\n\n public static BigInteger findCatalan(int n)\n {\n //Your code here\n \n BigInteger []dp=new BigInteger[n+1];\n \n Arrays.fill(dp, BigInteger.valueOf(0));\n dp[0]=BigInteger.valueOf(1);\n dp[1]=BigInteger.valueOf(1);\n \n \n \n for(int i=2;i<=n;i++)\n {\n for(int j=0;j<i;j++)\n { \n dp[i]=dp[i].add(dp[j].multiply(dp[i-j-1]));\n }\n }\n \n return dp[n];\n }\n}" }, { "code": null, "e": 2831, "s": 2829, "text": "0" }, { "code": null, "e": 2849, "s": 2831, "text": "abhinavashish1999" }, { "code": null, "e": 2875, "s": 2849, "text": "This comment was deleted." }, { "code": null, "e": 2877, "s": 2875, "text": "0" }, { "code": null, "e": 2898, "s": 2877, "text": "vikasingh3 weeks ago" }, { "code": null, "e": 2908, "s": 2898, "text": "Python3:-" }, { "code": null, "e": 3304, "s": 2908, "text": "\nclass Solution:\n #Function to find the nth catalan number.\n def findCatalan(self,n):\n #return the nth Catalan number.\n \n if (n == 0):\n return 1\n lst = [0] * (n+1)\n lst[0] = 1\n for i in range(0, n):\n x = 0\n for j in range(0, n):\n x = x + lst[j]*lst[i-j]\n lst[i+1] = x\n return lst[n]" }, { "code": null, "e": 3307, "s": 3304, "text": "+1" }, { "code": null, "e": 3329, "s": 3307, "text": "siddhant073 weeks ago" }, { "code": null, "e": 3626, "s": 3329, "text": "cpp_int findCatalan(int n) \n {\n cpp_int dp[n + 1];\n dp[0] = 1;\n dp[1] = 1;\n for(int i = 2; i <= n; i++){\n dp[i] = 0;\n for(int j = 0; j < i; j++){\n dp[i] += dp[j]*dp[i - 1 - j];\n }\n }\n return dp[n];\n }" }, { "code": null, "e": 3629, "s": 3626, "text": "+2" }, { "code": null, "e": 3647, "s": 3629, "text": "vi5hnu1 month ago" }, { "code": null, "e": 3890, "s": 3647, "text": "cpp_int findCatalan(int n) { vector<cpp_int> res(n+1,0); res[0]=1; res[1]=1; for(int i=2;i<n+1;i++){ for(int j=0;j<i;j++){ res[i]+=res[j]*res[i-j-1]; } } return res[n]; }" }, { "code": null, "e": 4036, "s": 3890, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4072, "s": 4036, "text": " Login to access your submissions. " }, { "code": null, "e": 4082, "s": 4072, "text": "\nProblem\n" }, { "code": null, "e": 4092, "s": 4082, "text": "\nContest\n" }, { "code": null, "e": 4155, "s": 4092, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 4340, "s": 4155, "text": "Avoid using static/global variables in your code as your code is tested \n against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 4624, "s": 4340, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code.\n On submission, your code is tested against multiple test cases consisting of all\n possible corner cases and stress constraints." }, { "code": null, "e": 4770, "s": 4624, "text": "You can access the hints to get an idea about what is expected of you as well as\n the final solution code." }, { "code": null, "e": 4847, "s": 4770, "text": "You can view the solutions submitted by other users from the submission tab." 
}, { "code": null, "e": 4888, "s": 4847, "text": "Make sure you are not using ad-blockers." }, { "code": null, "e": 4916, "s": 4888, "text": "Disable browser extensions." }, { "code": null, "e": 4987, "s": 4916, "text": "We recommend using latest version of your browser for best experience." }, { "code": null, "e": 5174, "s": 4987, "text": "Avoid using static/global variables in coding problems as your code is tested \n against multiple test cases and these tend to retain their previous values." } ]
Python | Image Classification using Keras
13 May, 2022 Image classification is a method to classify images into their respective category classes using approaches such as: Training a small network from scratch Fine-tuning the top layers of the model using VGG16 Let's discuss how to train the model from scratch and classify the data containing cars and planes. Train Data: Train data contains 200 images of each car and plane, i.e. in total there are 400 images in the training dataset. Test Data: Test data contains 50 images of each car and plane, i.e. in total there are 100 images in the test dataset. To download the complete dataset, click here. Prerequisite: Image Classifier using CNN Model Description: Before starting with the model, first prepare the dataset and its arrangement. Look at the following image given below: The dataset folders should be created and arranged in exactly this format before being fed to the model. So now, let's begin with the model: for training the model we don't need a large high-end machine and GPUs; we can work with CPUs as well. Firstly, in the given code include the following libraries: Python3 # Importing all necessary libraries from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D from keras.layers import Activation, Dropout, Flatten, Dense from keras import backend as K img_width, img_height = 224, 224 Every image in the dataset is of the size 224*224. Python3 train_data_dir = 'v_data/train' validation_data_dir = 'v_data/test' nb_train_samples = 400 nb_validation_samples = 100 epochs = 10 batch_size = 16 Here, train_data_dir is the train dataset directory, validation_data_dir is the directory for validation data, nb_train_samples is the total number of train samples, and nb_validation_samples is the total number of validation samples. Checking format of Image: Python3 if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height) else: input_shape = (img_width, img_height, 3) This part checks the data format, i.e. whether the RGB channel comes first or last; whichever it is, the model checks it first and the input shape is fed accordingly.
Python3 model = Sequential() model.add(Conv2D(32, (2, 2), input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) About the following terms used above: Conv2D is the layer that convolves the image into multiple images. Activation is the activation function. MaxPooling2D is used to max pool the value from the given size matrix, and the same is used for the next 2 layers. Flatten is used to flatten the dimensions of the image obtained after convolving it. Dense is used to make this a fully connected model and is the hidden layer. Dropout is used to avoid overfitting on the dataset. The final Dense is the output layer; it contains only one neuron, which decides to which category the image belongs. Compile Function: Python3 model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) The compile function is used here; it involves the use of a loss, an optimizer and metrics. Here the loss function used is binary_crossentropy and the optimizer used is rmsprop. Using DataGenerator: Python3 train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1. / 255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') model.fit_generator( train_generator, steps_per_epoch=nb_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=nb_validation_samples // batch_size) Now the DataGenerator part comes into the picture, in which we have used: ImageDataGenerator, which rescales the image, applies shear in some range, zooms the image and does horizontal flipping of the image; this ImageDataGenerator includes all possible orientations of the image. train_datagen.flow_from_directory is the function used to prepare data from the train_dataset directory; target_size specifies the target size of the image. test_datagen.flow_from_directory is used to prepare test data for the model, and all is similar to the above. fit_generator is used to fit the data into the model made above; the other factors used are: steps_per_epoch tells us the number of times the model will execute for the training data, epochs tells us the number of times the model will be trained in the forward and backward pass, validation_data is used to feed the validation/test data into the model, and validation_steps denotes the number of validation/test samples. Python3 model.save('model_saved.h5') At last, we can also save the model (using model.save() here so that the full model, architecture plus weights, can later be restored with load_model). Model Output: Loading and Prediction Load Model with "load_model" Convert Images to Numpy Arrays for passing into the ML Model Print the predicted output from the model. Python3 from keras.models import load_model from keras.preprocessing.image import load_img import numpy as np model = load_model('model_saved.h5') image = load_img('v_data/test/planes/5.jpg', target_size=(224, 224)) img = np.array(image) img = img / 255.0 img = img.reshape(1,224,224,3) label = model.predict(img) print("Predicted Class (0 - Cars , 1- Planes): ", label[0][0]) Output : Predicted Class (0 - Cars , 1- Planes): 1
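One caveat for newer library versions: in recent Keras/TensorFlow 2.x releases, fit_generator() is deprecated and model.fit() accepts the generators directly. A minimal sketch under that assumption, reusing model, train_generator and validation_generator from above:

# Equivalent training call on newer Keras/TensorFlow versions
history = model.fit(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

# Optional quick check of validation performance after training
loss, acc = model.evaluate(validation_generator, steps=nb_validation_samples // batch_size)
print("Validation accuracy {:0.2f}".format(acc))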
[ { "code": null, "e": 54, "s": 26, "text": "\n13 May, 2022" }, { "code": null, "e": 177, "s": 54, "text": "Image classification is a method to classify way images into their respective category classes using some methods like : " }, { "code": null, "e": 215, "s": 177, "text": "Training a small network from scratch" }, { "code": null, "e": 267, "s": 215, "text": "Fine-tuning the top layers of the model using VGG16" }, { "code": null, "e": 367, "s": 267, "text": "Let’s discuss how to train the model from scratch and classify the data containing cars and planes." }, { "code": null, "e": 498, "s": 367, "text": "Train Data: Train data contains the 200 images of each car and plane, i.e. in total, there are 400 images in the training dataset " }, { "code": null, "e": 670, "s": 498, "text": "Test Data: Test data contains 50 images of each car and plane i.e., includes a total. There are 100 images in the test datasetTo download the complete dataset, click here." }, { "code": null, "e": 711, "s": 670, "text": "Prerequisite: Image Classifier using CNN" }, { "code": null, "e": 852, "s": 711, "text": "Model Description: Before starting with the model, first prepare the dataset and its arrangement. Look at the following image given below: " }, { "code": null, "e": 1136, "s": 852, "text": "For feeding the dataset folders they should be made and provided into this format only. So now, Let’s begins with the model: For training the model we don’t need a large high-end machine and GPU’s, we can work with CPU’s also. Firstly, in given code include the following libraries: " }, { "code": null, "e": 1144, "s": 1136, "text": "Python3" }, { "code": "# Importing all necessary librariesfrom keras.preprocessing.image import ImageDataGeneratorfrom keras.models import Sequentialfrom keras.layers import Conv2D, MaxPooling2Dfrom keras.layers import Activation, Dropout, Flatten, Densefrom keras import backend as K img_width, img_height = 224, 224", "e": 1439, "s": 1144, "text": null }, { "code": null, "e": 1492, "s": 1439, "text": "Every image in the dataset is of the size 224*224. " }, { "code": null, "e": 1500, "s": 1492, "text": "Python3" }, { "code": "train_data_dir = 'v_data/train'validation_data_dir = 'v_data/test'nb_train_samples =400nb_validation_samples = 100epochs = 10batch_size = 16", "e": 1641, "s": 1500, "text": null }, { "code": null, "e": 1876, "s": 1641, "text": "Here, the train_data_dir is the train dataset directory. validation_data_dir is the directory for validation data. nb_train_samples is the total number of train samples. nb_validation_samples is the total number of validation samples." }, { "code": null, "e": 1904, "s": 1876, "text": "Checking format of Image: " }, { "code": null, "e": 1912, "s": 1904, "text": "Python3" }, { "code": "if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height)else: input_shape = (img_width, img_height, 3)", "e": 2051, "s": 1912, "text": null }, { "code": null, "e": 2231, "s": 2051, "text": "This part is to check the data format i.e the RGB channel is coming first or last so, whatever it may be, the model will check first and then input shape will be fed accordingly. 
" }, { "code": null, "e": 2239, "s": 2231, "text": "Python3" }, { "code": "model = Sequential()model.add(Conv2D(32, (2, 2), input_shape=input_shape))model.add(Activation('relu'))model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (2, 2)))model.add(Activation('relu'))model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (2, 2)))model.add(Activation('relu'))model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten())model.add(Dense(64))model.add(Activation('relu'))model.add(Dropout(0.5))model.add(Dense(1))model.add(Activation('sigmoid'))", "e": 2728, "s": 2239, "text": null }, { "code": null, "e": 2767, "s": 2728, "text": "About the following terms used above: " }, { "code": null, "e": 3296, "s": 2767, "text": "Conv2D is the layer to convolve the image into multiple images Activation is the activation function. MaxPooling2D is used to max pool the value from the given size matrix and same is used for the next 2 layers. then, Flatten is used to flatten the dimensions of the image obtained after convolving it. Dense is used to make this a fully connected model and is the hidden layer. Dropout is used to avoid overfitting on the dataset. Dense is the output layer contains only one neuron which decide to which category image belongs." }, { "code": null, "e": 3316, "s": 3296, "text": "Compile Function: " }, { "code": null, "e": 3324, "s": 3316, "text": "Python3" }, { "code": "model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])", "e": 3435, "s": 3324, "text": null }, { "code": null, "e": 3594, "s": 3435, "text": "Compile function is used here that involve the use of loss, optimizers and metrics. Here loss function used is binary_crossentropy, optimizer used is rmsprop." }, { "code": null, "e": 3617, "s": 3594, "text": "Using DataGenerator: " }, { "code": null, "e": 3625, "s": 3617, "text": "Python3" }, { "code": "train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1. / 255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') model.fit_generator( train_generator, steps_per_epoch=nb_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=nb_validation_samples // batch_size)", "e": 4339, "s": 3625, "text": null }, { "code": null, "e": 4417, "s": 4339, "text": "Now, the part of dataGenerator comes into the figure. In which we have used: " }, { "code": null, "e": 5303, "s": 4417, "text": "ImageDataGenerator that rescales the image, applies shear in some range, zooms the image and does horizontal flipping with the image. This ImageDataGenerator includes all possible orientation of the image. train_datagen.flow_from_directory is the function that is used to prepare data from the train_dataset directory Target_size specifies the target size of the image. test_datagen.flow_from_directory is used to prepare test data for the model and all is similar as above. fit_generator is used to fit the data into the model made above, other factors used are steps_per_epochs tells us about the number of times the model will execute for the training data. epochs tells us the number of times model will be trained in forward and backward pass. 
validation_data is used to feed the validation/test data into the model. validation_steps denotes the number of validation/test samples." }, { "code": null, "e": 5311, "s": 5303, "text": "Python3" }, { "code": "model.save_weights('model_saved.h5')", "e": 5348, "s": 5311, "text": null }, { "code": null, "e": 5387, "s": 5348, "text": "At last, we can also save the model. " }, { "code": null, "e": 5402, "s": 5387, "text": "Model Output: " }, { "code": null, "e": 5426, "s": 5402, "text": " Loading and Prediction" }, { "code": null, "e": 5553, "s": 5426, "text": "Load Model with “load_model”Convert Images to Numpy Arrays for passing into ML ModelPrint the predicted output from the model." }, { "code": null, "e": 5582, "s": 5553, "text": "Load Model with “load_model”" }, { "code": null, "e": 5639, "s": 5582, "text": "Convert Images to Numpy Arrays for passing into ML Model" }, { "code": null, "e": 5682, "s": 5639, "text": "Print the predicted output from the model." }, { "code": null, "e": 5690, "s": 5682, "text": "Python3" }, { "code": "from keras.models import load_modelfrom keras.preprocessing.image import load_imgfrom keras.preprocessing.image import img_to_arrayfrom keras.applications.vgg16 import preprocess_inputfrom keras.applications.vgg16 import decode_predictionsfrom keras.applications.vgg16 import VGG16import numpy as np from keras.models import load_model model = load_model('model_saved.h5') image = load_img('v_data/test/planes/5.jpg', target_size=(224, 224))img = np.array(image)img = img / 255.0img = img.reshape(1,224,224,3)label = model.predict(img)print(\"Predicted Class (0 - Cars , 1- Planes): \", label[0][0])", "e": 6288, "s": 5690, "text": null }, { "code": null, "e": 6297, "s": 6288, "text": "Output :" }, { "code": null, "e": 6339, "s": 6297, "text": "Predicted Class (0 – Cars , 1- Planes): 1" }, { "code": null, "e": 6350, "s": 6339, "text": "djross2000" }, { "code": null, "e": 6364, "s": 6350, "text": "shubhayan1998" }, { "code": null, "e": 6379, "s": 6364, "text": "Nitish_Gangwar" }, { "code": null, "e": 6396, "s": 6379, "text": "Machine Learning" }, { "code": null, "e": 6403, "s": 6396, "text": "Python" }, { "code": null, "e": 6420, "s": 6403, "text": "Machine Learning" }, { "code": null, "e": 6518, "s": 6420, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 6541, "s": 6518, "text": "ML | Linear Regression" }, { "code": null, "e": 6564, "s": 6541, "text": "Reinforcement learning" }, { "code": null, "e": 6601, "s": 6564, "text": "Supervised and Unsupervised learning" }, { "code": null, "e": 6641, "s": 6601, "text": "Decision Tree Introduction with example" }, { "code": null, "e": 6665, "s": 6641, "text": "Search Algorithms in AI" }, { "code": null, "e": 6693, "s": 6665, "text": "Read JSON file using Python" }, { "code": null, "e": 6743, "s": 6693, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 6765, "s": 6743, "text": "Python map() function" } ]
Introduction of Parallel Database
17 Mar, 2021 In this article, we will discuss an overview of Parallel Databases, then emphasize their need and advantages, and finally cover the performance measurement factors Speedup and Scale-up with examples. Let's discuss it one by one. Parallel Databases: Nowadays organizations need to handle a huge amount of data with a high transfer rate. For such requirements, the client-server or centralized system is not efficient. With the need to improve the efficiency of the system, the concept of the parallel database comes into the picture. A parallel database system seeks to improve the performance of the system through parallelization. Need: Multiple resources like CPUs and Disks are used in parallel. The operations are performed simultaneously, as opposed to serial processing. A parallel server can allow access to a single database by users on multiple machines. It also performs many parallelized operations like data loading, query processing, building indexes, and evaluating queries. Advantages: Here, we will discuss the advantages of parallel databases. Let's have a look. Performance Improvement – By connecting multiple resources like CPUs and disks in parallel we can significantly increase the performance of the system. High availability – In the parallel database, nodes have less contact with each other, so the failure of one node doesn't cause failure of the entire system. This amounts to significantly higher database availability. Proper resource utilization – Due to parallel execution, the CPU will never be idle. Thus, proper utilization of resources is there. Increased Reliability – When one site fails, the execution can continue with another available site which has a copy of the data, making the system more reliable. Performance Measurement of Databases: Here, we will emphasize the performance measurement factors Speedup and Scale-up. Let's understand them one by one with the help of examples. Speedup – The ability to execute the tasks in less time by increasing the number of resources is called Speedup. Speedup = time original / time parallel Where, time original = time required to execute the task using 1 processor time parallel = time required to execute the task using 'n' processors fig. Ideal Speedup curve Example – fig. A CPU requires 3 minutes to execute a process fig. 'n' CPUs require 1 min to execute the process by dividing it into smaller tasks Scale-up – The ability to maintain the performance of the system when both workload and resources increase proportionally. Scaleup = Volume Parallel / Volume Original Where, Volume Parallel = volume executed in a given amount of time using 'n' processors Volume Original = volume executed in a given amount of time using 1 processor fig. Ideal Scaleup curve Example – 20 users are using a CPU at 100% efficiency.
If we try to add more users, it's not possible for a single processor to handle the additional users. A new processor can be added to serve the users in parallel, and the two processors together can serve twice the user volume in the same time (a scale-up of 2).
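To make the two formulas concrete, here is a tiny illustrative Python sketch; the numbers are taken from the examples above, not from real measurements:

# Speedup: 1 processor needs 3 minutes, 'n' processors need 1 minute for the same task
time_original = 3.0
time_parallel = 1.0
print("Speedup =", time_original / time_parallel)        # 3.0, i.e. three times faster

# Scale-up: 1 processor serves 20 users; 2 processors serve 40 users in the same time
volume_original = 20
volume_parallel = 40
print("Scale-up =", volume_parallel / volume_original)   # 2.0, i.e. double the workload, same response time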
[ { "code": null, "e": 52, "s": 24, "text": "\n17 Mar, 2021" }, { "code": null, "e": 306, "s": 52, "text": "In this article, we will discuss the overview of Parallel Databases and then will emphasize their needs and advantages, and then finally, will cover the performance measurement factor-like Speedup and Scale-up with examples. Let’s discuss it one by one." }, { "code": null, "e": 709, "s": 306, "text": "Parallel Databases :Nowadays organizations need to handle a huge amount of data with a high transfer rate. For such requirements, the client-server or centralized system is not efficient. With the need to improve the efficiency of the system, the concept of the parallel database comes in picture. A parallel database system seeks to improve the performance of the system through parallelizing concept." }, { "code": null, "e": 1069, "s": 709, "text": "Need :Multiple resources like CPUs and Disks are used in parallel. The operations are performed simultaneously, as opposed to serial processing. A parallel server can allow access to a single database by users on multiple machines. It also performs many parallelization operations like data loading, query processing, building indexes, and evaluating queries." }, { "code": null, "e": 1160, "s": 1069, "text": "Advantages :Here, we will discuss the advantages of parallel databases. Let’s have a look." }, { "code": null, "e": 1831, "s": 1160, "text": "Performance Improvement – By connecting multiple resources like CPU and disks in parallel we can significantly increase the performance of the system. High availability – In the parallel database, nodes have less contact with each other, so the failure of one node doesn’t cause for failure of the entire system. This amounts to significantly higher database availability. Proper resource utilization – Due to parallel execution, the CPU will never be ideal. Thus, proper utilization of resources is there. Increase Reliability – When one site fails, the execution can continue with another available site which is having a copy of data. Making the system more reliable." }, { "code": null, "e": 1983, "s": 1831, "text": "Performance Improvement – By connecting multiple resources like CPU and disks in parallel we can significantly increase the performance of the system. " }, { "code": null, "e": 2206, "s": 1983, "text": "High availability – In the parallel database, nodes have less contact with each other, so the failure of one node doesn’t cause for failure of the entire system. This amounts to significantly higher database availability. " }, { "code": null, "e": 2341, "s": 2206, "text": "Proper resource utilization – Due to parallel execution, the CPU will never be ideal. Thus, proper utilization of resources is there. " }, { "code": null, "e": 2505, "s": 2341, "text": "Increase Reliability – When one site fails, the execution can continue with another available site which is having a copy of data. Making the system more reliable." }, { "code": null, "e": 2687, "s": 2505, "text": "Performance Measurement of Databases :Here, we will emphasize the performance measurement factor-like Speedup and Scale-up. Let’s understand it one by one with the help of examples." }, { "code": null, "e": 2799, "s": 2687, "text": "Speedup –The ability to execute the tasks in less time by increasing the number of resources is called Speedup." 
}, { "code": null, "e": 2982, "s": 2799, "text": "Speedup=time original/time parallel\nWhere ,\ntime original = time required to execute the task using 1 processor\ntime parallel = time required to execute the task using 'n' processors" }, { "code": null, "e": 3007, "s": 2982, "text": "fig. Ideal Speedup curve" }, { "code": null, "e": 3017, "s": 3007, "text": "Example –" }, { "code": null, "e": 3068, "s": 3017, "text": "fig. A CPU requires 3 minutes to execute a process" }, { "code": null, "e": 3148, "s": 3068, "text": "fig. ‘n’ CPU requires 1 min to execute a process by dividing into smaller tasks" }, { "code": null, "e": 3270, "s": 3148, "text": "Scale-up –The ability to maintain the performance of the system when both workload and resources increase proportionally." }, { "code": null, "e": 3478, "s": 3270, "text": "Scaleup = Volume Parallel/Volume Original\nWhere ,\nVolume Parallel = volume executed in a given amount of time using 'n' processor\nVolume Original = volume executed in a given amount of time using 1 processor" }, { "code": null, "e": 3503, "s": 3478, "text": "fig. Ideal Scaleup curve" }, { "code": null, "e": 3752, "s": 3503, "text": "Example –20 users are using a CPU at 100% efficiency. If we try to add more users, then it’s not possible for a single processor to handle additional users. A new processor can be added to serve the users parallel. And will provide 200% efficiency." }, { "code": null, "e": 3757, "s": 3752, "text": "DBMS" }, { "code": null, "e": 3761, "s": 3757, "text": "SQL" }, { "code": null, "e": 3766, "s": 3761, "text": "DBMS" }, { "code": null, "e": 3770, "s": 3766, "text": "SQL" } ]
Matplotlib.figure.Figure.set_edgecolor() in Python
03 May, 2020 Matplotlib is a library in Python, and it is a numerical and mathematical extension for the NumPy library. The figure module provides the top-level Artist, the Figure, which contains all the plot elements. This module is used to control the default spacing of the subplots and is the top-level container for all plot elements. The set_edgecolor() method of the figure module of the matplotlib library is used to set the edge color of the Figure rectangle. Syntax: set_edgecolor(self, color) Parameters: This method accepts the following parameter, discussed below: color : This parameter is the color. Returns: This method does not return any value. Below examples illustrate the matplotlib.figure.Figure.set_edgecolor() function in matplotlib.figure: Example 1: # Implementation of matplotlib function import matplotlib.pyplot as plt from matplotlib.figure import Figure from mpl_toolkits.axisartist.axislines import Subplot import numpy as np fig = plt.figure() ax = Subplot(fig, 111) fig.add_subplot(ax) fig.set_edgecolor("green") fig.suptitle("""matplotlib.figure.Figure.set_edgecolor() function Example\n\n""", fontweight ="bold") plt.show() Output: Example 2: # Implementation of matplotlib function import matplotlib.pyplot as plt from matplotlib.figure import Figure import numpy as np fig = plt.figure(figsize =(7, 6)) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) xx = np.arange(0, 2 * np.pi, 0.01) ax.plot(xx, np.sin(xx)) fig.set_edgecolor("red") fig.suptitle("""matplotlib.figure.Figure.set_edgecolor() function Example\n\n""", fontweight ="bold") plt.show() Output:
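A practical note on top of the examples: with default settings the figure patch has a line width of 0, so the edge color set above is often not visible on screen. A minimal sketch that makes the edge show up (the line width value is arbitrary):

# Making the figure edge visible together with set_edgecolor()
import matplotlib.pyplot as plt

fig = plt.figure(figsize =(7, 6))
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.plot([0, 1, 2], [0, 1, 4])

fig.set_edgecolor("red")
fig.patch.set_linewidth(10)   # a non-zero line width is needed for the edge to be drawn

plt.show()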
[ { "code": null, "e": 28, "s": 0, "text": "\n03 May, 2020" }, { "code": null, "e": 339, "s": 28, "text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The figure module provides the top-level Artist, the Figure, which contains all the plot elements. This module is used to control the default spacing of the subplots and top level container for all plot elements." }, { "code": null, "e": 457, "s": 339, "text": "The set_edgecolor() method figure module of matplotlib library is used to set the edge color of the Figure rectangle." }, { "code": null, "e": 492, "s": 457, "text": "Syntax: set_edgecolor(self, color)" }, { "code": null, "e": 574, "s": 492, "text": "Parameters: This method accept the following parameters that are discussed below:" }, { "code": null, "e": 611, "s": 574, "text": "color : This parameter is the color." }, { "code": null, "e": 660, "s": 611, "text": "Returns: This method does not returns any value." }, { "code": null, "e": 762, "s": 660, "text": "Below examples illustrate the matplotlib.figure.Figure.set_edgecolor() function in matplotlib.figure:" }, { "code": null, "e": 773, "s": 762, "text": "Example 1:" }, { "code": "# Implementation of matplotlib function import matplotlib.pyplot as plt from matplotlib.figure import Figurefrom mpl_toolkits.axisartist.axislines import Subplot import numpy as np fig = plt.figure() ax = Subplot(fig, 111) fig.add_subplot(ax) fig.set_edgecolor(\"green\") fig.suptitle(\"\"\"matplotlib.figure.Figure.set_edgecolor()function Example\\n\\n\"\"\", fontweight =\"bold\") plt.show() ", "e": 1176, "s": 773, "text": null }, { "code": null, "e": 1184, "s": 1176, "text": "Output:" }, { "code": null, "e": 1195, "s": 1184, "text": "Example 2:" }, { "code": "# Implementation of matplotlib function import matplotlib.pyplot as plt from matplotlib.figure import Figureimport numpy as np fig = plt.figure(figsize =(7, 6)) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) xx = np.arange(0, 2 * np.pi, 0.01) ax.plot(xx, np.sin(xx)) fig.set_edgecolor(\"red\") fig.suptitle(\"\"\"matplotlib.figure.Figure.set_edgecolor()function Example\\n\\n\"\"\", fontweight =\"bold\") plt.show() ", "e": 1621, "s": 1195, "text": null }, { "code": null, "e": 1629, "s": 1621, "text": "Output:" }, { "code": null, "e": 1653, "s": 1629, "text": "Matplotlib figure-class" }, { "code": null, "e": 1671, "s": 1653, "text": "Python-matplotlib" }, { "code": null, "e": 1678, "s": 1671, "text": "Python" }, { "code": null, "e": 1776, "s": 1678, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1808, "s": 1776, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 1835, "s": 1808, "text": "Python Classes and Objects" }, { "code": null, "e": 1856, "s": 1835, "text": "Python OOPs Concepts" }, { "code": null, "e": 1879, "s": 1856, "text": "Introduction To PYTHON" }, { "code": null, "e": 1910, "s": 1879, "text": "Python | os.path.join() method" }, { "code": null, "e": 1966, "s": 1910, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 2008, "s": 1966, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 2050, "s": 2008, "text": "Check if element exists in list in Python" }, { "code": null, "e": 2089, "s": 2050, "text": "Python | Get unique values from a list" } ]
C++ Installation on MacBook M1 for VS Code
12 Mar, 2021 This article covers C++ (CPP) installation for the latest MacBook M1 processor. It's not that we can't do CPP programming on the latest MacBook; there is Xcode, which is a substitute for other code editors. But still, many developers like to code in Visual Studio Code. So let's start with this installation of CPP on Visual Studio Code. First download VS Code on your device. You can also download the M1-specific Visual Studio Code (i.e. Visual Studio Code - Insiders). After downloading Visual Studio Code or Visual Studio Code Insiders, open it and go to extensions. There is a search tab; just type c++, then click on the first recommendation and install it. Another extension you have to download is Code Runner. During this process, users can come across 2 different types of issues. So let's discuss what they are and how to resolve them. Problem 1: After downloading all extensions, VS Code is not able to work with CPP. Follow the below steps to resolve the issue: Step 1: Open your terminal and run the below command: arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" Step 2: Now, after the completion of the previous command, type: arch -x86_64 brew install mingw-w64 Problem 2: #include<bits/stdc++.h> is not found. If you want to know more about the system header file, click here. Follow the below steps to resolve the issue: Step 1: Open terminal using command+space and type terminal. Step 2: Now move to the below-given path: /usr/local/include Step 3: Now create a bits directory in the present location mkdir bits Step 4: Now move into the bits directory and create a file named stdc++.h. nano stdc++.h Step 5: After creating the file, just copy the code from the GitHub repository and paste that code into the stdc++.h file, and then press {control+x} -> y -> return Now just try to run any CPP code to ensure that you are done with the CPP setup on MacBook M1. C++ #include <bits/stdc++.h> using namespace std; int main(){ int a = 2, b = 4; cout<<a+b<<endl; return 0;} That's it. You have successfully installed CPP on your Mac M1.
[ { "code": null, "e": 54, "s": 26, "text": "\n12 Mar, 2021" }, { "code": null, "e": 326, "s": 54, "text": "This article is written on CPP installation for the latest MacBook M1 processor. It’s not like we can’t do CPP programming in the latest MacBook, there is Xcode which is a substitute for other code editors. But still many of the developers like to code on Visual Studio." }, { "code": null, "e": 394, "s": 326, "text": "So let’s start with this installation of CPP on Visual Studio code." }, { "code": null, "e": 433, "s": 394, "text": "First download VS Code on your device." }, { "code": null, "e": 520, "s": 433, "text": "You can also Download M1 specific Visual Studio Code(i.e Visual Studio code- Insiders)" }, { "code": null, "e": 757, "s": 520, "text": "After downloading Visual Studio Code or Visual Studio Code Insiders open it and go to extensions. There is a search tab, just type c++ then click on 1 recommendation and install it. Another extension you have to download is code runner." }, { "code": null, "e": 885, "s": 757, "text": "During this process, users can come across 2 different types of issues. So let’s discuss what they are and how to resolve them." }, { "code": null, "e": 965, "s": 885, "text": "Problem 1: After downloading all extensions on VS Code not able to work on CPP." }, { "code": null, "e": 1015, "s": 965, "text": "Follow the below steps to resolve the same issue:" }, { "code": null, "e": 1069, "s": 1015, "text": "Step 1: Open your terminal and run the below command:" }, { "code": null, "e": 1182, "s": 1069, "text": "arch -x86_64 /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/ Homebrew/install/master/install.sh)\"" }, { "code": null, "e": 1247, "s": 1182, "text": "Step 2: Now after the completion of the previous command type : " }, { "code": null, "e": 1283, "s": 1247, "text": "arch -x86_64 brew install mingw-w64" }, { "code": null, "e": 1333, "s": 1283, "text": "Problem 2: #include<bits/stdc++.h> is not found ." }, { "code": null, "e": 1439, "s": 1333, "text": "If you want to more about the system header file click here. Follow the below steps to resolve the issue:" }, { "code": null, "e": 1500, "s": 1439, "text": "Step 1: Open terminal using command+space and type terminal." }, { "code": null, "e": 1543, "s": 1500, "text": "Step 2: Now move to the below-given path :" }, { "code": null, "e": 1562, "s": 1543, "text": "/usr/local/include" }, { "code": null, "e": 1620, "s": 1562, "text": "Step 3: Now create bits directory in the present location" }, { "code": null, "e": 1631, "s": 1620, "text": "mkdir bits" }, { "code": null, "e": 1709, "s": 1631, "text": "Step 4: Now move into bits directory and create a file and name it stdc++.h." }, { "code": null, "e": 1723, "s": 1709, "text": "nano stdc++.h" }, { "code": null, "e": 1853, "s": 1723, "text": "Step 5: After creating a file just copy the code from the GitHub repository and paste that code into stdc++.h file and then press" }, { "code": null, "e": 1880, "s": 1853, "text": " {control+x}-> y -> return" }, { "code": null, "e": 1981, "s": 1880, "text": "Now just try to implement any CPP code to ensure that you are done with the CPP setup on MacBook M1." }, { "code": null, "e": 1985, "s": 1981, "text": "C++" }, { "code": "#include <bits/stdc++.h>using namespace std; int main(){ int a= 2, b=4; cout<<a+b<<endl; return 0;}", "e": 2095, "s": 1985, "text": null }, { "code": null, "e": 2160, "s": 2095, "text": "That’s it. You have successfully installed CPP into your Mac M1." 
}, { "code": null, "e": 2164, "s": 2160, "text": "C++" }, { "code": null, "e": 2168, "s": 2164, "text": "CPP" }, { "code": null, "e": 2266, "s": 2168, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2290, "s": 2266, "text": "Sorting a vector in C++" }, { "code": null, "e": 2310, "s": 2290, "text": "Polymorphism in C++" }, { "code": null, "e": 2343, "s": 2310, "text": "Friend class and function in C++" }, { "code": null, "e": 2368, "s": 2343, "text": "std::string class in C++" }, { "code": null, "e": 2412, "s": 2368, "text": "Pair in C++ Standard Template Library (STL)" }, { "code": null, "e": 2457, "s": 2412, "text": "Queue in C++ Standard Template Library (STL)" }, { "code": null, "e": 2505, "s": 2457, "text": "Unordered Sets in C++ Standard Template Library" }, { "code": null, "e": 2549, "s": 2505, "text": "List in C++ Standard Template Library (STL)" }, { "code": null, "e": 2566, "s": 2549, "text": "std::find in C++" } ]
PostgreSQL - WITH Clause
In PostgreSQL, the WITH query provides a way to write auxiliary statements for use in a larger query. It helps in breaking down complicated and large queries into simpler forms, which are easily readable. These statements often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query. The WITH query being CTE query, is particularly useful when subquery is executed multiple times. It is equally helpful in place of temporary tables. It computes the aggregation once and allows us to reference it by its name (may be multiple times) in the queries. The WITH clause must be defined before it is used in the query. The basic syntax of WITH query is as follows − WITH name_for_summary_data AS ( SELECT Statement) SELECT columns FROM name_for_summary_data WHERE conditions <=> ( SELECT column FROM name_for_summary_data) [ORDER BY columns] Where name_for_summary_data is the name given to the WITH clause. The name_for_summary_data can be the same as an existing table name and will take precedence. You can use data-modifying statements (INSERT, UPDATE or DELETE) in WITH. This allows you to perform several different operations in the same query. Consider the table COMPANY having records as follows − testdb# select * from COMPANY; id | name | age | address | salary ----+-------+-----+-----------+-------- 1 | Paul | 32 | California| 20000 2 | Allen | 25 | Texas | 15000 3 | Teddy | 23 | Norway | 20000 4 | Mark | 25 | Rich-Mond | 65000 5 | David | 27 | Texas | 85000 6 | Kim | 22 | South-Hall| 45000 7 | James | 24 | Houston | 10000 (7 rows) Now, let us write a query using the WITH clause to select the records from the above table, as follows − With CTE AS (Select ID , NAME , AGE , ADDRESS , SALARY FROM COMPANY ) Select * From CTE; The above given PostgreSQL statement will produce the following result − id | name | age | address | salary ----+-------+-----+-----------+-------- 1 | Paul | 32 | California| 20000 2 | Allen | 25 | Texas | 15000 3 | Teddy | 23 | Norway | 20000 4 | Mark | 25 | Rich-Mond | 65000 5 | David | 27 | Texas | 85000 6 | Kim | 22 | South-Hall| 45000 7 | James | 24 | Houston | 10000 (7 rows) Now, let us write a query using the RECURSIVE keyword along with the WITH clause, to find the sum of the salaries less than 20000, as follows − WITH RECURSIVE t(n) AS ( VALUES (0) UNION ALL SELECT SALARY FROM COMPANY WHERE SALARY < 20000 ) SELECT sum(n) FROM t; The above given PostgreSQL statement will produce the following result − sum ------- 25000 (1 row) Let us write a query using data modifying statements along with the WITH clause, as shown below. First, create a table COMPANY1 similar to the table COMPANY. The query in the example effectively moves rows from COMPANY to COMPANY1. 
The DELETE in WITH deletes the specified rows from COMPANY, returning their contents by means of its RETURNING clause; and then the primary query reads that output and inserts it into COMPANY1 TABLE − CREATE TABLE COMPANY1( ID INT PRIMARY KEY NOT NULL, NAME TEXT NOT NULL, AGE INT NOT NULL, ADDRESS CHAR(50), SALARY REAL ); WITH moved_rows AS ( DELETE FROM COMPANY WHERE SALARY >= 30000 RETURNING * ) INSERT INTO COMPANY1 (SELECT * FROM moved_rows); The above given PostgreSQL statement will produce the following result − INSERT 0 3 Now, the records in the tables COMPANY and COMPANY1 are as follows − testdb=# SELECT * FROM COMPANY; id | name | age | address | salary ----+-------+-----+------------+-------- 1 | Paul | 32 | California | 20000 2 | Allen | 25 | Texas | 15000 3 | Teddy | 23 | Norway | 20000 7 | James | 24 | Houston | 10000 (4 rows) testdb=# SELECT * FROM COMPANY1; id | name | age | address | salary ----+-------+-----+-------------+-------- 4 | Mark | 25 | Rich-Mond | 65000 5 | David | 27 | Texas | 85000 6 | Kim | 22 | South-Hall | 45000
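A note on the earlier WITH RECURSIVE example (the one summing salaries below 20000): its second branch never references t itself, so no real recursion takes place there; the two branches are simply unioned (0 + 15000 + 10000 = 25000). For comparison, here is a minimal, genuinely recursive CTE. This is only an illustrative sketch and is not tied to the COMPANY data: the recursive term reads from t, so the working table grows row by row (1, 2, 3, 4, 5) until the WHERE condition stops it, and the outer query then returns 15.

WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 5
)
SELECT sum(n) FROM t;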
[ { "code": null, "e": 3314, "s": 2959, "text": "In PostgreSQL, the WITH query provides a way to write auxiliary statements for use in a larger query. It helps in breaking down complicated and large queries into simpler forms, which are easily readable. These statements often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query." }, { "code": null, "e": 3578, "s": 3314, "text": "The WITH query being CTE query, is particularly useful when subquery is executed multiple times. It is equally helpful in place of temporary tables. It computes the aggregation once and allows us to reference it by its name (may be multiple times) in the queries." }, { "code": null, "e": 3642, "s": 3578, "text": "The WITH clause must be defined before it is used in the query." }, { "code": null, "e": 3689, "s": 3642, "text": "The basic syntax of WITH query is as follows −" }, { "code": null, "e": 3899, "s": 3689, "text": "WITH\n name_for_summary_data AS (\n SELECT Statement)\n SELECT columns\n FROM name_for_summary_data\n WHERE conditions <=> (\n SELECT column\n FROM name_for_summary_data)\n [ORDER BY columns]\n" }, { "code": null, "e": 4059, "s": 3899, "text": "Where name_for_summary_data is the name given to the WITH clause. The name_for_summary_data can be the same as an existing table name and will take precedence." }, { "code": null, "e": 4208, "s": 4059, "text": "You can use data-modifying statements (INSERT, UPDATE or DELETE) in WITH. This allows you to perform several different operations in the same query." }, { "code": null, "e": 4263, "s": 4208, "text": "Consider the table COMPANY having records as follows −" }, { "code": null, "e": 4655, "s": 4263, "text": "testdb# select * from COMPANY;\n id | name | age | address | salary\n----+-------+-----+-----------+--------\n 1 | Paul | 32 | California| 20000\n 2 | Allen | 25 | Texas | 15000\n 3 | Teddy | 23 | Norway | 20000\n 4 | Mark | 25 | Rich-Mond | 65000\n 5 | David | 27 | Texas | 85000\n 6 | Kim | 22 | South-Hall| 45000\n 7 | James | 24 | Houston | 10000\n(7 rows)" }, { "code": null, "e": 4760, "s": 4655, "text": "Now, let us write a query using the WITH clause to select the records from the above table, as follows −" }, { "code": null, "e": 4850, "s": 4760, "text": "With CTE AS\n(Select\n ID\n, NAME\n, AGE\n, ADDRESS\n, SALARY\nFROM COMPANY )\nSelect * From CTE;" }, { "code": null, "e": 4923, "s": 4850, "text": "The above given PostgreSQL statement will produce the following result −" }, { "code": null, "e": 5284, "s": 4923, "text": "id | name | age | address | salary\n----+-------+-----+-----------+--------\n 1 | Paul | 32 | California| 20000\n 2 | Allen | 25 | Texas | 15000\n 3 | Teddy | 23 | Norway | 20000\n 4 | Mark | 25 | Rich-Mond | 65000\n 5 | David | 27 | Texas | 85000\n 6 | Kim | 22 | South-Hall| 45000\n 7 | James | 24 | Houston | 10000\n(7 rows)\n" }, { "code": null, "e": 5428, "s": 5284, "text": "Now, let us write a query using the RECURSIVE keyword along with the WITH clause, to find the sum of the salaries less than 20000, as follows −" }, { "code": null, "e": 5555, "s": 5428, "text": "WITH RECURSIVE t(n) AS (\n VALUES (0)\n UNION ALL\n SELECT SALARY FROM COMPANY WHERE SALARY < 20000\n)\nSELECT sum(n) FROM t;" }, { "code": null, "e": 5628, "s": 5555, "text": "The above given PostgreSQL statement will produce the following result −" }, { "code": null, "e": 5658, "s": 5628, "text": " sum\n-------\n 25000\n(1 row)\n" }, { "code": null, "e": 5755, "s": 5658, "text": "Let us 
write a query using data modifying statements along with the WITH clause, as shown below." }, { "code": null, "e": 6091, "s": 5755, "text": "First, create a table COMPANY1 similar to the table COMPANY. The query in the example effectively moves rows from COMPANY to COMPANY1. The DELETE in WITH deletes the specified rows from COMPANY, returning their contents by means of its RETURNING clause; and then the primary query reads that output and inserts it into COMPANY1 TABLE −" }, { "code": null, "e": 6418, "s": 6091, "text": "CREATE TABLE COMPANY1(\n ID INT PRIMARY KEY NOT NULL,\n NAME TEXT NOT NULL,\n AGE INT NOT NULL,\n ADDRESS CHAR(50),\n SALARY REAL\n);\n\nWITH moved_rows AS (\n DELETE FROM COMPANY\n WHERE\n SALARY >= 30000\n RETURNING *\n)\nINSERT INTO COMPANY1 (SELECT * FROM moved_rows);" }, { "code": null, "e": 6491, "s": 6418, "text": "The above given PostgreSQL statement will produce the following result −" }, { "code": null, "e": 6503, "s": 6491, "text": "INSERT 0 3\n" }, { "code": null, "e": 6572, "s": 6503, "text": "Now, the records in the tables COMPANY and COMPANY1 are as follows −" } ]
Remove uppercase, lowercase, special, numeric, and non-numeric characters from a String
03 Nov, 2021 Given string str of length N, the task is to remove uppercase, lowercase, special, numeric, and non-numeric characters from this string and print the string after the simultaneous modifications. Examples: Input: str = “GFGgfg123$%” Output: After removing uppercase characters: gfg123$% After removing lowercase characters: GFG123$% After removing special characters: GFGgfg123 After removing numeric characters: GFGgfg$% After removing non-numeric characters: 123 Input: str = “J@va12” Output: After removing uppercase characters: @va12 After removing lowercase characters: J@12 After removing special characters: Jva12 After removing numeric characters: J@va After removing non-numeric characters: 12 Naive Approach: The simplest approach is to iterate over the string and remove uppercase, lowercase, special, numeric, and non-numeric characters. Below are the steps: 1. Traverse the string character by character from start to end. 2. Check the ASCII value of each character for the following conditions: If the ASCII value lies in the range of [65, 90], then it is an uppercase character. Therefore, skip such characters and add the rest characters in another string and print it. If the ASCII value lies in the range of [97, 122], then it is a lowercase character. Therefore, skip such characters and add the rest characters in another string and print it. If the ASCII value lies in the range of [32, 47], [58, 64], [91, 96], or [123, 126] then it is a special character. Therefore, skip such characters and add the rest characters in another string and print it. If the ASCII value lies in the range of [48, 57], then it is a numeric character. Therefore, skip such characters and add the rest characters in another string and print it. Else the character is a non-numeric character. Therefore, skip such characters and add the rest characters in another string and print it. Time Complexity: O(N)Auxiliary Space: O(1) Regular Expression Approach: The idea is to use regular expressions to solve this problem. Below are the steps: 1. Create regular expressions to remove uppercase, lowercase, special, numeric, and non-numeric characters from the string as mentioned below: regexToRemoveUpperCaseCharacters = “[A-Z]” regexToRemoveLowerCaseCharacters = “[a-z]” regexToRemoveSpecialCharacters = “[^A-Za-z0-9]” regexToRemoveNumericCharacters = “[0-9]” regexToRemoveNon-NumericCharacters = “[^0-9]” 2. Compile the given regular expressions to create the pattern using Pattern.compile() method. 3. Match the given string with all the above Regular Expressions using Pattern.matcher(). 4. Replace every matched pattern with the target string using the Matcher.replaceAll() method. 
Below is the implementation of the above approach: C++ Java Python3 // C++ program to remove uppercase, lowercase// special, numeric, and non-numeric characters#include <iostream>#include <regex>using namespace std; // Function to remove uppercase charactersstring removingUpperCaseCharacters(string str){ // Create a regular expression const regex pattern("[A-Z]"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, "");} // Function to remove lowercase charactersstring removingLowerCaseCharacters(string str){ // Create a regular expression const regex pattern("[a-z]"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, "");} // Function to remove special charactersstring removingSpecialCharacters(string str){ // Create a regular expression const regex pattern("[^A-Za-z0-9]"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, "");} // Function to remove numeric charactersstring removingNumericCharacters(string str){ // Create a regular expression const regex pattern("[0-9]"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, "");} // Function to remove non-numeric charactersstring removingNonNumericCharacters(string str){ // Create a regular expression const regex pattern("[^0-9]"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, "");} int main(){ // Given String str string str = "GFGgfg123$%"; // Print the strings after the simultaneous // modifications cout << "After removing uppercase characters: " << removingUpperCaseCharacters(str) << endl; cout << "After removing lowercase characters: " << removingLowerCaseCharacters(str) << endl; cout << "After removing special characters: " << removingSpecialCharacters(str) << endl; cout << "After removing numeric characters: " << removingNumericCharacters(str) << endl; cout << "After removing non-numeric characters: " << removingNonNumericCharacters(str) << endl; return 0;} // This article is contributed by yuvraj_chandra // Java program to remove uppercase, lowercase// special, numeric, and non-numeric charactersimport java.util.regex.Matcher;import java.util.regex.Pattern;public class GFG{ // Function to remove uppercase characters public static String removingUpperCaseCharacters(String str) { // Create a regular expression String regex = "[A-Z]"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(""); } // Function to remove lowercase characters public static String removingLowerCaseCharacters(String str) { // Create a regular expression String regex = "[a-z]"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(""); } // Function to remove special characters public static String removingSpecialCharacters(String str) { // Create a regular expression String regex = "[^A-Za-z0-9]"; // Compile the regex to create 
pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(""); } // Function to remove numeric characters public static String removingNumericCharacters(String str) { // Create a regular expression String regex = "[0-9]"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(""); } // Function to remove non-numeric characters public static String removingNonNumericCharacters(String str) { // Create a regular expression String regex = "[^0-9]"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(""); } // Driver Code public static void main(String[] args) { // Given String str String str = "GFGgfg123$%"; // Print the strings after the simultaneous // modifications System.out.println( "After removing uppercase characters: " + removingUpperCaseCharacters(str)); System.out.println( "After removing lowercase characters: " + removingLowerCaseCharacters(str)); System.out.println( "After removing special characters: " + removingSpecialCharacters(str)); System.out.println( "After removing numeric characters: " + removingNumericCharacters(str)); System.out.println( "After removing non-numeric characters: " + removingNonNumericCharacters(str)); }} # Python3 program to remove# uppercase, lowercase special,# numeric, and non-numeric charactersimport re # Function to remove# uppercase charactersdef removingUpperCaseCharacters(str): # Create a regular expression regex = "[A-Z]" # Replace every matched pattern # with the target string using # sub() method return (re.sub(regex, "", str)) # Function to remove lowercase# charactersdef removingLowerCaseCharacters(str): # Create a regular expression regex = "[a-z]" # Replace every matched # pattern with the target # string using sub() method return (re.sub(regex, "", str)) def removingSpecialCharacters(str): # Create a regular expression regex = "[^A-Za-z0-9]" # Replace every matched pattern # with the target string using # sub() method return (re.sub(regex, "", str)) def removingNumericCharacters(str): # Create a regular expression regex = "[0-9]" # Replace every matched # pattern with the target # string using sub() method return (re.sub(regex, "", str)) def removingNonNumericCharacters(str): # Create a regular expression regex = "[^0-9]" # Replace every matched pattern # with the target string using # sub() method return (re.sub(regex, "", str)) str = "GFGgfg123$%"print("After removing uppercase characters:", removingUpperCaseCharacters(str))print("After removing lowercase characters:", removingLowerCaseCharacters(str))print("After removing special characters:", removingSpecialCharacters(str))print("After removing numeric characters:", removingNumericCharacters(str))print("After removing non-numeric characters:", removingNonNumericCharacters(str)) # This code is contributed by avanitrachhadiya2155 After removing uppercase characters: gfg123$% After removing lowercase characters: 
GFG123$% After removing special characters: GFGgfg123 After removing numeric characters: GFGgfg$% After removing non-numeric characters: 123 Time Complexity: O(N) Auxiliary Space: O(1) avanitrachhadiya2155 yuvraj_chandra rohitsingh07052 Java-String-Programs regular-expression Java Java Programs Strings
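The implementations above all follow Method 2 (regular expressions). As a rough sketch of Method 1, the naive character-by-character scan described earlier, the helper below keeps every character that does not match the category being removed. The helper name and the use of Character.isUpperCase / isDigit / isLetterOrDigit in place of explicit ASCII-range comparisons are just one possible choice; for plain ASCII input they behave the same as the ranges listed in the steps.

// Sketch of the naive approach (Method 1): walk the string once and
// keep every character the predicate does not mark for removal. O(N) time.
static String removeMatching(String str, java.util.function.IntPredicate drop)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < str.length(); i++) {
        char c = str.charAt(i);
        if (!drop.test(c))
            sb.append(c);
    }
    return sb.toString();
}

// Example usage with the same input as above:
// removeMatching("GFGgfg123$%", Character::isUpperCase)             -> "gfg123$%"
// removeMatching("GFGgfg123$%", Character::isDigit)                 -> "GFGgfg$%"
// removeMatching("GFGgfg123$%", c -> !Character.isLetterOrDigit(c)) -> "GFGgfg123"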
[ { "code": null, "e": 54, "s": 26, "text": "\n03 Nov, 2021" }, { "code": null, "e": 249, "s": 54, "text": "Given string str of length N, the task is to remove uppercase, lowercase, special, numeric, and non-numeric characters from this string and print the string after the simultaneous modifications." }, { "code": null, "e": 259, "s": 249, "text": "Examples:" }, { "code": null, "e": 286, "s": 259, "text": "Input: str = “GFGgfg123$%”" }, { "code": null, "e": 340, "s": 286, "text": "Output: After removing uppercase characters: gfg123$%" }, { "code": null, "e": 399, "s": 340, "text": " After removing lowercase characters: GFG123$%" }, { "code": null, "e": 456, "s": 399, "text": " After removing special characters: GFGgfg123" }, { "code": null, "e": 511, "s": 456, "text": " After removing numeric characters: GFGgfg$%" }, { "code": null, "e": 565, "s": 511, "text": " After removing non-numeric characters: 123" }, { "code": null, "e": 587, "s": 565, "text": "Input: str = “J@va12”" }, { "code": null, "e": 638, "s": 587, "text": "Output: After removing uppercase characters: @va12" }, { "code": null, "e": 693, "s": 638, "text": " After removing lowercase characters: J@12" }, { "code": null, "e": 746, "s": 693, "text": " After removing special characters: Jva12" }, { "code": null, "e": 808, "s": 746, "text": " After removing numeric characters: J@va " }, { "code": null, "e": 860, "s": 808, "text": " After removing non-numeric characters: 12" }, { "code": null, "e": 1028, "s": 860, "text": "Naive Approach: The simplest approach is to iterate over the string and remove uppercase, lowercase, special, numeric, and non-numeric characters. Below are the steps:" }, { "code": null, "e": 1093, "s": 1028, "text": "1. Traverse the string character by character from start to end." }, { "code": null, "e": 1166, "s": 1093, "text": "2. Check the ASCII value of each character for the following conditions:" }, { "code": null, "e": 1343, "s": 1166, "text": "If the ASCII value lies in the range of [65, 90], then it is an uppercase character. Therefore, skip such characters and add the rest characters in another string and print it." }, { "code": null, "e": 1520, "s": 1343, "text": "If the ASCII value lies in the range of [97, 122], then it is a lowercase character. Therefore, skip such characters and add the rest characters in another string and print it." }, { "code": null, "e": 1728, "s": 1520, "text": "If the ASCII value lies in the range of [32, 47], [58, 64], [91, 96], or [123, 126] then it is a special character. Therefore, skip such characters and add the rest characters in another string and print it." }, { "code": null, "e": 1902, "s": 1728, "text": "If the ASCII value lies in the range of [48, 57], then it is a numeric character. Therefore, skip such characters and add the rest characters in another string and print it." }, { "code": null, "e": 2041, "s": 1902, "text": "Else the character is a non-numeric character. Therefore, skip such characters and add the rest characters in another string and print it." }, { "code": null, "e": 2084, "s": 2041, "text": "Time Complexity: O(N)Auxiliary Space: O(1)" }, { "code": null, "e": 2197, "s": 2084, "text": " Regular Expression Approach: The idea is to use regular expressions to solve this problem. Below are the steps:" }, { "code": null, "e": 2340, "s": 2197, "text": "1. 
Create regular expressions to remove uppercase, lowercase, special, numeric, and non-numeric characters from the string as mentioned below:" }, { "code": null, "e": 2384, "s": 2340, "text": "regexToRemoveUpperCaseCharacters = “[A-Z]”" }, { "code": null, "e": 2427, "s": 2384, "text": "regexToRemoveLowerCaseCharacters = “[a-z]”" }, { "code": null, "e": 2475, "s": 2427, "text": "regexToRemoveSpecialCharacters = “[^A-Za-z0-9]”" }, { "code": null, "e": 2516, "s": 2475, "text": "regexToRemoveNumericCharacters = “[0-9]”" }, { "code": null, "e": 2562, "s": 2516, "text": "regexToRemoveNon-NumericCharacters = “[^0-9]”" }, { "code": null, "e": 2657, "s": 2562, "text": "2. Compile the given regular expressions to create the pattern using Pattern.compile() method." }, { "code": null, "e": 2747, "s": 2657, "text": "3. Match the given string with all the above Regular Expressions using Pattern.matcher()." }, { "code": null, "e": 2842, "s": 2747, "text": "4. Replace every matched pattern with the target string using the Matcher.replaceAll() method." }, { "code": null, "e": 2893, "s": 2842, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 2897, "s": 2893, "text": "C++" }, { "code": null, "e": 2902, "s": 2897, "text": "Java" }, { "code": null, "e": 2910, "s": 2902, "text": "Python3" }, { "code": "// C++ program to remove uppercase, lowercase// special, numeric, and non-numeric characters#include <iostream>#include <regex>using namespace std; // Function to remove uppercase charactersstring removingUpperCaseCharacters(string str){ // Create a regular expression const regex pattern(\"[A-Z]\"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, \"\");} // Function to remove lowercase charactersstring removingLowerCaseCharacters(string str){ // Create a regular expression const regex pattern(\"[a-z]\"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, \"\");} // Function to remove special charactersstring removingSpecialCharacters(string str){ // Create a regular expression const regex pattern(\"[^A-Za-z0-9]\"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, \"\");} // Function to remove numeric charactersstring removingNumericCharacters(string str){ // Create a regular expression const regex pattern(\"[0-9]\"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, \"\");} // Function to remove non-numeric charactersstring removingNonNumericCharacters(string str){ // Create a regular expression const regex pattern(\"[^0-9]\"); // Replace every matched pattern with the // target string using regex_replace() method return regex_replace(str, pattern, \"\");} int main(){ // Given String str string str = \"GFGgfg123$%\"; // Print the strings after the simultaneous // modifications cout << \"After removing uppercase characters: \" << removingUpperCaseCharacters(str) << endl; cout << \"After removing lowercase characters: \" << removingLowerCaseCharacters(str) << endl; cout << \"After removing special characters: \" << removingSpecialCharacters(str) << endl; cout << \"After removing numeric characters: \" << removingNumericCharacters(str) << endl; cout << \"After removing non-numeric characters: \" << removingNonNumericCharacters(str) << endl; return 0;} // This article is contributed by 
yuvraj_chandra", "e": 5174, "s": 2910, "text": null }, { "code": "// Java program to remove uppercase, lowercase// special, numeric, and non-numeric charactersimport java.util.regex.Matcher;import java.util.regex.Pattern;public class GFG{ // Function to remove uppercase characters public static String removingUpperCaseCharacters(String str) { // Create a regular expression String regex = \"[A-Z]\"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(\"\"); } // Function to remove lowercase characters public static String removingLowerCaseCharacters(String str) { // Create a regular expression String regex = \"[a-z]\"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(\"\"); } // Function to remove special characters public static String removingSpecialCharacters(String str) { // Create a regular expression String regex = \"[^A-Za-z0-9]\"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(\"\"); } // Function to remove numeric characters public static String removingNumericCharacters(String str) { // Create a regular expression String regex = \"[0-9]\"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(\"\"); } // Function to remove non-numeric characters public static String removingNonNumericCharacters(String str) { // Create a regular expression String regex = \"[^0-9]\"; // Compile the regex to create pattern // using compile() method Pattern pattern = Pattern.compile(regex); // Get a matcher object from pattern Matcher matcher = pattern.matcher(str); // Replace every matched pattern with the // target string using replaceAll() method return matcher.replaceAll(\"\"); } // Driver Code public static void main(String[] args) { // Given String str String str = \"GFGgfg123$%\"; // Print the strings after the simultaneous // modifications System.out.println( \"After removing uppercase characters: \" + removingUpperCaseCharacters(str)); System.out.println( \"After removing lowercase characters: \" + removingLowerCaseCharacters(str)); System.out.println( \"After removing special characters: \" + removingSpecialCharacters(str)); System.out.println( \"After removing numeric characters: \" + removingNumericCharacters(str)); System.out.println( \"After removing non-numeric characters: \" + removingNonNumericCharacters(str)); }}", "e": 8950, "s": 5174, "text": null }, { "code": "# Python3 program to remove# uppercase, lowercase special,# numeric, and non-numeric charactersimport re # Function to remove# uppercase charactersdef removingUpperCaseCharacters(str): # Create a regular expression regex = \"[A-Z]\" # Replace every matched pattern # 
with the target string using # sub() method return (re.sub(regex, \"\", str)) # Function to remove lowercase# charactersdef removingLowerCaseCharacters(str): # Create a regular expression regex = \"[a-z]\" # Replace every matched # pattern with the target # string using sub() method return (re.sub(regex, \"\", str)) def removingSpecialCharacters(str): # Create a regular expression regex = \"[^A-Za-z0-9]\" # Replace every matched pattern # with the target string using # sub() method return (re.sub(regex, \"\", str)) def removingNumericCharacters(str): # Create a regular expression regex = \"[0-9]\" # Replace every matched # pattern with the target # string using sub() method return (re.sub(regex, \"\", str)) def removingNonNumericCharacters(str): # Create a regular expression regex = \"[^0-9]\" # Replace every matched pattern # with the target string using # sub() method return (re.sub(regex, \"\", str)) str = \"GFGgfg123$%\"print(\"After removing uppercase characters:\", removingUpperCaseCharacters(str))print(\"After removing lowercase characters:\", removingLowerCaseCharacters(str))print(\"After removing special characters:\", removingSpecialCharacters(str))print(\"After removing numeric characters:\", removingNumericCharacters(str))print(\"After removing non-numeric characters:\", removingNonNumericCharacters(str)) # This code is contributed by avanitrachhadiya2155", "e": 10707, "s": 8950, "text": null }, { "code": null, "e": 10931, "s": 10707, "text": "After removing uppercase characters: gfg123$%\nAfter removing lowercase characters: GFG123$%\nAfter removing special characters: GFGgfg123\nAfter removing numeric characters: GFGgfg$%\nAfter removing non-numeric characters: 123" }, { "code": null, "e": 10953, "s": 10931, "text": "Time Complexity: O(N)" }, { "code": null, "e": 10975, "s": 10953, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 10996, "s": 10975, "text": "avanitrachhadiya2155" }, { "code": null, "e": 11011, "s": 10996, "text": "yuvraj_chandra" }, { "code": null, "e": 11027, "s": 11011, "text": "rohitsingh07052" }, { "code": null, "e": 11048, "s": 11027, "text": "Java-String-Programs" }, { "code": null, "e": 11067, "s": 11048, "text": "regular-expression" }, { "code": null, "e": 11072, "s": 11067, "text": "Java" }, { "code": null, "e": 11086, "s": 11072, "text": "Java Programs" }, { "code": null, "e": 11094, "s": 11086, "text": "Strings" }, { "code": null, "e": 11102, "s": 11094, "text": "Strings" }, { "code": null, "e": 11107, "s": 11102, "text": "Java" } ]
Max Heap in Java
11 Jun, 2022 A max-heap is a complete binary tree in which the value in each internal node is greater than or equal to the values in the children of that node. Mapping the elements of a heap into an array is trivial: if a node is stored at index k, then its left child is stored at index 2k + 1 and its right child at index 2k + 2. Illustration: Max Heap How is a Max Heap represented? A Max Heap is a complete binary tree and is typically represented as an array. The root element will be at Arr[0]. The table below shows the indexes of the related nodes for the ith node, i.e., Arr[i]: Arr[(i-1)/2] Returns the parent node. Arr[(2*i)+1] Returns the left child node. Arr[(2*i)+2] Returns the right child node. Operations on a Max Heap are as follows: getMax(): It returns the root element of the Max Heap. The Time Complexity of this operation is O(1). extractMax(): Removes the maximum element from the Max Heap. The Time Complexity of this operation is O(Log n), as it needs to maintain the heap property by calling the maxHeapify() method after removing the root. insert(): Inserting a new key takes O(Log n) time. We add the new key at the end of the tree. If the new key is smaller than its parent, then we don’t need to do anything. Otherwise, we need to traverse up to fix the violated heap property. Note: In the below implementation, indexing starts from index 0, so the root is stored at Heap[0]. Methods: There are 2 methods by which we can achieve the goal, as listed: Basic approach by creating a maxHeapify() method Using the Collections.reverseOrder() method via library functions Method 1: Basic approach by creating a maxHeapify() method We will be creating a method assuming that the left and right subtrees are already heapified; we only need to fix the root.
Example Java // Java program to implement Max Heap // Main classpublic class MaxHeap { private int[] Heap; private int size; private int maxsize; // Constructor to initialize an // empty max heap with given maximum // capacity public MaxHeap(int maxsize) { // This keyword refers to current instance itself this.maxsize = maxsize; this.size = 0; Heap = new int[this.maxsize]; } // Method 1 // Returning position of parent private int parent(int pos) { return (pos - 1) / 2; } // Method 2 // Returning left children private int leftChild(int pos) { return (2 * pos) + 1; } // Method 3 // Returning right children private int rightChild(int pos){ return (2 * pos) + 2; } // Method 4 // Returning true of given node is leaf private boolean isLeaf(int pos) { if (pos > (size / 2) && pos <= size) { return true; } return false; } // Method 5 // Swapping nodes private void swap(int fpos, int spos) { int tmp; tmp = Heap[fpos]; Heap[fpos] = Heap[spos]; Heap[spos] = tmp; } // Method 6 // Recursive function to max heapify given subtree private void maxHeapify(int pos) { if (isLeaf(pos)) return; if (Heap[pos] < Heap[leftChild(pos)] || Heap[pos] < Heap[rightChild(pos)]) { if (Heap[leftChild(pos)] > Heap[rightChild(pos)]) { swap(pos, leftChild(pos)); maxHeapify(leftChild(pos)); } else { swap(pos, rightChild(pos)); maxHeapify(rightChild(pos)); } } } // Method 7 // Inserts a new element to max heap public void insert(int element) { Heap[size] = element; // Traverse up and fix violated property int current = size; while (Heap[current] > Heap[parent(current)]) { swap(current, parent(current)); current = parent(current); } size++; } // Method 8 // To display heap public void print() { for(int i=0;i<size/2;i++){ System.out.print("Parent Node : " + Heap[i] ); if(leftChild(i)<size) //if the child is out of the bound of the array System.out.print( " Left Child Node: " + Heap[leftChild(i)]); if(rightChild(i)<size) //if the right child index must not be out of the index of the array System.out.print(" Right Child Node: "+ Heap[rightChild(i)]); System.out.println(); //for new line } } // Method 9 // Remove an element from max heap public int extractMax() { int popped = Heap[0]; Heap[0] = Heap[--size]; maxHeapify(0); return popped; } // Method 10 // main dri er method public static void main(String[] arg) { // Display message for better readability System.out.println("The Max Heap is "); MaxHeap maxHeap = new MaxHeap(15); // Inserting nodes // Custom inputs maxHeap.insert(5); maxHeap.insert(3); maxHeap.insert(17); maxHeap.insert(10); maxHeap.insert(84); maxHeap.insert(19); maxHeap.insert(6); maxHeap.insert(22); maxHeap.insert(9); // Calling maxHeap() as defined above maxHeap.print(); // Print and display the maximum value in heap System.out.println("The max val is " + maxHeap.extractMax()); }} The Max Heap is Parent Node : 84 Left Child Node: 22 Right Child Node: 19 Parent Node : 22 Left Child Node: 17 Right Child Node: 10 Parent Node : 19 Left Child Node: 5 Right Child Node: 6 Parent Node : 17 Left Child Node: 3 Right Child Node: 9 The max val is 84 Method 2: Using Collections.reverseOrder() method via library Functions We use PriorityQueue class to implement Heaps in Java. By default Min Heap is implemented by this class. To implement Max Heap, we use Collections.reverseOrder() method. 
Example Java // Java program to demonstrate working// of PriorityQueue as a Max Heap// Using Collections.reverseOrder() method // Importing all utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String args[]) { // Creating empty priority queue PriorityQueue<Integer> pQueue = new PriorityQueue<Integer>( Collections.reverseOrder()); // Adding items to our priority queue // using add() method pQueue.add(10); pQueue.add(30); pQueue.add(20); pQueue.add(400); // Printing the most priority element System.out.println("Head value using peek function:" + pQueue.peek()); // Printing all elements System.out.println("The queue elements:"); Iterator itr = pQueue.iterator(); while (itr.hasNext()) System.out.println(itr.next()); // Removing the top priority element (or head) and // printing the modified pQueue using poll() pQueue.poll(); System.out.println("After removing an element " + "with poll function:"); Iterator<Integer> itr2 = pQueue.iterator(); while (itr2.hasNext()) System.out.println(itr2.next()); // Removing 30 using remove() method pQueue.remove(30); System.out.println("after removing 30 with" + " remove function:"); Iterator<Integer> itr3 = pQueue.iterator(); while (itr3.hasNext()) System.out.println(itr3.next()); // Check if an element is present using contains() boolean b = pQueue.contains(20); System.out.println("Priority queue contains 20 " + "or not?: " + b); // Getting objects from the queue using toArray() // in an array and print the array Object[] arr = pQueue.toArray(); System.out.println("Value in array: "); for (int i = 0; i < arr.length; i++) System.out.println("Value: " + arr[i].toString()); }} Head value using peek function:400 The queue elements: 400 30 20 10 After removing an element with poll function: 30 10 20 after removing 30 with remove function: 20 10 Priority queue contains 20 or not?: true Value in array: Value: 20 Value: 10 gowtham_yuvaraj ishanov26 hardik20507 arorakashish0911 prachisoda1234 sayanbabudutta dengresamarth113 harendrakumar123 himanshugarg4 java-priority-queue Java-Queue-Programs Picked priority-queue Heap Java Java Heap priority-queue Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Introduction to Data Structures Building Heap from Array Priority Queue in Python Time Complexity of building a heap Overview of Data Structures | Set 2 (Binary Tree, BST, Heap and Hash) Arrays in Java Split() String method in Java with examples Object Oriented Programming (OOPs) Concept in Java Reverse a string in Java For-each loop in Java
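One detail worth illustrating separately: iterating a PriorityQueue, as done with the iterators above, is not guaranteed to visit elements in priority order; only peek() and poll() respect the ordering. Below is a minimal sketch of using the reverse-ordered queue purely as a max heap, draining it so the elements come out largest first.

import java.util.Collections;
import java.util.PriorityQueue;

public class MaxHeapDrain {
    public static void main(String[] args)
    {
        // Collections.reverseOrder() flips the natural ordering, turning
        // the default min-heap PriorityQueue into a max heap
        PriorityQueue<Integer> maxHeap
            = new PriorityQueue<>(Collections.reverseOrder());
        Collections.addAll(maxHeap, 10, 30, 20, 400);

        // poll() always removes the current maximum
        while (!maxHeap.isEmpty())
            System.out.print(maxHeap.poll() + " "); // 400 30 20 10
    }
}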
[ { "code": null, "e": 54, "s": 26, "text": "\n11 Jun, 2022" }, { "code": null, "e": 373, "s": 54, "text": "A max-heap is a complete binary tree in which the value in each internal node is greater than or equal to the values in the children of that node. Mapping the elements of a heap into an array is trivial: if a node is stored an index k, then its left child is stored at index 2k + 1 and its right child at index 2k + 2." }, { "code": null, "e": 397, "s": 373, "text": "Illustration: Max Heap " }, { "code": null, "e": 427, "s": 397, "text": "How is Max Heap represented? " }, { "code": null, "e": 624, "s": 427, "text": "A-Max Heap is a Complete Binary Tree. A-Max heap is typically represented as an array. The root element will be at Arr[0]. Below table shows indexes of other nodes for the ith node, i.e., Arr[i]: " }, { "code": null, "e": 747, "s": 624, "text": "Arr[(i-1)/2] Returns the parent node. Arr[(2*i)+1] Returns the left child node. Arr[(2*i)+2] Returns the right child node." }, { "code": null, "e": 786, "s": 747, "text": "Operations on Max Heap are as follows:" }, { "code": null, "e": 884, "s": 786, "text": "getMax(): It returns the root element of Max Heap. The Time Complexity of this operation is O(1)." }, { "code": null, "e": 1101, "s": 884, "text": "extractMax(): Removes the maximum element from MaxHeap. The Time Complexity of this Operation is O(Log n) as this operation needs to maintain the heap property by calling the heapify() method after removing the root." }, { "code": null, "e": 1341, "s": 1101, "text": " insert(): Inserting a new key takes O(Log n) time. We add a new key at the end of the tree. If the new key is smaller than its parent, then we don’t need to do anything. Otherwise, we need to traverse up to fix the violated heap property." }, { "code": null, "e": 1437, "s": 1341, "text": "Note: In the below implementation, we do indexing from index 1 to simplify the implementation. " }, { "code": null, "e": 1447, "s": 1437, "text": "Methods: " }, { "code": null, "e": 1512, "s": 1447, "text": "There are 2 methods by which we can achieve the goal as listed: " }, { "code": null, "e": 1620, "s": 1512, "text": "Basic approach by creating maxHeapify() methodUsing Collections.reverseOrder() method via library Functions" }, { "code": null, "e": 1667, "s": 1620, "text": "Basic approach by creating maxHeapify() method" }, { "code": null, "e": 1729, "s": 1667, "text": "Using Collections.reverseOrder() method via library Functions" }, { "code": null, "e": 1786, "s": 1729, "text": "Method 1: Basic approach by creating maxHeapify() method" }, { "code": null, "e": 1910, "s": 1786, "text": "We will be creating a method assuming that the left and right subtrees are already heapified, we only need to fix the root." 
}, { "code": null, "e": 1918, "s": 1910, "text": "Example" }, { "code": null, "e": 1923, "s": 1918, "text": "Java" }, { "code": "// Java program to implement Max Heap // Main classpublic class MaxHeap { private int[] Heap; private int size; private int maxsize; // Constructor to initialize an // empty max heap with given maximum // capacity public MaxHeap(int maxsize) { // This keyword refers to current instance itself this.maxsize = maxsize; this.size = 0; Heap = new int[this.maxsize]; } // Method 1 // Returning position of parent private int parent(int pos) { return (pos - 1) / 2; } // Method 2 // Returning left children private int leftChild(int pos) { return (2 * pos) + 1; } // Method 3 // Returning right children private int rightChild(int pos){ return (2 * pos) + 2; } // Method 4 // Returning true of given node is leaf private boolean isLeaf(int pos) { if (pos > (size / 2) && pos <= size) { return true; } return false; } // Method 5 // Swapping nodes private void swap(int fpos, int spos) { int tmp; tmp = Heap[fpos]; Heap[fpos] = Heap[spos]; Heap[spos] = tmp; } // Method 6 // Recursive function to max heapify given subtree private void maxHeapify(int pos) { if (isLeaf(pos)) return; if (Heap[pos] < Heap[leftChild(pos)] || Heap[pos] < Heap[rightChild(pos)]) { if (Heap[leftChild(pos)] > Heap[rightChild(pos)]) { swap(pos, leftChild(pos)); maxHeapify(leftChild(pos)); } else { swap(pos, rightChild(pos)); maxHeapify(rightChild(pos)); } } } // Method 7 // Inserts a new element to max heap public void insert(int element) { Heap[size] = element; // Traverse up and fix violated property int current = size; while (Heap[current] > Heap[parent(current)]) { swap(current, parent(current)); current = parent(current); } size++; } // Method 8 // To display heap public void print() { for(int i=0;i<size/2;i++){ System.out.print(\"Parent Node : \" + Heap[i] ); if(leftChild(i)<size) //if the child is out of the bound of the array System.out.print( \" Left Child Node: \" + Heap[leftChild(i)]); if(rightChild(i)<size) //if the right child index must not be out of the index of the array System.out.print(\" Right Child Node: \"+ Heap[rightChild(i)]); System.out.println(); //for new line } } // Method 9 // Remove an element from max heap public int extractMax() { int popped = Heap[0]; Heap[0] = Heap[--size]; maxHeapify(0); return popped; } // Method 10 // main dri er method public static void main(String[] arg) { // Display message for better readability System.out.println(\"The Max Heap is \"); MaxHeap maxHeap = new MaxHeap(15); // Inserting nodes // Custom inputs maxHeap.insert(5); maxHeap.insert(3); maxHeap.insert(17); maxHeap.insert(10); maxHeap.insert(84); maxHeap.insert(19); maxHeap.insert(6); maxHeap.insert(22); maxHeap.insert(9); // Calling maxHeap() as defined above maxHeap.print(); // Print and display the maximum value in heap System.out.println(\"The max val is \" + maxHeap.extractMax()); }}", "e": 5548, "s": 1923, "text": null }, { "code": null, "e": 5812, "s": 5548, "text": "The Max Heap is \nParent Node : 84 Left Child Node: 22 Right Child Node: 19\nParent Node : 22 Left Child Node: 17 Right Child Node: 10\nParent Node : 19 Left Child Node: 5 Right Child Node: 6\nParent Node : 17 Left Child Node: 3 Right Child Node: 9\nThe max val is 84\n" }, { "code": null, "e": 5885, "s": 5812, "text": "Method 2: Using Collections.reverseOrder() method via library Functions " }, { "code": null, "e": 6056, "s": 5885, "text": "We use PriorityQueue class to implement Heaps in Java. 
By default Min Heap is implemented by this class. To implement Max Heap, we use Collections.reverseOrder() method. " }, { "code": null, "e": 6065, "s": 6056, "text": "Example " }, { "code": null, "e": 6070, "s": 6065, "text": "Java" }, { "code": "// Java program to demonstrate working// of PriorityQueue as a Max Heap// Using Collections.reverseOrder() method // Importing all utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String args[]) { // Creating empty priority queue PriorityQueue<Integer> pQueue = new PriorityQueue<Integer>( Collections.reverseOrder()); // Adding items to our priority queue // using add() method pQueue.add(10); pQueue.add(30); pQueue.add(20); pQueue.add(400); // Printing the most priority element System.out.println(\"Head value using peek function:\" + pQueue.peek()); // Printing all elements System.out.println(\"The queue elements:\"); Iterator itr = pQueue.iterator(); while (itr.hasNext()) System.out.println(itr.next()); // Removing the top priority element (or head) and // printing the modified pQueue using poll() pQueue.poll(); System.out.println(\"After removing an element \" + \"with poll function:\"); Iterator<Integer> itr2 = pQueue.iterator(); while (itr2.hasNext()) System.out.println(itr2.next()); // Removing 30 using remove() method pQueue.remove(30); System.out.println(\"after removing 30 with\" + \" remove function:\"); Iterator<Integer> itr3 = pQueue.iterator(); while (itr3.hasNext()) System.out.println(itr3.next()); // Check if an element is present using contains() boolean b = pQueue.contains(20); System.out.println(\"Priority queue contains 20 \" + \"or not?: \" + b); // Getting objects from the queue using toArray() // in an array and print the array Object[] arr = pQueue.toArray(); System.out.println(\"Value in array: \"); for (int i = 0; i < arr.length; i++) System.out.println(\"Value: \" + arr[i].toString()); }}", "e": 8205, "s": 6070, "text": null }, { "code": null, "e": 8453, "s": 8205, "text": "Head value using peek function:400\nThe queue elements:\n400\n30\n20\n10\nAfter removing an element with poll function:\n30\n10\n20\nafter removing 30 with remove function:\n20\n10\nPriority queue contains 20 or not?: true\nValue in array: \nValue: 20\nValue: 10\n" }, { "code": null, "e": 8469, "s": 8453, "text": "gowtham_yuvaraj" }, { "code": null, "e": 8479, "s": 8469, "text": "ishanov26" }, { "code": null, "e": 8491, "s": 8479, "text": "hardik20507" }, { "code": null, "e": 8508, "s": 8491, "text": "arorakashish0911" }, { "code": null, "e": 8523, "s": 8508, "text": "prachisoda1234" }, { "code": null, "e": 8538, "s": 8523, "text": "sayanbabudutta" }, { "code": null, "e": 8555, "s": 8538, "text": "dengresamarth113" }, { "code": null, "e": 8572, "s": 8555, "text": "harendrakumar123" }, { "code": null, "e": 8586, "s": 8572, "text": "himanshugarg4" }, { "code": null, "e": 8606, "s": 8586, "text": "java-priority-queue" }, { "code": null, "e": 8626, "s": 8606, "text": "Java-Queue-Programs" }, { "code": null, "e": 8633, "s": 8626, "text": "Picked" }, { "code": null, "e": 8648, "s": 8633, "text": "priority-queue" }, { "code": null, "e": 8653, "s": 8648, "text": "Heap" }, { "code": null, "e": 8658, "s": 8653, "text": "Java" }, { "code": null, "e": 8663, "s": 8658, "text": "Java" }, { "code": null, "e": 8668, "s": 8663, "text": "Heap" }, { "code": null, "e": 8683, "s": 8668, "text": "priority-queue" }, { "code": null, "e": 8781, "s": 8683, "text": "Writing code in comment?\nPlease use 
ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 8813, "s": 8781, "text": "Introduction to Data Structures" }, { "code": null, "e": 8838, "s": 8813, "text": "Building Heap from Array" }, { "code": null, "e": 8863, "s": 8838, "text": "Priority Queue in Python" }, { "code": null, "e": 8898, "s": 8863, "text": "Time Complexity of building a heap" }, { "code": null, "e": 8968, "s": 8898, "text": "Overview of Data Structures | Set 2 (Binary Tree, BST, Heap and Hash)" }, { "code": null, "e": 8983, "s": 8968, "text": "Arrays in Java" }, { "code": null, "e": 9027, "s": 8983, "text": "Split() String method in Java with examples" }, { "code": null, "e": 9078, "s": 9027, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 9103, "s": 9078, "text": "Reverse a string in Java" } ]
JP Morgan Chase Interview Experience (Python Developer)
30 Jul, 2021 Round 1 (Technical Round): I started with my introduction. A few questions were asked on the skills that I mentioned in the introduction. As I had mentioned Celery in my resume, the interviewer asked about its use cases, their implementation, and the broker used while implementing Celery. The difference between List and Set in Python, along with examples. The difference between generators and iterators, along with examples. Coding Question: Given an infinitely long sorted array consisting of 0s and 1s, find the index of the 1st occurrence of 1. Discussion about past projects, the AWS services being used in those projects, and the use case of each of the services (primarily AWS Lambda, EC2, S3, ECS, RDS, RS, CloudWatch). Some discussion about project management and team management. Round 2 (Technical Round): Again, I started with an introduction and a discussion on all previous projects. The interviewer asked me to explain the use case and implementation of the most complex problem that I had solved while working on past projects. I explained some scenarios of such instances along with use cases and implementation. There were some counter questions and discussions. The interviewer was not completely convinced. Coding Questions: 1. Given a sorted array and a number N, write a program to find the pairs of numbers whose sum is equal to N. I explained two approaches. He was convinced. He gave a couple of hints when I was stuck while implementing those approaches. 2. The difference between List and Tuples, along with examples. Various Python basics questions on decorators. A question on the immutability of tuples: suppose a tuple x = ([1,2,3], "str", 5) is given. Can we modify the element of the list at index 0 of this tuple? I explained that yes, we can, along with the whole workaround using the concept of references in Python. He asked me about list slicing and list comprehension, along with an example. 3. Write code that prints the following pattern using list comprehension. Pattern = [1, 1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64] I wrote the logic but made a terrible mistake: I reversed the order of the loops while implementing the nested list comprehension. The interviewer tried to give a hint on the order of the loops, but I couldn’t catch it. The correct one is: Python3
# num, num^2, num^3 ... repeated for (num+1), then (num+2), and so on
lst = [base**power for base in range(1, 5) for power in range(1, 4)]
print(lst)
He asked about my overall working experience and the reason for the switch. He also asked me if I had any questions for him before the wrap-up of this interview. I asked about his journey and roles at JPMC. JP Morgan Marketing Interview Experiences
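For reference, rough sketches of the two coding questions mentioned above. These are illustrative solutions written afterwards, not the exact code discussed in the interview; the function names, the choice of exponential search plus binary search for the first question, and the two-pointer technique for the second are assumptions.

# Q1: index of the first 1 in an (effectively) infinite sorted array of 0s and 1s.
# Double an upper bound until it lands on a 1, then binary search below it.
def first_one(arr):
    hi = 1
    while arr[hi] == 0:          # exponential search for an upper bound
        hi *= 2
    lo = hi // 2
    while lo < hi:               # standard binary search in [lo, hi]
        mid = (lo + hi) // 2
        if arr[mid] == 1:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Q2: pairs in a sorted array whose sum equals n, using two pointers, O(len(arr)).
def pairs_with_sum(arr, n):
    i, j, pairs = 0, len(arr) - 1, []
    while i < j:
        s = arr[i] + arr[j]
        if s == n:
            pairs.append((arr[i], arr[j]))
            i, j = i + 1, j - 1
        elif s < n:
            i += 1
        else:
            j -= 1
    return pairs

# pairs_with_sum([1, 2, 3, 4, 6], 7) -> [(1, 6), (3, 4)]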
[ { "code": null, "e": 28, "s": 0, "text": "\n30 Jul, 2021" }, { "code": null, "e": 164, "s": 28, "text": "Round 1 (Technical Round): I started with the introduction. Few questions asked on the skills that were mentioned in the introduction. " }, { "code": null, "e": 316, "s": 164, "text": "As I had mentioned the celery in my resume, Interviewer asked about the use cases, their implementation, and the broker used while implementing celery." }, { "code": null, "e": 379, "s": 316, "text": "Difference between List and Set in python along with examples." }, { "code": null, "e": 444, "s": 379, "text": "Difference between generators and iterators along with examples." }, { "code": null, "e": 565, "s": 444, "text": "Coding Question: Given an infinite long sorted array consisting of 0s and 1s. Find the index of the 1st occurrence of 1." }, { "code": null, "e": 740, "s": 565, "text": "Discussion about past projects, AWS Services being used in those projects, and the use case of each of the services (primarily AWS Lambda, EC2, S3, ECS, RDS, RS, Cloudwatch)." }, { "code": null, "e": 802, "s": 740, "text": "Some discussion about project management and team management." }, { "code": null, "e": 1237, "s": 802, "text": "Round 2 (Technical Round): Again, I started with an introduction and discussion on all previous projects. The interviewer asked me to explain the use case and implementation of the most complex problem that I had solved while working on past projects. I explained some scenarios of such instances along with use cases and implementation. There were some counter questions and discussions. The interviewer was not completely convinced." }, { "code": null, "e": 1256, "s": 1237, "text": "Coding Questions: " }, { "code": null, "e": 1365, "s": 1256, "text": "1. Given a sorted array and number N, write a program to find the pairs of numbers whose sum is equal to N. " }, { "code": null, "e": 1489, "s": 1365, "text": "I explained two approaches. He was convinced. He gave a couple of hints while I was stuck in implementing those approaches." }, { "code": null, "e": 1548, "s": 1489, "text": "2. Difference between List and Tuples along with examples." }, { "code": null, "e": 1750, "s": 1548, "text": "Various python basics questions on decorators. Question on the immutability of tuples like suppose a tuple x = ([1,2,3], “str”, 5) is given. Can we modify the element of list on index 0 of this tuple.?" }, { "code": null, "e": 1846, "s": 1750, "text": "I explained Yes along with the whole workaround with the concept of reference memory in python." }, { "code": null, "e": 1932, "s": 1846, "text": "He asked me the concept of List slicing and list comprehension along with an example." }, { "code": null, "e": 2008, "s": 1932, "text": "3. Write a code that prints the following pattern using list comprehension." }, { "code": null, "e": 2059, "s": 2008, "text": " Pattern = [1, 1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64]" }, { "code": null, "e": 2269, "s": 2059, "text": "I wrote the logic but made a terrible mistake. I reversed the order of loops while implementing the nested list comprehension. The interviewer tried to give hint on the order of loops, but I couldn’t catch it." }, { "code": null, "e": 2284, "s": 2269, "text": "Correct one is" }, { "code": null, "e": 2292, "s": 2284, "text": "Python3" }, { "code": "# num, num^2, num^3 ....repeat for (num+1) then (num+2) ... 
so onlst = [ base**power for base in range(1, 5) for power in range(1,4)]print(lst)", "e": 2436, "s": 2292, "text": null }, { "code": null, "e": 2639, "s": 2436, "text": "He asked about my overall working experience, the reason for the switch. Also, asked me if I have any questions for him before the wrap-up of this interview. I asked about his journey and roles at JPMC." }, { "code": null, "e": 2649, "s": 2639, "text": "JP Morgan" }, { "code": null, "e": 2659, "s": 2649, "text": "Marketing" }, { "code": null, "e": 2681, "s": 2659, "text": "Interview Experiences" }, { "code": null, "e": 2779, "s": 2681, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2811, "s": 2779, "text": "TCS Digital Interview Questions" }, { "code": null, "e": 2881, "s": 2811, "text": "Google SWE Interview Experience (Google Online Coding Challenge) 2022" }, { "code": null, "e": 2954, "s": 2881, "text": "Samsung Interview Experience Research & Institute SRIB (Off-Campus) 2022" }, { "code": null, "e": 2992, "s": 2954, "text": "Amazon Interview Experience for SDE 1" }, { "code": null, "e": 3019, "s": 2992, "text": "Google Interview Questions" }, { "code": null, "e": 3063, "s": 3019, "text": "TCS Ninja Interview Experience (2020 batch)" }, { "code": null, "e": 3119, "s": 3063, "text": "Amazon Interview Experience SDE-2 (3 Years Experienced)" }, { "code": null, "e": 3165, "s": 3119, "text": "Write It Up: Share Your Interview Experiences" }, { "code": null, "e": 3200, "s": 3165, "text": "Samsung RnD Coding Round Questions" } ]
Java | MIDI Introduction
02 Jul, 2021 The Musical Instrument Digital Interface (MIDI) standard defines a communication protocol for electronic music devices, such as electronic keyboard instruments and personal computers. MIDI data can be transmitted over special cables during a live performance, and can also be stored in a standard type of file for later playback or editing.A midi file contains data that represents music in terms of events. Each event gives a description of the musical note to be played – duration, pitch, velocity, channel etc. The Package javax.sound.midi provides interfaces and classes for I/O, sequencing, and synthesis of MIDI data.MIDI is both a hardware specification and a software specification. The javax.sound.midi package is used to create and use midi events to produce a simple soundtrack in Java. Java Sound API’s Representation of MIDI Device MidiDevice Interface : The MidiDevice interface includes an API for opening and closing a device. It also includes an inner class called MidiDevice.Info that provides textual descriptions of the device, including its name, vendor, and version. Transmitters and Receivers : The way a device sends data is via one or more transmitter objects that it “owns.” Similarly, the way a device receives data is via one or more of its receiver objects. The transmitter objects implement the Transmitter interface, and the receivers implement the Receiver interface.Each transmitter can be connected to only one receiver at a time, and vice versa Essential components of a Midi system Synthesizer: This is the device that plays the midi soundtrack. It can either be a software synthesizer or a real midi compatible instrument. Sequencer: A sequencer takes in Midi data(via a sequence) and commands different instruments to play the notes. It arranges events according to start time, duration and channel to be played on. Channel: Midi supports upto 16 different channels. We can send off a midi event to any of those channels which are later synchronized by the sequencer. Track: It is a sequence of Midi events. Sequence: It is a data structure containing multiple tracks and timing information.The sequencer takes in a sequence and plays it. Important classes and methods MidiSystem: This class provides access to installed MIDI resources like sequencers, synthesizers, I/O ports. All methods are static and this class can’t be instantiated. MidiSystem.getSequencer() – returns an instance of the sequencer interface which is connected to a synthesizer/receiver. sequencer.open() – opens the sequencer so it can acquire system resources. sequencer.setSequence(Sequence sequence) – Sets the current sequence on which the sequencer operates. sequencer.setTempoInBPM(float bpm) – Sets the playback tempo in beats per minute. sequencer.start() – Starts playback of the MIDI data in the currently loaded sequence. sequencer.isRunning() – Indicates whether the Sequencer is currently running. Sequence – The Sequence class instance holds a data structure representing one or more tracks and timing information. Sequence.PPQ – Tempo based timing type, for which the resolution is expressed in pulses (ticks) per quarter note. Sequence.createTrack() – Creates an empty track Track – Class containing midi events arranged chronologically. Track.add(MidiEvent event) – Adds a new event to the track. MidiEvent(MidiMessage message, long tick) – A midi event object that contains a time stamped midi message. ShortMessage() – A ShortMessage object with at most two data bytes (Extended from MidiMessage). 
ShortMessage.setMessage(int command, int channel, int data1, int data2) – sets a ShortMessage object that has at most two data bytes (data1 and data2).

MIDI Commands

Below programs illustrate the usage of MIDI in Java:

Program 1: Illustrating the implementation of a simple record.

Java

// Java program showing the implementation of a simple record
import javax.sound.midi.*;
import java.util.*;

public class MyMidiPlayer {

    public static void main(String[] args)
    {
        System.out.println("Enter the number of notes to be played: ");
        Scanner in = new Scanner(System.in);
        int numOfNotes = in.nextInt();

        MyMidiPlayer player = new MyMidiPlayer();
        player.setUpPlayer(numOfNotes);
    }

    public void setUpPlayer(int numOfNotes)
    {
        try {

            // A static method of MidiSystem that returns
            // a sequencer instance.
            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();

            // Creating a sequence.
            Sequence sequence = new Sequence(Sequence.PPQ, 4);

            // PPQ(Pulse per ticks) is used to specify timing
            // type and 4 is the timing resolution.

            // Creating a track on our sequence upon which
            // MIDI events would be placed
            Track track = sequence.createTrack();

            // Adding some events to the track
            for (int i = 5; i < (4 * numOfNotes) + 5; i += 4) {

                // Add Note On event
                track.add(makeEvent(144, 1, i, 100, i));

                // Add Note Off event
                track.add(makeEvent(128, 1, i, 100, i + 2));
            }

            // Setting our sequence so that the sequencer can
            // run it on synthesizer
            sequencer.setSequence(sequence);

            // Specifies the beat rate in beats per minute.
            sequencer.setTempoInBPM(220);

            // Sequencer starts to play notes
            sequencer.start();

            while (true) {

                // Exit the program when sequencer has stopped playing.
                if (!sequencer.isRunning()) {
                    sequencer.close();
                    System.exit(1);
                }
            }
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public MidiEvent makeEvent(int command, int channel,
                               int note, int velocity, int tick)
    {
        MidiEvent event = null;
        try {

            // ShortMessage stores a note as command type, channel,
            // instrument it has to be played on and its speed.
            ShortMessage a = new ShortMessage();
            a.setMessage(command, channel, note, velocity);

            // A midi event is comprised of a short message(representing
            // a note) and the tick at which that note has to be played
            event = new MidiEvent(a, tick);
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        return event;
    }
}

Input: Enter the number of notes to be played:
       15
Output: 15 sound notes with increasing pitch are played

Input: Enter the number of notes to be played:
       25
Output: 25 sound notes with increasing pitch are played

(Note: Number of notes should not exceed 31 for reasons cited later)

Why is the number of notes limited to 31?
Since data1 and data2 fields of ShortMessage are of byte type, while using setMessage(int command, int channel, int note, int velocity), note and velocity must not exceed 127.

Program 2: Using command code 192 to change the instrument type

Java

// Java program showing how to change the instrument type
import javax.sound.midi.*;
import java.util.*;

public class MyMidiPlayer1 {

    public static void main(String[] args)
    {
        MyMidiPlayer1 player = new MyMidiPlayer1();

        Scanner in = new Scanner(System.in);
        System.out.println("Enter the instrument to be played");
        int instrument = in.nextInt();
        System.out.println("Enter the note to be played");
        int note = in.nextInt();

        player.setUpPlayer(instrument, note);
    }

    public void setUpPlayer(int instrument, int note)
    {
        try {

            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();
            Sequence sequence = new Sequence(Sequence.PPQ, 4);
            Track track = sequence.createTrack();

            // Set the instrument type
            track.add(makeEvent(192, 1, instrument, 0, 1));

            // Add a note on event with specified note
            track.add(makeEvent(144, 1, note, 100, 1));

            // Add a note off event with specified note
            track.add(makeEvent(128, 1, note, 100, 4));

            sequencer.setSequence(sequence);
            sequencer.start();

            while (true) {
                if (!sequencer.isRunning()) {
                    sequencer.close();
                    System.exit(1);
                }
            }
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public MidiEvent makeEvent(int command, int channel,
                               int note, int velocity, int tick)
    {
        MidiEvent event = null;
        try {
            ShortMessage a = new ShortMessage();
            a.setMessage(command, channel, note, velocity);
            event = new MidiEvent(a, tick);
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        return event;
    }
}

Input : Enter the instrument to be played
        102
        Enter the note to be played
        110
Output : Sound note is played

Input : Enter the instrument to be played
        40
        Enter the note to be played
        100
Output : Sound note is played

NOTE: The code won't run on an online IDE since it requires a few seconds of runtime for playback, which the IDE won't permit.
[ { "code": null, "e": 28, "s": 0, "text": "\n02 Jul, 2021" }, { "code": null, "e": 828, "s": 28, "text": "The Musical Instrument Digital Interface (MIDI) standard defines a communication protocol for electronic music devices, such as electronic keyboard instruments and personal computers. MIDI data can be transmitted over special cables during a live performance, and can also be stored in a standard type of file for later playback or editing.A midi file contains data that represents music in terms of events. Each event gives a description of the musical note to be played – duration, pitch, velocity, channel etc. The Package javax.sound.midi provides interfaces and classes for I/O, sequencing, and synthesis of MIDI data.MIDI is both a hardware specification and a software specification. The javax.sound.midi package is used to create and use midi events to produce a simple soundtrack in Java. " }, { "code": null, "e": 875, "s": 828, "text": "Java Sound API’s Representation of MIDI Device" }, { "code": null, "e": 1121, "s": 877, "text": "MidiDevice Interface : The MidiDevice interface includes an API for opening and closing a device. It also includes an inner class called MidiDevice.Info that provides textual descriptions of the device, including its name, vendor, and version." }, { "code": null, "e": 1512, "s": 1121, "text": "Transmitters and Receivers : The way a device sends data is via one or more transmitter objects that it “owns.” Similarly, the way a device receives data is via one or more of its receiver objects. The transmitter objects implement the Transmitter interface, and the receivers implement the Receiver interface.Each transmitter can be connected to only one receiver at a time, and vice versa" }, { "code": null, "e": 1552, "s": 1514, "text": "Essential components of a Midi system" }, { "code": null, "e": 1696, "s": 1554, "text": "Synthesizer: This is the device that plays the midi soundtrack. It can either be a software synthesizer or a real midi compatible instrument." }, { "code": null, "e": 1890, "s": 1696, "text": "Sequencer: A sequencer takes in Midi data(via a sequence) and commands different instruments to play the notes. It arranges events according to start time, duration and channel to be played on." }, { "code": null, "e": 2042, "s": 1890, "text": "Channel: Midi supports upto 16 different channels. We can send off a midi event to any of those channels which are later synchronized by the sequencer." }, { "code": null, "e": 2082, "s": 2042, "text": "Track: It is a sequence of Midi events." }, { "code": null, "e": 2213, "s": 2082, "text": "Sequence: It is a data structure containing multiple tracks and timing information.The sequencer takes in a sequence and plays it." }, { "code": null, "e": 2245, "s": 2215, "text": "Important classes and methods" }, { "code": null, "e": 2417, "s": 2247, "text": "MidiSystem: This class provides access to installed MIDI resources like sequencers, synthesizers, I/O ports. All methods are static and this class can’t be instantiated." }, { "code": null, "e": 2538, "s": 2417, "text": "MidiSystem.getSequencer() – returns an instance of the sequencer interface which is connected to a synthesizer/receiver." }, { "code": null, "e": 2613, "s": 2538, "text": "sequencer.open() – opens the sequencer so it can acquire system resources." }, { "code": null, "e": 2715, "s": 2613, "text": "sequencer.setSequence(Sequence sequence) – Sets the current sequence on which the sequencer operates." 
}, { "code": null, "e": 2797, "s": 2715, "text": "sequencer.setTempoInBPM(float bpm) – Sets the playback tempo in beats per minute." }, { "code": null, "e": 2884, "s": 2797, "text": "sequencer.start() – Starts playback of the MIDI data in the currently loaded sequence." }, { "code": null, "e": 2962, "s": 2884, "text": "sequencer.isRunning() – Indicates whether the Sequencer is currently running." }, { "code": null, "e": 3080, "s": 2962, "text": "Sequence – The Sequence class instance holds a data structure representing one or more tracks and timing information." }, { "code": null, "e": 3194, "s": 3080, "text": "Sequence.PPQ – Tempo based timing type, for which the resolution is expressed in pulses (ticks) per quarter note." }, { "code": null, "e": 3242, "s": 3194, "text": "Sequence.createTrack() – Creates an empty track" }, { "code": null, "e": 3305, "s": 3242, "text": "Track – Class containing midi events arranged chronologically." }, { "code": null, "e": 3365, "s": 3305, "text": "Track.add(MidiEvent event) – Adds a new event to the track." }, { "code": null, "e": 3472, "s": 3365, "text": "MidiEvent(MidiMessage message, long tick) – A midi event object that contains a time stamped midi message." }, { "code": null, "e": 3568, "s": 3472, "text": "ShortMessage() – A ShortMessage object with at most two data bytes (Extended from MidiMessage)." }, { "code": null, "e": 3720, "s": 3568, "text": "ShortMessage.setMessage(int command, int channel, int data1, int data2) – sets a ShortMessage object that has at most two data bytes (data1 and data2)." }, { "code": null, "e": 3736, "s": 3722, "text": "MIDI Commands" }, { "code": null, "e": 3855, "s": 3738, "text": "Below programs illustrate the usage of MIDI in Java: Program 1: Illustrating the implementation of a simple record. " }, { "code": null, "e": 3860, "s": 3855, "text": "Java" }, { "code": "// Java program showing the implementation of a simple recordimport javax.sound.midi.*;import java.util.*; public class MyMidiPlayer { public static void main(String[] args) { System.out.println(\"Enter the number of notes to be played: \"); Scanner in = new Scanner(System.in); int numOfNotes = in.nextInt(); MyMidiPlayer player = new MyMidiPlayer(); player.setUpPlayer(numOfNotes); } public void setUpPlayer(int numOfNotes) { try { // A static method of MidiSystem that returns // a sequencer instance. Sequencer sequencer = MidiSystem.getSequencer(); sequencer.open(); // Creating a sequence. Sequence sequence = new Sequence(Sequence.PPQ, 4); // PPQ(Pulse per ticks) is used to specify timing // type and 4 is the timing resolution. // Creating a track on our sequence upon which // MIDI events would be placed Track track = sequence.createTrack(); . // Adding some events to the track for (int i = 5; i < (4 * numOfNotes) + 5; i += 4) { // Add Note On event track.add(makeEvent(144, 1, i, 100, i)); // Add Note Off event track.add(makeEvent(128, 1, i, 100, i + 2)); } // Setting our sequence so that the sequencer can // run it on synthesizer sequencer.setSequence(sequence); // Specifies the beat rate in beats per minute. sequencer.setTempoInBPM(220); // Sequencer starts to play notes sequencer.start(); while (true) { // Exit the program when sequencer has stopped playing. 
if (!sequencer.isRunning()) { sequencer.close(); System.exit(1); } } } catch (Exception ex) { ex.printStackTrace(); } } public MidiEvent makeEvent(int command, int channel, int note, int velocity, int tick) { MidiEvent event = null; try { // ShortMessage stores a note as command type, channel, // instrument it has to be played on and its speed. ShortMessage a = new ShortMessage(); a.setMessage(command, channel, note, velocity); // A midi event is comprised of a short message(representing // a note) and the tick at which that note has to be played event = new MidiEvent(a, tick); } catch (Exception ex) { ex.printStackTrace(); } return event; }}", "e": 6603, "s": 3860, "text": null }, { "code": null, "e": 6903, "s": 6603, "text": "Input: Enter the number of notes to be played: \n 15 \nOutput: 15 sound notes with increasing pitch are played\n\nInput: Enter the number of notes to be played: \n 25\nOutput: 25 sound notes with increasing pitch are played\n\n(Note: Number of notes should not exceed 31 for reasons cited later)" }, { "code": null, "e": 7186, "s": 6903, "text": "Why is the number of notes limited to 31? Since data1 and data2 fields of ShortMessage are of byte type, while using setMessage(int command, int channel, int note, int velocity), note and velocity must not exceed 127.Program 2: Using command code 192 to change the instrument type " }, { "code": null, "e": 7191, "s": 7186, "text": "Java" }, { "code": "// Java program showing how to change the instrument typeimport javax.sound.midi.*;import java.util.*; public class MyMidiPlayer1 { public static void main(String[] args) { MyMidiPlayer1 player = new MyMidiPlayer1(); Scanner in = new Scanner(System.in); System.out.println(\"Enter the instrument to be played\"); int instrument = in.nextInt(); System.out.println(\"Enter the note to be played\"); int note = in.nextInt(); player.setUpPlayer(instrument, note); } public void setUpPlayer(int instrument, int note) { try { Sequencer sequencer = MidiSystem.getSequencer(); sequencer.open(); Sequence sequence = new Sequence(Sequence.PPQ, 4); Track track = sequence.createTrack(); // Set the instrument type track.add(makeEvent(192, 1, instrument, 0, 1)); // Add a note on event with specified note track.add(makeEvent(144, 1, note, 100, 1)); // Add a note off event with specified note track.add(makeEvent(128, 1, note, 100, 4)); sequencer.setSequence(sequence); sequencer.start(); while (true) { if (!sequencer.isRunning()) { sequencer.close(); System.exit(1); } } } catch (Exception ex) { ex.printStackTrace(); } } public MidiEvent makeEvent(int command, int channel, int note, int velocity, int tick) { MidiEvent event = null; try { ShortMessage a = new ShortMessage(); a.setMessage(command, channel, note, velocity); event = new MidiEvent(a, tick); } catch (Exception ex) { ex.printStackTrace(); } return event; }}", "e": 9075, "s": 7191, "text": null }, { "code": null, "e": 9341, "s": 9075, "text": "Input : Enter the instrument to be played\n 102\n Enter the note to be played\n 110\n\nOutput : Sound note is played\n\nInput : Enter the instrument to be played\n 40\n Enter the note to be played\n 100\n\nOutput : Sound note is played" }, { "code": null, "e": 9467, "s": 9341, "text": "NOTE: The code won’t run on online IDE since the code requires a few seconds of runtime for playback which IDE won’t permit. 
" }, { "code": null, "e": 9484, "s": 9467, "text": "akshaysingh98088" }, { "code": null, "e": 9489, "s": 9484, "text": "Java" }, { "code": null, "e": 9494, "s": 9489, "text": "Misc" }, { "code": null, "e": 9515, "s": 9494, "text": "Programming Language" }, { "code": null, "e": 9520, "s": 9515, "text": "Misc" }, { "code": null, "e": 9525, "s": 9520, "text": "Misc" }, { "code": null, "e": 9530, "s": 9525, "text": "Java" }, { "code": null, "e": 9628, "s": 9530, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 9652, "s": 9628, "text": "Queue Interface In Java" }, { "code": null, "e": 9703, "s": 9652, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 9723, "s": 9703, "text": "Stack Class in Java" }, { "code": null, "e": 9742, "s": 9723, "text": "Interfaces in Java" }, { "code": null, "e": 9760, "s": 9742, "text": "ArrayList in Java" }, { "code": null, "e": 9821, "s": 9760, "text": "Overview of Data Structures | Set 1 (Linear Data Structures)" }, { "code": null, "e": 9862, "s": 9821, "text": "Top 10 algorithms in Interview Questions" }, { "code": null, "e": 9916, "s": 9862, "text": "vector::push_back() and vector::pop_back() in C++ STL" }, { "code": null, "e": 9947, "s": 9916, "text": "Program for nth Catalan Number" } ]
Use Machine Learning to Predict Horse Racing | by Cullen Sun | Towards Data Science
I am a part-time student of master of computer science at HKU. After last semester's exams, I was thinking it would be fun and cool to apply machine learning on a real world dataset.

It took me no time to find out the great website for machine learning, Kaggle. Kaggle has a lot of interesting datasets for data science students and professionals. I searched for "Hong Kong" because I would like to find a dataset that is close/related to me, and I got the dataset for Horse Racing in HK. Great thanks to Graham Daley for posting the dataset on Kaggle.

I was very excited to work on my first real world machine learning project. I think horse racing is predictable and it's a typical classification problem.

In this article, I am going to share with you how to do data preprocessing, neural network model building and training. We are going to use common machine learning packages like pandas, tensorflow, numpy, sklearn, etc. At the end, I will also share some thoughts that came out of this project.

The dataset contains two csv files: races.csv and runs.csv. The two tables are related by column "race_id". We will need to join the two tables to make meaningful data for training.

Line 1-6: Import the packages we are going to use for the project.

Line 8–9: Use pandas to read the csv file. Then, select 6 columns that I think are important features.

Line 11–13: It's very common that the dataset contains NaN, which means some field is missing. Here, let's just drop them.

Line 15–23: Some columns are strings. However, the neural network model only deals with numbers, so we need to encode them. String columns have different types: nominal and ordinal. We use LabelEncoder for column "venue", while we use OrdinalEncoder for columns "config" and "going".

Till now, we are happy with the data from races.csv. The dataframe head looks like below.

The runs.csv file contains data of horses for every race, associated with races.csv by race_id. Usually, each race has 14 or fewer horses racing together. The code is very similar to what we did for the races.csv file, so I am not going to explain line by line. One thing to note is that, in line 10–13 above, we need to delete some noisy data, and the reason is explained in the code inline comment. After the processing, the dataframe head looks like below.

You might think that we are done with the runs.csv file, but not really, because we need to think how we are going to feed the data into the neural network. Basically, we want one row from races.csv plus all the rows from runs.csv which belong to the same race, and put them together as one sample input for the neural network. The following code is to reshape the runs dataframe so that rows of the same race will form into one row.

Line 1–5: We define a small function here for sorting the columns.

Line 7: The magic pivot method reshapes the runs.csv data so that rows of the same race go into one row. The columns are ordered like [(horse_age, 1), (horse_age, 2), (horse_age, 3) ... ]. However, I want the columns to be like [(horse_age, 1), (horse_country, 1), (horse_type, 1) ...(horse_age, 2), (horse_country, 2), (horse_type, 2) ... ], so that we will have horse-1-features, horse-2-features, horse-3-features ... in order. It took me a lot of time to figure out which pandas API to use for this. Have to mention that we need to avoid using loops to strive for better computation efficiency.

Line 8–10: Sort columns with the function defined in line 1–5. Apart from grouping the horse features, it also puts the "result" columns to the end.
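(The post embeds its code as gists that are not included in this copy. For illustration only, the reshaping step described above might look roughly like the sketch below; the frame and column names such as runs, race_id and horse_no are assumptions based on the text, not the author's original code.)

# Rough sketch of the reshaping step described above -- not the original gist.
# 'runs' is the preprocessed runs.csv dataframe; 'horse_no' is the horse's
# number within a race (assumed names).
def sort_columns(col):
    feature, horse_no = col
    # group features by horse number and push all 'result' columns to the end
    return (feature == 'result', horse_no, feature)

runs_wide = runs.pivot(index='race_id', columns='horse_no')
runs_wide = runs_wide[sorted(runs_wide.columns, key=sort_columns)]
runs_wide = runs_wide.fillna(0)   # dummy 0s for races with fewer than 14 horses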
So the right most columns will be [... (result, 11), (result, 12), (result, 13), (result, 14)]. Put them at the end because they are the outputs but not feature inputs.

Line 12–14: As the inline comment has explained, our neural network will need 14 horses' data as input. Here, we simply fill dummy horse features with 0s.

You see, it's a lot of work for the data preprocessing. Luckily, we are almost there. Now, let's do the last bit to get the data ready for the neural network.

Line 1: Join the two dataframes we have prepared above by "race_id".

Line 2: Select all the columns except the last 14 columns as inputs X.

Line 3–4: Apply the feature scaling technique, standardisation, which makes training easier. Without that it's difficult for the network to learn parameters for feature "declared_weight" (1100 sth) and feature "horse_age" (around 3–5).

Line 6: Convert the column "result" to the output y. This is a binary classification. So we put 1 if the horse got 1st place, else put 0.

Line 8–9: Simply print the shape of input and output to take a look. We can see that we have 6348 races, each of which has 104 features and 14 outputs.

(6348, 104)
(6348, 14)

Line 11–12: Use a handy util method from sklearn to split the data into train and test (validation) sets.

Firstly, I'd like to show you the architecture of the neural network as below. This is the core of the project. All the data preprocessing we have done so far is to feed into the neural network.

Input layer: 104 nodes, which takes 6 from the races.csv file and 7x14 from the runs.csv file.

Hidden layer: Just one with 96 nodes. I think this is enough for our project because the function of this classification shall not be that complicated.

Output layer: 14 nodes, each of which indicates whether a horse won the race.

Code is shown below, which is fairly simple, thanks to keras. Please pay attention to metrics.
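(The gist with the model code is missing from this copy of the post. For illustration only, a minimal Keras model matching the description above — 104 inputs, one hidden layer of 96 units, 14 sigmoid outputs, precision as the metric — could look like the sketch below; the activation functions, loss and optimizer are assumptions. With these layer sizes, model.summary() reports 11,438 trainable parameters, the figure quoted next.)

# Minimal sketch matching the architecture described above -- not the original gist.
import tensorflow as tf

model = tf.keras.Sequential([
    # 6 race features + 7 x 14 horse features = 104 inputs
    tf.keras.layers.Dense(96, activation='relu', input_shape=(104,)),
    # one output per horse: probability that this horse finishes first
    tf.keras.layers.Dense(14, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision()])
model.summary()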
Whether a horse can win a race is relative to other horses in the same race, so we have to put 14 horses’ data of one race together as one sample. I have heard people saying that data preprocessing usually takes more time than model building and training. I found it’s true for this project. Pandas have hundreds of powerful APIs for you to manipulate the dataframe, and it will take time to get familiar with them in order to choose the right API to get your job done quickly. One can really learn a lot by doing the project. It’s much much more than the course assignments. In this article, I have shared with you how to apply machine learning skills end to end on a real world dataset from Kaggle. My full notebook can be found here. Thank you so much for reading it and welcome your comments.
[ { "code": null, "e": 353, "s": 172, "text": "I am a part-time student of master of computer science at HKU. After last semester exams, I was thinking it would be fun and cool to apply machine learning on a real world dataset." }, { "code": null, "e": 723, "s": 353, "text": "It took me no time to find out the great website for machine learning, Kaggle. Kaggle has a lot of interesting datasets for data science students and professionals. I searched for “Hong Kong” because I would like to find a dataset that is close/related to me, and I got the dataset for Horse Racing in HK. Great thanks to Graham Daley for posting the dataset on Kaggle." }, { "code": null, "e": 878, "s": 723, "text": "I was very excited to work on my first real world machine learning project. I think horse racing is predictable and it’s a typical classification problem." }, { "code": null, "e": 1177, "s": 878, "text": "In this article, I am going to share with you how to do data preprocessing, neural network model building and training. We are going to use common machine learning packages like pandas, tensorflow, numpy, sklearn, etc. At the end, I will also share some thoughts that I came out after this project." }, { "code": null, "e": 1359, "s": 1177, "text": "The dataset contains two csv files: races.csv and runs.csv. The two tables are related by column “race_id”. We will need to join the two tables to make meaningful data for training." }, { "code": null, "e": 1426, "s": 1359, "text": "Line 1-6: Import the packages we are going to use for the project." }, { "code": null, "e": 1529, "s": 1426, "text": "Line 8–9: Use pandas to read the csv file. Then, select 6 columns that I think are important features." }, { "code": null, "e": 1652, "s": 1529, "text": "Line 11–13: It’s very common that the dataset contains NaN, which means some field is missing. Here, let’s just drop them." }, { "code": null, "e": 1936, "s": 1652, "text": "Line 15–23: Some columns are strings. However, the neural network model only deals with numbers, so we need to encode them. String columns have different types: nominal and ordinal. We use LabelEncoder for column “venue”, while we use OrdinalEncoder for columns “config” and “going”." }, { "code": null, "e": 2026, "s": 1936, "text": "Till now, we are happy with the data from races.csv. The dataframe head looks like below." }, { "code": null, "e": 2481, "s": 2026, "text": "The runs.csv file contains data of horses for every race, associated with races.csv by race_id. Usually, each race has 14 or less horses racing together. The code is very similar to what we did for races.csv file, so I am not going to explain line by line. One thing to note is that, in line 10–13 above, we need to delete some noisy data, and the reason is explained in the code inline comment. After the processing, the dataframe head looks like below." }, { "code": null, "e": 2906, "s": 2481, "text": "You might think that we are done with the runs.csv file, but not really because we need to think how we are going to feed the data into the neural network. Basically, we want one row from races.csv plus all the rows from runs.csv which belong to the same race, and put them together as one sample input for the neural network. The following code is to reshape runs dataframe so that rows of same race will form into one row." }, { "code": null, "e": 2972, "s": 2906, "text": "Line 1–5: We define a small function her for sorting the columns." 
}, { "code": null, "e": 3568, "s": 2972, "text": "Line 7: The magic pivot method reshapes the runs.csv data so that rows of same race goes into one row. The columns are ordered like [(horse_age, 1), (horse_age, 2), (horse_age, 3) ... ]. However, I want the columns to be like [(horse_age, 1), (horse_country, 1), (horse_type, 1) ...(horse_age, 2), (horse_country, 2), (horse_type, 2) ... ], so that we will have horse-1-features, horse-2-features, horse-3-features ... in order. It took me a lot of time to figure out which panda API to use for this. Have to mention that we need to avoid using loops to strive for better computation efficiency." }, { "code": null, "e": 3885, "s": 3568, "text": "Line 8–10: Sort columns with the function defined in line 1–5. Apart from grouping the horse features, it also put the “result” columns to the end. So the right most columns will be [... (result, 11), (result, 12), (result, 13), (result, 14)]. Put them at the end because they are the outputs but not feature inputs." }, { "code": null, "e": 4039, "s": 3885, "text": "Line 12–14: As the inline comment has explained, our neural network will need 14 horses data as input. Here, we simply fill dummy horse features with 0s." }, { "code": null, "e": 4198, "s": 4039, "text": "You see, it’s a lot of work for the data preprocessing. Luckily, we are almost there. Now, let’s do the last bit to get the data ready for the neural network." }, { "code": null, "e": 4267, "s": 4198, "text": "Line 1: Join the two dataframes we have prepared above by “race_id”." }, { "code": null, "e": 4334, "s": 4267, "text": "Line 2: Select all the columns except last 14 columns as inputs X." }, { "code": null, "e": 4570, "s": 4334, "text": "Line 3–4: Apply the feature scaling technique, standardisation, which makes training easier. Without that it’s difficult for the network to learn parameters for feature “declared_weight” (1100 sth) and feature “horse_age” (around 3–5)." }, { "code": null, "e": 4708, "s": 4570, "text": "Line 6: Convert the column “result” to the output y. This is a binary classification. So we put 1 if the horse got 1st place, else put 0." }, { "code": null, "e": 4864, "s": 4708, "text": "Line 8–9: Simply print the shape of input and output to take a look. We can see that we have 6348 races, each of which we have 104 features and 14 outputs." }, { "code": null, "e": 4886, "s": 4864, "text": "(6348, 104)(6348, 14)" }, { "code": null, "e": 4989, "s": 4886, "text": "Line 11–12: Use a handy util method from sklearn to split the data to train and test(validation) sets." }, { "code": null, "e": 5184, "s": 4989, "text": "Firstly, I’d like to show you the architecture of the neural network as below. This is the core of the project. All the data preprocessing we have done so far is to feed into the neural network." }, { "code": null, "e": 5279, "s": 5184, "text": "Input layer: 104 nodes, which takes 6 from the races.csv file and 7x14 from the runs.csv file." }, { "code": null, "e": 5431, "s": 5279, "text": "Hidden layer: Just one with 96 nodes. I think this is enough for our project because the function of this classification shall not be that complicated." }, { "code": null, "e": 5507, "s": 5431, "text": "Output layer: 14 nodes, each of them indicate whether a horse won the race." }, { "code": null, "e": 5713, "s": 5507, "text": "Code is shown below, which is fairly simple, thanks to keras. Please pay attention to metrics. 
I chose precision, precision = TP/(TP+FP), as an evaluation parameter because it's our interest to bet on win." }, { "code": null, "e": 5950, "s": 5713, "text": "The model detail is printed out as below. Trainable parameters are 11,438, which is the power of a neural network. It means the network will learn and adjust each of these parameters in order to find a function y = f(X) to fit our data." }, { "code": null, "e": 6017, "s": 5950, "text": "Line 1–4: Further twist on the data before feed data for training." }, { "code": null, "e": 6097, "s": 6017, "text": "Line 6–8: The training itself. During training the console prints the progress." }, { "code": null, "e": 6163, "s": 6097, "text": "Line 10–26: Plot the performance, and the graphs are shown below." }, { "code": null, "e": 6655, "s": 6163, "text": "The network is powerful enough to fit the training dataset well. However, there is big overfitting. The best performance we can reach on the validation dataset is about precision=0.3, which happens around epoch 60~80. Precision=0.3 means when we can get 3 out 10 win-bets correctly. This performance is not as high as I expected. The reason might be most races are handicaps, which means runners carry different weights to equalise their chance of winning the race, see reference Racing 101." }, { "code": null, "e": 6910, "s": 6655, "text": "One shall study the domain knowledge before data analysis. My lesson is I didn’t understand the horse racing terms like “win_odds” and “actual_weight” so I didn’t know which columns are the useful features. After study the guide, things got much clearer." }, { "code": null, "e": 7276, "s": 6910, "text": "Do more data analysis before write code. I started coding very quickly after found the dataset. You can see another notebook of mine, in which I made a big mistake that I treated one row of runs.csv file as one sample. Whether a horse can win a race is relative to other horses in the same race, so we have to put 14 horses’ data of one race together as one sample." }, { "code": null, "e": 7607, "s": 7276, "text": "I have heard people saying that data preprocessing usually takes more time than model building and training. I found it’s true for this project. Pandas have hundreds of powerful APIs for you to manipulate the dataframe, and it will take time to get familiar with them in order to choose the right API to get your job done quickly." }, { "code": null, "e": 7705, "s": 7607, "text": "One can really learn a lot by doing the project. It’s much much more than the course assignments." } ]
Java program to count the number of vowels in a given sentence
To count the number of vowels in a given sentence:

Read a sentence from the user.
Create a variable (count) and initialize it with 0.
Compare each character in the sentence with the characters {'a', 'e', 'i', 'o', 'u'}.
If a match occurs, increment the count.
Finally, print count.

import java.util.Scanner;
public class CountingVowels {
   public static void main(String args[]){
      int count = 0;
      System.out.println("Enter a sentence :");
      Scanner sc = new Scanner(System.in);
      String sentence = sc.nextLine();

      // Compare every character of the sentence with the vowels
      for (int i = 0; i < sentence.length(); i++){
         char ch = sentence.charAt(i);
         if(ch == 'a' || ch == 'e' || ch == 'i' || ch == 'o' || ch == 'u'){
            count++;
         }
      }
      System.out.println("Number of vowels in the given sentence is " + count);
   }
}

Enter a sentence :
Hi how are you welcome to tutorialspoint
Number of vowels in the given sentence is 16
[ { "code": null, "e": 1113, "s": 1062, "text": "To count the number of vowels in a given sentence:" }, { "code": null, "e": 1144, "s": 1113, "text": " Read a sentence from the user" }, { "code": null, "e": 1193, "s": 1144, "text": " Create a variable (count) initialize it with 0;" }, { "code": null, "e": 1280, "s": 1193, "text": " Compare each character in the sentence with the characters {'a', 'e', 'i', 'o', 'u' }" }, { "code": null, "e": 1320, "s": 1280, "text": " If a match occurs increment the count." }, { "code": null, "e": 1342, "s": 1320, "text": " Finally print count." }, { "code": null, "e": 1889, "s": 1342, "text": "import java.util.Scanner;\npublic class CountingVowels {\n public static void main(String args[]){\n int count = 0;\n System.out.println(\"Enter a sentence :\");\n Scanner sc = new Scanner(System.in);\n String sentence = sc.nextLine();\n\n for (int i=0 ; i<sentence.length(); i++){\n char ch = sentence.charAt(i);\n if(ch == 'a'|| ch == 'e'|| ch == 'i' ||ch == 'o' ||ch == 'u'||ch == ' '){\n count ++;\n }\n }\n System.out.println(\"Number of vowels in the given sentence is \"+count);\n }\n}" }, { "code": null, "e": 1994, "s": 1889, "text": "Enter a sentence :\nHi how are you welcome to tutorialspoint\nNumber of vowels in the given sentence is 22" } ]
How to add hours to the current time in Python? - GeeksforGeeks
29 Dec, 2020

Prerequisites: Datetime module

Every minute should be enjoyed and savored. Time is measured in hours, days, years, and so on. Time helps us to make a good habit of organizing and structuring our daily activities. In this article, we will see how we can work with real time using a Python module. There are various ways to pass the date and time feature to the program. The Python 'Time' and 'Calendar' modules help in tracking date and time. Also, 'DateTime' provides classes for controlling date and time in both simple and complex ways. So with the help of this module, we will try to figure out a future desired time by adding hours to the current time with the help of 'timedelta()'.

To get both the current date and time, the datetime.now() function of the DateTime module is used. This function returns the current local date and time.

Syntax : datetime.now(tz)

Parameters :
tz : Specified time zone of which current time and date is required. (If tz is not given, the local time is used.)

Returns : Returns the current date and time in time format.

Approach :

Import DateTime module.
Display current time.
Create a new variable to store the updated time.
Store the updated time in that variable.
Display updated time.

Implementation :

Step 1: Showing current time.

Firstly, we will import 'datetime' and 'timedelta' from the datetime module. Then we will store our present time in a variable. After that, we will align the date in "HH:MM:SS" format. Now we can print our present time.

Python3

# importing datetime module for now()
from datetime import datetime, timedelta

# using now() to get present_time
present_time = datetime.now()

# time formatting
'{:%H:%M:%S}'.format(present_time)

print("Present time at greenwich meridian is ", end="")
print(present_time)

Output:

Present time at greenwich meridian is 2020-11-11 08:26:55.032586

Step 2: Adding the time to the existing current time.

After the above steps, we will pass the desired number of hours to the 'timedelta' function, which will add hours to the present time. Now, we can display the updated time.

Python3

# importing datetime module for now()
from datetime import datetime, timedelta

# using now() to get present_time
present_time = datetime.now()

# time formatting
'{:%H:%M:%S}'.format(present_time)

print("Present time at greenwich meridian is ", end="")
print(present_time)

updated_time = datetime.now() + timedelta(hours=6)
print(updated_time)

Output:

Present time at greenwich meridian is 2020-11-11 08:27:39.615794
2020-11-11 14:27:39.615794

Another better way you can try :

Python3

from datetime import datetime, timedelta

updated = (datetime.now() + timedelta(hours=5)).strftime('%H:%M:%S')
print(updated)

Output:

13:28:21
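As a further illustration (not part of the original approach above), timedelta can combine several units in one call and can also be subtracted to go back in time:

Python3

# Extra illustration: timedelta with combined units and subtraction
from datetime import datetime, timedelta

now = datetime.now()
print(now + timedelta(days=1, hours=2, minutes=30))   # 1 day, 2.5 hours ahead
print(now - timedelta(hours=6))                       # 6 hours ago
print((now + timedelta(hours=36)).strftime('%Y-%m-%d %H:%M:%S'))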
[ { "code": null, "e": 24292, "s": 24264, "text": "\n29 Dec, 2020" }, { "code": null, "e": 24323, "s": 24292, "text": "Prerequisites: Datetime module" }, { "code": null, "e": 24975, "s": 24323, "text": "Every minute should be enjoyed and savored. Time is measured by the hours, days, years, and so on. Time helps us to make a good habit of organizing and structuring our daily activities. In this article, we will see how we can extract real-time from a python module. There are various ways to pass the date and time feature to the program. Python ‘Time’ and ‘Calendar’ module help in tracking date and time. Also, the ‘DateTime’ provides a class for controlling date and time in both simple and complex ways. So with the help of this module, we will try to figure out our future desire time by adding hours in real-time with the help of ‘timedelta( )’." }, { "code": null, "e": 25116, "s": 24975, "text": "To get both current date and time datetime.now() function of DateTime module is used. This function returns the current local date and time." }, { "code": null, "e": 25142, "s": 25116, "text": "Syntax : datetime.now(tz)" }, { "code": null, "e": 25267, "s": 25142, "text": "Parameters : tz : Specified time zone of which current time and date is required. (Uses Greenwich Meridian time by default.)" }, { "code": null, "e": 25327, "s": 25267, "text": "Returns : Returns the current date and time in time format." }, { "code": null, "e": 25338, "s": 25327, "text": "Approach :" }, { "code": null, "e": 25362, "s": 25338, "text": "Import DateTime module." }, { "code": null, "e": 25384, "s": 25362, "text": "Display Current time." }, { "code": null, "e": 25423, "s": 25384, "text": "Create a new variable to Updated time." }, { "code": null, "e": 25464, "s": 25423, "text": "Store the Updated time in that variable." }, { "code": null, "e": 25486, "s": 25464, "text": "Display Updated Time." }, { "code": null, "e": 25503, "s": 25486, "text": "Implementation :" }, { "code": null, "e": 25533, "s": 25503, "text": "Step 1: Showing current time." }, { "code": null, "e": 25746, "s": 25533, "text": "Firstly, we will Import ‘datetime’ and ‘timedelta’ from datetime module, Then we will store our Present time in a variable. After that, we will align date in “HH:MM:SS” format. Now we can print our Present time." }, { "code": null, "e": 25754, "s": 25746, "text": "Python3" }, { "code": "#importing datetime module for now() from datetime import datetime, timedelta # using now() to get present_time present_time = datetime.now() #time formatting'{:%H:%M:%S}'.format(present_time) print(\"Present time at greenwich meridian is \" ,end = \"\") print( present_time )", "e": 26047, "s": 25754, "text": null }, { "code": null, "e": 26055, "s": 26047, "text": "Output:" }, { "code": null, "e": 26120, "s": 26055, "text": "Present time at greenwich meridian is 2020-11-11 08:26:55.032586" }, { "code": null, "e": 26174, "s": 26120, "text": "Step 2: Adding the time to the existing current time." }, { "code": null, "e": 26330, "s": 26174, "text": "After the following above steps, we will pass the desired time in ‘timedelta’ function that will add hour in present time.Now, we can display updated time." 
}, { "code": null, "e": 26338, "s": 26330, "text": "Python3" }, { "code": "#importing datetime module for now() from datetime import datetime, timedelta # using now() to get present_time present_time = datetime.now() #time formatting'{:%H:%M:%S}'.format( present_time ) print(\"Present time at greenwich meridian is \", end = \"\") print( present_time ) updated_time = datetime.now() + timedelta(hours=6)print( updated_time )", "e": 26707, "s": 26338, "text": null }, { "code": null, "e": 26715, "s": 26707, "text": "Output:" }, { "code": null, "e": 26806, "s": 26715, "text": "Present time at greenwich meridian is 2020-11-11 08:27:39.6157942020-11-11 14:27:39.615794" }, { "code": null, "e": 26840, "s": 26806, "text": " Another better way you can try :" }, { "code": null, "e": 26848, "s": 26840, "text": "Python3" }, { "code": "from datetime import datetime, timedelta updated = ( datetime.now() + timedelta( hours=5 )).strftime('%H:%M:%S') print( updated )", "e": 26990, "s": 26848, "text": null }, { "code": null, "e": 26998, "s": 26990, "text": "Output:" }, { "code": null, "e": 27008, "s": 26998, "text": "13:28:21\n" }, { "code": null, "e": 27032, "s": 27008, "text": "Python datetime-program" }, { "code": null, "e": 27048, "s": 27032, "text": "Python-datetime" }, { "code": null, "e": 27055, "s": 27048, "text": "Python" }, { "code": null, "e": 27153, "s": 27055, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27162, "s": 27153, "text": "Comments" }, { "code": null, "e": 27175, "s": 27162, "text": "Old Comments" }, { "code": null, "e": 27207, "s": 27175, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27263, "s": 27207, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27318, "s": 27263, "text": "Selecting rows in pandas DataFrame based on conditions" }, { "code": null, "e": 27360, "s": 27318, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 27402, "s": 27360, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27433, "s": 27402, "text": "Python | os.path.join() method" }, { "code": null, "e": 27472, "s": 27433, "text": "Python | Get unique values from a list" }, { "code": null, "e": 27501, "s": 27472, "text": "Create a directory in Python" }, { "code": null, "e": 27523, "s": 27501, "text": "Defaultdict in Python" } ]
Resizing images with ImageTk.PhotoImage with Tkinter
The PIL or Pillow library in Python is used for processing images in a Tkinter application. We can use Pillow to open the images, resize them and display them in the window. To resize the image, we can use the image.resize((width, height)) method. The resized image can later be processed and displayed through the label widget.

Let us have a look at the example where we will open an image and resize it to display in the window through the label widget.

# Import the required libraries
from tkinter import *
from PIL import Image, ImageTk

# Create an instance of tkinter frame or window
win = Tk()

# Set the size of the tkinter window
win.geometry("700x350")

# Load the image
image = Image.open('download.png')

# Resize the image to the given (width, height)
img = image.resize((450, 350))

# Convert the image into a Tk-compatible PhotoImage
my_img = ImageTk.PhotoImage(img)

# Display the image with a label
label = Label(win, image=my_img)
label.pack()

win.mainloop()

Running the above code will display a resized image in the window.
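As an additional illustration (not part of the original example), if you want to shrink the image to fit inside a box while keeping its aspect ratio, Pillow's thumbnail() method can be used instead of resize(). The snippet below assumes the same 'download.png' file as above.

# Extra illustration: thumbnail() resizes in place and preserves the aspect ratio
from tkinter import Tk, Label
from PIL import Image, ImageTk

win = Tk()
win.geometry("700x350")

image = Image.open('download.png')
image.thumbnail((450, 350))          # fits within 450x350, keeps proportions

my_img = ImageTk.PhotoImage(image)
Label(win, image=my_img).pack()

win.mainloop()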
[ { "code": null, "e": 1392, "s": 1062, "text": "The PIL or Pillow library in Python is used for processing images in a Tkinter application. We can use Pillow to open the images, resize them and display in the window. To resize the image, we can use image_resize((width, height) **options) method. The resized image can later be processed and displayed through the label widget." }, { "code": null, "e": 1519, "s": 1392, "text": "Let us have a look at the example where we will open an image and resize it to display in the window through the label widget." }, { "code": null, "e": 2007, "s": 1519, "text": "# Import the required libraries\nfrom tkinter import *\nfrom PIL import Image, ImageTk\n\n# Create an instance of tkinter frame or window\nwin=Tk()\n\n# Set the size of the tkinter window\nwin.geometry(\"700x350\")\n\n# Load the image\nimage=Image.open('download.png')\n\n# Resize the image in the given (width, height)\nimg=image.resize((450, 350))\n\n# Conver the image in TkImage\nmy_img=ImageTk.PhotoImage(img)\n\n# Display the image with label\nlabel=Label(win, image=my_img)\nlabel.pack()\n\nwin.mainloop()" }, { "code": null, "e": 2074, "s": 2007, "text": "Running the above code will display a resized image in the window." } ]
How to Efficiently Remove Punctuations from a String | by Eunjoo Byeon | Towards Data Science
Recently I found myself spending many hours trying to make sense of messy text data, and decided to review some of the preprocessing involved. There are many different ways to achieve a simple cleaning step. Today, I will review a couple of different methods to remove punctuations from a string and compare their performances.

The string translate method is a convenient way to change multiple characters to different values at once. Translate requires a table that will work as a dictionary to map the strings. The maketrans does that job for you.

The maketrans syntax works like str.maketrans('abcd', '0123', 'xyz'). It will create a table that tells translate to change all a with 0, b with 1, c with 2, etc., and remove x, y, and z.

Full syntax to remove punctuations and digits using translate is as below.

# importing a string of punctuation and digits to remove
import string
exclist = string.punctuation + string.digits

# remove punctuations and digits from oldtext
table_ = str.maketrans('', '', exclist)
newtext = oldtext.translate(table_)

This approach will entirely remove any character that is in string.punctuation and string.digits. That includes !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ and all numbers.

But sometimes, we might want to add a space in place of these special characters instead of getting rid of them entirely. We can do so by telling a table to change special characters to space instead of excluding them.

table_ = str.maketrans(exclist, ' '*len(exclist))

Additionally, we can simply split and join to make sure this operation does not result in multiple spaces between words.

newtext = ' '.join(oldtext.translate(table_).split())

We can also just use join instead of translate, taking the same exclusion list from the string package we made above.

# using exclist from above
newtext = ''.join(x for x in oldtext if x not in exclist)

We can forego the exclusion list and just use the string method to call only the alphabets.

newtext = ''.join(x for x in oldtext if x.isalpha())

This approach will only keep the alphabet. As a result, it will also eliminate space between words.

Instead of the list comprehension, we can do the same thing using the filter. This is slightly more efficient than using a list comprehension but outputs a new text in the same manner.

newtext = ''.join(filter(str.isalpha, oldtext))

Another way to remove punctuations (or any select characters) is to iterate through each special character and remove them one at a time. We can do this by using the replace method.
On the other hand, using the join with the list comprehension seems to be the least efficient way to clean select characters. Translate is the most versatile and fast option out of all reviewed today. If you have any other method, please leave a comment and I will add the test result to the post!
[ { "code": null, "e": 500, "s": 172, "text": "Recently I found myself spending many hours trying to make sense of messy text data, and decided to review some of the preprocessing involved. There are many different ways to achieve a simple cleaning step. Today, I will review a couple of different methods to remove punctuations from a string and compare their performances." }, { "code": null, "e": 722, "s": 500, "text": "The string translate method is a convenient way to change multiple characters to different values at once. Translate requires a table that will work as a dictionary to map the strings. The maketrans does that job for you." }, { "code": null, "e": 910, "s": 722, "text": "The maketrans syntax works like str.maketrans('abcd', '0123', 'xyz'). It will create a table that tells translate to change all a with 0, b with 1, c with 2, etc., and remove x, y, and z." }, { "code": null, "e": 985, "s": 910, "text": "Full syntax to remove punctuations and digits using translate is as below." }, { "code": null, "e": 1218, "s": 985, "text": "# importing a string of punctuation and digits to removeimport stringexclist = string.punctuation + string.digits# remove punctuations and digits from oldtexttable_ = str.maketrans('', '', exclist)newtext = oldtext.translate(table_)" }, { "code": null, "e": 1383, "s": 1218, "text": "This approach will entirely remove any character that is in string.punctuation and string.digits. That includes !”#$%&\\’()*+,-./:;<=>?@[\\\\]^_`{|}~’ and all numbers." }, { "code": null, "e": 1602, "s": 1383, "text": "But sometimes, we might want to add a space in place of these special characters instead of getting rid of them entirely. We can do so by telling a table to change special characters to space instead of excluding them." }, { "code": null, "e": 1652, "s": 1602, "text": "table_ = str.maketrans(exclist, ' '*len(exclist))" }, { "code": null, "e": 1773, "s": 1652, "text": "Additionally, we can simply split and join to make sure this operation does not result in multiple spaces between words." }, { "code": null, "e": 1827, "s": 1773, "text": "newtext = ' '.join(oldtext.translate(table_).split())" }, { "code": null, "e": 1945, "s": 1827, "text": "We can also just use join instead of translate, taking the same exclusion list from the string package we made above." }, { "code": null, "e": 2029, "s": 1945, "text": "# using exclist from abovenewtext = ''.join(x for x in oldtext if x not in exclist)" }, { "code": null, "e": 2121, "s": 2029, "text": "We can forego the exclusion list and just use the string method to call only the alphabets." }, { "code": null, "e": 2174, "s": 2121, "text": "newtext = ''.join(x for x in oldtext if x.isalpha())" }, { "code": null, "e": 2274, "s": 2174, "text": "This approach will only keep the alphabet. As a result, it will also eliminate space between words." }, { "code": null, "e": 2459, "s": 2274, "text": "Instead of the list comprehension, we can do the same thing using the filter. This is slightly more efficient than using a list comprehension but outputs a new text in the same manner." }, { "code": null, "e": 2507, "s": 2459, "text": "newtext = ''.join(filter(str.isalpha, oldtext))" }, { "code": null, "e": 2689, "s": 2507, "text": "Another way to remove punctuations (or any select characters) is to iterate through each special character and remove them one at a time. We can do this by using the replace method." 
}, { "code": null, "e": 2764, "s": 2689, "text": "# using exclist from abovefor s in exclist: text = text.replace(s, '')" }, { "code": null, "e": 2932, "s": 2764, "text": "There are many ways to accomplish a similar thing using regex depending on the exact goal. One way to do it is to replace characters that are not alphabets with space." }, { "code": null, "e": 2987, "s": 2932, "text": "import renewtext = re.sub(r'[^A-Za-z]+', ' ', oldtext)" }, { "code": null, "e": 3258, "s": 2987, "text": "[^A-Za-z]+ selects any character that matches the rule inside of the square bracket ([]), that does not (^) have at least one (+) letter in upper case alphabets (A-Z) or lower case alphabets (a-z). Then the regex sub replaces these characters in the old text with space." }, { "code": null, "e": 3397, "s": 3258, "text": "Another method is to select all the non-words by using a metacharacter \\W. This metacharacter does not include underscore (-) and numbers." }, { "code": null, "e": 3436, "s": 3397, "text": "newtext = re.sub(r'\\W+', ' ', oldtext)" }, { "code": null, "e": 3622, "s": 3436, "text": "We reviewed a handful of methods, but which one is the best? I used the timeit module to measure how long each of the methods takes to process approximately 1kb string data 10000 times." }, { "code": null, "e": 3907, "s": 3622, "text": "The test shows that using translate takes much less time compared to other methods! On the other hand, using the join with the list comprehension seems to be the least efficient way to clean select characters. Translate is the most versatile and fast option out of all reviewed today." } ]
Maximum sum of lengths of non-overlapping subarrays with k as the max element. - GeeksforGeeks
19 Apr, 2022 Find the maximum sum of lengths of non-overlapping subarrays (contiguous elements) with k as the maximum element. Examples: Input : arr[] = {2, 1, 4, 9, 2, 3, 8, 3, 4} k = 4 Output : 5 {2, 1, 4} => Length = 3 {3, 4} => Length = 2 So, 3 + 2 = 5 is the answer Input : arr[] = {1, 2, 3, 2, 3, 4, 1} k = 4 Output : 7 {1, 2, 3, 2, 3, 4, 1} => Length = 7 Input : arr = {4, 5, 7, 1, 2, 9, 8, 4, 3, 1} k = 4 Ans = 4 {4} => Length = 1 {4, 3, 1} => Length = 3 So, 1 + 3 = 4 is the answer question source : https://www.geeksforgeeks.org/amazon-interview-experience-set-376-campus-internship/ Algorithm : Traverse the array starting from first element Take a loop and keep on incrementing count If element is less than equal to k if array element is equal to k, then mark a flag If flag is marked, add this count to answer Take another loop and traverse the array till element is greater than k return ans C++ Java Python3 C# PHP Javascript // CPP program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max element#include <bits/stdc++.h>using namespace std; // Returns max sum of lengths with maximum element// as kint calculateMaxSumLength(int arr[], int n, int k){ int ans = 0; // final sum of lengths // number of elements in current subarray int count = 0; // variable for checking if k appeared in subarray int flag = 0; for (int i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (arr[i] <= k && i < n) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (arr[i] > k && i < n) i++; } return ans;} // driver programint main(){ int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = sizeof(arr) / sizeof(arr[0]); int k = 4; int ans = calculateMaxSumLength(arr, size, k); cout << "Max Length :: " << ans << endl; return 0;} // A Java program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max elementpublic class GFG{ // Returns max sum of lengths with maximum element // as k static int calculateMaxSumLength(int arr[], int n, int k) { int ans = 0; // final sum of lengths // number of elements in current subarray int count = 0; // variable for checking if k appeared in subarray int flag = 0; for (int i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // driver program to test above method public static void main(String[] args) { int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = arr.length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); System.out.println("Max Length :: " + ans); }}// This code is contributed by Sumit Ghosh # Python program to calculate max sum lengths of non# overlapping contiguous subarrays with k as max element # Returns max sum of lengths with max elements as kdef calculateMaxSumLength(arr, n, k): ans = 0 # final sum of lengths i=0 while i < n : # number of elements in current sub array count = 0 # Variable for checking if k appeared in the sub array flag = 0 # Count the number of elements which are # less than or equal to k while i < n and arr[i] <= k : count = 
count + 1 if arr[i] == k: flag = 1 i = i + 1 # if current element appeared in current # subarray and count to sumLength if flag == 1: ans = ans + count # skip the array elements which are greater than k while i < n and arr[i] > k : i = i + 1 return ans # Driver Programarr = [4, 5, 7, 1, 2, 9, 8, 4, 3, 1]size = len(arr)k = 4ans = calculateMaxSumLength(arr, size, k)print ("Max Length ::",ans) # Contributed by Rohit // A C# program to calculate max// sum lengths of non overlapping// contiguous subarrays with k as// max elementusing System;class GFG { // Returns max sum of lengths // with maximum element as k static int calculateMaxSumLength(int []arr, int n, int k) { // final sum of lengths int ans = 0; // number of elements in // current subarray int count = 0; // variable for checking if // k appeared in subarray int flag = 0; for(int i = 0; i < n;) { count = 0; flag = 0; // count the number of // elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element // appeared in current // subarray add count // to sumLength if (flag == 1) ans += count; // skip the array // elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // Driver Code public static void Main() { int []arr = {4, 5, 7, 1, 2, 9, 8, 4, 3, 1}; int size = arr.Length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); Console.WriteLine("Max Length :: " + ans); }} // This code is contributed by anuj_67. <?php// PHP program to calculate max sum lengths// of non overlapping contiguous subarrays// with k as max element // Returns max sum of lengths with maximum// element as kfunction calculateMaxSumLength(&$arr, $n, $k){ $ans = 0; // final sum of lengths // number of elements in current subarray $count = 0; // variable for checking if k // appeared in subarray $flag = 0; for ($i = 0; $i < $n { $count = 0; $flag = 0; // count the number of elements which // are less than equal to k while ($arr[$i] <= $k && $i < $n) { $count++; if ($arr[$i] == $k) $flag = 1; $i++; } // if current element appeared in current // subarray add count to sumLength if ($flag == 1) $ans += $count; // skip the array elements which are // greater than k while ($arr[$i] > $k && $i < $n) $i++; } return $ans;} // Driver Code$arr = array( 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 );$size = sizeof($arr);$k = 4;$ans = calculateMaxSumLength($arr, $size, $k);echo "Max Length :: " . $ans . 
"\n"; // This code is contributed by ita_c?> <script>// A Javascript program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max element // Returns max sum of lengths with maximum element // as k function calculateMaxSumLength(arr,n,k) { let ans = 0; // final sum of lengths // number of elements in current subarray let count = 0; // variable for checking if k appeared in subarray let flag = 0; for (let i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // driver program to test above method let arr=[4, 5, 7, 1, 2, 9, 8, 4, 3, 1]; let size = arr.length; let k = 4; let ans = calculateMaxSumLength(arr, size, k); document.write("Max Length :: " + ans); //This code is contributed by avanitrachhadiya2155 </script> Max Length :: 4 Time Complexity : O(n) It may look like O(n2), but if you take a closer look, array is traversed only onceThis article is contributed by Mandeep Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. Algorithm: Traverse the array from first element to last element if the element is less than k increment the count if the element is equals to k if k is not found increment the count and mark flag as 1 if k is found add the value of count to ans and mark count as 1 if the element is greater than k if k is present in the subarray add the value of count to ans and assign value of count and flag variables as 0 finally check again if k value is found in subarray or not if k is found return sum of answer and count if not return ans C++ Java Python3 Javascript // C++ program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element.#include <bits/stdc++.h>using namespace std;// Below function calculates the Maximum sum of lengths of// non-overlapping subarrays with k as the max element.int calculateMaxSumLength(int arr[], int n, int k){ // maximun sum of lengths int ans = 0; // number of elements in current subarray int count = 0; // falg variable for checking if k is present // in current subarray or not int flag = 0; for (int i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equals to k else if (arr[i] == k) { if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already present // in that subarray. So, add the value of count // to ans variable and make the count value as 1 // because we found the k else { ans += count; count = 1; } } // if element in arr is grater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables ans // and count if k is not found, return value of variable // ans. 
if (flag == 1) { return ans + count; } return ans;}// driver programint main(){ int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = sizeof(arr) / sizeof(arr[0]); int k = 4; int ans = calculateMaxSumLength(arr, size, k); cout << "Max Length :: " << ans << endl; return 0;}// Contributed by Ravi Teja Kuchipudi // JAVA program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element.class GFG { // Below function calculates the Maximum sum of lengths // of non-overlapping subarrays with k as the max // element. static int calculateMaxSumLength(int arr[], int n, int k) { // maximun sum of lengths int ans = 0; // number of elements in current subarray int count = 0; // falg variable for checking if k is present // in current subarray or not int flag = 0; for (int i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equals to k else if (arr[i] == k) { // if flag is equals to 0 then make flag // variable value as 1 and increment the // count. if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already // present in that subarray. So, add the // value of count to ans variable and make // the count value as 1 because we found the // k else { ans += count; count = 1; } } // if element in arr is grater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables // as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables // ans and count if k is not found, return value of // variable ans. if (flag == 1) { return ans + count; } return ans; } // driver program to test above method public static void main(String[] args) { int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = arr.length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); System.out.println("Max Length :: " + ans); }}// Contributed by Ravi Teja Kuchipudi # program to find Maximum sum of lengths of# non-overlapping subarrays with k as the max element. def calculateMaxSumLength(arr, n, k): # maximun sum of lengths ans = 0 # number of elements in current subarray count = 0 # falg variable for checking if k is present in current subarray or not flag = 0 for i in range(n): # increment the count if element in arr is less than k if arr[i] < k: count = count+1 # if the element is equals to k elif arr[i] == k: # if flag is equals to 0 then make flag variable value as 1 and # increment the count. if flag == 0: count = count + 1 flag = 1 # if flag is 1, we can say k is already present in that subarray. # So, add the value of count to ans variable and # make the count value as 1 because we found the k else: ans = ans + count count = 1 # if element in arr is grater than k else: # if k is present in the subarray # add the value of count to ans variable if flag == 1: ans = ans + count # assign value of count and flag variables as 0 count = 0 flag = 0 # Check again if k value is found in subarray # if k is found, return sum of values of variables ans and count # if k is not found, return value of variable ans. 
if flag == 1: return ans + count return ans # Driver Programarr = [4, 5, 7, 1, 2, 9, 8, 4, 3, 1]size = len(arr)k = 4ans = calculateMaxSumLength(arr, size, k)print("Max Length ::", ans) # Contributed by Ravi Teja Kuchipudi <script> // JavaScript program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element. // Below function calculates the Maximum sum of lengths of// non-overlapping subarrays with k as the max element.function calculateMaxSumLength(arr, n, k){ // maximum sum of lengths let ans = 0; // number of elements in current subarray let count = 0; // flag variable for checking if k is present // in current subarray or not let flag = 0; for (let i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equal to k else if (arr[i] == k) { if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already present // in that subarray. So, add the value of count // to ans variable and make the count value as 1 // because we found the k else { ans += count; count = 1; } } // if element in arr is greater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables ans // and count if k is not found, return value of variable // ans. if (flag == 1) { return ans + count; } return ans;} // driver programlet arr = [ 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 ];let size = arr.length;let k = 4;let ans = calculateMaxSumLength(arr, size, k);document.write("Max Length :: ",ans,"</br>"); // This code is contributed by shinjanpatra </script> Max Length :: 4 Time Complexity : O(n)
[ { "code": null, "e": 24949, "s": 24921, "text": "\n19 Apr, 2022" }, { "code": null, "e": 25074, "s": 24949, "text": "Find the maximum sum of lengths of non-overlapping subarrays (contiguous elements) with k as the maximum element. Examples: " }, { "code": null, "e": 25456, "s": 25074, "text": "Input : arr[] = {2, 1, 4, 9, 2, 3, 8, 3, 4} \n k = 4\nOutput : 5\n{2, 1, 4} => Length = 3\n{3, 4} => Length = 2\nSo, 3 + 2 = 5 is the answer\n\nInput : arr[] = {1, 2, 3, 2, 3, 4, 1} \n k = 4\nOutput : 7\n{1, 2, 3, 2, 3, 4, 1} => Length = 7\n\nInput : arr = {4, 5, 7, 1, 2, 9, 8, 4, 3, 1}\n k = 4\nAns = 4\n{4} => Length = 1\n{4, 3, 1} => Length = 3\nSo, 1 + 3 = 4 is the answer" }, { "code": null, "e": 25559, "s": 25456, "text": "question source : https://www.geeksforgeeks.org/amazon-interview-experience-set-376-campus-internship/" }, { "code": null, "e": 25572, "s": 25559, "text": "Algorithm : " }, { "code": null, "e": 25912, "s": 25572, "text": "Traverse the array starting from first element\n Take a loop and keep on incrementing count \n If element is less than equal to k\n if array element is equal to k, then mark\n a flag\n \n If flag is marked, add this count to answer\n \n Take another loop and traverse the array \n till element is greater than k\nreturn ans" }, { "code": null, "e": 25916, "s": 25912, "text": "C++" }, { "code": null, "e": 25921, "s": 25916, "text": "Java" }, { "code": null, "e": 25929, "s": 25921, "text": "Python3" }, { "code": null, "e": 25932, "s": 25929, "text": "C#" }, { "code": null, "e": 25936, "s": 25932, "text": "PHP" }, { "code": null, "e": 25947, "s": 25936, "text": "Javascript" }, { "code": "// CPP program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max element#include <bits/stdc++.h>using namespace std; // Returns max sum of lengths with maximum element// as kint calculateMaxSumLength(int arr[], int n, int k){ int ans = 0; // final sum of lengths // number of elements in current subarray int count = 0; // variable for checking if k appeared in subarray int flag = 0; for (int i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (arr[i] <= k && i < n) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (arr[i] > k && i < n) i++; } return ans;} // driver programint main(){ int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = sizeof(arr) / sizeof(arr[0]); int k = 4; int ans = calculateMaxSumLength(arr, size, k); cout << \"Max Length :: \" << ans << endl; return 0;}", "e": 27208, "s": 25947, "text": null }, { "code": "// A Java program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max elementpublic class GFG{ // Returns max sum of lengths with maximum element // as k static int calculateMaxSumLength(int arr[], int n, int k) { int ans = 0; // final sum of lengths // number of elements in current subarray int count = 0; // variable for checking if k appeared in subarray int flag = 0; for (int i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // 
driver program to test above method public static void main(String[] args) { int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = arr.length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); System.out.println(\"Max Length :: \" + ans); }}// This code is contributed by Sumit Ghosh", "e": 28655, "s": 27208, "text": null }, { "code": "# Python program to calculate max sum lengths of non# overlapping contiguous subarrays with k as max element # Returns max sum of lengths with max elements as kdef calculateMaxSumLength(arr, n, k): ans = 0 # final sum of lengths i=0 while i < n : # number of elements in current sub array count = 0 # Variable for checking if k appeared in the sub array flag = 0 # Count the number of elements which are # less than or equal to k while i < n and arr[i] <= k : count = count + 1 if arr[i] == k: flag = 1 i = i + 1 # if current element appeared in current # subarray and count to sumLength if flag == 1: ans = ans + count # skip the array elements which are greater than k while i < n and arr[i] > k : i = i + 1 return ans # Driver Programarr = [4, 5, 7, 1, 2, 9, 8, 4, 3, 1]size = len(arr)k = 4ans = calculateMaxSumLength(arr, size, k)print (\"Max Length ::\",ans) # Contributed by Rohit", "e": 29777, "s": 28655, "text": null }, { "code": "// A C# program to calculate max// sum lengths of non overlapping// contiguous subarrays with k as// max elementusing System;class GFG { // Returns max sum of lengths // with maximum element as k static int calculateMaxSumLength(int []arr, int n, int k) { // final sum of lengths int ans = 0; // number of elements in // current subarray int count = 0; // variable for checking if // k appeared in subarray int flag = 0; for(int i = 0; i < n;) { count = 0; flag = 0; // count the number of // elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element // appeared in current // subarray add count // to sumLength if (flag == 1) ans += count; // skip the array // elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // Driver Code public static void Main() { int []arr = {4, 5, 7, 1, 2, 9, 8, 4, 3, 1}; int size = arr.Length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); Console.WriteLine(\"Max Length :: \" + ans); }} // This code is contributed by anuj_67.", "e": 31380, "s": 29777, "text": null }, { "code": "<?php// PHP program to calculate max sum lengths// of non overlapping contiguous subarrays// with k as max element // Returns max sum of lengths with maximum// element as kfunction calculateMaxSumLength(&$arr, $n, $k){ $ans = 0; // final sum of lengths // number of elements in current subarray $count = 0; // variable for checking if k // appeared in subarray $flag = 0; for ($i = 0; $i < $n { $count = 0; $flag = 0; // count the number of elements which // are less than equal to k while ($arr[$i] <= $k && $i < $n) { $count++; if ($arr[$i] == $k) $flag = 1; $i++; } // if current element appeared in current // subarray add count to sumLength if ($flag == 1) $ans += $count; // skip the array elements which are // greater than k while ($arr[$i] > $k && $i < $n) $i++; } return $ans;} // Driver Code$arr = array( 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 );$size = sizeof($arr);$k = 4;$ans = calculateMaxSumLength($arr, $size, $k);echo \"Max Length :: \" . $ans . 
\"\\n\"; // This code is contributed by ita_c?>", "e": 32594, "s": 31380, "text": null }, { "code": "<script>// A Javascript program to calculate max sum lengths of// non overlapping contiguous subarrays with k as// max element // Returns max sum of lengths with maximum element // as k function calculateMaxSumLength(arr,n,k) { let ans = 0; // final sum of lengths // number of elements in current subarray let count = 0; // variable for checking if k appeared in subarray let flag = 0; for (let i = 0; i < n;) { count = 0; flag = 0; // count the number of elements which are // less than equal to k while (i < n && arr[i] <= k) { count++; if (arr[i] == k) flag = 1; i++; } // if current element appeared in current // subarray add count to sumLength if (flag == 1) ans += count; // skip the array elements which are // greater than k while (i < n && arr[i] > k) i++; } return ans; } // driver program to test above method let arr=[4, 5, 7, 1, 2, 9, 8, 4, 3, 1]; let size = arr.length; let k = 4; let ans = calculateMaxSumLength(arr, size, k); document.write(\"Max Length :: \" + ans); //This code is contributed by avanitrachhadiya2155 </script>", "e": 33988, "s": 32594, "text": null }, { "code": null, "e": 34004, "s": 33988, "text": "Max Length :: 4" }, { "code": null, "e": 34531, "s": 34004, "text": "Time Complexity : O(n) It may look like O(n2), but if you take a closer look, array is traversed only onceThis article is contributed by Mandeep Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 34542, "s": 34531, "text": "Algorithm:" }, { "code": null, "e": 35149, "s": 34542, "text": "Traverse the array from first element to last element\n if the element is less than k increment the count\n if the element is equals to k \n if k is not found\n increment the count and mark flag as 1\n if k is found\n add the value of count to ans and mark count as 1\n if the element is greater than k\n if k is present in the subarray add the value of count to ans and\n assign value of count and flag variables as 0\nfinally check again if k value is found in subarray or not\n if k is found return sum of answer and count\n if not return ans" }, { "code": null, "e": 35166, "s": 35162, "text": "C++" }, { "code": null, "e": 35171, "s": 35166, "text": "Java" }, { "code": null, "e": 35179, "s": 35171, "text": "Python3" }, { "code": null, "e": 35190, "s": 35179, "text": "Javascript" }, { "code": "// C++ program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element.#include <bits/stdc++.h>using namespace std;// Below function calculates the Maximum sum of lengths of// non-overlapping subarrays with k as the max element.int calculateMaxSumLength(int arr[], int n, int k){ // maximun sum of lengths int ans = 0; // number of elements in current subarray int count = 0; // falg variable for checking if k is present // in current subarray or not int flag = 0; for (int i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equals to k else if (arr[i] == k) { if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already present // in that subarray. 
So, add the value of count // to ans variable and make the count value as 1 // because we found the k else { ans += count; count = 1; } } // if element in arr is grater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables ans // and count if k is not found, return value of variable // ans. if (flag == 1) { return ans + count; } return ans;}// driver programint main(){ int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = sizeof(arr) / sizeof(arr[0]); int k = 4; int ans = calculateMaxSumLength(arr, size, k); cout << \"Max Length :: \" << ans << endl; return 0;}// Contributed by Ravi Teja Kuchipudi", "e": 37214, "s": 35190, "text": null }, { "code": "// JAVA program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element.class GFG { // Below function calculates the Maximum sum of lengths // of non-overlapping subarrays with k as the max // element. static int calculateMaxSumLength(int arr[], int n, int k) { // maximun sum of lengths int ans = 0; // number of elements in current subarray int count = 0; // falg variable for checking if k is present // in current subarray or not int flag = 0; for (int i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equals to k else if (arr[i] == k) { // if flag is equals to 0 then make flag // variable value as 1 and increment the // count. if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already // present in that subarray. So, add the // value of count to ans variable and make // the count value as 1 because we found the // k else { ans += count; count = 1; } } // if element in arr is grater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables // as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables // ans and count if k is not found, return value of // variable ans. if (flag == 1) { return ans + count; } return ans; } // driver program to test above method public static void main(String[] args) { int arr[] = { 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 }; int size = arr.length; int k = 4; int ans = calculateMaxSumLength(arr, size, k); System.out.println(\"Max Length :: \" + ans); }}// Contributed by Ravi Teja Kuchipudi", "e": 39698, "s": 37214, "text": null }, { "code": "# program to find Maximum sum of lengths of# non-overlapping subarrays with k as the max element. def calculateMaxSumLength(arr, n, k): # maximun sum of lengths ans = 0 # number of elements in current subarray count = 0 # falg variable for checking if k is present in current subarray or not flag = 0 for i in range(n): # increment the count if element in arr is less than k if arr[i] < k: count = count+1 # if the element is equals to k elif arr[i] == k: # if flag is equals to 0 then make flag variable value as 1 and # increment the count. if flag == 0: count = count + 1 flag = 1 # if flag is 1, we can say k is already present in that subarray. 
# So, add the value of count to ans variable and # make the count value as 1 because we found the k else: ans = ans + count count = 1 # if element in arr is grater than k else: # if k is present in the subarray # add the value of count to ans variable if flag == 1: ans = ans + count # assign value of count and flag variables as 0 count = 0 flag = 0 # Check again if k value is found in subarray # if k is found, return sum of values of variables ans and count # if k is not found, return value of variable ans. if flag == 1: return ans + count return ans # Driver Programarr = [4, 5, 7, 1, 2, 9, 8, 4, 3, 1]size = len(arr)k = 4ans = calculateMaxSumLength(arr, size, k)print(\"Max Length ::\", ans) # Contributed by Ravi Teja Kuchipudi", "e": 41400, "s": 39698, "text": null }, { "code": "<script> // JavaScript program to find Maximum sum of lengths of// non-overlapping subarrays with k as the max element. // Below function calculates the Maximum sum of lengths of// non-overlapping subarrays with k as the max element.function calculateMaxSumLength(arr, n, k){ // maximun sum of lengths let ans = 0; // number of elements in current subarray let count = 0; // falg variable for checking if k is present // in current subarray or not let flag = 0; for (let i = 0; i < n; i++) { // increment the count if element in arr is less // than k if (arr[i] < k) { count++; } // if the element is equals to k else if (arr[i] == k) { if (flag == 0) { count++; flag = 1; } // if flag is 1, we can say k is already present // in that subarray. So, add the value of count // to ans variable and make the count value as 1 // because we found the k else { ans += count; count = 1; } } // if element in arr is grater than k else { // if k is present in the subarray // add the value of count to ans variable if (flag == 1) { ans += count; } // assign value of count and flag variables as 0 count = 0; flag = 0; } } // Check again if k value is found in subarray // if k is found, return sum of values of variables ans // and count if k is not found, return value of variable // ans. if (flag == 1) { return ans + count; } return ans;} // driver programlet arr = [ 4, 5, 7, 1, 2, 9, 8, 4, 3, 1 ];let size = arr.length;let k = 4;let ans = calculateMaxSumLength(arr, size, k);document.write(\"Max Length :: \",ans,\"</br>\"); // This code is contributed by shinjanpatra </script>", "e": 43440, "s": 41400, "text": null }, { "code": null, "e": 43456, "s": 43440, "text": "Max Length :: 4" }, { "code": null, "e": 43480, "s": 43456, "text": "Time Complexity : O(n) " }, { "code": null, "e": 43485, "s": 43480, "text": "vt_m" }, { "code": null, "e": 43491, "s": 43485, "text": "ukasp" }, { "code": null, "e": 43500, "s": 43491, "text": "rv720468" }, { "code": null, "e": 43521, "s": 43500, "text": "avanitrachhadiya2155" }, { "code": null, "e": 43537, "s": 43521, "text": "amartyaghoshgfg" }, { "code": null, "e": 43555, "s": 43537, "text": "tejakuchipudi1972" }, { "code": null, "e": 43562, "s": 43555, "text": "Amazon" }, { "code": null, "e": 43571, "s": 43562, "text": "subarray" }, { "code": null, "e": 43584, "s": 43571, "text": "subarray-sum" }, { "code": null, "e": 43591, "s": 43584, "text": "Arrays" }, { "code": null, "e": 43598, "s": 43591, "text": "Amazon" }, { "code": null, "e": 43605, "s": 43598, "text": "Arrays" }, { "code": null, "e": 43703, "s": 43605, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 43712, "s": 43703, "text": "Comments" }, { "code": null, "e": 43725, "s": 43712, "text": "Old Comments" }, { "code": null, "e": 43773, "s": 43725, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 43817, "s": 43773, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 43840, "s": 43817, "text": "Introduction to Arrays" }, { "code": null, "e": 43872, "s": 43840, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 43886, "s": 43872, "text": "Linear Search" }, { "code": null, "e": 43907, "s": 43886, "text": "Linked List vs Array" }, { "code": null, "e": 43952, "s": 43907, "text": "Python | Using 2D arrays/lists the right way" }, { "code": null, "e": 44020, "s": 43952, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 44074, "s": 44020, "text": "Queue | Set 1 (Introduction and Array Implementation)" } ]
Implementing and Analyzing different Activation Functions and Weight Initialization Methods Using Python | by Niranjan Kumar | Towards Data Science
In this post, we will discuss how to implement different combinations of non-linear activation functions and weight initialization methods in python. Also, we will analyze how the choice of activation function and weight initialization method will have an effect on accuracy and the rate at which we reduce our loss in a deep neural network using a non-linearly separable toy data set. This is a follow-up post to my previous post on activation functions and weight initialization methods. Note: This article assumes that the reader has a basic understanding of Neural Network, weights, biases, and backpropagation. If you want to learn the basics of the feed-forward neural network, check out my previous article (Link at the end of this article). Citation Note: The content and the structure of this article is based on the deep learning lectures from One-Fourth Labs — PadhAI. The activation function is the non-linear function that we apply over the input data coming to a particular neuron and the output from the function will be sent to the neurons present in the next layer as input. Even if we use very very deep neural networks without the non-linear activation function, we will just learn the ‘y’ as a linear transformation of ‘x’. It can only represent linear relations between ‘x’ and ‘y’. In other words, we will be constrained to learning linear decision boundaries and we can’t learn any arbitrary non-linear decision boundaries. This is why we need activation functions — non-linear activation function to learn the complex non-linear relationship between input and the output. Some of the commonly used activation functions, Logistic Tanh ReLU Leaky ReLU When we are training deep neural networks, weights and biases are usually initialized with random values. In the process of initializing weights to random values, we might encounter the problems like vanishing gradient or exploding gradient. As a result, the network would take a lot of time to converge (if it converges at all). The most commonly used weight initialization methods: Xavier Initialization He Initialization To understand the intuition behind the most commonly used activation functions and weight initialization methods, kindly refer to my previous post on activation functions and weight initialization methods. medium.com In the coding section, we will be covering the following topics. Generate data that is not linearly separableWrite a feedforward network classSetup code for plottingAnalyze sigmoid activationAnalyze tanh activationAnalyze ReLU activationAnalyze Leaky ReLU activation Generate data that is not linearly separable Write a feedforward network class Setup code for plotting Analyze sigmoid activation Analyze tanh activation Analyze ReLU activation Analyze Leaky ReLU activation In this section, we will compare the accuracy of a simple feedforward neural network by trying out various combinations of activation functions and weight initialization methods. The way we do that it is, first we will generate non-linearly separable data with two classes and write our simple feedforward neural network that supports all the activation functions and weight initialization methods. Then compare the different scenarios using loss plots. If you want to skip the theory part and get into the code right away, github.com Before we start with our analysis of the feedforward network, first we need to import the required libraries. 
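The import cell itself is not reproduced in this text, so the block below is a minimal sketch of what it could look like, based on the description that follows. The specific colors passed to the color map are placeholders, and any line number mentioned later (such as "line 19") refers to the author's original notebook, not to this sketch.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from IPython.display import HTML

# Custom color map built from a list of colors, as described below
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red", "yellow", "green"])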
We are importing the numpy to evaluate the matrix multiplication and dot product between two vectors in the neural network, matplotlib to visualize the data and from thesklearn package, we are importing functions to generate data and evaluate the network performance. To display/render HTML content in-line in Jupiter notebook import HTML. In line 19, we are creating a custom color map from a list of colors by using the from_list() method of LinearSegmentedColormap. Remember that we are using feedforward neural networks because we wanted to deal with non-linearly separable data. In this section, we will see how to randomly generate non-linearly separable data. To generate data randomly we will use make_blobs to generate blobs of points with a Gaussian distribution. I have generated 1000 data points in 2D space with four blobs centers=4 as a multi-class classification prediction problem. Each data point has two inputs and 0, 1, 2 or 3 class labels. Note that make_blobs() function will generate linearly separable data, but we need to have non-linearly separable data for binary classification. labels_orig = labelslabels = np.mod(labels_orig, 2) One way to convert the 4 classes to binary classification is to take the remainder of these 4 classes when they are divided by 2 so that I can get the new labels as 0 and 1. From the plot, we can see that the centers of blobs are merged such that we now have a binary classification problem where the decision boundary is not linear. Once we have our data ready, I have used the train_test_split function to split the data for training and validation in the ratio of 90:10 In this section, we will write a generic class where it can generate a neural network, by taking the number of hidden layers and the number of neurons in each hidden layer as input parameters. The network has six neurons in total — two in the first hidden layer and four in the output layer. For each of these neurons, pre-activation is represented by ‘a’ and post-activation is represented by ‘h’. In the network, we have a total of 18 parameters — 12 weight parameters and 6 bias terms. we will write our neural network in a class called FFNetwork. In the class FirstFFNetworkwe have 8 functions, we will go over these functions one by one. def __init__(self, init_method = 'random', activation_function = 'sigmoid', leaky_slope = 0.1): ...... The __init__ function initializes all the parameters of the network including weights and biases. The function takes accepts a few arguments, init_method: Initialization method to be used for initializing all the parameters of the network. Supports — “random”, “zeros”, “He” and “Xavier”. activation_function: Activation function to be used for learning non-linear decision boundary. Supports — “sigmoid”, “tanh”, “relu” and “leaky_relu”. leaky_slope: Negative slope of Leaky ReLU. Default value set to 0.1. In Line 5–10, we are setting the network configuration and the activation function to be used in the network. self.layer_sizes = [2, 2, 4] layer_sizesrepresents that the network has two inputs, two neurons in the first hidden layer and 4 neurons in the second hidden layer which is also the final layer in this case. After that, we have a bunch of “if-else” weight initialization statements, in each of these statements we are only initializing the weights based on the method of choice and the biases are always initialized to the value one. The initialized values of weights and biases are stored in a dictionary self.params. 
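To make the initialization logic above concrete, here is a minimal sketch of how such an __init__ could be written. The dictionary keys ("W1", "B1", ...), the random seed, and the exact Xavier/He scaling factors are illustrative assumptions rather than the article's exact code.

import numpy as np

class FFNetworkSketch:
    def __init__(self, init_method='random', activation_function='sigmoid', leaky_slope=0.1):
        self.activation_function = activation_function
        self.leaky_slope = leaky_slope
        # two inputs, two neurons in the first hidden layer, four in the final layer
        self.layer_sizes = [2, 2, 4]
        np.random.seed(0)
        self.params = {}
        for i in range(1, len(self.layer_sizes)):
            fan_in, fan_out = self.layer_sizes[i - 1], self.layer_sizes[i]
            if init_method == 'zeros':
                W = np.zeros((fan_in, fan_out))
            elif init_method == 'random':
                W = np.random.randn(fan_in, fan_out)
            elif init_method == 'xavier':
                # one common Xavier form: scale by sqrt(1 / fan_in)
                W = np.random.randn(fan_in, fan_out) * np.sqrt(1.0 / fan_in)
            elif init_method == 'he':
                # He initialization: scale by sqrt(2 / fan_in)
                W = np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)
            else:
                raise ValueError("unknown init_method: " + init_method)
            self.params["W" + str(i)] = W
            # biases are initialized to one, as described above
            self.params["B" + str(i)] = np.ones((1, fan_out))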
def forward_activation(self, X): if self.activation_function == "sigmoid": return 1.0/(1.0 + np.exp(-X)) elif self.activation_function == "tanh": return np.tanh(X) elif self.activation_function == "relu": return np.maximum(0,X) elif self.activation_function == "leaky_relu": return np.maximum(self.leaky_slope*X,X) Next, we have forward_activation function that takes input ‘X’ as an argument and computes the post-activation value of the input depending on the choice of the activation function. def grad_activation(self, X): ...... The function grad_activation also takes input ‘X’ as an argument and computes the derivative of the activation function at given input and returns it. def forward_pass(self, X, params = None):....... def grad(self, X, Y, params = None): ....... After that, we have two functions forward_pass which characterize the forward pass. Forward pass involves two steps Post Activation — Computes the dot product between the input x & weights wand adds bias bPre Activation — Takes the output of post activation and applies the activation function on top of it. Post Activation — Computes the dot product between the input x & weights wand adds bias b Pre Activation — Takes the output of post activation and applies the activation function on top of it. gradfunction characterize the gradient computation for each of the parameters present in the network and stores it in a list called gradients. Don’t worry too much in how we arrived at the gradients because we will be using Pytorch to do the heavy lifting, but if you are interested in learning them go through my previous article. hackernoon.com def fit(self, X, Y, epochs=1, algo= "GD", display_loss=False, eta=1, mini_batch_size=100, eps=1e-8, beta=0.9, beta1=0.9, beta2=0.9, gamma=0.9 ): Next, we define fit method that takes input ‘X’ and ‘Y’ as mandatory arguments and a few optional arguments required for implementing the different variants of gradient descent algorithm. Kindly refer to my previous post for the detail explanation on how to implement the algorithms. hackernoon.com def predict(self, X): Now we define our predict function takes inputs X as an argument, which it expects to be an numpy array. In the predict function, we will compute the forward pass of each input with the trained model and send back a numpy array which contains the predicted value of each input data. In this section, we define a function to evaluate the performance of the neural network and create plots to visualize the working of the update rule. This kind of setup helps us to run different experiments with different activation functions, different weight initialization methods and plot update rule for different variants of the gradient descent First, we instantiate the feedforward network class and then call the fit method on the training data with 10 epochs and learning rate set to 1 (These values are arbitrary not the optimal values for this data, you can play around these values and find the best number of epochs and the learning rate). Then we will call post_process function to compute the training and validation accuracy of the neural network (Line 2–11). We are also plotting the scatter plot for the input points with different sizes based on the predicted value of the neural network. The size of each point in the plot is given by a formula, s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2) The formula takes the absolute difference between the predicted value and the actual value. 
If the ground truth is equal to the predicted value then size = 3 If the ground truth is not equal to the predicted value the size = 18 All the small points in the plot indicate that the model is predicting those observations correctly and large points indicate that those observations are incorrectly classified. Line 20–29, we are plotting the updates each parameter getting from the network using backpropagation. In our network, there are 18 parameters in total so we are iterating 18 times, each time we will find the update each parameter gets and plot them using subplot. For example, update for weight Wi at ith epoch = Wi + 1 — Wi To analyze the effect of sigmoid activation function on the neural network, we will set the activation function as ‘sigmoid’ and execute the neural network class. The Loss of the network is falling even though we have run it for very few iterations. By using the post_process function, we are able to plot the 18 subplots and we have not provided any axis labels because it is not required. The 18 plots for 18 parameters are plotted in row-major order representing the frequency of updates the parameter receives. The first 12 plots indicate the updates received by the weights and last 6 indicate the updates received by the bias terms in the network. In any of the subplots, if the curve is closer to the middle indicates that the particular parameter is not getting any updates. Instead of executing each weight initialization manually, we will write a for — loop to execute all possible weight initialization combinations. for init_method in ['zeros', 'random', 'xavier', 'he']: for activation_function in ['sigmoid']: print(init_method, activation_function) model = FFNetwork(init_method=init_method,activation_function = activation_function) model.fit(X_train, y_OH_train, epochs=50, eta=1, algo="GD", display_loss=True) post_process(plot_scale=0.05) print('\n--\n') In the above code, I just added two ‘for’ loops. One ‘for’ loop for weight initialization and another ‘for’ loop for activation function. Once you execute the above code, you will see that the neural network tries all the possible weight initialization methods by keep activation function — sigmoid constant. If you observe the output of zero weight initialization method with sigmoid, you can see that the symmetry breaking problem occurs in the sigmoid neuron. Once we initialize the weights to zero, in all subsequent iterations the weights are going to remain the same (they will move away from zero but they will be equal), this symmetry will never break during the training. This kind of phenomenon is known as symmetry breaking problem. Because of this problem, we are getting very low accuracy of 54%. In the random initialization, we can see that the problem of symmetry breaking doesn’t occur. It means that all the weights & biases are taking different values during the training. By using the Xavier initialization, we are getting the highest accuracy across different weight initialization method. Xavier is the recommended weight initialization method for sigmoid and tanh activation function. We will use the same code for executing the tanh activation function with different combinations of weight initialization methods by including the keyword ‘tanh’ in the second ‘for’ loop. for activation_function in ['tanh']: In the zero initialization with tanh activation, from the weight update subplots, we can see that tanh activation is hardly learning anything. 
In all the plots the curve is closer to zero, indicating that the parameters are not getting updates from optimization algorithm. The reason behind this phenomenon is that the value of tanh at x = 0 is zero and the derivative of tanh is also zero. When we do Xavier initialization with tanh, we are able to get higher performance from the neural network. Just by changing the method of weight initialization we are able to get higher accuracy (86.6%). We will use the same code for executing the ReLU activation function with different combinations of weight initialization methods by including the keyword ‘relu’ in the second ‘for’ loop. for activation_function in ['relu']: Similar to tanh with zero weight initialization, we observed that setting weights to zero doesn’t work with ReLU because the value of ReLU at zero is equal to zero itself. As a result, weights won’t be propagated back into the network and network won’t learn anything. So it’s not a good idea to set weights to zero either in case of tanh or ReLU. The recommended initialization method for ReLU is He-initialization, by using He-initialization we are able to get the highest accuracy. We will use the same code for executing the ReLU activation function with different combinations of weight initialization methods by including the keyword ‘relu’ in the second ‘for’ loop. for activation_function in ['leaky_relu']: Similar to ReLU with zero weight initialization, we observed that setting weights to zero doesn’t work with Leaky ReLU because the value of Leaky ReLU at zero is equal to zero itself. As a result, weights won’t be propagated back into the network and network won’t learn anything. Coming to random initialization, we can see that the network achieves very good accuracy but there are a lot of oscillations in the update subplots. The large oscillations might be occurring due to a large learning rate. By using He initialization, we get the highest accuracy of 92% on the test data. To avoid the large oscillations, we should set a smaller learning rate in any method of weight initialization. The recommended initialization method for Leaky ReLU is He-initialization. There you have it, we have successfully analyzed the different combinations of weight initialization methods and activation functions. CODE CODE CODE In this article, we have used make_blobs function to generate toy data and we have seen that make_blobs generate linearly separable data. If you want to generate some complex non-linearly separable data to train your feedforward neural network, you can use make_moons function from sklearn package. You can also try, changing the learning algorithm (we been using vanilla gradient descent) to a different variant of gradient descent like Adam, NAG, etc... and study the impact of the learning algorithm on network performance. Using our feedforward neural network class you can create a much deeper network with more number of neurons in each layer ([2,2,2,4] — two neurons each in first 3 hidden layers and 4 neurons in the output layer) and play with learning rate & a number of epochs to check under which parameters neural network is able to arrive at the best decision boundary possible. The entire code discussed in the article is present in this GitHub repository. Feel free to fork it or download it. The best part is that you can directly run the code in google colab, don’t need to worry about installing the packages. 
github.com In this post, we briefly looked at an overview of weight initialization methods and activation functions. Then we saw how to build a generic feedforward neural network class that supports different variants of gradient descent, weight initialization methods, and activation functions. After that, we analyzed each of the activation functions with different weight initialization methods. If you want to learn more about Data Science and Machine Learning, check out the Machine Learning Basics and Advanced Machine Learning courses by Abhishek and Pukhraj from Starttechacademy. One of the good points about these courses is that they teach in both Python and R, so it's your choice. Recommended Reading hackernoon.com Author Bio Niranjan Kumar is a Retail Risk Analyst in the HSBC Analytics division. He is passionate about Deep Learning and Artificial Intelligence. He is one of the top writers at Medium in Artificial Intelligence. You can find all of Niranjan's blogs here. You can connect with Niranjan on LinkedIn, Twitter and GitHub to stay up to date with his latest blog posts. Disclaimer: There might be some affiliate links in this post to relevant resources. You can purchase the bundle at the lowest price possible. I will receive a small commission if you purchase the course.
[ { "code": null, "e": 662, "s": 172, "text": "In this post, we will discuss how to implement different combinations of non-linear activation functions and weight initialization methods in python. Also, we will analyze how the choice of activation function and weight initialization method will have an effect on accuracy and the rate at which we reduce our loss in a deep neural network using a non-linearly separable toy data set. This is a follow-up post to my previous post on activation functions and weight initialization methods." }, { "code": null, "e": 921, "s": 662, "text": "Note: This article assumes that the reader has a basic understanding of Neural Network, weights, biases, and backpropagation. If you want to learn the basics of the feed-forward neural network, check out my previous article (Link at the end of this article)." }, { "code": null, "e": 1052, "s": 921, "text": "Citation Note: The content and the structure of this article is based on the deep learning lectures from One-Fourth Labs — PadhAI." }, { "code": null, "e": 1264, "s": 1052, "text": "The activation function is the non-linear function that we apply over the input data coming to a particular neuron and the output from the function will be sent to the neurons present in the next layer as input." }, { "code": null, "e": 1768, "s": 1264, "text": "Even if we use very very deep neural networks without the non-linear activation function, we will just learn the ‘y’ as a linear transformation of ‘x’. It can only represent linear relations between ‘x’ and ‘y’. In other words, we will be constrained to learning linear decision boundaries and we can’t learn any arbitrary non-linear decision boundaries. This is why we need activation functions — non-linear activation function to learn the complex non-linear relationship between input and the output." }, { "code": null, "e": 1816, "s": 1768, "text": "Some of the commonly used activation functions," }, { "code": null, "e": 1825, "s": 1816, "text": "Logistic" }, { "code": null, "e": 1830, "s": 1825, "text": "Tanh" }, { "code": null, "e": 1835, "s": 1830, "text": "ReLU" }, { "code": null, "e": 1846, "s": 1835, "text": "Leaky ReLU" }, { "code": null, "e": 2230, "s": 1846, "text": "When we are training deep neural networks, weights and biases are usually initialized with random values. In the process of initializing weights to random values, we might encounter the problems like vanishing gradient or exploding gradient. As a result, the network would take a lot of time to converge (if it converges at all). The most commonly used weight initialization methods:" }, { "code": null, "e": 2252, "s": 2230, "text": "Xavier Initialization" }, { "code": null, "e": 2270, "s": 2252, "text": "He Initialization" }, { "code": null, "e": 2476, "s": 2270, "text": "To understand the intuition behind the most commonly used activation functions and weight initialization methods, kindly refer to my previous post on activation functions and weight initialization methods." }, { "code": null, "e": 2487, "s": 2476, "text": "medium.com" }, { "code": null, "e": 2552, "s": 2487, "text": "In the coding section, we will be covering the following topics." 
}, { "code": null, "e": 2754, "s": 2552, "text": "Generate data that is not linearly separableWrite a feedforward network classSetup code for plottingAnalyze sigmoid activationAnalyze tanh activationAnalyze ReLU activationAnalyze Leaky ReLU activation" }, { "code": null, "e": 2799, "s": 2754, "text": "Generate data that is not linearly separable" }, { "code": null, "e": 2833, "s": 2799, "text": "Write a feedforward network class" }, { "code": null, "e": 2857, "s": 2833, "text": "Setup code for plotting" }, { "code": null, "e": 2884, "s": 2857, "text": "Analyze sigmoid activation" }, { "code": null, "e": 2908, "s": 2884, "text": "Analyze tanh activation" }, { "code": null, "e": 2932, "s": 2908, "text": "Analyze ReLU activation" }, { "code": null, "e": 2962, "s": 2932, "text": "Analyze Leaky ReLU activation" }, { "code": null, "e": 3141, "s": 2962, "text": "In this section, we will compare the accuracy of a simple feedforward neural network by trying out various combinations of activation functions and weight initialization methods." }, { "code": null, "e": 3416, "s": 3141, "text": "The way we do that it is, first we will generate non-linearly separable data with two classes and write our simple feedforward neural network that supports all the activation functions and weight initialization methods. Then compare the different scenarios using loss plots." }, { "code": null, "e": 3486, "s": 3416, "text": "If you want to skip the theory part and get into the code right away," }, { "code": null, "e": 3497, "s": 3486, "text": "github.com" }, { "code": null, "e": 3947, "s": 3497, "text": "Before we start with our analysis of the feedforward network, first we need to import the required libraries. We are importing the numpy to evaluate the matrix multiplication and dot product between two vectors in the neural network, matplotlib to visualize the data and from thesklearn package, we are importing functions to generate data and evaluate the network performance. To display/render HTML content in-line in Jupiter notebook import HTML." }, { "code": null, "e": 4076, "s": 3947, "text": "In line 19, we are creating a custom color map from a list of colors by using the from_list() method of LinearSegmentedColormap." }, { "code": null, "e": 4274, "s": 4076, "text": "Remember that we are using feedforward neural networks because we wanted to deal with non-linearly separable data. In this section, we will see how to randomly generate non-linearly separable data." }, { "code": null, "e": 4713, "s": 4274, "text": "To generate data randomly we will use make_blobs to generate blobs of points with a Gaussian distribution. I have generated 1000 data points in 2D space with four blobs centers=4 as a multi-class classification prediction problem. Each data point has two inputs and 0, 1, 2 or 3 class labels. Note that make_blobs() function will generate linearly separable data, but we need to have non-linearly separable data for binary classification." }, { "code": null, "e": 4765, "s": 4713, "text": "labels_orig = labelslabels = np.mod(labels_orig, 2)" }, { "code": null, "e": 4939, "s": 4765, "text": "One way to convert the 4 classes to binary classification is to take the remainder of these 4 classes when they are divided by 2 so that I can get the new labels as 0 and 1." }, { "code": null, "e": 5238, "s": 4939, "text": "From the plot, we can see that the centers of blobs are merged such that we now have a binary classification problem where the decision boundary is not linear. 
Once we have our data ready, I have used the train_test_split function to split the data for training and validation in the ratio of 90:10" }, { "code": null, "e": 5431, "s": 5238, "text": "In this section, we will write a generic class where it can generate a neural network, by taking the number of hidden layers and the number of neurons in each hidden layer as input parameters." }, { "code": null, "e": 5727, "s": 5431, "text": "The network has six neurons in total — two in the first hidden layer and four in the output layer. For each of these neurons, pre-activation is represented by ‘a’ and post-activation is represented by ‘h’. In the network, we have a total of 18 parameters — 12 weight parameters and 6 bias terms." }, { "code": null, "e": 5789, "s": 5727, "text": "we will write our neural network in a class called FFNetwork." }, { "code": null, "e": 5881, "s": 5789, "text": "In the class FirstFFNetworkwe have 8 functions, we will go over these functions one by one." }, { "code": null, "e": 5987, "s": 5881, "text": "def __init__(self, init_method = 'random', activation_function = 'sigmoid', leaky_slope = 0.1): ......" }, { "code": null, "e": 6129, "s": 5987, "text": "The __init__ function initializes all the parameters of the network including weights and biases. The function takes accepts a few arguments," }, { "code": null, "e": 6276, "s": 6129, "text": "init_method: Initialization method to be used for initializing all the parameters of the network. Supports — “random”, “zeros”, “He” and “Xavier”." }, { "code": null, "e": 6426, "s": 6276, "text": "activation_function: Activation function to be used for learning non-linear decision boundary. Supports — “sigmoid”, “tanh”, “relu” and “leaky_relu”." }, { "code": null, "e": 6495, "s": 6426, "text": "leaky_slope: Negative slope of Leaky ReLU. Default value set to 0.1." }, { "code": null, "e": 6605, "s": 6495, "text": "In Line 5–10, we are setting the network configuration and the activation function to be used in the network." }, { "code": null, "e": 6634, "s": 6605, "text": "self.layer_sizes = [2, 2, 4]" }, { "code": null, "e": 7123, "s": 6634, "text": "layer_sizesrepresents that the network has two inputs, two neurons in the first hidden layer and 4 neurons in the second hidden layer which is also the final layer in this case. After that, we have a bunch of “if-else” weight initialization statements, in each of these statements we are only initializing the weights based on the method of choice and the biases are always initialized to the value one. The initialized values of weights and biases are stored in a dictionary self.params." }, { "code": null, "e": 7519, "s": 7123, "text": "def forward_activation(self, X): if self.activation_function == \"sigmoid\": return 1.0/(1.0 + np.exp(-X)) elif self.activation_function == \"tanh\": return np.tanh(X) elif self.activation_function == \"relu\": return np.maximum(0,X) elif self.activation_function == \"leaky_relu\": return np.maximum(self.leaky_slope*X,X)" }, { "code": null, "e": 7701, "s": 7519, "text": "Next, we have forward_activation function that takes input ‘X’ as an argument and computes the post-activation value of the input depending on the choice of the activation function." }, { "code": null, "e": 7741, "s": 7701, "text": "def grad_activation(self, X): ......" }, { "code": null, "e": 7892, "s": 7741, "text": "The function grad_activation also takes input ‘X’ as an argument and computes the derivative of the activation function at given input and returns it." 
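The body of grad_activation is elided above, so the following is an illustrative sketch of one standard way to fill it in, mirroring the branches of forward_activation. It assumes numpy is imported as np, that the method sits inside the same network class, and that X is the pre-activation input (the repository code may instead pass the post-activation value, in which case the sigmoid and tanh branches simplify); treat it as a sketch rather than the author's exact code.

def grad_activation(self, X):
    # Derivative of the chosen activation function, evaluated element-wise at X
    if self.activation_function == "sigmoid":
        s = 1.0 / (1.0 + np.exp(-X))
        return s * (1 - s)                       # sigma'(x) = sigma(x) * (1 - sigma(x))
    elif self.activation_function == "tanh":
        return 1 - np.square(np.tanh(X))         # tanh'(x) = 1 - tanh(x)^2
    elif self.activation_function == "relu":
        return 1.0 * (X > 0)                     # 1 for positive inputs, 0 otherwise
    elif self.activation_function == "leaky_relu":
        return np.where(X > 0, 1.0, self.leaky_slope)  # slope for negative inputs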
}, { "code": null, "e": 7990, "s": 7892, "text": "def forward_pass(self, X, params = None):....... def grad(self, X, Y, params = None): ......." }, { "code": null, "e": 8106, "s": 7990, "text": "After that, we have two functions forward_pass which characterize the forward pass. Forward pass involves two steps" }, { "code": null, "e": 8298, "s": 8106, "text": "Post Activation — Computes the dot product between the input x & weights wand adds bias bPre Activation — Takes the output of post activation and applies the activation function on top of it." }, { "code": null, "e": 8388, "s": 8298, "text": "Post Activation — Computes the dot product between the input x & weights wand adds bias b" }, { "code": null, "e": 8491, "s": 8388, "text": "Pre Activation — Takes the output of post activation and applies the activation function on top of it." }, { "code": null, "e": 8823, "s": 8491, "text": "gradfunction characterize the gradient computation for each of the parameters present in the network and stores it in a list called gradients. Don’t worry too much in how we arrived at the gradients because we will be using Pytorch to do the heavy lifting, but if you are interested in learning them go through my previous article." }, { "code": null, "e": 8838, "s": 8823, "text": "hackernoon.com" }, { "code": null, "e": 9004, "s": 8838, "text": "def fit(self, X, Y, epochs=1, algo= \"GD\", display_loss=False, eta=1, mini_batch_size=100, eps=1e-8, beta=0.9, beta1=0.9, beta2=0.9, gamma=0.9 ):" }, { "code": null, "e": 9288, "s": 9004, "text": "Next, we define fit method that takes input ‘X’ and ‘Y’ as mandatory arguments and a few optional arguments required for implementing the different variants of gradient descent algorithm. Kindly refer to my previous post for the detail explanation on how to implement the algorithms." }, { "code": null, "e": 9303, "s": 9288, "text": "hackernoon.com" }, { "code": null, "e": 9325, "s": 9303, "text": "def predict(self, X):" }, { "code": null, "e": 9608, "s": 9325, "text": "Now we define our predict function takes inputs X as an argument, which it expects to be an numpy array. In the predict function, we will compute the forward pass of each input with the trained model and send back a numpy array which contains the predicted value of each input data." }, { "code": null, "e": 9960, "s": 9608, "text": "In this section, we define a function to evaluate the performance of the neural network and create plots to visualize the working of the update rule. This kind of setup helps us to run different experiments with different activation functions, different weight initialization methods and plot update rule for different variants of the gradient descent" }, { "code": null, "e": 10262, "s": 9960, "text": "First, we instantiate the feedforward network class and then call the fit method on the training data with 10 epochs and learning rate set to 1 (These values are arbitrary not the optimal values for this data, you can play around these values and find the best number of epochs and the learning rate)." }, { "code": null, "e": 10517, "s": 10262, "text": "Then we will call post_process function to compute the training and validation accuracy of the neural network (Line 2–11). We are also plotting the scatter plot for the input points with different sizes based on the predicted value of the neural network." 
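Before moving on to the plotting helpers, here is a minimal sketch of what forward_pass and predict could look like for the [2, 2, 4] network described above. The attribute and key names (A1, H1, W1, B1 and so on) and the softmax output layer are assumptions for illustration, not necessarily the repository's code. Note that the weighted sum of inputs plus the bias is the pre-activation, and the value obtained after applying the activation function to it is the post-activation.

def forward_pass(self, X, params=None):
    if params is None:
        params = self.params
    # Pre-activation of the hidden layer: weighted sum of inputs plus bias
    self.A1 = np.matmul(X, params["W1"]) + params["B1"]        # shape (N, 2)
    # Post-activation: chosen non-linearity applied to the pre-activation
    self.H1 = self.forward_activation(self.A1)                 # shape (N, 2)
    # Pre-activation of the output layer
    self.A2 = np.matmul(self.H1, params["W2"]) + params["B2"]  # shape (N, 4)
    # Softmax over the four output neurons (a common choice for one-hot targets)
    exps = np.exp(self.A2 - np.max(self.A2, axis=1, keepdims=True))
    self.H2 = exps / np.sum(exps, axis=1, keepdims=True)       # shape (N, 4)
    return self.H2

def predict(self, X):
    # Run the trained network on every input and return the predictions as a numpy array
    Y_pred = self.forward_pass(X)
    return np.array(Y_pred).squeeze()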
}, { "code": null, "e": 10575, "s": 10517, "text": "The size of each point in the plot is given by a formula," }, { "code": null, "e": 10624, "s": 10575, "text": "s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2)" }, { "code": null, "e": 10716, "s": 10624, "text": "The formula takes the absolute difference between the predicted value and the actual value." }, { "code": null, "e": 10782, "s": 10716, "text": "If the ground truth is equal to the predicted value then size = 3" }, { "code": null, "e": 10852, "s": 10782, "text": "If the ground truth is not equal to the predicted value the size = 18" }, { "code": null, "e": 11030, "s": 10852, "text": "All the small points in the plot indicate that the model is predicting those observations correctly and large points indicate that those observations are incorrectly classified." }, { "code": null, "e": 11356, "s": 11030, "text": "Line 20–29, we are plotting the updates each parameter getting from the network using backpropagation. In our network, there are 18 parameters in total so we are iterating 18 times, each time we will find the update each parameter gets and plot them using subplot. For example, update for weight Wi at ith epoch = Wi + 1 — Wi" }, { "code": null, "e": 11519, "s": 11356, "text": "To analyze the effect of sigmoid activation function on the neural network, we will set the activation function as ‘sigmoid’ and execute the neural network class." }, { "code": null, "e": 12010, "s": 11519, "text": "The Loss of the network is falling even though we have run it for very few iterations. By using the post_process function, we are able to plot the 18 subplots and we have not provided any axis labels because it is not required. The 18 plots for 18 parameters are plotted in row-major order representing the frequency of updates the parameter receives. The first 12 plots indicate the updates received by the weights and last 6 indicate the updates received by the bias terms in the network." }, { "code": null, "e": 12284, "s": 12010, "text": "In any of the subplots, if the curve is closer to the middle indicates that the particular parameter is not getting any updates. Instead of executing each weight initialization manually, we will write a for — loop to execute all possible weight initialization combinations." }, { "code": null, "e": 12646, "s": 12284, "text": "for init_method in ['zeros', 'random', 'xavier', 'he']: for activation_function in ['sigmoid']: print(init_method, activation_function) model = FFNetwork(init_method=init_method,activation_function = activation_function) model.fit(X_train, y_OH_train, epochs=50, eta=1, algo=\"GD\", display_loss=True) post_process(plot_scale=0.05) print('\\n--\\n')" }, { "code": null, "e": 12955, "s": 12646, "text": "In the above code, I just added two ‘for’ loops. One ‘for’ loop for weight initialization and another ‘for’ loop for activation function. Once you execute the above code, you will see that the neural network tries all the possible weight initialization methods by keep activation function — sigmoid constant." }, { "code": null, "e": 13456, "s": 12955, "text": "If you observe the output of zero weight initialization method with sigmoid, you can see that the symmetry breaking problem occurs in the sigmoid neuron. Once we initialize the weights to zero, in all subsequent iterations the weights are going to remain the same (they will move away from zero but they will be equal), this symmetry will never break during the training. This kind of phenomenon is known as symmetry breaking problem. 
Because of this problem, we are getting very low accuracy of 54%." }, { "code": null, "e": 13854, "s": 13456, "text": "In the random initialization, we can see that the problem of symmetry breaking doesn’t occur. It means that all the weights & biases are taking different values during the training. By using the Xavier initialization, we are getting the highest accuracy across different weight initialization method. Xavier is the recommended weight initialization method for sigmoid and tanh activation function." }, { "code": null, "e": 14042, "s": 13854, "text": "We will use the same code for executing the tanh activation function with different combinations of weight initialization methods by including the keyword ‘tanh’ in the second ‘for’ loop." }, { "code": null, "e": 14079, "s": 14042, "text": "for activation_function in ['tanh']:" }, { "code": null, "e": 14470, "s": 14079, "text": "In the zero initialization with tanh activation, from the weight update subplots, we can see that tanh activation is hardly learning anything. In all the plots the curve is closer to zero, indicating that the parameters are not getting updates from optimization algorithm. The reason behind this phenomenon is that the value of tanh at x = 0 is zero and the derivative of tanh is also zero." }, { "code": null, "e": 14674, "s": 14470, "text": "When we do Xavier initialization with tanh, we are able to get higher performance from the neural network. Just by changing the method of weight initialization we are able to get higher accuracy (86.6%)." }, { "code": null, "e": 14862, "s": 14674, "text": "We will use the same code for executing the ReLU activation function with different combinations of weight initialization methods by including the keyword ‘relu’ in the second ‘for’ loop." }, { "code": null, "e": 14899, "s": 14862, "text": "for activation_function in ['relu']:" }, { "code": null, "e": 15247, "s": 14899, "text": "Similar to tanh with zero weight initialization, we observed that setting weights to zero doesn’t work with ReLU because the value of ReLU at zero is equal to zero itself. As a result, weights won’t be propagated back into the network and network won’t learn anything. So it’s not a good idea to set weights to zero either in case of tanh or ReLU." }, { "code": null, "e": 15384, "s": 15247, "text": "The recommended initialization method for ReLU is He-initialization, by using He-initialization we are able to get the highest accuracy." }, { "code": null, "e": 15572, "s": 15384, "text": "We will use the same code for executing the ReLU activation function with different combinations of weight initialization methods by including the keyword ‘relu’ in the second ‘for’ loop." }, { "code": null, "e": 15615, "s": 15572, "text": "for activation_function in ['leaky_relu']:" }, { "code": null, "e": 15896, "s": 15615, "text": "Similar to ReLU with zero weight initialization, we observed that setting weights to zero doesn’t work with Leaky ReLU because the value of Leaky ReLU at zero is equal to zero itself. As a result, weights won’t be propagated back into the network and network won’t learn anything." }, { "code": null, "e": 16384, "s": 15896, "text": "Coming to random initialization, we can see that the network achieves very good accuracy but there are a lot of oscillations in the update subplots. The large oscillations might be occurring due to a large learning rate. By using He initialization, we get the highest accuracy of 92% on the test data. 
To avoid the large oscillations, we should set a smaller learning rate in any method of weight initialization. The recommended initialization method for Leaky ReLU is He-initialization." }, { "code": null, "e": 16519, "s": 16384, "text": "There you have it, we have successfully analyzed the different combinations of weight initialization methods and activation functions." }, { "code": null, "e": 16534, "s": 16519, "text": "CODE CODE CODE" }, { "code": null, "e": 16833, "s": 16534, "text": "In this article, we have used make_blobs function to generate toy data and we have seen that make_blobs generate linearly separable data. If you want to generate some complex non-linearly separable data to train your feedforward neural network, you can use make_moons function from sklearn package." }, { "code": null, "e": 17427, "s": 16833, "text": "You can also try, changing the learning algorithm (we been using vanilla gradient descent) to a different variant of gradient descent like Adam, NAG, etc... and study the impact of the learning algorithm on network performance. Using our feedforward neural network class you can create a much deeper network with more number of neurons in each layer ([2,2,2,4] — two neurons each in first 3 hidden layers and 4 neurons in the output layer) and play with learning rate & a number of epochs to check under which parameters neural network is able to arrive at the best decision boundary possible." }, { "code": null, "e": 17663, "s": 17427, "text": "The entire code discussed in the article is present in this GitHub repository. Feel free to fork it or download it. The best part is that you can directly run the code in google colab, don’t need to worry about installing the packages." }, { "code": null, "e": 17674, "s": 17663, "text": "github.com" }, { "code": null, "e": 18060, "s": 17674, "text": "In this post, we briefly looked at the overview of weight initialization methods and activation functions. Then we have seen how to build a generic simple neuron network class that supports different variants of gradient descent, weight initialization, and activation functions. After that, we have analyzed each of the activation function with different weight initialization methods." }, { "code": null, "e": 18344, "s": 18060, "text": "If you want to learn more about Data Science, Machine Learning. Check out the Machine Learning Basics and Advanced Machine Learning by Abhishek and Pukhraj from Starttechacademy. One of the good points about these courses is that they teach in both Python and R, so it’s your choice." }, { "code": null, "e": 18364, "s": 18344, "text": "Recommended Reading" }, { "code": null, "e": 18379, "s": 18364, "text": "hackernoon.com" }, { "code": null, "e": 18390, "s": 18379, "text": "Author Bio" }, { "code": null, "e": 18740, "s": 18390, "text": "Niranjan Kumar is Retail Risk Analyst at HSBC Analytics division. He is passionate about Deep learning and Artificial Intelligence. He is one of the top writers at Medium in Artificial Intelligence. You can find all of Niranjan’s blog here. You can connect with Niranjan on LinkedIn, Twitter and GitHub to stay up to date with his latest blog posts." } ]
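As a follow-up to the make_moons suggestion above, here is a minimal sketch of generating moon-shaped, non-linearly separable data and splitting it 90:10 the same way as in this post; the noise level and the random seeds are arbitrary choices.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Two interleaving half-circles: non-linearly separable by construction
data, labels = make_moons(n_samples=1000, noise=0.2, random_state=0)

# 90:10 train/validation split, stratified on the class labels
X_train, X_val, Y_train, Y_val = train_test_split(
    data, labels, stratify=labels, test_size=0.1, random_state=0)

# One-hot encode the two classes before passing them to the network's fit method
y_OH_train = np.zeros((Y_train.size, 2))
y_OH_train[np.arange(Y_train.size), Y_train] = 1

Keep in mind that the number of neurons in the output layer of the network class would need to match the number of classes in whatever data you generate.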