Convert the column type from string to datetime format in Pandas dataframe - GeeksforGeeks
05 Oct, 2020
While working with data in Pandas, it is not unusual to encounter time series data, and we know Pandas is a very useful tool for working with time-series data in Python. Let’s see how we can convert a dataframe column of strings (in dd/mm/yyyy format) to datetime format. We cannot perform any time series based operations on the dates if they are not in the right format, so in order to work with them, we are required to convert the dates into the datetime format.
Code #1 : Convert Pandas dataframe column type from string to datetime format using pd.to_datetime() function.
Python3
# importing pandas as pd
import pandas as pd

# Creating the dataframe
df = pd.DataFrame({'Date':['11/8/2011', '04/23/2008', '10/2/2019'],
                   'Event':['Music', 'Poetry', 'Theatre'],
                   'Cost':[10000, 5000, 15000]})

# Print the dataframe
print(df)

# Now we will check the data type
# of the 'Date' column
df.info()
Output:
As we can see in the output, the data type of the ‘Date’ column is object i.e. string. Now we will convert it to datetime format using pd.to_datetime() function.
Python3
# convert the 'Date' column to datetime format
df['Date'] = pd.to_datetime(df['Date'])

# Check the format of 'Date' column
df.info()
Output:
As we can see in the output, the format of the ‘Date’ column has been changed to the datetime format.
Code #2: Convert Pandas dataframe column type from string to datetime format using DataFrame.astype() function.
Python3
# importing pandas as pd
import pandas as pd

# Creating the dataframe
df = pd.DataFrame({'Date':['11/8/2011', '04/23/2008', '10/2/2019'],
                   'Event':['Music', 'Poetry', 'Theatre'],
                   'Cost':[10000, 5000, 15000]})

# Print the dataframe
print(df)

# Now we will check the data type
# of the 'Date' column
df.info()
Output :
As we can see in the output, the data type of the ‘Date’ column is object i.e. string. Now we will convert it to datetime format using DataFrame.astype() function.
Python3
# convert the 'Date' column to datetime format
df['Date'] = df['Date'].astype('datetime64[ns]')

# Check the format of 'Date' column
df.info()
Output :
As we can see in the output, the format of the ‘Date’ column has been changed to the datetime format.
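Note (not part of the original example): pd.to_datetime() guesses the order of day and month when no format is given, so for strings that really are in dd/mm/yyyy order, as the introduction assumes, it is safer to say so explicitly. A minimal sketch:
# for day-first strings such as '8/11/2011' meaning 8 November 2011,
# tell pandas not to guess month-first
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)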
Code #3: If the data frame column is in ‘yymmdd’ format and we have to convert it to ‘yyyymmdd’ format
Python3
# importing pandas library
import pandas as pd

# Initializing the nested list with Data set
player_list = [['200712', 50000], ['200714', 51000], ['200716', 51500],
               ['200719', 53000], ['200721', 54000],
               ['200724', 55000], ['200729', 57000]]

# creating a pandas dataframe
df = pd.DataFrame(player_list, columns=['Dates', 'Patients'])

# printing dataframe
print(df)
print()

# checking the type
print(df.dtypes)
Python3
# converting the string to datetime format
df['Dates'] = pd.to_datetime(df['Dates'], format='%y%m%d')

# printing dataframe
print(df)
print()

print(df.dtypes)
In the above example, we change the data type of column ‘Dates’ from ‘object‘ to ‘datetime64[ns]‘ and format from ‘yymmdd’ to ‘yyyymmdd’.
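If a plain ‘yyyymmdd’ string column (rather than a datetime column) is what is ultimately needed, the parsed dates can be formatted back with the dt accessor. A small sketch, not part of the original article; the column name Dates_str is arbitrary:
# format the parsed dates back into 'yyyymmdd' strings (dtype becomes object again)
df['Dates_str'] = df['Dates'].dt.strftime('%Y%m%d')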
Code #4: Converting multiple columns from string to ‘yyyymmdd‘ format using pandas.to_datetime()
Python3
# importing pandas library
import pandas as pd

# Initializing the nested list with Data set
player_list = [['20200712', 50000, '20200812'],
               ['20200714', 51000, '20200814'],
               ['20200716', 51500, '20200816'],
               ['20200719', 53000, '20200819'],
               ['20200721', 54000, '20200821'],
               ['20200724', 55000, '20200824'],
               ['20200729', 57000, '20200824']]

# creating a pandas dataframe
df = pd.DataFrame(player_list,
                  columns=['Treatment_start', 'No.of Patients', 'Treatment_end'])

# printing dataframe
print(df)
print()

# checking the type
print(df.dtypes)
Python3
# converting the string to datetime
# format in multiple columns
df['Treatment_start'] = pd.to_datetime(df['Treatment_start'], format='%Y%m%d')
df['Treatment_end'] = pd.to_datetime(df['Treatment_end'], format='%Y%m%d')

# printing dataframe
print(df)
print()

print(df.dtypes)
In the above example, we change the data type of columns ‘Treatment_start‘ and ‘Treatment_end‘ from ‘object‘ to ‘datetime64[ns]‘ type.
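As a possible extension (a sketch, not from the original article), several date columns can also be converted in one pass by applying pd.to_datetime() column-wise; date_cols is an illustrative name:
# convert both date columns at once
date_cols = ['Treatment_start', 'Treatment_end']
df[date_cols] = df[date_cols].apply(pd.to_datetime, format='%Y%m%d')
print(df.dtypes)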
Epoch time in Perl
You can use the time() function in Perl to get the epoch time, i.e., the number of seconds that have elapsed since the epoch, which in Unix is January 1, 1970.
#!/usr/local/bin/perl
$epoc = time();
print "Number of seconds since Jan 1, 1970 - $epoc\n";
When the above code is executed, it produces the following result −
Number of seconds since Jan 1, 1970 - 1361022130
You can convert a given number of seconds into a date and time string as follows −
#!/usr/local/bin/perl
$datestring = localtime();
print "Current date and time $datestring\n";
$epoc = time();
$epoc = $epoc - 24 * 60 * 60; # one day before of current date.
$datestring = localtime($epoc);
print "Yesterday's date and time $datestring\n";
When the above code is executed, it produces the following result −
Current date and time Tue Jun 5 05:54:43 2018
Yesterday's date and time Mon Jun 4 05:54:43 2018
Persisting Data in ElectronJS - GeeksforGeeks
13 Aug, 2020
ElectronJS is an Open Source Framework used for building Cross-Platform native desktop applications using web technologies such as HTML, CSS, and JavaScript which are capable of running on Windows, macOS, and Linux operating systems. It combines the Chromium engine and NodeJS into a Single Runtime.
All complex web applications make use of Local Storage. Local Storage (a.k.a. DOM Storage) is a type of web storage that allows websites to store, persist, and access data within the client-side browser without any expiration date. Data persisted in local storage is still accessible even after the browser window is closed or reloaded; the application data is stored locally in the client-side browser. HTML5 and vanilla JavaScript provide extensive support for local storage via APIs. Before local storage was implemented, data used to be stored in cookies, which were included in every HTTP REST API call to the server. Web local storage is more secure than cookies, and we can store large amounts of data (up to 5 MB) without affecting the performance of the website. This data is never transferred to the server and remains in the client-side browser until the local storage is cleared manually.
Local Storage is implemented on a per-origin basis, i.e. based on the domain and the protocol of the website. All webpages from the same origin have access to the same data within the local storage, while the same webpage accessed over different protocols such as HTTP and HTTPS gets a separate local storage instance. Any data stored in local storage while the webpage is in Private or Incognito mode is cleared once all Incognito tabs are closed. Local Storage is not to be confused with Session Storage, in which the data persists only until the page session ends; once the session is terminated, the data is erased. All modern browsers support Local Storage, including Chromium.
Even though Chromium supports Local Storage, Electron does not provide us with a built-in way to store and persist user settings and other data within the local storage. However, with the help of external npm packages, we can persist and access data within an Electron application simply and efficiently. In this tutorial, we will implement local storage in Electron by using the electron-settings npm package. For more detailed information, refer to the link: https://www.npmjs.com/package/electron-settings. This package has been adopted and is used by Electron itself for demo purposes. We assume that you are familiar with the prerequisites as covered in the above-mentioned link. For Electron to work, node and npm need to be pre-installed in the system.
Project Structure:
Example: Follow the steps given in Dynamic Styling in ElectronJS to set up the basic Electron application. Copy the boilerplate code for the main.js file and the index.html file as provided in the article. We will continue building our application using the same code base. Additionally, install the electron-settings package using npm. According to the official documentation, this package is a simple and robust settings management library for Electron applications. It allows us to persist user data and settings within the application between reloads and application startups, just like local storage. All the data persisted using this package is stored in a JSON file named settings.json, located in the user’s local system application directory. Refer to the code for better understanding.
npm install electron-settings --save
Also, perform the necessary changes mentioned for the package.json file to launch the Electron Application. We will continue building our application using the same code base. The basic steps required to set up the Electron application remain the same. package.json:
{
"name": "electron-persist",
"version": "1.0.0",
"description": "Persist Data in Electron ",
"main": "main.js",
"scripts": {
"start": "electron ."
},
"keywords": [
"electron"
],
"author": "Radhesh Khanna",
"license": "ISC",
"dependencies": {
"electron": "^8.3.0",
"electron-settings": "^4.0.2"
}
}
Output:
Persisting Data in Electron: The electron-settings npm package can be used directly in the Main Process and the Renderer Processes of the application for accessing the storage. This package is designed to work similarly to the Window.localStorage Web API. It is compliant and works without any errors as of Electron v8.3.0, and it is regularly updated. We will now implement this package in the Electron application. For more detailed information on this package, version updates, and changelogs, refer to the link: https://electron-settings.js.org/.
index.html: Add the following snippet in that file.
HTML
<h3>Persist Data and User Settings in Electron</h3>

<h5>Enter Sample Text here</h5>
<input type="text" id="sample">

<button id="submit">
    Submitting the Data
</button>
index.js: Submitting the Data button does not have any functionality associated with it yet. To change this, add the following code snippet in the index.js file.
javascript
const electron = require('electron')
const settings = require('electron-settings');

console.log('File used for Persisting Data - ' + settings.file());

var sample = document.getElementById('sample');
var submit = document.getElementById('submit');

settings.get('key.data').then(value => {
    console.log('Persisted Value - ' + value);
})

settings.has('key1.data').then(bool => {
    console.log('Checking if key1.data Exists - ' + bool)
});

submit.addEventListener('click', () => {
    console.log('Sample Text Entered - ' + sample.value);
    console.log('Persisting Data in electron-settings');
    settings.set('key', {
        data: sample.value
    });
});
Explanation: In the above application, we take sample text input from the user using the HTML DOM Input element. We then persist and access this data in the Electron application using the Instance methods of the electron-settings package. The electron-settings npm package supports the following Instance methods, which have also been used in the above code.
settings.set(key, value) This Instance method is used to persist data in the application. The data is uniquely stored and identified by the key parameter. This method does not have a return type. By default, this Instance method is Asynchronous; instead, we can use the settings.setSync(key, value) method for a Synchronous data operation. It takes in the following parameters:
key: String This parameter is used to uniquely identify the actual data which is being stored. Using this key parameter, we can later access this data using the settings.get() Instance method.
value: Object This parameter represents the actual data that needs to be persisted within the Electron application. This parameter can hold any valid JSON object, including a JSON Array. We can then use the dot notation . to filter and fetch the exact data required from the JSON object in combination with the key parameter. Refer to the code for better understanding. To access data from within the JSON Array
settings.has(key) This Instance method is used to check whether the data represented by the key parameter is present within the application. This Instance method returns a Promise that resolves to a Boolean value stating whether the data is present or not; it returns true if the key exists in the storage. By default, this Instance method is Asynchronous; instead, we can use the settings.hasSync(key) method for a Synchronous data operation. In the above example, we have provided an invalid key parameter, which should return false.
settings.get(key) This Instance method is used to return the data which is being persisted in the application, as uniquely identified by the key parameter. This is the same key parameter that was set in the settings.set() Instance method. This method returns a Promise and it resolves to an object containing the actual data. The object returned can be a valid JSON object or it can simply be a primitive data type such as an Integer or a String. In case we pass an undefined key parameter, the object returned will be undefined. We can also check if the key parameter exists using the settings.has() Instance method. By default, this Instance method is Asynchronous; instead, we can use the settings.getSync(key) method for a Synchronous data operation. This package uses the Lodash get() method under the hood to perform this operation. Lodash is a JavaScript utility library used for common programming tasks.
settings.file() This Instance method returns the full path of the settings.json which was created and is being used to persist the data in the application. As mentioned above, by default, this file is stored in the app’s user data directory of the native system. It is not recommended to change the location of the settings.json file. This Instance method returns a String value representing the full path of the settings.json file.
settings.reset() This Instance method is used to reset all configurations of the electron-settings package to default. This Instance method does not have a return type. This method does not reset the data stored in the settings.json file but only resets the configurations of this package.
settings.unset(key) This Instance method is used to clear the data stored in the application using the settings.set() Instance method. This method will reset the data based on the key parameter provided which will uniquely identify the data. This Instance method returns a Promise and it is resolved when the data identified by the key parameter/all the data has been successfully reset. By default, this Instance method is Asynchronous. Instead, we can use the settings.unsetSync(key) method for Synchronous data operation.
Note: The key parameter is an Optional parameter. If the key parameter is not passed, it will reset all the data persisted in the application using the electron-settings package.
Output: At this point, upon launching the Electron application, we should be able to persist data entered by the user within the application and successfully retrieve it when the application is reloaded or relaunched.
Calculating the Power of a Number in Java Without Using Math pow() Method - GeeksforGeeks
03 Mar, 2021
We need to calculate a number raised to the power of some other number without using the predefined function java.lang.Math.pow().
In Java, we can calculate the power of any number by:
Calculating the power of a number through while loop or for loop.
Calculating the power of a number by the divide and conquer method.
To calculate the power of any number, the base number and an exponent are required.
Prerequisite:
The basic understanding of Java arithmetic operators, data types, basic input/output and loop etc.
Example:
Input : base = 3, exponent = 3
Output: 27
Input : base = 2, exponent = 4
Output: 16
Implementation:
1. Using while loop: The base and power values have been assigned respective values. Now, the while loop will keep multiplying the result variable by the base variable until the power becomes zero.
Below is the implementation of the problem statement using the while loop:
Java
// Calculate the power of a number in Java using while loop
import java.io.*;

public class JavaApplication1 {
    public static void main(String[] args)
    {
        int base = 3, power = 3;
        int result = 1;

        // running loop while the power > 0
        while (power != 0) {
            result = result * base;

            // power will get reduced after
            // each multiplication
            power--;
        }
        System.out.println("Result = " + result);
    }
}
Result = 27
Time Complexity: O(N), where N is power.
2. Using for loop: The base and power values have been assigned respective values. Now, in the above program, we can use for loop instead of while loop.
Below is the implementation of the problem statement using the for loop:
Java
// Calculate the power of a number in Java using for loop
import java.io.*;

public class JavaApplication1 {
    public static void main(String[] args)
    {
        int base = 3, power = 3;
        int result = 1;

        for (power = 3; power != 0; power--) {
            result = result * base;
        }
        System.out.println("Result = " + result);
    }
}
Result = 27
Time Complexity: O(N), where N is power.
3. Using the Divide and Conquer method: The above operation can be optimized to O(log N) by calculating power(base, exponent/2) only once and storing it.
Java
// Calculate the power of a number
// in Java using Divide and Conquer
class GFG {
    public static int power(int x, int y)
    {
        int temp;
        if (y == 0)
            return 1;
        temp = power(x, y / 2);

        if (y % 2 == 0)
            return temp * temp;
        else {
            if (y > 0)
                return x * temp * temp;
            else
                return (temp * temp) / x;
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        int x = 2;
        int y = 3;
        System.out.println(power(x, y));
    }
}
8
Time Complexity: O(log N), where N is power.
A Decent Guide to DataFrames in Spark 3.0 for Beginners | by David Vrba | Towards Data Science
Apache Spark is a distributed engine that provides a couple of APIs for the end-user to build data processing pipelines. The most commonly used API in Apache Spark 3.0 is the DataFrame API, which is very popular especially because it is user-friendly, easy to use, very expressive (similar to SQL), and in 3.0 quite rich and mature. The API is suitable especially for processing data with some structure, and using just a few lines of code you can compose a query that carries out several non-trivial transformations.
In Spark, it is quite typical that one thing can be achieved in different ways and this flexibility brings power but sometimes also confusion. If there are more ways, what is the difference between them? Which way is better or more efficient? As we will see, this is the case also with column transformations on a DataFrame and we will try to clarify this to avoid some misconceptions.
In this article, we will explain what a DataFrame is, how you can create it, and how you can work with it to perform various data transformations. We will use PySpark API in the latest stable version which is at the time of writing (January 2021) 3.0.
DataFrame in Spark is an abstraction that allows us to work with distributed data in a nice way. It represents data that has a tabular structure: each record in the dataset is like a row that has some fields, each field has a name and a data type, so each field is like a column in a table. It is indeed very similar to a table in a database; we just need to keep in mind that a DataFrame represents data that will be distributed on a cluster during the processing. Also, if your data doesn’t really have a tabular structure, for example, each record is just a string with some text message, you can still represent it with a DataFrame that will have just one column.
There are six basic ways to create a DataFrame:
1. The most basic way is to transform another DataFrame. For example:
# transformation of one DataFrame creates another DataFrame
df2 = df1.orderBy('age')
2. You can also create a DataFrame from an RDD. RDD is a low-level data structure in Spark which also represents distributed data, and it was used mainly before Spark 2.x. It is slowly becoming more like an internal API in Spark but you can still use it if you want and in particular, it allows you to create a DataFrame as follows:
df = spark.createDataFrame(rdd, schema)
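For completeness, a fuller sketch of this path (the data, column names, and types here are illustrative, not from the article): the RDD is built through the SparkContext and the schema is declared explicitly:
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

rdd = spark.sparkContext.parallelize([(1, 'Alice'), (2, 'Bob')])
schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('name', StringType(), True),
])
df = spark.createDataFrame(rdd, schema)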
3. The next and more useful way (especially for prototyping) is to create a DataFrame from a local collection, for example, from a list:
l = [(1, 'Alice'), (2, 'Bob')]  # each tuple will become a row
df = spark.createDataFrame(l, schema=['id', 'name'])
This is useful especially if you want to test your transformations, but you don’t want to run them on the real data yet. If you know the schema, you can create a small DataFrame like this.
4. For prototyping, it is also useful to quickly create a DataFrame that will have a specific number of rows with just a single column id using a sequence:
df = spark.range(10) # creates a DataFrame with one column id
5. The next option is by using SQL. We pass a valid SQL statement as a string argument to the sql() function:
df = spark.sql("show tables") # this creates a DataFrame
6. And finally, the most important option how to create a DataFrame is by reading the data from a source:
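The code snippet embedded at this point in the original article is not preserved in this extract; the following is a minimal sketch of the generic reader API, with a hypothetical CSV path and options:
df = (
    spark.read
    .format('csv')
    .option('header', 'true')
    .option('inferSchema', 'true')
    .load('/path/to/data.csv')
)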
The options here are specified as the key-value pairs and there are different keys available based on the format that your data is in. To check all the possible options see the PySpark docs. There are also various alternatives and shortcuts to this piece of code, for example:
df = spark.read.parquet(path, **options)
df = spark.read.format('json').schema(schema).load(path, **options)
If our datasource is stored using the Hive metastore, we can access it using its table name as follows:
df = spark.read.table('table_name')
# or just:
df = spark.table('table_name')
This is actually more elegant because you don’t have to deal with the schema, the file format, or the path to the data, all you need to know is the name of the table and Spark picks all the needed information from the metastore. Notice that apart from Hive table format there are also other table formats such as Delta or Iceberg and we can cover them in some separate articles.
So we showed six possible ways to create a DataFrame and, as you can see, all of them (except for the first one) start with the spark object, which is an instance of the SparkSession. The SparkSession is created at the beginning of your Spark application and it is your ticket to the world of DataFrames; you cannot create a DataFrame without it. So typically, at the beginning of your Spark application, you create the object as follows:
from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .appName('name_of_your_application')
    .enableHiveSupport()  # if you want to use Hive metastore
    .getOrCreate()
)
There are two types of operations you can call on a DataFrame, namely transformations and actions. The transformations are lazy, which means that they don’t trigger the computation when you call them; instead, they just build up a query plan under the hood. So when you call for example this:
result = df.dropDuplicates(['user_id']).orderBy('age')
two transformations are called, but no computation has been done; the data itself is still sitting in the storage and waiting for you to process it. Instead, you just created a DataFrame result that has a query plan saying that the data will be deduplicated by the user_id column and sorted by the age column. All of this happens only after you materialize the query by calling some action. After calling an action, which is usually a function by which you ask for some output, Spark will trigger the computation, which itself is composed of multiple steps: the query plan is optimized by the Spark optimizer, a physical plan is generated, then the plan is compiled into an RDD DAG (directed acyclic graph), which is next divided into stages and tasks that are executed on the cluster.
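To make the distinction concrete, here is a small sketch using two standard actions on the result DataFrame built above; only these calls trigger an actual Spark job:
result.show(10)     # action: prints the first 10 rows
n = result.count()  # action: brings the row count back to the driver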
If you want to see the query plan, you can just call:
df.explain()
To read more about query plans, feel free to check my other article or watch my talk from the Spark Summit Europe 2019.
The transformations themselves can be divided into two groups, DataFrame transformations and column transformations. The first group transforms the entire DataFrame, for example:
df.select(col1, col2, col3)
df.filter(col('user_id') == 123)
df.orderBy('age')
...
The most frequently used DataFrame transformations are probably the following (but it of course depends on the use case):
select(), withColumn() — for projecting columns
filter() — for filtering
orderBy(), sort(), sortWithinPartitions() — for sorting
distinct(), dropDuplicates() — for deduplication
join() — for joining (see my other article about joins in Spark 3.0)
groupBy() — for aggregations (see the small sketch below)
For the complete list of DataFrame transformations see the class DataFrame in PySpark docs.
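Since groupBy() is the only transformation from the list above that is not shown elsewhere in this article, here is a small sketch of a typical aggregation; the column names are hypothetical:
from pyspark.sql.functions import count, avg

# number of users and average age per country
agg_df = df.groupBy('country').agg(
    count('user_id').alias('users'),
    avg('age').alias('avg_age')
)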
On the other hand, the column transformations are used to transform individual columns and they are used either inside the withColumn(), select(), or selectExpr() transformations that allow you to add a new column to the DataFrame:
from pyspark.sql.functions import initcap

# capitalize the first letter of the user name and save it to a new
# column name_cap
df.withColumn('name_cap', initcap('user_name'))
Here the column transformation is achieved with the function initcap() which transforms the string from the user_name column. On the other hand, the withColumn transformation is a DataFrame transformation because it transforms the entire DataFrame — it adds a new column. So the typical use case is that you want to add a new column that is derived from other columns using some transformations.
For the column transformations, it is typical that they can be expressed in multiple different ways; in the example above the same can be achieved with select (or equivalently using expr or selectExpr):
df.select('*', initcap('user_name').alias('name_cap'))
df.select('*', expr('initcap(user_name) as name_cap'))
df.selectExpr('*', 'initcap(user_name) as name_cap')
Notice the difference between select and withColumn: the latter projects all columns from the DataFrame and adds one new column with a given name, while select only projects the columns that are passed as arguments. So if you also want to keep all the original columns, you either need to list them all explicitly, or you use the notation with an asterisk as we did above. Also, the resulting column of the transformation inside select will have a default name, so if you want to use a custom name, you have to rename it using alias().
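To make the naming point concrete, here is a small sketch; without alias() the derived column would get a generated default name (something along the lines of initcap(user_name)):

from pyspark.sql.functions import initcap

# explicit projection with a custom name for the derived column
df.select('user_id', 'user_name', initcap('user_name').alias('name_cap'))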
Also, notice that the function expr() allows you to pass in a SQL expression as a string in quotes, similarly, selectExpr() is just a shortcut for using select() + expr(). In the expression, you can use any of the SQL functions that you can find in the SQL docs.
Now you might be wondering what the use case for the expr function is; we will explain that shortly. Before we do that, let's see yet another way to express column transformations: using the methods from the Column class. Consider an example in which we want to take a substring of length 3 from the beginning of the word. Using the method substr() from the Column class, we can do it as follows:
df.withColumn('new_col', col('user_name').substr(1, 3))
As you can see these methods from the Column class are called on a column object that you can construct either using the col function or using the dot or bracket notations as follows:
# all of these create a column object:
col('user_name')
df.user_name
df['user_name']
The example above can be alternatively written using the substring() function from the pyspark.sql.functions package as follows:
df.withColumn('new_col', substring('user_name', 1, 3))
Here user_name can be either a string (the name of the column) or a column object; the function is more flexible and can handle both, which is typical for the functions from this package. However, this flexibility has its limitations, as you will see in a moment.
You might be wondering what is the difference between using the SQL functions with expr or using directly the functions from pyspark.sql.functions package (that we call here DSL functions). Well, if you check both documentations (SQL, DSL), you will see that many of these functions are really the same (in the sense that they have the same name and provide you the same functionality), but there are some cases in which the SQL functions are more flexible and allow you to achieve what is not easy with the DSL functions.
Let’s consider again the example with the substring: the last two arguments, position and length, of the DSL substring function have to be integers; they cannot be column objects. But sometimes there is a use case where you need to take the argument from another column, for example, you want the length of the substring to be dynamic and to use a different value on each row. The DSL substring function doesn’t allow you to do that: the position and length need to be constant, the same on each row.
This is the limitation that we mentioned above and it can be circumvented using the SQL functions inside expr(). Consider this example in which we want to make a substring of one column based on information in another column. Let’s say we want to take the first n characters, where n is stored in another column called len:
l = [('abcd', 2), ('abcd', 1)]
df = spark.createDataFrame(l, ['str', 'len'])
df.show()

+----+---+
| str|len|
+----+---+
|abcd|  2|
|abcd|  1|
+----+---+
And let’s now see what options we have to implement the substring of len characters:
# (1) raises error:
df.withColumn('new_col', substring('str', 1, 'len'))

# (2) raises error:
df.withColumn('new_col', substring(col('str'), 1, col('len')))

# (3) raises error:
df.withColumn('new_col', substring(col('str'), lit(1), col('len')))

# (4) raises error:
df.withColumn('new_col', col('str').substr(1, 'len'))

# (5) raises error:
df.withColumn('new_col', col('str').substr(1, col('len')))

# (6) this works:
df.withColumn('new_col', col('str').substr(lit(1), col('len')))

# (7) this works:
df.withColumn('new_col', expr('substring(str, 1, len)')).show()

+----+---+-------+
| str|len|new_col|
+----+---+-------+
|abcd|  2|     ab|
|abcd|  1|      a|
+----+---+-------+
As you can see, we are not able to do it using substring from the pyspark.sql.functions package; each of the attempts (1–3) fails. But if we use the substring function from the SQL API inside expr, it works (attempt 7): the SQL function is more flexible and is able to realize that len is actually the name of a column in the DataFrame, so it takes the length from there. We can see that attempt 6, in which we use the substr method, also works, but we need to make sure that both arguments are columns; that is why we have to use lit(1) for the first argument (the lit function will make a column object from the constant that we pass in). The Column class is, however, not as rich in terms of the methods it provides as the SQL functions, so in some other cases we will not find a corresponding function there. One such example is discussed in this StackOverflow question, another one is related to array_sort, which I described in my other article here.
To summarize, if you want to create a column transformation, there are three places where you can look for an appropriate function in the documentation:
pyspark.sql.functions (here we call them DSL functions)
Column class (methods of the Column class — they are called on a column object)
SQL functions (from the SQL API to be used inside expr, or selectExpr, here we call them SQL functions)
All of them are equivalent in terms of performance because they are lazy and simply update the query plan under the hood. The SQL functions used with expr() (or selectExpr()) have the advantage of sometimes being more flexible than their DSL counterparts, since they allow you to pass all arguments as column names. Very often you don't need this extra flexibility, and which approach you decide to use is usually a matter of style, possibly also based on some agreement between the team members on what they prefer to use in their codebase.
There are three more advanced groups of column transformations, which are user-defined functions, higher-order functions, and window functions and we can cover them in some separate articles since they deserve a more detailed exploration. Very briefly explained, the user-defined functions allow us to extend the DataFrame API through a very simple interface and so using native Python (Scala/Java) functions we can implement any custom transformation that is not available in the API yet, and we can then use it as the column transformation. The higher-order functions are quite well supported since Spark 2.4 and they are used to transform and manipulate complex data types such as arrays or maps. The window functions allow us to compute various aggregations or ranking over groups of rows that are defined by a window and a frame.
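Just to give a flavor of what these look like, here is a minimal sketch of all three groups; it assumes a DataFrame df with illustrative columns user_name, scores (an array of integers), age and country, none of which come from a real dataset:

from pyspark.sql import Window
from pyspark.sql.functions import col, expr, rank, udf
from pyspark.sql.types import StringType

# user-defined function: wrap a plain Python function and use it as a column transformation
shout = udf(lambda s: None if s is None else s.upper() + '!', StringType())
df = df.withColumn('name_shout', shout('user_name'))

# higher-order function (Spark 2.4+): transform each element of an array column
df = df.withColumn('scores_plus_one', expr('transform(scores, x -> x + 1)'))

# window function: rank rows within each country by age
w = Window.partitionBy('country').orderBy(col('age').desc())
df = df.withColumn('age_rank', rank().over(w))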
As explained above, actions are functions in which we ask for some output. These functions trigger the computation and run a job on the Spark cluster. The usual situation is that an action runs one job; in some cases, however, it can run even more jobs (see for example my other article where I explain why the show() function can run more jobs). Some typical actions are listed below, followed by a short usage sketch:
count() — computes the number of rows in the dataset that is represented by the DataFrame.
show() — prints on the screen 20 records from the dataset that is represented by the DataFrame.
collect() — returns all the records to the driver as a list of Rows. This should be used carefully since it literally collects all the data from all executors and brings it to the driver; this is potentially dangerous because if the data is big, it can crash the driver since its memory is limited. This function can be useful if the data is already filtered or aggregated enough so that its size is not a problem for the driver.
toPandas() — an analog of collect(); the result, however, is not a list of records but a Pandas dataframe.
take(n) — also an analog of collect(), but it collects only n records. This can be useful if you want to check whether there is some data or whether the DataFrame is empty (df.take(1)).
write — creates a DataFrame writer which allows saving data to external storage.
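The sketch below shows how these actions are typically called; the filter condition and the output path are placeholders:

from pyspark.sql.functions import col

n_rows = df.count()                    # triggers a job, returns an int
df.show(5)                             # prints the first 5 records
first_row = df.take(1)                 # list with at most one Row, handy for emptiness checks
small_pdf = df.filter(col('age') > 30).toPandas()  # brings the (filtered) data to the driver
df.write.mode('overwrite').parquet('/tmp/some_output_path')  # saves the data to storage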
Let’s see a final diagram (with some examples) that describes the operations in Spark:
In this article, we took an introductory path through Spark SQL and DataFrames in particular. We first explained what a DataFrame is, we saw six ways in which it can be created, and we spent some time explaining what kinds of operations you can do with it. We also pointed out the flexibility of the SQL functions, which allows you to bypass the limitations of the corresponding DSL functions when you need to provide multiple arguments as columns from the DataFrame. Finally, we explained the difference between various approaches to column transformations, which is a subject that is often confusing, especially for beginners in Spark.
Can we have generic constructors in Java?
|
Generics is a concept in Java that enables a class, an interface, or a method to accept all (reference) types as parameters. In other words, it is the concept which enables users to choose dynamically the reference type that a method or a constructor of a class accepts. By defining a class as generic you are making it type-safe, i.e. it can act upon any data type.
To define a generic class you need to specify the type parameter you are using in the angle brackets “<>” after the class name; you can then treat this type parameter as the data type of the instance variables and proceed with the code.
Example − Generic class
class Student<T>{
T age;
Student(T age){
this.age = age;
}
public void display() {
System.out.println("Value of age: "+this.age);
}
}
Usage − While instantiating the generic class you need to specify the type argument after the class name within the angle brackets. Thus, you choose the type for the type parameter dynamically and pass the required object as a parameter.
public class GenericsExample {
public static void main(String args[]) {
Student<Float> std1 = new Student<Float>(25.5f);
std1.display();
Student<String> std2 = new Student<String>("25");
std2.display();
Student<Integer> std3 = new Student<Integer>(25);
std3.display();
}
}
Example − Generic methods
Similar to generic classes you can also define generic methods in Java. These methods use their own type parameters. Just like local variables, the scope of the type parameters of the methods lies within the method.
While defining generic methods you need to specify the type parameter in the angle brackets and use it as a local variable.
public class GenericMethod {
<T>void sampleMethod(T[] array) {
for(int i=0; i<array.length; i++) {
System.out.println(array[i]);
}
}
public static void main(String args[]) {
GenericMethod obj = new GenericMethod();
Integer intArray[] = {45, 26, 89, 96};
obj.sampleMethod(intArray);
String stringArray[] = {"Krishna", "Raju", "Seema", "Geeta"};
obj.sampleMethod(stringArray);
}
}
Output:
45
26
89
96
Krishna
Raju
Seema
Geeta
Constructors are similar to methods, and just like generic methods we can also have generic constructors in Java, even though the class itself is non-generic.
Since a constructor does not have a return type, for a generic constructor the type parameter should be placed after the public keyword and before the constructor (class) name.
Once you define a generic constructor, you can invoke it (i.e. instantiate the class using that particular constructor) by passing an object of any type as the parameter.
Following Java program demonstrates the usage of generic constructors in Java.
class Employee{
String data;
public <T> Employee(T data){
this.data = data.toString();
}
   public void display() {
System.out.println("value: "+this.data);
}
}
public class GenericConstructor {
public static void main(String args[]) {
Employee emp1 = new Employee("Raju");
      emp1.display();
Employee emp2 = new Employee(12548);
      emp2.display();
}
}
Output:
value: Raju
value: 12548
Construct Tree from Preorder Traversal | Practice | GeeksforGeeks
|
Construct a binary tree of size N using two given arrays pre[] and preLN[]. Array pre[] represents preorder traversal of a binary tree. Array preLN[] has only two possible values L and N. The value L in preLN[] indicates that the corresponding node in Binary Tree is a leaf node and value N indicates that the corresponding node is a non-leaf node.
Note: Every node in the binary tree has either 0 or 2 children.
Example 1:
Input :
N = 5
pre[] = {10, 30, 20, 5, 15}
preLN[] = {N, N, L, L, L}
Output:
       10
      /  \
    30    15
   /  \
  20    5
Your Task:
You don't need to read input or print anything. Complete the function constructTree() which takes N, pre[] and preLN[] as input parameters and returns the root node of the constructed binary tree.
Note: The output generated by the compiler will contain the inorder traversal of the created binary tree.
Expected Time Complexity: O(N)
Expected Auxiliary Space: O(N)
Constraints:
1 ≤ N ≤ 10^4
1 ≤ pre[i] ≤ 10^7
preLN[i]: {'N', 'L'}
0
ialtafshaikh1 day ago
Python Solution | Recursive with Map | Easy -----------
def constructTree(pre, preLN, n):
    if(n < 1 or not len(pre)): return

    leafNodeMap = { pre[index] : preLN[index] for index in range(len(preLN))}

    return treeConstruction(pre, leafNodeMap)

def treeConstruction(pre, leafNodeMap):
    if(len(pre)):
        root = Node(pre.pop(0))

        if(leafNodeMap[root.data] == "N"):
            # if it is a node then attach child nodes
            root.left = treeConstruction(pre, leafNodeMap)
            root.right = treeConstruction(pre, leafNodeMap)
        return root
    return
+1
wolfofsv4 days ago
struct Node *constructTree(int n, int pre[], char preLN[])
{
// Code here
stack<Node*>st;
Node* root = new Node(pre[0]);
if(preLN[0] == 'N'){
st.push(root);
}
int i = 1;
while(!st.empty()){
Node* child = new Node(pre[i]);
Node* par = st.top();
if(par -> left == NULL){
par -> left = child;
}
else{
par -> right = child;
st.pop();
}
if(preLN[i] == 'N'){
st.push(child);
}
i++;
}
return root;
}
0
sethiyashristi202 weeks ago
struct Node *helper(int n, int pre[], char preLN[], int &i){
    int val = pre[i];
    struct Node* newnode = new struct Node(val);
    if(preLN[i] != 'L')
    {
        i++;
        newnode->left = helper(n, pre, preLN, i);
        newnode->right = helper(n, pre, preLN, i);
    }
    else
        i++;
    return newnode;
}
struct Node *constructTree(int n, int pre[], char preLN[]){
    // Code here
    int i = 0;
    return helper(n, pre, preLN, i);
}
+1
detroix071 month ago
Node* solve(int n, int pre[], char preLN[], int &PreStartIndex){
    // base case
    if(PreStartIndex >= n) {
        return NULL;
    }
    // create a root node
    Node* root = new Node(pre[PreStartIndex]);
    if(preLN[PreStartIndex]=='N') {
        PreStartIndex += 1;
        root->left = solve(n, pre, preLN, PreStartIndex);
        root->right = solve(n, pre, preLN, PreStartIndex);
    }
    else {
        PreStartIndex += 1;
    }
    // return the constructed subtree root
    return root;
}

struct Node *constructTree(int n, int pre[], char preLN[]){
    int PreStartIndex = 0;
    Node* ans = solve(n, pre, preLN, PreStartIndex);
    return ans;
}
0
golaravi5552 months ago
void tree(Node* &root, int &i, int pre[], char preLN[]){
    if(preLN[i]=='L'){
        root = new Node(pre[i]);
        return;
    }
    else{
        root = new Node(pre[i]);
        i++;
        tree(root->left, i, pre, preLN);
        i++;
        tree(root->right, i, pre, preLN);
        return;
    }
}
struct Node *constructTree(int n, int pre[], char preLN[]){
    // Code here
    Node* root = NULL;
    int i = 0;
    tree(root, i, pre, preLN);
    return root;
}
+2
mohankumarit20012 months ago
ITERATIVE SOLUTION || O(N) || 0.2 ms
struct Node *constructTree(int n, int pre[], char preLN[])
{
Node *root = new Node(pre[0]);
stack<Node*> stk;int index=1;
stk.push(root);
while(!stk.empty() && index<n){
Node *node = stk.top();
if(!node->left){
node->left = new Node(pre[index]);
stk.push(node->left);
}
else if(!node->right){
node->right = new Node(pre[index]);
stk.push(node->right);
}
else {stk.pop();continue;}
if(preLN[index]=='L'){
stk.pop();
}
index++;
}
return root;
}
0
maheshahirwar2042 months ago
JAVA SOLUTION
class Solution{
    int i = 0;
    Node constructTree(int n, int pre[], char preLN[]){
        if(i == n) return null;
        Node root = new Node(pre[i]);
        if(preLN[i++] == 'L'){
            root.left = root.right = null;
            return root;
        }
        root.left = constructTree(n, pre, preLN);
        root.right = constructTree(n, pre, preLN);
        return root;
    }
}
0
hamidnourashraf2 months ago
def rec_build_tree(pre, preLN, idx):
    if idx == len(pre):
        return None, idx
    root = Node(pre[idx])
    root_type = preLN[idx]
    idx += 1
    if root_type == 'N':
        root.left, idx = rec_build_tree(pre, preLN, idx)
        root.right, idx = rec_build_tree(pre, preLN, idx)
    return root, idx

def constructTree(pre, preLN, n):
    idx = 0
    root, _ = rec_build_tree(pre, preLN, idx)
    return root
0
shreyash9779663 months ago
CPP SOLUTION
Node * construct(int pre[],char preLN[],int n,int &i)
{
if(i>=n)return NULL;
Node *root=new Node(pre[i]);
if(preLN[i]=='N')
{
i++;
root->left=construct(pre,preLN,n,i);
i++;
root->right=construct(pre,preLN,n,i);
}
return root;
}
struct Node *constructTree(int n, int pre[], char preLN[])
{
// Code here
int i=0;
return construct(pre,preLN,n,i);
}
-1
amannamdev28093 months ago
Easy way recursive solution:
Node* solve(int pre[], char preLN[], int n, int& i){
    if(i >= n)
        return NULL;
    if(preLN[i]=='N')
    {
        Node *temp = new Node(pre[i]);
        i++;
        temp->left = solve(pre, preLN, n, i);
        temp->right = solve(pre, preLN, n, i);
        return temp;
    }
    Node *temp = new Node(pre[i]);
    i++;
    return temp;
}

struct Node *constructTree(int n, int pre[], char preLN[]){
    int i = 0;
    return solve(pre, preLN, n, i);
}
C++ Program For Reversing A Linked List In Groups Of Given Size - Set 2 - GeeksforGeeks
|
11 Jan, 2022
Given a linked list, write a function to reverse every k nodes (where k is an input to the function). Examples:
Input: 1->2->3->4->5->6->7->8->NULL and k = 3
Output: 3->2->1->6->5->4->8->7->NULL.
Input: 1->2->3->4->5->6->7->8->NULL and k = 5
Output: 5->4->3->2->1->8->7->6->NULL.
We have already discussed its solution in the post Reverse a Linked List in groups of given size | Set 1. In this post, we use a stack that stores the nodes of the given linked list. First, push k elements of the linked list onto the stack. Now pop the elements one by one and keep track of the previously popped node. Point the next pointer of the prev node to the top element of the stack. Repeat this process until NULL is reached. This algorithm uses O(k) extra space.
C++
// C++ program to reverse a linked list
// in groups of given size
#include <bits/stdc++.h>
using namespace std;

// Link list node
struct Node {
    int data;
    struct Node* next;
};

/* Reverses the linked list in groups of size k
   and returns the pointer to the new head node. */
struct Node* Reverse(struct Node* head, int k)
{
    // Create a stack of Node*
    stack<Node*> mystack;
    struct Node* current = head;
    struct Node* prev = NULL;

    while (current != NULL) {
        // Terminate the loop whichever
        // comes first either current == NULL
        // or count >= k
        int count = 0;
        while (current != NULL && count < k) {
            mystack.push(current);
            current = current->next;
            count++;
        }

        // Now pop the elements of stack
        // one by one
        while (mystack.size() > 0) {
            // If final list has not been
            // started yet.
            if (prev == NULL) {
                prev = mystack.top();
                head = prev;
                mystack.pop();
            }
            else {
                prev->next = mystack.top();
                prev = prev->next;
                mystack.pop();
            }
        }
    }

    // Next of last element will
    // point to NULL.
    prev->next = NULL;
    return head;
}

// UTILITY FUNCTIONS
// Function to push a node
void push(struct Node** head_ref, int new_data)
{
    // Allocate node
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

    // Put in the data
    new_node->data = new_data;

    // Link the old list off the new node
    new_node->next = (*head_ref);

    // Move the head to point to the new node
    (*head_ref) = new_node;
}

// Function to print linked list
void printList(struct Node* node)
{
    while (node != NULL) {
        printf("%d ", node->data);
        node = node->next;
    }
}

// Driver code
int main(void)
{
    // Start with the empty list
    struct Node* head = NULL;

    // Created Linked list is
    // 1->2->3->4->5->6->7->8->9
    push(&head, 9);
    push(&head, 8);
    push(&head, 7);
    push(&head, 6);
    push(&head, 5);
    push(&head, 4);
    push(&head, 3);
    push(&head, 2);
    push(&head, 1);

    printf("Given linked list ");
    printList(head);

    head = Reverse(head, 3);

    printf("Reversed Linked list ");
    printList(head);

    return 0;
}
Output:
Given Linked List
1 2 3 4 5 6 7 8 9
Reversed list
3 2 1 6 5 4 9 8 7
Please refer complete article on Reverse a Linked List in groups of given size | Set 2 for more details!
Using conda on an M1 Mac. Run multiple conda distributions to get... | by Nils Flaschel | Towards Data Science
|
If you recently bought or got a new M1 Mac from work and you are using Python to develop or work on data science projects, you have probably already wasted some hours trying to get some packages to run. I still struggle a lot with Python, Docker, and conda on my Mac, but I found a way to get many packages to run inside conda environments.
Since the M1 is an ARM-based system, many Python packages will not install properly because they are built to run on x86-64 (AMD64). Like many others, I use conda to set up environments for my projects — preferably with Anaconda or Miniconda.
When I wanted to install Tensorflow for the first time on my Mac, I stumbled across Miniforge (https://github.com/conda-forge/miniforge), which is comparable to Miniconda, but with conda-forge as the default channel and a focus on supporting various CPU architectures. Tensorflow works as expected when installed with Miniforge.
But as soon as I need to install certain packages that I use for work — like SimpleITK (which is now also available as an M1 Python wheel!) — Miniforge does not manage to install them. It is a bit of a gamble. At some point, I realized that I can install and use both Miniforge and Miniconda on the same system.
EDIT: As Lucas-Raphael Müller pointed out to me, you do not need to install both Miniconda and Miniforge. You can choose whether to use packages compiled for Intel chips or Apple Silicon, as stated here: https://github.com/Haydnspass/miniforge#rosetta-on-mac-with-apple-silicon-hardware.
After installing Miniforge, the initialization block will be set in your .bashrc/.zshrc:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/xyz/miniforge3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/xyz/miniforge3/etc/profile.d/conda.sh" ]; then
        . "/Users/xyz/miniforge3/etc/profile.d/conda.sh"
    else
        export PATH="/Users/xyz/miniforge3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
This will initialize conda with Miniforge. You just need to copy your .bashrc/.zshrc file, change miniforge3 to miniconda3, and choose which one you want to use by default. Switching is as simple as running source .bashrc with the desired conda initialization.
For example, I was working on a project in which I needed SimpleITK for preprocessing images and Tensorflow to train a model. I was not able to get both working in the same Miniforge environment on the M1. So I split preprocessing and training into two environments, one utilizing Miniconda to run SimpleITK and one Miniforge environment to run Tensorflow.
The good part about it is that you can see both the Miniconda and the Miniforge environments at the same time by running conda env list. The only difference is that you will not see the names of the environments built with the other installer, only the path. With Miniconda initialized, you will need to run conda activate with the full path to the Miniforge environment.
This is still easy to manage in a bash script to run scripts using multiple environments built with multiple conda distributions.
I hope this is just a temporary workaround until more and more packages will work on the M1, but I am sure this will take some time.
Hope this helps some people struggling with Python packages using conda on an M1.
|
[
{
"code": null,
"e": 506,
"s": 171,
"text": "If you recently bought or got a new M1 Mac from work and you are using Python to develop or work on data science projects you probably already wasted some hours trying to get some packages to run. I still struggle a lot with Python, Docker, and conda on my Mac, but I found a way to get many packages to run inside conda environments."
},
{
"code": null,
"e": 745,
"s": 506,
"text": "Since the M1 is an ARM-based system, many Python packages will not install properly because they are built to run on AMD64 (x86). Like many other, I use conda to set up environments for my projects — preferably with Anaconda or Miniconda."
},
{
"code": null,
"e": 1075,
"s": 745,
"text": "When I wanted to install Tensorflow for the first time on my Mac, I stumbled across Miniforge (https://github.com/conda-forge/miniforge) which is comparable to Miniconda, but with conda-forge as the default channel and a focus on supporting various CPU architectures. Tensorflow works like expected when installed with Miniforge."
},
{
"code": null,
"e": 1383,
"s": 1075,
"text": "But as soon as I need to install certain packages that I use for work — like SimpleITK (which is now also available as M1 Python wheel!) — Miniforge does not manage to install it. It is a bit of a gamble. At some point, I realized that I can install and use both, Miniforge and Miniconda on the same system."
},
{
"code": null,
"e": 1674,
"s": 1383,
"text": "EDIT: As Lucas-Raphael Müller pointed out to me, you do not need to install both, Miniconda and Miniforge. You can choose whether to use packages compiled for Intel chips or Apple Silicon like stated here: https://github.com/Haydnspass/miniforge#rosetta-on-mac-with-apple-silicon-hardware."
},
{
"code": null,
"e": 1764,
"s": 1674,
"text": "After installing Miniforge the initialization command will be set in your .bashrc/.zshrc:"
},
{
"code": null,
"e": 2197,
"s": 1764,
"text": "# >>> conda initialize >>># !! Contents within this block are managed by ‘conda init’ !!__conda_setup=”$(‘/Users/xyz/miniforge3/bin/conda’ ‘shell.zsh’ ‘hook’ 2> /dev/null)”if [ $? -eq 0 ]; then eval “$__conda_setup”else if [ -f “/Users/xyz/miniforge3/etc/profile.d/conda.sh” ]; then . “/Users/xyz/miniforge3/etc/profile.d/conda.sh” else export PATH=”/Users/xyz/miniforge3/bin:$PATH” fifiunset __conda_setup# <<< conda initialize <<<"
},
{
"code": null,
"e": 2458,
"s": 2197,
"text": "This will initialize conda with Miniforge. You just need to copy your .bashrc/.zshrc file and change miniforge3 to miniconda3 and chose which one you want to use by default. Changing is as simple as running source .bashrc with the desired conda initialization."
},
{
"code": null,
"e": 2815,
"s": 2458,
"text": "For example, I was working on a project in which I needed SimpleITK for preprocessing images and Tensorflow to train a model. I was not able to get both working in the same Miniforge environment on the M1. So I split preprocessing and training into two environments, one utilizing Miniconda to run SimpleITK and one Miniforge environment to run Tensorflow."
},
{
"code": null,
"e": 3184,
"s": 2815,
"text": "The good part about it is that you can see both, the Miniconda and Miniforge environments at the same time by running conda env list . The only difference is that you will not see the names of the environments built with the other installer, only the path. With Miniconda initialized you will need to run conda activate with the full path to the Miniforge environment."
},
{
"code": null,
"e": 3314,
"s": 3184,
"text": "This is still easy to manage in a bash script to run scripts using multiple environments built with multiple conda distributions."
},
{
"code": null,
"e": 3447,
"s": 3314,
"text": "I hope this is just a temporary workaround until more and more packages will work on the M1, but I am sure this will take some time."
}
] |
C program to count Positive and Negative numbers in an Array - GeeksforGeeks
|
22 Apr, 2020
Given an array arr of integers of size N, the task is to find the count of positive numbers and negative numbers in the array.
Examples:
Input: arr[] = {2, -1, 5, 6, 0, -3}Output:Positive elements = 3Negative elements = 2There are 3 positive, 2 negative, and 1 zero.
Input: arr[] = {4, 0, -2, -9, -7, 1}Output:Positive elements = 2Negative elements = 3There are 2 positive, 3 negative, and 1 zero.
Approach:
Traverse the elements in the array one by one.
For each element, check if the element is less than 0. If it is, then increment the count of negative elements.
For each element, check if the element is greater than 0. If it is, then increment the count of positive elements.
Print the count of negative and positive elements.
Below is the implementation of the above approach:
// C program to find the count of positive
// and negative integers in an array

#include <stdio.h>

// Function to find the count of
// positive integers in an array
int countPositiveNumbers(int* arr, int n)
{
    int pos_count = 0;
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] > 0)
            pos_count++;
    }
    return pos_count;
}

// Function to find the count of
// negative integers in an array
int countNegativeNumbers(int* arr, int n)
{
    int neg_count = 0;
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] < 0)
            neg_count++;
    }
    return neg_count;
}

// Function to print the array
void printArray(int* arr, int n)
{
    int i;
    printf("Array: ");
    for (i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Driver program
int main()
{
    int arr[] = { 2, -1, 5, 6, 0, -3 };
    int n;
    n = sizeof(arr) / sizeof(arr[0]);

    printArray(arr, n);
    printf("Count of Positive elements = %d\n",
           countPositiveNumbers(arr, n));
    printf("Count of Negative elements = %d\n",
           countNegativeNumbers(arr, n));

    return 0;
}
Array: 2 -1 5 6 0 -3
Count of Positive elements = 3
Count of Negative elements = 2
Arrays
C Programs
School Programming
Arrays
|
[
{
"code": null,
"e": 25013,
"s": 24985,
"text": "\n22 Apr, 2020"
},
{
"code": null,
"e": 25139,
"s": 25013,
"text": "Given an array arr of integers of size N, the task is to find the count of positive numbers and negative numbers in the array"
},
{
"code": null,
"e": 25149,
"s": 25139,
"text": "Examples:"
},
{
"code": null,
"e": 25279,
"s": 25149,
"text": "Input: arr[] = {2, -1, 5, 6, 0, -3}Output:Positive elements = 3Negative elements = 2There are 3 positive, 2 negative, and 1 zero."
},
{
"code": null,
"e": 25410,
"s": 25279,
"text": "Input: arr[] = {4, 0, -2, -9, -7, 1}Output:Positive elements = 2Negative elements = 3There are 2 positive, 3 negative, and 1 zero."
},
{
"code": null,
"e": 25420,
"s": 25410,
"text": "Approach:"
},
{
"code": null,
"e": 25742,
"s": 25420,
"text": "Traverse the elements in the array one by one.For each element, check if the element is less than 0. If it is, then increment the count of negative elements.For each element, check if the element is greater than 0. If it is, then increment the count of positive elements.Print the count of negative and positive elements."
},
{
"code": null,
"e": 25789,
"s": 25742,
"text": "Traverse the elements in the array one by one."
},
{
"code": null,
"e": 25901,
"s": 25789,
"text": "For each element, check if the element is less than 0. If it is, then increment the count of negative elements."
},
{
"code": null,
"e": 26016,
"s": 25901,
"text": "For each element, check if the element is greater than 0. If it is, then increment the count of positive elements."
},
{
"code": null,
"e": 26067,
"s": 26016,
"text": "Print the count of negative and positive elements."
},
{
"code": null,
"e": 26118,
"s": 26067,
"text": "Below is the implementation of the above approach:"
},
{
"code": "// C program to find the count of positive// and negative integers in an array #include <stdio.h> // Function to find the count of// positive integers in an arrayint countPositiveNumbers(int* arr, int n){ int pos_count = 0; int i; for (i = 0; i < n; i++) { if (arr[i] > 0) pos_count++; } return pos_count;} // Function to find the count of// negative integers in an arrayint countNegativeNumbers(int* arr, int n){ int neg_count = 0; int i; for (i = 0; i < n; i++) { if (arr[i] < 0) neg_count++; } return neg_count;} // Function to print the arrayvoid printArray(int* arr, int n){ int i; printf(\"Array: \"); for (i = 0; i < n; i++) { printf(\"%d \", arr[i]); } printf(\"\\n\");} // Driver programint main(){ int arr[] = { 2, -1, 5, 6, 0, -3 }; int n; n = sizeof(arr) / sizeof(arr[0]); printArray(arr, n); printf(\"Count of Positive elements = %d\\n\", countPositiveNumbers(arr, n)); printf(\"Count of Negative elements = %d\\n\", countNegativeNumbers(arr, n)); return 0;}",
"e": 27219,
"s": 26118,
"text": null
},
{
"code": null,
"e": 27304,
"s": 27219,
"text": "Array: 2 -1 5 6 0 -3 \nCount of Positive elements = 3\nCount of Negative elements = 2\n"
},
{
"code": null,
"e": 27311,
"s": 27304,
"text": "Arrays"
},
{
"code": null,
"e": 27322,
"s": 27311,
"text": "C Programs"
},
{
"code": null,
"e": 27341,
"s": 27322,
"text": "School Programming"
},
{
"code": null,
"e": 27348,
"s": 27341,
"text": "Arrays"
},
{
"code": null,
"e": 27446,
"s": 27348,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27455,
"s": 27446,
"text": "Comments"
},
{
"code": null,
"e": 27468,
"s": 27455,
"text": "Old Comments"
},
{
"code": null,
"e": 27516,
"s": 27468,
"text": "Stack Data Structure (Introduction and Program)"
},
{
"code": null,
"e": 27560,
"s": 27516,
"text": "Top 50 Array Coding Problems for Interviews"
},
{
"code": null,
"e": 27583,
"s": 27560,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 27615,
"s": 27583,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 27629,
"s": 27615,
"text": "Linear Search"
},
{
"code": null,
"e": 27642,
"s": 27629,
"text": "Strings in C"
},
{
"code": null,
"e": 27683,
"s": 27642,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 27724,
"s": 27683,
"text": "C Program to read contents of Whole File"
},
{
"code": null,
"e": 27762,
"s": 27724,
"text": "UDP Server-Client implementation in C"
}
] |
Deleting a Label in Python Tkinter
|
Tkinter label widgets are used to display text and images in the application. We can also configure the properties of a Label widget after it has been created in a tkinter application.
If we want to delete a label that is defined in a tkinter application, then we have to use the destroy() method.
In this example, we will create a Button that will allow the user to delete the label from the widget.
# Import the required libraries
from tkinter import *
from tkinter import ttk
from PIL import Image, ImageTk
# Create an instance of tkinter frame or window
win = Tk()
# Set the size of the window
win.geometry("700x350")
def on_click():
   # Pass the method itself (no parentheses) so the label is destroyed
   # after 1000 ms; calling label.destroy() here would remove it immediately.
   label.after(1000, label.destroy)
# Create a Label widget
label = Label(win, text=" Deleting a Label in Python Tkinter", font=('Helvetica 15'))
label.pack(pady=20)
# Add a Button to Show/Hide Canvas Items
ttk.Button(win, text="Delete", command=on_click).pack()
win.mainloop()
If we run the above code, it will display a window with a label widget and a Button.
Now, click the button to delete the Label from the window.
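Note that after() is given the bound method label.destroy rather than the result of calling it. If the one-second delay is not needed, the callback can simply call destroy() directly; a minimal variant of the handler above, assuming the same label widget −

def on_click():
   # Remove the label as soon as the button is clicked,
   # instead of scheduling the deletion with after()
   label.destroy()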
|
[
{
"code": null,
"e": 1247,
"s": 1062,
"text": "Tkinter label widgets are used to display text and images in the application. We can also configure the properties of Label widget that are created by default in a tkinter application."
},
{
"code": null,
"e": 1360,
"s": 1247,
"text": "If we want to delete a label that is defined in a tkinter application, then we have to use the destroy() method."
},
{
"code": null,
"e": 1463,
"s": 1360,
"text": "In this example, we will create a Button that will allow the user to delete the label from the widget."
},
{
"code": null,
"e": 1986,
"s": 1463,
"text": "# Import the required libraries\nfrom tkinter import *\nfrom tkinter import ttk\nfrom PIL import Image, ImageTk\n\n# Create an instance of tkinter frame or window\nwin = Tk()\n\n# Set the size of the window\nwin.geometry(\"700x350\")\n\ndef on_click():\n label.after(1000, label.destroy())\n\n# Create a Label widget\nlabel = Label(win, text=\" Deleting a Label in Python Tkinter\", font=('Helvetica 15'))\nlabel.pack(pady=20)\n\n# Add a Button to Show/Hide Canvas Items\nttk.Button(win, text=\"Delete\", command=on_click).pack()\n\nwin.mainloop()"
},
{
"code": null,
"e": 2071,
"s": 1986,
"text": "If we run the above code, it will display a window with a label widget and a Button."
},
{
"code": null,
"e": 2130,
"s": 2071,
"text": "Now, click the button to delete the Label from the window."
}
] |
Python - MySQL Database Access
|
The Python standard for database interfaces is the Python DB-API. Most Python database interfaces adhere to this standard.
You can choose the right database for your application. Python Database API supports a wide range of database servers such as −
GadFly
mSQL
MySQL
PostgreSQL
Microsoft SQL Server 2000
Informix
Interbase
Oracle
Sybase
Here is the list of available Python database interfaces: Python Database Interfaces and APIs. You must download a separate DB API module for each database you need to access. For example, if you need to access an Oracle database as well as a MySQL database, you must download both the Oracle and the MySQL database modules.
The DB API provides a minimal standard for working with databases using Python structures and syntax wherever possible. This API includes the following −
Importing the API module.
Acquiring a connection with the database.
Issuing SQL statements and stored procedures.
Closing the connection
We would learn all the concepts using MySQL, so let us talk about MySQLdb module.
MySQLdb is an interface for connecting to a MySQL database server from Python. It implements the Python Database API v2.0 and is built on top of the MySQL C API.
Before proceeding, make sure you have MySQLdb installed on your machine. Just type the following in your Python script and execute it −
#!/usr/bin/python
import MySQLdb
If it produces the following result, then it means MySQLdb module is not installed −
Traceback (most recent call last):
File "test.py", line 3, in <module>
import MySQLdb
ImportError: No module named MySQLdb
To install MySQLdb module, use the following command −
For Ubuntu, use the following command -
$ sudo apt-get install python-pip python-dev libmysqlclient-dev
For Fedora, use the following command -
$ sudo dnf install python python-devel mysql-devel redhat-rpm-config gcc
For Python command prompt, use the following command -
pip install MySQL-python
Note − Make sure you have root privilege to install the above module.
Before connecting to a MySQL database, make sure of the following −
You have created a database TESTDB.
You have created a table EMPLOYEE in TESTDB.
This table has fields FIRST_NAME, LAST_NAME, AGE, SEX and INCOME.
User ID "testuser" and password "test123" are set to access TESTDB.
Python module MySQLdb is installed properly on your machine.
You have gone through MySQL tutorial to understand MySQL Basics.
Following is the example of connecting with MySQL database "TESTDB"
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# execute SQL query using execute() method.
cursor.execute("SELECT VERSION()")
# Fetch a single row using fetchone() method.
data = cursor.fetchone()
print "Database version : %s " % data
# disconnect from server
db.close()
While running this script, it produces the following result on my Linux machine.
Database version : 5.0.45
If a connection is established with the datasource, then a Connection Object is returned and saved into db for further use, otherwise db is set to None. Next, db object is used to create a cursor object, which in turn is used to execute SQL queries. Finally, before coming out, it ensures that database connection is closed and resources are released.
Once a database connection is established, we are ready to create tables or records into the database tables using execute method of the created cursor.
Let us create Database table EMPLOYEE −
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Drop table if it already exist using execute() method.
cursor.execute("DROP TABLE IF EXISTS EMPLOYEE")
# Create table as per requirement
sql = """CREATE TABLE EMPLOYEE (
FIRST_NAME CHAR(20) NOT NULL,
LAST_NAME CHAR(20),
AGE INT,
SEX CHAR(1),
INCOME FLOAT )"""
cursor.execute(sql)
# disconnect from server
db.close()
The INSERT operation is required when you want to create records in a database table.
The following example executes an SQL INSERT statement to create a record in the EMPLOYEE table −
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL query to INSERT a record into the database.
sql = """INSERT INTO EMPLOYEE(FIRST_NAME,
LAST_NAME, AGE, SEX, INCOME)
VALUES ('Mac', 'Mohan', 20, 'M', 2000)"""
try:
# Execute the SQL command
cursor.execute(sql)
# Commit your changes in the database
db.commit()
except:
# Rollback in case there is any error
db.rollback()
# disconnect from server
db.close()
The above example can be written as follows to create SQL queries dynamically −
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL query to INSERT a record into the database.
sql = "INSERT INTO EMPLOYEE(FIRST_NAME, \
LAST_NAME, AGE, SEX, INCOME) \
VALUES ('%s', '%s', '%d', '%c', '%d' )" % \
('Mac', 'Mohan', 20, 'M', 2000)
try:
# Execute the SQL command
cursor.execute(sql)
# Commit your changes in the database
db.commit()
except:
# Rollback in case there is any error
db.rollback()
# disconnect from server
db.close()
The following code segment shows another form of execution, where the parameters are passed directly to execute() instead of being interpolated into the SQL string −
..................................
user_id = "test123"
password = "password"

con.execute('insert into Login values(%s, %s)',
            (user_id, password))
..................................
READ Operation on any database means to fetch some useful information from the database.
Once our database connection is established, you are ready to make a query into this database. You can use either the fetchone() method to fetch a single record or the fetchall() method to fetch multiple values from a database table.
fetchone() − It fetches the next row of a query result set. A result set is an object that is returned when a cursor object is used to query a table.
fetchall() − It fetches all the rows in a result set. If some rows have already been extracted from the result set, then it retrieves the remaining rows from the result set.
rowcount − This is a read-only attribute and returns the number of rows that were affected by an execute() method.
The following procedure queries all the records from EMPLOYEE table having salary more than 1000 −
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
sql = "SELECT * FROM EMPLOYEE \
WHERE INCOME > '%d'" % (1000)
try:
# Execute the SQL command
cursor.execute(sql)
# Fetch all the rows in a list of lists.
results = cursor.fetchall()
for row in results:
fname = row[0]
lname = row[1]
age = row[2]
sex = row[3]
income = row[4]
# Now print fetched result
print "fname=%s,lname=%s,age=%d,sex=%s,income=%d" % \
(fname, lname, age, sex, income )
except:
print "Error: unable to fecth data"
# disconnect from server
db.close()
This will produce the following result −
fname=Mac, lname=Mohan, age=20, sex=M, income=2000
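The example above uses fetchall(). As a minimal sketch in the same style (reusing the TESTDB credentials assumed throughout this article), the fetchone() method and the rowcount attribute described earlier can be used as follows −

#!/usr/bin/python

import MySQLdb

# Open database connection (same credentials as in the examples above)
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )

# prepare a cursor object using cursor() method
cursor = db.cursor()

try:
   # Execute the SQL command; rowcount reports how many rows were selected
   cursor.execute("SELECT * FROM EMPLOYEE WHERE INCOME > '%d'" % (1000))
   print "Rows selected : %d" % cursor.rowcount

   # Fetch only the first row of the result set
   row = cursor.fetchone()
   if row:
      print "First row : ", row
except:
   print "Error: unable to fetch data"

# disconnect from server
db.close()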
UPDATE Operation on any database means to update one or more records, which are already available in the database.
The following procedure updates all the records having SEX as 'M'. Here, we increase AGE of all the males by one year.
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL query to UPDATE required records
sql = "UPDATE EMPLOYEE SET AGE = AGE + 1
WHERE SEX = '%c'" % ('M')
try:
# Execute the SQL command
cursor.execute(sql)
# Commit your changes in the database
db.commit()
except:
# Rollback in case there is any error
db.rollback()
# disconnect from server
db.close()
DELETE operation is required when you want to delete some records from your database. Following is the procedure to delete all the records from EMPLOYEE where AGE is more than 20 −
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL query to DELETE required records
sql = "DELETE FROM EMPLOYEE WHERE AGE > '%d'" % (20)
try:
# Execute the SQL command
cursor.execute(sql)
# Commit your changes in the database
db.commit()
except:
# Rollback in case there is any error
db.rollback()
# disconnect from server
db.close()
Transactions are a mechanism that ensures data consistency. Transactions have the following four properties −
Atomicity − Either a transaction completes or nothing happens at all.
Consistency − A transaction must start in a consistent state and leave the system in a consistent state.
Isolation − Intermediate results of a transaction are not visible outside the current transaction.
Durability − Once a transaction was committed, the effects are persistent, even after a system failure.
The Python DB API 2.0 provides two methods to either commit or rollback a transaction.
You already know how to implement transactions. Here is again similar example −
# Prepare SQL query to DELETE required records
sql = "DELETE FROM EMPLOYEE WHERE AGE > '%d'" % (20)
try:
# Execute the SQL command
cursor.execute(sql)
# Commit your changes in the database
db.commit()
except:
# Rollback in case there is any error
db.rollback()
Commit is the operation which gives a green signal to the database to finalize the changes; after this operation, no change can be reverted.
Here is a simple example to call commit method.
db.commit()
If you are not satisfied with one or more of the changes and you want to revert those changes completely, then use the rollback() method.
Here is a simple example to call rollback() method.
db.rollback()
To disconnect Database connection, use close() method.
db.close()
If the connection to a database is closed by the user with the close() method, any outstanding transactions are rolled back by the DB. However, instead of depending on any of DB lower level implementation details, your application would be better off calling commit or rollback explicitly.
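One way to follow that advice is to call commit, rollback and close explicitly in a try/except/finally block. The sketch below assumes the db connection and EMPLOYEE table from the earlier examples −

try:
   # Execute the SQL command
   cursor.execute("DELETE FROM EMPLOYEE WHERE AGE > '%d'" % (20))
   # Commit your changes in the database
   db.commit()
except:
   # Rollback in case there is any error
   db.rollback()
finally:
   # Always close the connection explicitly
   db.close()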
There are many sources of errors. A few examples are a syntax error in an executed SQL statement, a connection failure, or calling the fetch method for an already canceled or finished statement handle.
The DB API defines a number of errors that must exist in each database module. The following table lists these exceptions.
Warning − Used for non-fatal issues. Must subclass StandardError.
Error − Base class for errors. Must subclass StandardError.
InterfaceError − Used for errors in the database module, not the database itself. Must subclass Error.
DatabaseError − Used for errors in the database. Must subclass Error.
DataError − Subclass of DatabaseError that refers to errors in the data.
OperationalError − Subclass of DatabaseError that refers to errors such as the loss of a connection to the database. These errors are generally outside of the control of the Python scripter.
IntegrityError − Subclass of DatabaseError for situations that would damage the relational integrity, such as uniqueness constraints or foreign keys.
InternalError − Subclass of DatabaseError that refers to errors internal to the database module, such as a cursor no longer being active.
ProgrammingError − Subclass of DatabaseError that refers to errors such as a bad table name and other things that can safely be blamed on you.
NotSupportedError − Subclass of DatabaseError that refers to trying to call unsupported functionality.
Your Python scripts should handle these errors, but before using any of the above exceptions, make sure your MySQLdb has support for that exception. You can get more information about them by reading the DB API 2.0 specification.
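As an illustration (assuming your MySQLdb build exposes these exception classes at module level, as the DB API requires), the bare except clauses used in the examples above can be narrowed to the classes listed in this table −

#!/usr/bin/python

import MySQLdb

db = None
try:
   db = MySQLdb.connect("localhost","testuser","test123","TESTDB" )
   cursor = db.cursor()
   cursor.execute("SELECT * FROM EMPLOYEE")
   print "Fetched %d rows" % cursor.rowcount
except MySQLdb.OperationalError as e:
   # e.g. the connection to the database server was lost
   print "Operational error: ", e
except MySQLdb.ProgrammingError as e:
   # e.g. a bad table name or malformed SQL
   print "Programming error: ", e
except MySQLdb.Error as e:
   # any other error defined by the DB API
   print "Database error: ", e
finally:
   if db:
      db.close()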
|
[
{
"code": null,
"e": 2367,
"s": 2244,
"text": "The Python standard for database interfaces is the Python DB-API. Most Python database interfaces adhere to this standard."
},
{
"code": null,
"e": 2495,
"s": 2367,
"text": "You can choose the right database for your application. Python Database API supports a wide range of database servers such as −"
},
{
"code": null,
"e": 2502,
"s": 2495,
"text": "GadFly"
},
{
"code": null,
"e": 2507,
"s": 2502,
"text": "mSQL"
},
{
"code": null,
"e": 2513,
"s": 2507,
"text": "MySQL"
},
{
"code": null,
"e": 2524,
"s": 2513,
"text": "PostgreSQL"
},
{
"code": null,
"e": 2550,
"s": 2524,
"text": "Microsoft SQL Server 2000"
},
{
"code": null,
"e": 2559,
"s": 2550,
"text": "Informix"
},
{
"code": null,
"e": 2569,
"s": 2559,
"text": "Interbase"
},
{
"code": null,
"e": 2576,
"s": 2569,
"text": "Oracle"
},
{
"code": null,
"e": 2583,
"s": 2576,
"text": "Sybase"
},
{
"code": null,
"e": 2908,
"s": 2583,
"text": "Here is the list of available Python database interfaces: Python Database Interfaces and APIs. You must download a separate DB API module for each database you need to access. For example, if you need to access an Oracle database as well as a MySQL database, you must download both the Oracle and the MySQL database modules."
},
{
"code": null,
"e": 3062,
"s": 2908,
"text": "The DB API provides a minimal standard for working with databases using Python structures and syntax wherever possible. This API includes the following −"
},
{
"code": null,
"e": 3088,
"s": 3062,
"text": "Importing the API module."
},
{
"code": null,
"e": 3130,
"s": 3088,
"text": "Acquiring a connection with the database."
},
{
"code": null,
"e": 3176,
"s": 3130,
"text": "Issuing SQL statements and stored procedures."
},
{
"code": null,
"e": 3199,
"s": 3176,
"text": "Closing the connection"
},
{
"code": null,
"e": 3281,
"s": 3199,
"text": "We would learn all the concepts using MySQL, so let us talk about MySQLdb module."
},
{
"code": null,
"e": 3443,
"s": 3281,
"text": "MySQLdb is an interface for connecting to a MySQL database server from Python. It implements the Python Database API v2.0 and is built on top of the MySQL C API."
},
{
"code": null,
"e": 3583,
"s": 3443,
"text": "Before proceeding, you make sure you have MySQLdb installed on your machine. Just type the following in your Python script and execute it −"
},
{
"code": null,
"e": 3617,
"s": 3583,
"text": "#!/usr/bin/python\n\nimport MySQLdb"
},
{
"code": null,
"e": 3702,
"s": 3617,
"text": "If it produces the following result, then it means MySQLdb module is not installed −"
},
{
"code": null,
"e": 3835,
"s": 3702,
"text": "Traceback (most recent call last):\n File \"test.py\", line 3, in <module>\n import MySQLdb\nImportError: No module named MySQLdb\n"
},
{
"code": null,
"e": 3890,
"s": 3835,
"text": "To install MySQLdb module, use the following command −"
},
{
"code": null,
"e": 4188,
"s": 3890,
"text": "For Ubuntu, use the following command -\n$ sudo apt-get install python-pip python-dev libmysqlclient-dev\nFor Fedora, use the following command -\n$ sudo dnf install python python-devel mysql-devel redhat-rpm-config gcc\nFor Python command prompt, use the following command -\npip install MySQL-python\n"
},
{
"code": null,
"e": 4254,
"s": 4188,
"text": "Note − Make sure you have root privilege to install above module."
},
{
"code": null,
"e": 4323,
"s": 4254,
"text": "Before connecting to a MySQL database, make sure of the followings −"
},
{
"code": null,
"e": 4359,
"s": 4323,
"text": "You have created a database TESTDB."
},
{
"code": null,
"e": 4395,
"s": 4359,
"text": "You have created a database TESTDB."
},
{
"code": null,
"e": 4440,
"s": 4395,
"text": "You have created a table EMPLOYEE in TESTDB."
},
{
"code": null,
"e": 4485,
"s": 4440,
"text": "You have created a table EMPLOYEE in TESTDB."
},
{
"code": null,
"e": 4551,
"s": 4485,
"text": "This table has fields FIRST_NAME, LAST_NAME, AGE, SEX and INCOME."
},
{
"code": null,
"e": 4617,
"s": 4551,
"text": "This table has fields FIRST_NAME, LAST_NAME, AGE, SEX and INCOME."
},
{
"code": null,
"e": 4685,
"s": 4617,
"text": "User ID \"testuser\" and password \"test123\" are set to access TESTDB."
},
{
"code": null,
"e": 4753,
"s": 4685,
"text": "User ID \"testuser\" and password \"test123\" are set to access TESTDB."
},
{
"code": null,
"e": 4814,
"s": 4753,
"text": "Python module MySQLdb is installed properly on your machine."
},
{
"code": null,
"e": 4875,
"s": 4814,
"text": "Python module MySQLdb is installed properly on your machine."
},
{
"code": null,
"e": 4941,
"s": 4875,
"text": "You have gone through MySQL tutorial to understand MySQL Basics.\n"
},
{
"code": null,
"e": 5006,
"s": 4941,
"text": "You have gone through MySQL tutorial to understand MySQL Basics."
},
{
"code": null,
"e": 5074,
"s": 5006,
"text": "Following is the example of connecting with MySQL database \"TESTDB\""
},
{
"code": null,
"e": 5498,
"s": 5074,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# execute SQL query using execute() method.\ncursor.execute(\"SELECT VERSION()\")\n\n# Fetch a single row using fetchone() method.\ndata = cursor.fetchone()\nprint \"Database version : %s \" % data\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 5583,
"s": 5498,
"text": "While running this script, it is producing the following result in my Linux machine."
},
{
"code": null,
"e": 5610,
"s": 5583,
"text": "Database version : 5.0.45\n"
},
{
"code": null,
"e": 5962,
"s": 5610,
"text": "If a connection is established with the datasource, then a Connection Object is returned and saved into db for further use, otherwise db is set to None. Next, db object is used to create a cursor object, which in turn is used to execute SQL queries. Finally, before coming out, it ensures that database connection is closed and resources are released."
},
{
"code": null,
"e": 6115,
"s": 5962,
"text": "Once a database connection is established, we are ready to create tables or records into the database tables using execute method of the created cursor."
},
{
"code": null,
"e": 6155,
"s": 6115,
"text": "Let us create Database table EMPLOYEE −"
},
{
"code": null,
"e": 6723,
"s": 6155,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# Drop table if it already exist using execute() method.\ncursor.execute(\"DROP TABLE IF EXISTS EMPLOYEE\")\n\n# Create table as per requirement\nsql = \"\"\"CREATE TABLE EMPLOYEE (\n FIRST_NAME CHAR(20) NOT NULL,\n LAST_NAME CHAR(20),\n AGE INT, \n SEX CHAR(1),\n INCOME FLOAT )\"\"\"\n\ncursor.execute(sql)\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 6798,
"s": 6723,
"text": "It is required when you want to create your records into a database table."
},
{
"code": null,
"e": 6892,
"s": 6798,
"text": "The following example, executes SQL INSERT statement to create a record into EMPLOYEE table −"
},
{
"code": null,
"e": 7495,
"s": 6892,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# Prepare SQL query to INSERT a record into the database.\nsql = \"\"\"INSERT INTO EMPLOYEE(FIRST_NAME,\n LAST_NAME, AGE, SEX, INCOME)\n VALUES ('Mac', 'Mohan', 20, 'M', 2000)\"\"\"\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Commit your changes in the database\n db.commit()\nexcept:\n # Rollback in case there is any error\n db.rollback()\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 7571,
"s": 7495,
"text": "Above example can be written as follows to create SQL queries dynamically −"
},
{
"code": null,
"e": 8213,
"s": 7571,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# Prepare SQL query to INSERT a record into the database.\nsql = \"INSERT INTO EMPLOYEE(FIRST_NAME, \\\n LAST_NAME, AGE, SEX, INCOME) \\\n VALUES ('%s', '%s', '%d', '%c', '%d' )\" % \\\n ('Mac', 'Mohan', 20, 'M', 2000)\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Commit your changes in the database\n db.commit()\nexcept:\n # Rollback in case there is any error\n db.rollback()\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 8306,
"s": 8213,
"text": "Following code segment is another form of execution where you can pass parameters directly −"
},
{
"code": null,
"e": 8509,
"s": 8306,
"text": "..................................\nuser_id = \"test123\"\npassword = \"password\"\n\ncon.execute('insert into Login values(\"%s\", \"%s\")' % \\\n (user_id, password))\n..................................\n"
},
{
"code": null,
"e": 8598,
"s": 8509,
"text": "READ Operation on any database means to fetch some useful information from the database."
},
{
"code": null,
"e": 8823,
"s": 8598,
"text": "Once our database connection is established, you are ready to make a query into this database. You can use either fetchone() method to fetch single record or fetchall() method to fetech multiple values from a database table."
},
{
"code": null,
"e": 8973,
"s": 8823,
"text": "fetchone() − It fetches the next row of a query result set. A result set is an object that is returned when a cursor object is used to query a table."
},
{
"code": null,
"e": 9123,
"s": 8973,
"text": "fetchone() − It fetches the next row of a query result set. A result set is an object that is returned when a cursor object is used to query a table."
},
{
"code": null,
"e": 9297,
"s": 9123,
"text": "fetchall() − It fetches all the rows in a result set. If some rows have already been extracted from the result set, then it retrieves\nthe remaining rows from the result set."
},
{
"code": null,
"e": 9471,
"s": 9297,
"text": "fetchall() − It fetches all the rows in a result set. If some rows have already been extracted from the result set, then it retrieves\nthe remaining rows from the result set."
},
{
"code": null,
"e": 9586,
"s": 9471,
"text": "rowcount − This is a read-only attribute and returns the number of rows that were affected by an execute() method."
},
{
"code": null,
"e": 9701,
"s": 9586,
"text": "rowcount − This is a read-only attribute and returns the number of rows that were affected by an execute() method."
},
{
"code": null,
"e": 9800,
"s": 9701,
"text": "The following procedure queries all the records from EMPLOYEE table having salary more than 1000 −"
},
{
"code": null,
"e": 10548,
"s": 9800,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\nsql = \"SELECT * FROM EMPLOYEE \\\n WHERE INCOME > '%d'\" % (1000)\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Fetch all the rows in a list of lists.\n results = cursor.fetchall()\n for row in results:\n fname = row[0]\n lname = row[1]\n age = row[2]\n sex = row[3]\n income = row[4]\n # Now print fetched result\n print \"fname=%s,lname=%s,age=%d,sex=%s,income=%d\" % \\\n (fname, lname, age, sex, income )\nexcept:\n print \"Error: unable to fecth data\"\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 10589,
"s": 10548,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 10641,
"s": 10589,
"text": "fname=Mac, lname=Mohan, age=20, sex=M, income=2000\n"
},
{
"code": null,
"e": 10756,
"s": 10641,
"text": "UPDATE Operation on any database means to update one or more records, which are already available in the database."
},
{
"code": null,
"e": 10875,
"s": 10756,
"text": "The following procedure updates all the records having SEX as 'M'. Here, we increase AGE of all the males by one year."
},
{
"code": null,
"e": 11429,
"s": 10875,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# Prepare SQL query to UPDATE required records\nsql = \"UPDATE EMPLOYEE SET AGE = AGE + 1\n WHERE SEX = '%c'\" % ('M')\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Commit your changes in the database\n db.commit()\nexcept:\n # Rollback in case there is any error\n db.rollback()\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 11611,
"s": 11429,
"text": "DELETE operation is required when you want to delete some records from your database. Following is the procedure to delete all the records from EMPLOYEE where AGE is more than 20 −"
},
{
"code": null,
"e": 12125,
"s": 11611,
"text": "#!/usr/bin/python\n\nimport MySQLdb\n\n# Open database connection\ndb = MySQLdb.connect(\"localhost\",\"testuser\",\"test123\",\"TESTDB\" )\n\n# prepare a cursor object using cursor() method\ncursor = db.cursor()\n\n# Prepare SQL query to DELETE required records\nsql = \"DELETE FROM EMPLOYEE WHERE AGE > '%d'\" % (20)\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Commit your changes in the database\n db.commit()\nexcept:\n # Rollback in case there is any error\n db.rollback()\n\n# disconnect from server\ndb.close()"
},
{
"code": null,
"e": 12235,
"s": 12125,
"text": "Transactions are a mechanism that ensures data consistency. Transactions have the following four properties −"
},
{
"code": null,
"e": 12305,
"s": 12235,
"text": "Atomicity − Either a transaction completes or nothing happens at all."
},
{
"code": null,
"e": 12375,
"s": 12305,
"text": "Atomicity − Either a transaction completes or nothing happens at all."
},
{
"code": null,
"e": 12480,
"s": 12375,
"text": "Consistency − A transaction must start in a consistent state and leave the system in a consistent state."
},
{
"code": null,
"e": 12585,
"s": 12480,
"text": "Consistency − A transaction must start in a consistent state and leave the system in a consistent state."
},
{
"code": null,
"e": 12684,
"s": 12585,
"text": "Isolation − Intermediate results of a transaction are not visible outside the current transaction."
},
{
"code": null,
"e": 12783,
"s": 12684,
"text": "Isolation − Intermediate results of a transaction are not visible outside the current transaction."
},
{
"code": null,
"e": 12887,
"s": 12783,
"text": "Durability − Once a transaction was committed, the effects are persistent, even after a system failure."
},
{
"code": null,
"e": 12991,
"s": 12887,
"text": "Durability − Once a transaction was committed, the effects are persistent, even after a system failure."
},
{
"code": null,
"e": 13078,
"s": 12991,
"text": "The Python DB API 2.0 provides two methods to either commit or rollback a transaction."
},
{
"code": null,
"e": 13158,
"s": 13078,
"text": "You already know how to implement transactions. Here is again similar example −"
},
{
"code": null,
"e": 13437,
"s": 13158,
"text": "# Prepare SQL query to DELETE required records\nsql = \"DELETE FROM EMPLOYEE WHERE AGE > '%d'\" % (20)\ntry:\n # Execute the SQL command\n cursor.execute(sql)\n # Commit your changes in the database\n db.commit()\nexcept:\n # Rollback in case there is any error\n db.rollback()"
},
{
"code": null,
"e": 13584,
"s": 13437,
"text": "Commit is the operation, which gives a green signal to database to finalize the changes, and after this operation, no change can be reverted back."
},
{
"code": null,
"e": 13632,
"s": 13584,
"text": "Here is a simple example to call commit method."
},
{
"code": null,
"e": 13645,
"s": 13632,
"text": "db.commit()\n"
},
{
"code": null,
"e": 13784,
"s": 13645,
"text": "If you are not satisfied with one or more of the changes and you want to revert back those changes completely, then use rollback() method."
},
{
"code": null,
"e": 13836,
"s": 13784,
"text": "Here is a simple example to call rollback() method."
},
{
"code": null,
"e": 13851,
"s": 13836,
"text": "db.rollback()\n"
},
{
"code": null,
"e": 13906,
"s": 13851,
"text": "To disconnect Database connection, use close() method."
},
{
"code": null,
"e": 13918,
"s": 13906,
"text": "db.close()\n"
},
{
"code": null,
"e": 14208,
"s": 13918,
"text": "If the connection to a database is closed by the user with the close() method, any outstanding transactions are rolled back by the DB. However, instead of depending on any of DB lower level implementation details, your application would be better off calling commit or rollback explicitly."
},
{
"code": null,
"e": 14410,
"s": 14208,
"text": "There are many sources of errors. A few examples are a syntax error in an executed SQL statement, a connection failure, or calling the fetch method for an already canceled or finished statement handle."
},
{
"code": null,
"e": 14533,
"s": 14410,
"text": "The DB API defines a number of errors that must exist in each database module. The following table lists these exceptions."
},
{
"code": null,
"e": 14541,
"s": 14533,
"text": "Warning"
},
{
"code": null,
"e": 14597,
"s": 14541,
"text": "Used for non-fatal issues. Must subclass StandardError."
},
{
"code": null,
"e": 14603,
"s": 14597,
"text": "Error"
},
{
"code": null,
"e": 14655,
"s": 14603,
"text": "Base class for errors. Must subclass StandardError."
},
{
"code": null,
"e": 14670,
"s": 14655,
"text": "InterfaceError"
},
{
"code": null,
"e": 14756,
"s": 14670,
"text": "Used for errors in the database module, not the database itself. Must subclass Error."
},
{
"code": null,
"e": 14770,
"s": 14756,
"text": "DatabaseError"
},
{
"code": null,
"e": 14824,
"s": 14770,
"text": "Used for errors in the database. Must subclass Error."
},
{
"code": null,
"e": 14834,
"s": 14824,
"text": "DataError"
},
{
"code": null,
"e": 14895,
"s": 14834,
"text": "Subclass of DatabaseError that refers to errors in the data."
},
{
"code": null,
"e": 14912,
"s": 14895,
"text": "OperationalError"
},
{
"code": null,
"e": 15084,
"s": 14912,
"text": "Subclass of DatabaseError that refers to errors such as the loss of a connection to the database. These errors are generally outside of the control of the Python scripter."
},
{
"code": null,
"e": 15099,
"s": 15084,
"text": "IntegrityError"
},
{
"code": null,
"e": 15232,
"s": 15099,
"text": "Subclass of DatabaseError for situations that would damage the relational integrity, such as uniqueness constraints or foreign keys."
},
{
"code": null,
"e": 15246,
"s": 15232,
"text": "InternalError"
},
{
"code": null,
"e": 15368,
"s": 15246,
"text": "Subclass of DatabaseError that refers to errors internal to the database module, such as a cursor no longer being active."
},
{
"code": null,
"e": 15385,
"s": 15368,
"text": "ProgrammingError"
},
{
"code": null,
"e": 15509,
"s": 15385,
"text": "Subclass of DatabaseError that refers to errors such as a bad table name and other things that can safely be blamed on you."
},
{
"code": null,
"e": 15527,
"s": 15509,
"text": "NotSupportedError"
},
{
"code": null,
"e": 15610,
"s": 15527,
"text": "Subclass of DatabaseError that refers to trying to call unsupported functionality."
},
{
"code": null,
"e": 15840,
"s": 15610,
"text": "Your Python scripts should handle these errors, but before using any of the above exceptions, make sure your MySQLdb has support for that exception. You can get more information about them by reading the DB API 2.0 specification."
},
{
"code": null,
"e": 15877,
"s": 15840,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 15893,
"s": 15877,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 15926,
"s": 15893,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 15945,
"s": 15926,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 15980,
"s": 15945,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 16002,
"s": 15980,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 16036,
"s": 16002,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 16064,
"s": 16036,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 16099,
"s": 16064,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 16113,
"s": 16099,
"text": " Lets Kode It"
},
{
"code": null,
"e": 16146,
"s": 16113,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 16163,
"s": 16146,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 16170,
"s": 16163,
"text": " Print"
},
{
"code": null,
"e": 16181,
"s": 16170,
"text": " Add Notes"
}
] |
How to sort an ArrayList in Ascending Order in Java - GeeksforGeeks
|
11 Dec, 2018
Given an unsorted ArrayList, the task is to sort this ArrayList in ascending order in Java.
Examples:
Input: Unsorted ArrayList: [Geeks, For, ForGeeks, GeeksForGeeks, A computer portal]Output: Sorted ArrayList: [A computer portal, For, ForGeeks, Geeks, GeeksForGeeks]
Input: Unsorted ArrayList: [Geeks, For, ForGeeks]Output: Sorted ArrayList: [For, ForGeeks, Geeks]
Approach: An ArrayList can be sorted by using the sort() method of the Collections class in Java. This sort() method takes the collection to be sorted as the parameter and sorts it in place, in ascending order by default.
Syntax:
Collections.sort(ArrayList);
Below is the implementation of the above approach:
// Java program to demonstrate
// How to sort ArrayList in ascending order

import java.util.*;

public class GFG {

    public static void main(String args[])
    {
        // Get the ArrayList
        ArrayList<String> list = new ArrayList<String>();

        // Populate the ArrayList
        list.add("Geeks");
        list.add("For");
        list.add("ForGeeks");
        list.add("GeeksForGeeks");
        list.add("A computer portal");

        // Print the unsorted ArrayList
        System.out.println("Unsorted ArrayList: " + list);

        // Sorting ArrayList in ascending Order
        // using Collection.sort() method
        Collections.sort(list);

        // Print the sorted ArrayList
        System.out.println("Sorted ArrayList "
                           + "in Ascending order : "
                           + list);
    }
}
Unsorted ArrayList: [Geeks, For, ForGeeks, GeeksForGeeks, A computer portal]
Sorted ArrayList in Ascending order : [A computer portal, For, ForGeeks, Geeks, GeeksForGeeks]
Java - util package
Java-ArrayList
Java-Collections
Java-List-Programs
Sorting Quiz
Java
Java
Java-Collections
How can I remove everything inside of a <div> using jQuery?
|
To remove everything inside a div element, you can use jQuery's remove() or empty() method: empty() deletes only the contents and keeps the element in the DOM, while remove() deletes the element itself along with its contents. You can try to run the following code, which uses remove() on the <div>; a short sketch of the empty() variant follows after the demo −
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$(".btn").click(function(){
$("div#demo").remove();
});
});
</script>
</head>
<body>
<button class="btn">Hide the elements inside div</button>
<div id="demo">
<p>Inside div- This is demo text.</p>
<p>Inside div- This is demo text.</p>
</div>
<span>
<p>Inside span- This is demo text.</p>
<p>Inside span- This is demo text.</p>
</span>
</body>
</html>
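If the goal is to clear only the contents of the <div> while keeping the (now empty) element in place, the same handler can call empty() instead. A minimal sketch reusing the #demo id and .btn class from the demo above:

<script>
$(document).ready(function(){
    $(".btn").click(function(){
        // Removes only the child nodes;
        // the <div id="demo"> element itself stays in the DOM.
        $("div#demo").empty();
    });
});
</script>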
Kibana - Creating Reports Using Kibana
|
Reports can be easily created by using the Share button available in Kibana UI.
Reports in Kibana are available in the following two forms −
Permalinks
CSV Report
When you have created a visualization, you can share it as follows −
Use the share button to share the visualization with others as Embed Code or Permalinks.
In the case of Embed code, you get the following options −
You can generate the iframe code as a short URL or a long URL, for either a snapshot or a saved object. A snapshot does not show recent data; users see the data as it was when the link was shared, and any changes made later are not reflected.
In the case of a saved object, you will see the most recent changes made to that visualization.
Snapshot IFrame code for long url −
<iframe src="http://localhost:5601/app/kibana#/visualize/edit/87af
cb60-165f-11e9-aaf1-3524d1f04792?embed=true&_g=()&_a=(filters:!(),linked:!f,query:(language:lucene,query:''),
uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(field:Area),schema:metric,type:max),(enabled:!t,id:'2',p
arams:(field:Country.keyword,missingBucket:!f,missingBucketLabel:Missing,order:desc,orderBy:'1',otherBucket:!
f,otherBucketLabel:Other,size:10),schema:segment,type:terms)),params:(addLegend:!t,addTimeMarker:!f,addToo
ltip:!t,categoryAxes:!((id:CategoryAxis-1,labels:(show:!t,truncate:100),position:bottom,scale:(type:linear),
show:!t,style:(),title:(),type:category)),grid:(categoryLines:!f,style:(color:%23eee)),legendPosition:right,
seriesParams:!((data:(id:'1',label:'Max+Area'),drawLi
nesBetweenPoints:!t,mode:stacked,show:true,showCircles:!t,type:histogram,valueAxis:ValueAxis-1)),times:!(),
type:histogram,valueAxes:!((id:ValueAxis-1,labels:(filter:!f,rotate:0,show:!t,truncate:100),name:LeftAxis-1,
position:left,scale:(mode:normal,type:linear),show:!t,style:(),title:(text:'Max+Area'),type:value))),title:
'countrywise_maxarea+',type:histogram))" height="600" width="800"></iframe>
Snapshot Iframe code for short url −
<iframe src="http://localhost:5601/goto/f0a6c852daedcb6b4fa74cce8c2ff6c4?embed=true" height="600" width="800"><iframe>
As snapshot and short URL.
With Short url −
http://localhost:5601/goto/f0a6c852daedcb6b4fa74cce8c2ff6c4
With Short url off, the link looks as below −
http://localhost:5601/app/kibana#/visualize/edit/87afcb60-165f-11e9-aaf1-3524d1f04792?_g=()&_a=(filters:!(
),linked:!f,query:(language:lucene,query:''),uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(field:Area),
schema:metric,type:max),(enabled:!t,id:'2',params:(field:Country.keyword,missingBucket:!f,missingBucketLabel:
Missing,order:desc,orderBy:'1',otherBucket:!f,otherBucketLabel:Other,size:10),schema:segment,type:terms)),
params:(addLegend:!t,addTimeMarker:!f,addTooltip:!t,categoryAxes:!((id:CategoryAxis-1,labels:(show:!t,trun
cate:100),position:bottom,scale:(type:linear),show:!t,style:(),title:(),type:category)),grid:(categoryLine
s:!f,style:(color:%23eee)),legendPosition:right,seriesParams:!((data:(id:'1',label:'Max%20Area'),drawLines
BetweenPoints:!t,mode:stacked,show:true,showCircles:!t,type:histogram,valueAxis:ValueAxis-1)),times:!(),
type:histogram,valueAxes:!((id:ValueAxis-1,labels:(filter:!f,rotate:0,show:!t,truncate:100),name:LeftAxis-1,
position:left,scale:(mode:normal,type:linear),show:!t,style:(),title:(text:'Max%20Area'),type:value))),title:'countrywise_maxarea%20',type:histogram))
When you hit the above link in the browser, you will get the same visualization as shown above. The above links are hosted locally, so it will not work when used outside the local environment.
You can get a CSV report in Kibana wherever there is tabular data, which is mostly in the Discover tab.
Go to the Discover tab and take any index you want the data for. Here we have taken the index countriesdata-26.12.2018. Here is the data displayed from the index −
You can create tabular data from above data as shown below −
We have selected the fields from Available fields and the data seen earlier is converted into tabular format.
You can get above data in CSV report as shown below −
The share button has option for CSV report and permalinks. You can click on CSV Report and download the same.
Please note that to get CSV reports, you need to save your data.
Confirm Save and click on Share button and CSV Reports. You will get following display −
Click on Generate CSV to get your report. Once done, it will instruct you to go to the Management tab.
Go to Management Tab → Reporting
It displays the report name, created at, status and actions. You can click on the download button as highlighted above and get your csv report.
The CSV file we just downloaded is as shown here −
C# | Math.Exp() Method - GeeksforGeeks
|
01 Feb, 2019
In C#, Exp() is a Math class method which is used to return the e raised to the specified power. Here e is a mathematical constant whose value is approximately 2.71828. Exp() is the inverse of Log().
Syntax:
public static double Exp (double num);
Parameter:
num: It is the required number of type System.Double which specifies a power.
Return Type: It returns a number e raised to the power num of type System.Double.
Note:
If num is equal to NaN then the return value will be NaN.
If num is equal to PositiveInfinity then the return value will be Infinity.
If num is equal to NegativeInfinity then the return value will be 0.
Example 1:
// C# Program to illustrate the
// Math.Exp(Double) Method
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // using the method
        Console.WriteLine(Math.Exp(10.0));
        Console.WriteLine(Math.Exp(15.57));
        Console.WriteLine(Math.Exp(529.548));
        Console.WriteLine(Math.Exp(0.00));
    }
}
22026.4657948067
5780495.71030692
9.54496417945595E+229
1
Example 2:
// C# Program to illustrate the
// Math.Exp(Double) Method by
// taking NaN, PositiveInfinity
// and NegativeInfinity
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // Taking NaN
        Console.WriteLine(Math.Exp(Double.NaN));

        // Taking PositiveInfinity
        Console.WriteLine(Math.Exp(Double.PositiveInfinity));

        // Taking NegativeInfinity
        Console.WriteLine(Math.Exp(Double.NegativeInfinity));
    }
}
NaN
Infinity
0
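Because Exp() is the inverse of Log() (the natural logarithm), composing the two returns the original value, up to floating-point error. A small illustrative sketch (not part of the original examples; the class name is arbitrary):

// C# sketch: Math.Exp() and Math.Log() are inverse operations
using System;

class ExpLogSketch {

    // Main Method
    public static void Main()
    {
        double x = 5.0;

        // Log(Exp(x)) and Exp(Log(x)) both give back x
        Console.WriteLine(Math.Log(Math.Exp(x))); // 5
        Console.WriteLine(Math.Exp(Math.Log(x))); // 5
    }
}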
Reference: https://docs.microsoft.com/en-us/dotnet/api/system.math.exp?view=netcore-2.1
DAX Date & Time - TODAY function
|
Returns the current date.
TODAY ()
No parameters for this function.
Current date in datetime format.
DAX TODAY function is useful when you need to have the current date displayed on a workbook, regardless of when you open the workbook. It is also useful for calculating intervals.
DAX functions - TODAY and NOW both return the current date. However,
TODAY always returns the time as 12:00:00 AM.
NOW returns the time precisely.
= YEAR (TODAY () - [JoiningDate]) - 1900 returns the number of years of service for each employee.
Hamming Distance between two strings - GeeksforGeeks
|
19 May, 2021
You are given two strings of equal length, and you have to find the Hamming distance between these strings. The Hamming distance between two strings of equal length is the number of positions at which the corresponding characters differ. Examples:
Input : str1[] = "geeksforgeeks", str2[] = "geeksandgeeks"
Output : 3
Explanation : The corresponding character mismatch are highlighted.
"geeksforgeeks" and "geeksandgeeks"
Input : str1[] = "1011101", str2[] = "1001001"
Output : 2
Explanation : The corresponding character mismatch are highlighted.
"1011101" and "1001001"
This problem can be solved with a simple approach in which we traverse the strings and count the mismatches at the corresponding positions. The extended form of this problem is edit distance. Algorithm:
int hammingDist(char str1[], char str2[])
{
int i = 0, count = 0;
while(str1[i]!='\0')
{
if (str1[i] != str2[i])
count++;
i++;
}
return count;
}
Below is the implementation of the above approach.
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find hamming distance b/w
// two string
#include <bits/stdc++.h>
using namespace std;

// function to calculate Hamming distance
int hammingDist(char *str1, char *str2)
{
    int i = 0, count = 0;
    while (str1[i] != '\0') {
        if (str1[i] != str2[i])
            count++;
        i++;
    }
    return count;
}

// driver code
int main()
{
    char str1[] = "geekspractice";
    char str2[] = "nerdspractise";

    // function call
    cout << hammingDist(str1, str2);

    return 0;
}
// Java program to find hamming distance
// b/w two string
class GFG {

    // function to calculate Hamming distance
    static int hammingDist(String str1, String str2)
    {
        int i = 0, count = 0;
        while (i < str1.length()) {
            if (str1.charAt(i) != str2.charAt(i))
                count++;
            i++;
        }
        return count;
    }

    // Driver code
    public static void main(String[] args)
    {
        String str1 = "geekspractice";
        String str2 = "nerdspractise";

        // function call
        System.out.println(hammingDist(str1, str2));
    }
}

// This code is contributed by Anant Agarwal.
# Python3 program to find
# hamming distance b/w two
# string

# Function to calculate
# Hamming distance
def hammingDist(str1, str2):
    i = 0
    count = 0
    while(i < len(str1)):
        if(str1[i] != str2[i]):
            count += 1
        i += 1
    return count

# Driver code
str1 = "geekspractice"
str2 = "nerdspractise"

# function call
print(hammingDist(str1, str2))

# This code is contributed by avanitrachhadiya2155
// C# program to find hamming
// distance b/w two string
using System;

class GFG {

    // function to calculate
    // Hamming distance
    static int hammingDist(String str1, String str2)
    {
        int i = 0, count = 0;
        while (i < str1.Length) {
            if (str1[i] != str2[i])
                count++;
            i++;
        }
        return count;
    }

    // Driver code
    public static void Main()
    {
        String str1 = "geekspractice";
        String str2 = "nerdspractise";

        // function call
        Console.Write(hammingDist(str1, str2));
    }
}

// This code is contributed by nitin mittal
<?php
// PHP program to find hamming distance b/w
// two string

// function to calculate
// Hamming distance
function hammingDist($str1, $str2)
{
    $i = 0;
    $count = 0;
    while (isset($str1[$i]) != '') {
        if ($str1[$i] != $str2[$i])
            $count++;
        $i++;
    }
    return $count;
}

// Driver Code
$str1 = "geekspractice";
$str2 = "nerdspractise";

// function call
echo hammingDist($str1, $str2);

// This code is contributed by nitin mittal.
?>
<script>
// JavaScript program to find hamming distance b/w
// two string

// function to calculate Hamming distance
function hammingDist(str1, str2)
{
    let i = 0, count = 0;
    while (i < str1.length) {
        if (str1[i] != str2[i])
            count++;
        i++;
    }
    return count;
}

// driver code
let str1 = "geekspractice";
let str2 = "nerdspractise";

// function call
document.write(hammingDist(str1, str2));

// This code is contributed by Manoj.
</script>
Output:
4
Time complexity: O(n)

Note: For the Hamming distance of two binary numbers, we can simply return the count of set bits in the XOR of the two numbers (a sketch follows below).

This article is contributed by Shivam Pradhan (anuj_charm). Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
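A quick illustration of that note as a minimal Python sketch (assuming the inputs are non-negative integers; the function name is arbitrary):

# Hamming distance of two integers via XOR + popcount.
# Bits that differ become 1 in a ^ b, so counting the
# set bits of the XOR gives the Hamming distance.
def hamming_int(a, b):
    return bin(a ^ b).count("1")

# Matches the string example above: "1011101" vs "1001001"
print(hamming_int(0b1011101, 0b1001001))  # 2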
How to add sleep/wait function before continuing in JavaScript? - GeeksforGeeks
|
17 Sep, 2019
JavaScript, unlike other languages, does not have any method to simulate a sleep() function. There are some approaches that can be used to simulate a sleep function.
Method 1: Using an infinite loop to keep checking for the elapsed time

The time that the sleep function starts is first found using the new Date().getTime() method. This returns the number of milliseconds passed since the Epoch time.
An infinite while loop is started. The elapsed time is calculated by subtracting the current time with the starting time. If-statement checks whether the elapsed time is greater than the given time (in milliseconds). On satisfying the condition, a break statement is executed, breaking out of the loop. The sleep function now ends and the lines of code written after the sleep function will now execute.
This type of sleep, which uses an infinite loop, stalls the processing of the rest of the script and may cause warnings from the browser. It is not encouraged to use this type of sleep function for a long duration.
Syntax:
function sleep(milliseconds) {
    let timeStart = new Date().getTime();
    while (true) {
        let elapsedTime = new Date().getTime() - timeStart;
        if (elapsedTime > milliseconds) {
            break;
        }
    }
}
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | sleep/wait before continuing
    </title>
</head>

<body>
    <h1 style="color: green">GeeksforGeeks</h1>
    <b>JavaScript | sleep/wait before continuing</b>
    <p>
        A sleep of 5000 milliseconds is simulated.
        Check the console for the output.
    </p>
    <script>
        function sleep(milliseconds) {
            let timeStart = new Date().getTime();
            while (true) {
                let elapsedTime = new Date().getTime() - timeStart;
                if (elapsedTime > milliseconds) {
                    break;
                }
            }
        }

        console.log("Hello World");
        console.log("Sleeping for 5000 milliseconds");

        // sleep for 5000 milliseconds
        sleep(5000);

        console.log("Sleep completed successfully");
    </script>
</body>

</html>
Output:
Before sleeping:
After sleeping for 5000 milliseconds:
Method 2: Creating a new Promise and using the then() method

A new Promise is created which contains a setTimeout() function. The setTimeout() function is used to execute a function after a specified amount of time. The resolved state of the Promise is used inside the setTimeout() function to finish it after the timeout.
The then() method can be used to execute the required function after the Promise has finished. This simulates a waiting time for a function.
This method does not block the asynchronous nature of JavaScript and is a preferred method for delaying a function. It is also only supported with the ES6 standard due to the use of Promises.
Syntax:
const sleep = milliseconds => {
    return new Promise(resolve => setTimeout(resolve, milliseconds));
};

sleep(timeToSleep).then(() => {
    // code to execute after sleep
});
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | sleep/wait before continuing
    </title>
</head>

<body>
    <h1 style="color: green">GeeksforGeeks</h1>
    <b>JavaScript | sleep/wait before continuing</b>
    <p>
        A sleep of 5000 milliseconds is simulated.
        Check the console for the output.
    </p>
    <script>
        const sleep = milliseconds => {
            return new Promise(resolve =>
                setTimeout(resolve, milliseconds));
        };

        console.log("Hello World");
        console.log("Sleeping for 5000 milliseconds");

        // sleep for 5000 milliseconds and then execute the function
        sleep(5000).then(() => {
            console.log("Sleep completed successfully");
        });
    </script>
</body>

</html>
Output:
Before sleeping:
After sleeping for 5000 milliseconds:
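When the surrounding code can be made async, the same Promise-based sleep combines naturally with await; a minimal sketch (not part of the original article, and the timings are arbitrary):

<script>
    const sleep = milliseconds =>
        new Promise(resolve => setTimeout(resolve, milliseconds));

    // await pauses only this async function,
    // not the rest of the page
    async function demo() {
        console.log("Before sleeping");
        await sleep(2000);
        console.log("After sleeping for 2000 milliseconds");
    }

    demo();
</script>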
SymPy - Installation
|
SymPy has one important prerequisite library named mpmath. It is a Python library for real and complex floating-point arithmetic with arbitrary precision. However, Python's package installer PIP installs it automatically when SymPy is installed as follows −
pip install sympy
Other Python distributions such as Anaconda, Enthought Canopy, etc., may have SymPy already bundled in it. To verify, you can type the following in the Python prompt −
>>> import sympy
>>> sympy.__version__
And you get the below output as the current version of sympy −
'1.5.1'
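As a further sanity check that the installation works, any small symbolic computation can be run at the same prompt (the version shown above is just an example):

>>> import sympy
>>> sympy.sqrt(8)
2*sqrt(2)
>>> sympy.Rational(1, 3) + sympy.Rational(1, 6)
1/2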
Source code of SymPy package is available at https://github.com/sympy/sympy.
\Bigg - Tex Command
|
\Bigg - Used to obtain various-sized delimiters.
{ \Bigg }
\Bigg commands are used to obtain various-sized delimiters.
Example: the same left bracket rendered at decreasing sizes − \Bigg[ , \bigg[ , \Big[ , and a plain [ .
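A minimal LaTeX/MathJax sketch of these sizes in context (the fraction is just an arbitrary operand):

\Bigg[ \frac{a}{b} \Bigg] \quad
\bigg[ \frac{a}{b} \bigg] \quad
\Big[ \frac{a}{b} \Big] \quad
\big[ \frac{a}{b} \big] \quad
[ \, a/b \, ]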
PHP | DateTime diff() Function
|
10 Oct, 2019
The DateTime::diff() function is an inbuilt function in PHP which is used to return the difference between two given DateTime objects.
Syntax:
Object oriented style:

DateInterval DateTime::diff( DateTimeInterface $datetime2,
                             bool $absolute = FALSE )

or

DateInterval DateTimeImmutable::diff( DateTimeInterface $datetime2,
                                      bool $absolute = FALSE )

or

DateInterval DateTimeInterface::diff( DateTimeInterface $datetime2,
                                      bool $absolute = FALSE )

Procedural style:

DateInterval date_diff( DateTimeInterface $datetime1,
                        DateTimeInterface $datetime2, bool $absolute = FALSE )
Parameters: This function accepts two parameters as mentioned above and described below:
$datetime2: This parameter holds the date to compare the first date with.
$absolute: If TRUE, this parameter forces the returned interval to be positive.
Return Value: This function returns the difference between the two given DateTime objects as a DateInterval object.
Below programs illustrate the DateTime::diff() function in PHP:
Program 1:
<?php

// Initialising the two datetime objects
$datetime1 = new DateTime('2019-9-10');
$datetime2 = new DateTime('2019-9-15');

// Calling the diff() function on above
// two DateTime objects
$difference = $datetime1->diff($datetime2);

// Getting the difference between two
// given DateTime objects
echo $difference->format('%R%a days');
?>
+5 days
Program 2:
<?php

// Initialising the two datetime objects
$datetime1 = new DateTime('2019-8-10');
$datetime2 = new DateTime('2019-9-10');

// Calling the diff() function on above
// two DateTime objects
$difference = $datetime1->diff($datetime2);

// Getting the difference between two
// given DateTime objects
echo $difference->format('%R%a days');
?>
+31 days
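The returned DateInterval object can also be read through its public properties instead of format(); a small sketch (the dates chosen here are arbitrary):

<?php

// Two arbitrary dates for illustration
$start = new DateTime('2019-08-10');
$end = new DateTime('2019-09-15');

// diff() returns a DateInterval object
$diff = $start->diff($end);

// Component fields of the interval
echo $diff->y . " years, " . $diff->m . " months, "
    . $diff->d . " days\n";

// Total number of days between the two dates
echo $diff->days . " total days\n";
?>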
Reference: https://www.php.net/manual/en/datetime.diff.php
C# | Check if the Hashtable contains a specific Key
|
01 Feb, 2019
The Hashtable class represents a collection of key-and-value pairs that are organized based on the hash code of the key. The key is used to access the items in the collection. Hashtable.ContainsKey(Object) Method is used to check whether the Hashtable contains a specific key or not.
Syntax:
public virtual bool ContainsKey(object key);
Parameter:
key: The key of type System.Object to locate in the Hashtable.
Return Type: It returns true if the Hashtable contains an element with the specified key; otherwise, false. The return type of this method is System.Boolean.
Exception: This method throws an ArgumentNullException if the key is null.
Below given are some examples to understand the implementation in a better way:
Example 1:
// C# code to check if the HashTable
// contains a specific key
using System;
using System.Collections;

class GFG {

    // Driver code
    public static void Main()
    {
        // Creating a Hashtable
        Hashtable myTable = new Hashtable();

        // Adding elements in Hashtable
        myTable.Add("g", "geeks");
        myTable.Add("c", "c++");
        myTable.Add("d", "data structures");
        myTable.Add("q", "quiz");

        // check if the HashTable contains
        // the required key or not.
        if (myTable.ContainsKey("c"))
            Console.WriteLine("myTable contains the key");
        else
            Console.WriteLine("myTable doesn't contain the key");
    }
}
myTable contains the key
Example 2:
// C# code to check if the HashTable// contains a specific keyusing System;using System.Collections; class GFG { // Driver code public static void Main() { // Creating a Hashtable Hashtable myTable = new Hashtable(); // Adding elements in Hashtable myTable.Add("India", "Country"); myTable.Add("Chandigarh", "City"); myTable.Add("Mars", "Planet"); myTable.Add("China", "Country"); // check if the HashTable contains // the required key or not. if (myTable.ContainsKey("Earth")) Console.WriteLine("myTable contains the key"); else Console.WriteLine("myTable doesn't contain the key"); }}
myTable doesn't contain the key
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.hashtable.containskey?view=netframework-4.7.2
CSharp-Collections-Hashtable
CSharp-Collections-Namespace
CSharp-method
C#
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n01 Feb, 2019"
},
{
"code": null,
"e": 312,
"s": 28,
"text": "The Hashtable class represents a collection of key-and-value pairs that are organized based on the hash code of the key. The key is used to access the items in the collection. Hashtable.ContainsKey(Object) Method is used to check whether the Hashtable contains a specific key or not."
},
{
"code": null,
"e": 320,
"s": 312,
"text": "Syntax:"
},
{
"code": null,
"e": 366,
"s": 320,
"text": "public virtual bool ContainsKey(object key);\n"
},
{
"code": null,
"e": 377,
"s": 366,
"text": "Parameter:"
},
{
"code": null,
"e": 440,
"s": 377,
"text": "key: The key of type System.Object to locate in the Hashtable."
},
{
"code": null,
"e": 596,
"s": 440,
"text": "Return Type: It return true if the Hashtable contains an element with the specified key otherwise, false. The return type of this method is System.Boolean."
},
{
"code": null,
"e": 670,
"s": 596,
"text": "Exception: This method can give ArgumentNullException if the key is null."
},
{
"code": null,
"e": 750,
"s": 670,
"text": "Below given are some examples to understand the implementation in a better way:"
},
{
"code": null,
"e": 761,
"s": 750,
"text": "Example 1:"
},
{
"code": "// C# code to check if the HashTable// contains a specific keyusing System;using System.Collections; class GFG { // Driver code public static void Main() { // Creating a Hashtable Hashtable myTable = new Hashtable(); // Adding elements in Hashtable myTable.Add(\"g\", \"geeks\"); myTable.Add(\"c\", \"c++\"); myTable.Add(\"d\", \"data structures\"); myTable.Add(\"q\", \"quiz\"); // check if the HashTable contains // the required key or not. if (myTable.ContainsKey(\"c\")) Console.WriteLine(\"myTable contains the key\"); else Console.WriteLine(\"myTable doesn't contain the key\"); }}",
"e": 1447,
"s": 761,
"text": null
},
{
"code": null,
"e": 1473,
"s": 1447,
"text": "myTable contains the key\n"
},
{
"code": null,
"e": 1484,
"s": 1473,
"text": "Example 2:"
},
{
"code": "// C# code to check if the HashTable// contains a specific keyusing System;using System.Collections; class GFG { // Driver code public static void Main() { // Creating a Hashtable Hashtable myTable = new Hashtable(); // Adding elements in Hashtable myTable.Add(\"India\", \"Country\"); myTable.Add(\"Chandigarh\", \"City\"); myTable.Add(\"Mars\", \"Planet\"); myTable.Add(\"China\", \"Country\"); // check if the HashTable contains // the required key or not. if (myTable.ContainsKey(\"Earth\")) Console.WriteLine(\"myTable contains the key\"); else Console.WriteLine(\"myTable doesn't contain the key\"); }}",
"e": 2191,
"s": 1484,
"text": null
},
{
"code": null,
"e": 2224,
"s": 2191,
"text": "myTable doesn't contain the key\n"
},
{
"code": null,
"e": 2235,
"s": 2224,
"text": "Reference:"
},
{
"code": null,
"e": 2344,
"s": 2235,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.collections.hashtable.containskey?view=netframework-4.7.2"
},
{
"code": null,
"e": 2373,
"s": 2344,
"text": "CSharp-Collections-Hashtable"
},
{
"code": null,
"e": 2402,
"s": 2373,
"text": "CSharp-Collections-Namespace"
},
{
"code": null,
"e": 2416,
"s": 2402,
"text": "CSharp-method"
},
{
"code": null,
"e": 2419,
"s": 2416,
"text": "C#"
}
] |
GATE | GATE CS 2019 | Question 11
|
16 Aug, 2021
A certain processor uses a fully associative cache of size 16 kB. The cache block size is 16 bytes. Assume that the main memory is byte addressable and uses a 32-bit address. How many bits are required for the Tag and the Index fields, respectively, in the addresses generated by the processor?
(A) 24 bits and 0 bits
(B) 28 bits and 4 bits
(C) 24 bits and 4 bits
(D) 28 bits and 0 bits
Answer: (D)
Explanation: The given cache block size is 16 bytes, so the block (word) offset is 4 bits. For a fully associative cache of size 16 kB, the line or index offset works out to:
= cache size / block size
= 16 kB / 16 B
= 1 k
= 1024
= 10 bits Line or Index Offset
Tag bit size would be,
= processor address size - (line offset + word offset)
= 32 - 10 - 4
= 18 bits tag size
Since no option matches, assume that the line offset is counted as part of the tag bits; therefore,
Tag bits = 18+10 = 28 bits
Line or Index offset = 0 bits (since fully associative cache memory),
Word or block offset = 4 bits
Alternative way: We know that in fully associative mapping,
Line size = block size = frame size
The number of tag bits can be found using the formula given below:
Number of Tag bits
= Total number of bits in Physical Address - no of bits in Block offset
Here, the number of bits in the block offset is not given. It can be found using:
ceil(log2 Cache block size) = ceil(log2 16) = 4
So,
Number of Tag bits = 32-4 = 28
There are no index bits in fully associative mapping, hence Index bits = 0.
So, option (D) is correct.
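To sanity-check the arithmetic above, here is a short Python sketch (an illustrative addition, not part of the original solution) that recomputes the block offset, index bits and tag bits from the figures given in the question:

from math import log2

address_bits = 32                # byte-addressable memory, 32-bit addresses
cache_size = 16 * 1024           # 16 kB fully associative cache
block_size = 16                  # 16-byte blocks

block_offset = int(log2(block_size))                     # 4 bits
index_bits = 0                                           # fully associative => no index field
tag_bits = address_bits - index_bits - block_offset      # 28 bits

print(tag_bits, index_bits, block_offset)                # prints: 28 0 4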
VeenaSreekumarNair
GATE
GATE | GATE-CS-2014-(Set-2) | Question 65
GATE | Sudo GATE 2020 Mock I (27 December 2019) | Question 33
GATE | GATE CS 2008 | Question 40
GATE | GATE CS 2008 | Question 46
GATE | GATE-CS-2015 (Set 3) | Question 65
GATE | GATE-CS-2014-(Set-3) | Question 65
GATE | GATE CS 2011 | Question 49
GATE | GATE CS 1996 | Question 38
GATE | GATE-CS-2004 | Question 31
GATE | GATE-CS-2016 (Set 1) | Question 45
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n16 Aug, 2021"
},
{
"code": null,
"e": 347,
"s": 54,
"text": "A certain processor uses a fully associative cache of size 16 kB, The cache block size is 16 bytes. Assume that the main memory is byte addressable and uses a 32-bit address. How many bits are required for the Tag and the Index fields respectively in the addresses generated by the processor?"
},
{
"code": null,
"e": 595,
"s": 347,
"text": "(A) 24 bits and 0 bits(B) 28 bits and 4 bits(C) 24 bits and 4 bits(D) 28 bits and 0 bitsAnswer: (D)Explanation: Given cache block size is 16 bytes, so block or word offset is 4 bits. Fully associative cache of size 16 kB, so line offset should be,"
},
{
"code": null,
"e": 682,
"s": 595,
"text": "= cache size / block size\n= 16 kB / 16 B\n= 1 k \n= 1024\n= 10 bits Line or Index Offset "
},
{
"code": null,
"e": 705,
"s": 682,
"text": "Tag bit size would be,"
},
{
"code": null,
"e": 794,
"s": 705,
"text": "= processor address size - (line offset + word offset)\n= 32 - 10 - 4\n= 18 bits tag size "
},
{
"code": null,
"e": 894,
"s": 794,
"text": "Since, there no option matches, but if we assume that Line Offset is a part of Tag bits, therefore,"
},
{
"code": null,
"e": 1022,
"s": 894,
"text": "Tag bits = 18+10 = 28 bits\nLine or Index offset = 0 bits (since fully associative cache memory),\nWord or block offset = 4 bits "
},
{
"code": null,
"e": 1076,
"s": 1022,
"text": "Alternative way:We know in fully associative mapping,"
},
{
"code": null,
"e": 1113,
"s": 1076,
"text": "Line size = block size = frame size "
},
{
"code": null,
"e": 1176,
"s": 1113,
"text": "Number of bits in tag can be founded using given below formula"
},
{
"code": null,
"e": 1269,
"s": 1176,
"text": "Number of Tag bits \n= Total number of bits in Physical Address - no of bits in Block offset "
},
{
"code": null,
"e": 1341,
"s": 1269,
"text": "Here number of bits in block offset is not given. It can be found using"
},
{
"code": null,
"e": 1390,
"s": 1341,
"text": "ceil(log2 Cache block size) = ceil(log2 16) = 4 "
},
{
"code": null,
"e": 1394,
"s": 1390,
"text": "So,"
},
{
"code": null,
"e": 1426,
"s": 1394,
"text": "Number of Tag bits = 32-4 = 28 "
},
{
"code": null,
"e": 1499,
"s": 1426,
"text": "No index bits is there in fully associative mapping,hence Index bits = 0"
},
{
"code": null,
"e": 1526,
"s": 1499,
"text": "So, option (D) is correct."
},
{
"code": null,
"e": 2510,
"s": 1526,
"text": "Questions on Cache Mapping Techniques with Ashish Verma | GeeksforGeeks GATE - YouTubeGeeksforGeeks GATE Computer Science17.5K subscribersQuestions on Cache Mapping Techniques with Ashish Verma | GeeksforGeeks GATEWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.More videosMore videosYou're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:0022:43 / 58:01•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=lz3J33WzNOo\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>Quiz of this Question"
},
{
"code": null,
"e": 2529,
"s": 2510,
"text": "VeenaSreekumarNair"
},
{
"code": null,
"e": 2534,
"s": 2529,
"text": "GATE"
},
{
"code": null,
"e": 2632,
"s": 2534,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2674,
"s": 2632,
"text": "GATE | GATE-CS-2014-(Set-2) | Question 65"
},
{
"code": null,
"e": 2736,
"s": 2674,
"text": "GATE | Sudo GATE 2020 Mock I (27 December 2019) | Question 33"
},
{
"code": null,
"e": 2770,
"s": 2736,
"text": "GATE | GATE CS 2008 | Question 40"
},
{
"code": null,
"e": 2804,
"s": 2770,
"text": "GATE | GATE CS 2008 | Question 46"
},
{
"code": null,
"e": 2846,
"s": 2804,
"text": "GATE | GATE-CS-2015 (Set 3) | Question 65"
},
{
"code": null,
"e": 2888,
"s": 2846,
"text": "GATE | GATE-CS-2014-(Set-3) | Question 65"
},
{
"code": null,
"e": 2922,
"s": 2888,
"text": "GATE | GATE CS 2011 | Question 49"
},
{
"code": null,
"e": 2956,
"s": 2922,
"text": "GATE | GATE CS 1996 | Question 38"
},
{
"code": null,
"e": 2990,
"s": 2956,
"text": "GATE | GATE-CS-2004 | Question 31"
}
] |
How to Change FTP Port in Linux?
|
24 Feb, 2021
Files are either uploaded to or downloaded from the FTP server. When you upload files, they are moved from a personal computer to the server; when you download files, they are moved from the server to your personal computer. FTP relies on TCP/IP (Transmission Control Protocol/Internet Protocol), the protocol suite used by the internet, to transfer files and execute commands.
To change the default FTP port used by the ProFTPD service on Linux, first open the ProFTPD main configuration file for editing with your favorite text editor by issuing the command below.
Step 1: Open the ProFTPD main configuration file for editing.
$ nano /etc/proftpd/proftpd.conf
Open config file
Step 2: Find the line that sets the port number.
Port 21
Find Port
Step 3: Change the default FTP port 21 to a custom port, for example, 210.
Change the port
Step 4: Save and close the file. For the changes to take effect, restart the proftpd service using the command below.
$ systemctl restart proftpd
Restart Proftpd service
Step 5: Check the table of local network sockets with the netstat or ss command.
$ netstat -tlpn | grep proftpd
OR
$ ss -tlpn | grep proftpd
Check port
Now your FTP port should be changed.
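As an optional scripted check, the minimal Python sketch below (an assumption-based illustration, not part of the original steps: it assumes the server runs on the local machine and was moved to port 210) simply tests whether anything accepts a TCP connection on the new port:

import socket

HOST, PORT = "127.0.0.1", 210    # assumed host and the custom port chosen above

# Try to open a TCP connection to the new FTP port
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect((HOST, PORT))
        print("A service is listening on port", PORT)
    except OSError:
        print("Nothing is listening on port", PORT)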
Picked
Technical Scripter 2020
How To
Linux-Unix
Technical Scripter
How to Set Git Username and Password in GitBash?
How to Install Jupyter Notebook on MacOS?
How to Install and Use NVM on Windows?
How to Install Python Packages for AWS Lambda Layers?
How to Install Git in VS Code?
Sed Command in Linux/Unix with examples
AWK command in Unix/Linux with examples
grep command in Unix/Linux
cut command in Linux with examples
cp command in Linux with examples
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n24 Feb, 2021"
},
{
"code": null,
"e": 426,
"s": 28,
"text": "Files are either uploaded or downloaded to the FTP server. The files are moved from a personal computer to the server when you upload files. The files are moved from the cloud to your personal computer when the files are downloaded. In order to transfer files through FTP, TCP/IP (Transmission Control Protocol/Internet Protocol), or the language used by the internet to execute commands, is used."
},
{
"code": null,
"e": 606,
"s": 426,
"text": "To adjust the default Linux port of the Proftpd operation, first, open the Proftpd main configuration file for editing with your favorite text editor by issuing the command below."
},
{
"code": null,
"e": 670,
"s": 606,
"text": "Step 1: First open Proftpd main configuration file for editing."
},
{
"code": null,
"e": 703,
"s": 670,
"text": "$ nano /etc/proftpd/proftpd.conf"
},
{
"code": null,
"e": 720,
"s": 703,
"text": "Open config file"
},
{
"code": null,
"e": 765,
"s": 720,
"text": "Step 2: Find the following line port number."
},
{
"code": null,
"e": 773,
"s": 765,
"text": "Port 21"
},
{
"code": null,
"e": 783,
"s": 773,
"text": "Find Port"
},
{
"code": null,
"e": 862,
"s": 783,
"text": "Step 3: And change the FTP default port 21 to a custom port, for example, 210."
},
{
"code": null,
"e": 878,
"s": 862,
"text": "Change the port"
},
{
"code": null,
"e": 991,
"s": 878,
"text": "Step 4: Save and close the file. To take effect the changes, restart the proftpd service using the below command"
},
{
"code": null,
"e": 1019,
"s": 991,
"text": "$ systemctl restart proftpd"
},
{
"code": null,
"e": 1043,
"s": 1019,
"text": "Restart Proftpd service"
},
{
"code": null,
"e": 1124,
"s": 1043,
"text": "Step 5: Check the table of local network sockets with the netstat or ss command."
},
{
"code": null,
"e": 1178,
"s": 1124,
"text": "$ netstat -tlpn| grep nginx\nOR\n$ ss -tlpn| grep nginx"
},
{
"code": null,
"e": 1189,
"s": 1178,
"text": "Check port"
},
{
"code": null,
"e": 1226,
"s": 1189,
"text": "Now your FTP port should be changed."
},
{
"code": null,
"e": 1233,
"s": 1226,
"text": "Picked"
},
{
"code": null,
"e": 1257,
"s": 1233,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 1264,
"s": 1257,
"text": "How To"
},
{
"code": null,
"e": 1275,
"s": 1264,
"text": "Linux-Unix"
},
{
"code": null,
"e": 1294,
"s": 1275,
"text": "Technical Scripter"
},
{
"code": null,
"e": 1392,
"s": 1294,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1441,
"s": 1392,
"text": "How to Set Git Username and Password in GitBash?"
},
{
"code": null,
"e": 1483,
"s": 1441,
"text": "How to Install Jupyter Notebook on MacOS?"
},
{
"code": null,
"e": 1522,
"s": 1483,
"text": "How to Install and Use NVM on Windows?"
},
{
"code": null,
"e": 1576,
"s": 1522,
"text": "How to Install Python Packages for AWS Lambda Layers?"
},
{
"code": null,
"e": 1607,
"s": 1576,
"text": "How to Install Git in VS Code?"
},
{
"code": null,
"e": 1647,
"s": 1607,
"text": "Sed Command in Linux/Unix with examples"
},
{
"code": null,
"e": 1687,
"s": 1647,
"text": "AWK command in Unix/Linux with examples"
},
{
"code": null,
"e": 1714,
"s": 1687,
"text": "grep command in Unix/Linux"
},
{
"code": null,
"e": 1749,
"s": 1714,
"text": "cut command in Linux with examples"
}
] |
How to fetch data from an API in ReactJS ?
|
18 Aug, 2021
ReactJS: ReactJS is a declarative, efficient, and flexible JavaScript library for building user interfaces. It is the ‘V’ in MVC. ReactJS is an open-source, component-based front-end library responsible only for the view layer of the application. It is maintained by Facebook.
API: API is an abbreviation for Application Programming Interface, which is a collection of communication protocols and subroutines used by various programs to communicate with each other. A programmer can make use of various API tools to make their program easier and simpler. Also, an API provides programmers with an efficient way to develop their software.
Approach: In this article, we will learn how to fetch data from an API (Application Programming Interface). For the data, we use the API endpoint http://jsonplaceholder.typicode.com/users; the component is created in App.js and styled in App.css. From the API response we target the “id”, “name”, “username”, and “email” fields. Below is the stepwise implementation of how to fetch data from an API in React using the fetch function.
Step-by-step implementation to fetch data from an API in React:
Step 1: Create a React project.
npx create-react-app MY-APP
Step 2: Change your directory and enter your main project folder.
cd MY-APP
Step 3: Use the following API endpoint.
https://jsonplaceholder.typicode.com/users
Step 4: Write code in App.js to fetch data from the API using the fetch function.
Project Structure: It will look like the following.
Project Structure
Example:
App.js
import React from "react";import './App.css';class App extends React.Component { // Constructor constructor(props) { super(props); this.state = { items: [], DataisLoaded: false }; } // ComponentDidMount is used to // execute the code componentDidMount() { fetch("https://jsonplaceholder.typicode.com/users") .then((res) => res.json()) .then((json) => { this.setState({ items: json, DataisLoaded: true }); }) } render() { const { DataisLoaded, items } = this.state; if (!DataisLoaded) return <div> <h1> Pleses wait some time.... </h1> </div> ; return ( <div className = "App"> <h1> Fetch data from an api in react </h1> { items.map((item) => ( <ol key = { item.id } > User_Name: { item.username }, Full_Name: { item.name }, User_Email: { item.email } </ol> )) } </div> );}} export default App;
Write code in App.css for styling the App.js component.
App.css
.App { text-align: center; color: Green;}.App-header { background-color: #282c34; min-height: 100vh; display: flex; flex-direction: column; align-items: center; justify-content: center; font-size: calc(10px + 2vmin); color: white;}.App-link { color: #61dafb;} @keyframes App-logo-spin { from { transform: rotate(0deg); } to { transform: rotate(360deg); }}
Step to run the application: Open the terminal and type the following command.
npm start
Output: Open the browser at http://localhost:3000/ to see the project.
React-Questions
ReactJS
Web Technologies
Axios in React: A Guide for Beginners
ReactJS setState()
How to pass data from one component to other component in ReactJS ?
Re-rendering Components in ReactJS
ReactJS defaultProps
Installation of Node.js on Linux
Top 10 Projects For Beginners To Practice HTML and CSS Skills
Difference between var, let and const keywords in JavaScript
How to insert spaces/tabs in text using HTML/CSS?
Differences between Functional Components and Class Components in React
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n18 Aug, 2021"
},
{
"code": null,
"e": 324,
"s": 52,
"text": "ReactJS: ReactJS is a declarative, efficient, and flexible JavaScript library for building user interfaces. It’s ‘V’ in MVC. ReactJS is an open-source, component-based front-end library responsible only for the view layer of the application. It is maintained by Facebook."
},
{
"code": null,
"e": 695,
"s": 324,
"text": "API: API is an abbreviation for Application Programming Interface which is a collection of communication protocols and subroutines used by various programs to communicate between them. A programmer can make use of various API tools to make its program easier and simpler. Also, an API facilitates the programmers with an efficient way to develop their software programs."
},
{
"code": null,
"e": 1219,
"s": 695,
"text": "Approach: In this article, we will know how we fetch the data from API (Application Programming Interface). For the data, we have used the API endpoint from http://jsonplaceholder.typicode.com/users we have created the component in App.js and styling the component in App.css. From the API we have target “id”, “name”, “username”, “email” and fetch the data from API endpoints. Below is the stepwise implementation of how we fetch the data from an API in react. We will use the fetch function to get the data from the API."
},
{
"code": null,
"e": 1283,
"s": 1219,
"text": "Step by step implementation to fetch data from an api in react."
},
{
"code": null,
"e": 1341,
"s": 1283,
"text": "Step 1: Create React Project npm create-react-app MY-APP "
},
{
"code": null,
"e": 1371,
"s": 1341,
"text": "Step 1: Create React Project "
},
{
"code": null,
"e": 1400,
"s": 1371,
"text": "npm create-react-app MY-APP "
},
{
"code": null,
"e": 1478,
"s": 1400,
"text": "Step 2: Change your directory and enter your main folder charting ascd MY-APP"
},
{
"code": null,
"e": 1547,
"s": 1478,
"text": "Step 2: Change your directory and enter your main folder charting as"
},
{
"code": null,
"e": 1557,
"s": 1547,
"text": "cd MY-APP"
},
{
"code": null,
"e": 1624,
"s": 1557,
"text": "Step 3: API endpoint https://jsonplaceholder.typicode.com/usersAPI"
},
{
"code": null,
"e": 1646,
"s": 1624,
"text": "Step 3: API endpoint "
},
{
"code": null,
"e": 1689,
"s": 1646,
"text": "https://jsonplaceholder.typicode.com/users"
},
{
"code": null,
"e": 1693,
"s": 1689,
"text": "API"
},
{
"code": null,
"e": 1778,
"s": 1693,
"text": "Step 4: Write code in App.js to fetch data from API and we are using fetch function."
},
{
"code": null,
"e": 1863,
"s": 1778,
"text": "Step 4: Write code in App.js to fetch data from API and we are using fetch function."
},
{
"code": null,
"e": 1910,
"s": 1863,
"text": "Project Structure: It will look the following."
},
{
"code": null,
"e": 1928,
"s": 1910,
"text": "Project Structure"
},
{
"code": null,
"e": 1937,
"s": 1928,
"text": "Example:"
},
{
"code": null,
"e": 1944,
"s": 1937,
"text": "App.js"
},
{
"code": "import React from \"react\";import './App.css';class App extends React.Component { // Constructor constructor(props) { super(props); this.state = { items: [], DataisLoaded: false }; } // ComponentDidMount is used to // execute the code componentDidMount() { fetch(\"https://jsonplaceholder.typicode.com/users\") .then((res) => res.json()) .then((json) => { this.setState({ items: json, DataisLoaded: true }); }) } render() { const { DataisLoaded, items } = this.state; if (!DataisLoaded) return <div> <h1> Pleses wait some time.... </h1> </div> ; return ( <div className = \"App\"> <h1> Fetch data from an api in react </h1> { items.map((item) => ( <ol key = { item.id } > User_Name: { item.username }, Full_Name: { item.name }, User_Email: { item.email } </ol> )) } </div> );}} export default App;",
"e": 3125,
"s": 1944,
"text": null
},
{
"code": null,
"e": 3176,
"s": 3125,
"text": "Write code in App.css for styling the app.js file."
},
{
"code": null,
"e": 3184,
"s": 3176,
"text": "App.css"
},
{
"code": ".App { text-align: center; color: Green;}.App-header { background-color: #282c34; min-height: 100vh; display: flex; flex-direction: column; align-items: center; justify-content: center; font-size: calc(10px + 2vmin); color: white;}.App-link { color: #61dafb;} @keyframes App-logo-spin { from { transform: rotate(0deg); } to { transform: rotate(360deg); }}",
"e": 3600,
"s": 3184,
"text": null
},
{
"code": null,
"e": 3681,
"s": 3600,
"text": "Step to run the application: Open the terminal and type the following command. "
},
{
"code": null,
"e": 3691,
"s": 3681,
"text": "npm start"
},
{
"code": null,
"e": 3775,
"s": 3691,
"text": "Output: Open the browser and our project is shown in the URL http://localhost:3000/"
},
{
"code": null,
"e": 3791,
"s": 3775,
"text": "React-Questions"
},
{
"code": null,
"e": 3799,
"s": 3791,
"text": "ReactJS"
},
{
"code": null,
"e": 3816,
"s": 3799,
"text": "Web Technologies"
},
{
"code": null,
"e": 3914,
"s": 3816,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3952,
"s": 3914,
"text": "Axios in React: A Guide for Beginners"
},
{
"code": null,
"e": 3971,
"s": 3952,
"text": "ReactJS setState()"
},
{
"code": null,
"e": 4039,
"s": 3971,
"text": "How to pass data from one component to other component in ReactJS ?"
},
{
"code": null,
"e": 4074,
"s": 4039,
"text": "Re-rendering Components in ReactJS"
},
{
"code": null,
"e": 4095,
"s": 4074,
"text": "ReactJS defaultProps"
},
{
"code": null,
"e": 4128,
"s": 4095,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 4190,
"s": 4128,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 4251,
"s": 4190,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 4301,
"s": 4251,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Minimum possible travel cost among N cities
|
04 Aug, 2021
There are N cities situated on a straight road, each separated by a distance of 1 unit. You have to reach the (N + 1)th city by boarding a bus. A bus boarded at the ith city costs C[i] dollars per unit of distance travelled. In other words, the cost to travel from the ith city to the jth city is abs(i – j) * C[i] dollars. The task is to find the minimum cost to travel from city 1 to city (N + 1), i.e. beyond the last city.
Examples:
Input: C[] = {3, 5, 4}
Output: 9
The bus boarded at the first city has the minimum cost of all, so it is used for the entire journey.
Input: C[] = {4, 7, 8, 3, 4}
Output: 18
Board the bus at the first city, then change the bus at the fourth city: (3 * 4) + (2 * 3) = 12 + 6 = 18.
Approach: The approach is very simple: always travel by the bus that has the lowest cost so far. Whenever a city offering an even lower cost is found, change buses at that city. Following are the steps to solve the problem:
Start with the first city with a cost of C[1].
Travel to the next city until a city j having cost less than the previous city (by which we are travelling, let’s say city i) is found.
Calculate cost as abs(j – i) * C[i] and add it to the total cost so far.
Repeat the previous steps until all the cities have been traversed.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to return the minimum cost to// travel from the first city to the lastint minCost(vector<int>& cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codeint main(){ vector<int> cost{ 4, 7, 8, 3, 4 }; int n = cost.size(); cout << minCost(cost, n); return 0;}
// Java implementation of the approachclass GFG{ // Function to return the minimum cost to// travel from the first city to the laststatic int minCost(int []cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codepublic static void main(String[] args){ int []cost = { 4, 7, 8, 3, 4 }; int n = cost.length; System.out.print(minCost(cost, n));}} // This code is contributed by PrinciRaj1992
# Python3 implementation of the approach # Function to return the minimum cost to# travel from the first city to the lastdef minCost(cost, n): # To store the total cost totalCost = 0 # Start from the first city boardingBus = 0 for i in range(1, n): # If found any city with cost less than # that of the previous boarded # bus then change the bus if (cost[boardingBus] > cost[i]): # Calculate the cost to travel from # the currently boarded bus # till the current city totalCost += ((i - boardingBus) * cost[boardingBus]) # Update the currently boarded bus boardingBus = i # Finally calculate the cost for the # last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]) return totalCost # Driver codecost = [ 4, 7, 8, 3, 4]n = len(cost) print(minCost(cost, n)) # This code is contributed by Mohit Kumar
// C# implementation of the approachusing System; class GFG{ // Function to return the minimum cost to// travel from the first city to the laststatic int minCost(int []cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codepublic static void Main(String[] args){ int []cost = { 4, 7, 8, 3, 4 }; int n = cost.Length; Console.Write(minCost(cost, n));}} // This code is contributed by 29AjayKumar
<script>// javascript implementation of the approach // Function to return the minimum cost to // travel from the first city to the last function minCost(cost , n) { // To store the total cost var totalCost = 0; // Start from the first city var boardingBus = 0; for (i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost; } // Driver code var cost = [ 4, 7, 8, 3, 4 ]; var n = cost.length; document.write(minCost(cost, n)); // This code contributed by umadevi9616</script>
18
Time Complexity: O(N)
Auxiliary Space: O(1)
mohit kumar 29
princiraj1992
29AjayKumar
umadevi9616
pankajsharmagfg
Algorithms
Greedy
Greedy
Algorithms
What is Hashing | A Complete Tutorial
Find if there is a path between two vertices in an undirected graph
How to Start Learning DSA?
Complete Roadmap To Learn DSA From Scratch
Types of Complexity Classes | P, NP, CoNP, NP hard and NP complete
Dijkstra's shortest path algorithm | Greedy Algo-7
Program for array rotation
Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5
Write a program to print all permutations of a given string
Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n04 Aug, 2021"
},
{
"code": null,
"e": 483,
"s": 54,
"text": "There are N cities situated on a straight road and each is separated by a distance of 1 unit. You have to reach the (N + 1)th city by boarding a bus. The ith city would cost of C[i] dollars to travel 1 unit of distance. In other words, cost to travel from the ith city to the jth city is abs(i – j ) * C[i] dollars. The task is to find the minimum cost to travel from city 1 to city (N + 1) i.e. beyond the last city.Examples: "
},
{
"code": null,
"e": 768,
"s": 483,
"text": "Input: C[] = {3, 5, 4} Output: 9 The bus boarded from the first city has the minimum cost of all so it will be used to travel (N + 1) unit.Input: C[] = {4, 7, 8, 3, 4} Output: 18 Board the bus at the first city then change the bus at the fourth city. (3 * 4) + (2 * 3) = 12 + 6 = 18 "
},
{
"code": null,
"e": 982,
"s": 770,
"text": "Approach: The approach is very simple, just travel by the bus which has the lowest cost so far. Whenever a bus with an even lower cost is found, change the bus from that city. Following are the steps to solve: "
},
{
"code": null,
"e": 1303,
"s": 982,
"text": "Start with the first city with a cost of C[1].Travel to the next city until a city j having cost less than the previous city (by which we are travelling, let’s say city i) is found.Calculate cost as abs(j – i) * C[i] and add it to the total cost so far.Repeat the previous steps until all the cities have been traversed."
},
{
"code": null,
"e": 1350,
"s": 1303,
"text": "Start with the first city with a cost of C[1]."
},
{
"code": null,
"e": 1486,
"s": 1350,
"text": "Travel to the next city until a city j having cost less than the previous city (by which we are travelling, let’s say city i) is found."
},
{
"code": null,
"e": 1559,
"s": 1486,
"text": "Calculate cost as abs(j – i) * C[i] and add it to the total cost so far."
},
{
"code": null,
"e": 1627,
"s": 1559,
"text": "Repeat the previous steps until all the cities have been traversed."
},
{
"code": null,
"e": 1680,
"s": 1627,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 1684,
"s": 1680,
"text": "C++"
},
{
"code": null,
"e": 1689,
"s": 1684,
"text": "Java"
},
{
"code": null,
"e": 1697,
"s": 1689,
"text": "Python3"
},
{
"code": null,
"e": 1700,
"s": 1697,
"text": "C#"
},
{
"code": null,
"e": 1711,
"s": 1700,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to return the minimum cost to// travel from the first city to the lastint minCost(vector<int>& cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codeint main(){ vector<int> cost{ 4, 7, 8, 3, 4 }; int n = cost.size(); cout << minCost(cost, n); return 0;}",
"e": 2804,
"s": 1711,
"text": null
},
{
"code": "// Java implementation of the approachclass GFG{ // Function to return the minimum cost to// travel from the first city to the laststatic int minCost(int []cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codepublic static void main(String[] args){ int []cost = { 4, 7, 8, 3, 4 }; int n = cost.length; System.out.print(minCost(cost, n));}} // This code is contributed by PrinciRaj1992",
"e": 3941,
"s": 2804,
"text": null
},
{
"code": "# Python3 implementation of the approach # Function to return the minimum cost to# travel from the first city to the lastdef minCost(cost, n): # To store the total cost totalCost = 0 # Start from the first city boardingBus = 0 for i in range(1, n): # If found any city with cost less than # that of the previous boarded # bus then change the bus if (cost[boardingBus] > cost[i]): # Calculate the cost to travel from # the currently boarded bus # till the current city totalCost += ((i - boardingBus) * cost[boardingBus]) # Update the currently boarded bus boardingBus = i # Finally calculate the cost for the # last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]) return totalCost # Driver codecost = [ 4, 7, 8, 3, 4]n = len(cost) print(minCost(cost, n)) # This code is contributed by Mohit Kumar",
"e": 4950,
"s": 3941,
"text": null
},
{
"code": "// C# implementation of the approachusing System; class GFG{ // Function to return the minimum cost to// travel from the first city to the laststatic int minCost(int []cost, int n){ // To store the total cost int totalCost = 0; // Start from the first city int boardingBus = 0; for (int i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost;} // Driver codepublic static void Main(String[] args){ int []cost = { 4, 7, 8, 3, 4 }; int n = cost.Length; Console.Write(minCost(cost, n));}} // This code is contributed by 29AjayKumar",
"e": 6136,
"s": 4950,
"text": null
},
{
"code": "<script>// javascript implementation of the approach // Function to return the minimum cost to // travel from the first city to the last function minCost(cost , n) { // To store the total cost var totalCost = 0; // Start from the first city var boardingBus = 0; for (i = 1; i < n; i++) { // If found any city with cost less than // that of the previous boarded // bus then change the bus if (cost[boardingBus] > cost[i]) { // Calculate the cost to travel from // the currently boarded bus // till the current city totalCost += ((i - boardingBus) * cost[boardingBus]); // Update the currently boarded bus boardingBus = i; } } // Finally calculate the cost for the // last boarding bus till the (N + 1)th city totalCost += ((n - boardingBus) * cost[boardingBus]); return totalCost; } // Driver code var cost = [ 4, 7, 8, 3, 4 ]; var n = cost.length; document.write(minCost(cost, n)); // This code contributed by umadevi9616</script>",
"e": 7331,
"s": 6136,
"text": null
},
{
"code": null,
"e": 7334,
"s": 7331,
"text": "18"
},
{
"code": null,
"e": 7379,
"s": 7336,
"text": "Time Complexity: O(N)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 7394,
"s": 7379,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 7408,
"s": 7394,
"text": "princiraj1992"
},
{
"code": null,
"e": 7420,
"s": 7408,
"text": "29AjayKumar"
},
{
"code": null,
"e": 7432,
"s": 7420,
"text": "umadevi9616"
},
{
"code": null,
"e": 7448,
"s": 7432,
"text": "pankajsharmagfg"
},
{
"code": null,
"e": 7459,
"s": 7448,
"text": "Algorithms"
},
{
"code": null,
"e": 7466,
"s": 7459,
"text": "Greedy"
},
{
"code": null,
"e": 7473,
"s": 7466,
"text": "Greedy"
},
{
"code": null,
"e": 7484,
"s": 7473,
"text": "Algorithms"
},
{
"code": null,
"e": 7582,
"s": 7484,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 7620,
"s": 7582,
"text": "What is Hashing | A Complete Tutorial"
},
{
"code": null,
"e": 7688,
"s": 7620,
"text": "Find if there is a path between two vertices in an undirected graph"
},
{
"code": null,
"e": 7715,
"s": 7688,
"text": "How to Start Learning DSA?"
},
{
"code": null,
"e": 7758,
"s": 7715,
"text": "Complete Roadmap To Learn DSA From Scratch"
},
{
"code": null,
"e": 7825,
"s": 7758,
"text": "Types of Complexity Classes | P, NP, CoNP, NP hard and NP complete"
},
{
"code": null,
"e": 7876,
"s": 7825,
"text": "Dijkstra's shortest path algorithm | Greedy Algo-7"
},
{
"code": null,
"e": 7903,
"s": 7876,
"text": "Program for array rotation"
},
{
"code": null,
"e": 7954,
"s": 7903,
"text": "Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5"
},
{
"code": null,
"e": 8014,
"s": 7954,
"text": "Write a program to print all permutations of a given string"
}
] |
MATLAB - The switch Statement
|
A switch block conditionally executes one set of statements from several choices. Each choice is covered by a case statement.
An evaluated switch_expression is a scalar or string.
An evaluated case_expression is a scalar, a string or a cell array of scalars or strings.
The switch block tests each case until one of the cases is true. A case is true when −
For numbers, eq(case_expression,switch_expression).
For strings, strcmp(case_expression,switch_expression).
For objects that support the eq function, eq(case_expression,switch_expression).
For a cell array case_expression, at least one of the elements of the cell array matches switch_expression, as defined above for numbers, strings and objects.
When a case is true, MATLAB executes the corresponding statements and then exits the switch block.
The otherwise block is optional and executes only when no case is true.
The syntax of switch statement in MATLAB is −
switch <switch_expression>
case <case_expression>
<statements>
case <case_expression>
<statements>
...
...
otherwise
<statements>
end
Create a script file and type the following code in it −
grade = 'B';
switch(grade)
case 'A'
fprintf('Excellent!\n' );
case 'B'
fprintf('Well done\n' );
case 'C'
fprintf('Well done\n' );
case 'D'
fprintf('You passed\n' );
case 'F'
fprintf('Better try again\n' );
otherwise
fprintf('Invalid grade\n' );
end
When you run the file, it displays −
Well done
|
[
{
"code": null,
"e": 2401,
"s": 2275,
"text": "A switch block conditionally executes one set of statements from several choices. Each choice is covered by a case statement."
},
{
"code": null,
"e": 2455,
"s": 2401,
"text": "An evaluated switch_expression is a scalar or string."
},
{
"code": null,
"e": 2545,
"s": 2455,
"text": "An evaluated case_expression is a scalar, a string or a cell array of scalars or strings."
},
{
"code": null,
"e": 2632,
"s": 2545,
"text": "The switch block tests each case until one of the cases is true. A case is true when −"
},
{
"code": null,
"e": 2684,
"s": 2632,
"text": "For numbers, eq(case_expression,switch_expression)."
},
{
"code": null,
"e": 2736,
"s": 2684,
"text": "For numbers, eq(case_expression,switch_expression)."
},
{
"code": null,
"e": 2792,
"s": 2736,
"text": "For strings, strcmp(case_expression,switch_expression)."
},
{
"code": null,
"e": 2848,
"s": 2792,
"text": "For strings, strcmp(case_expression,switch_expression)."
},
{
"code": null,
"e": 2916,
"s": 2848,
"text": "For objects that support the eq(case_expression,switch_expression)."
},
{
"code": null,
"e": 2984,
"s": 2916,
"text": "For objects that support the eq(case_expression,switch_expression)."
},
{
"code": null,
"e": 3143,
"s": 2984,
"text": "For a cell array case_expression, at least one of the elements of the cell array matches switch_expression, as defined above for numbers, strings and objects."
},
{
"code": null,
"e": 3302,
"s": 3143,
"text": "For a cell array case_expression, at least one of the elements of the cell array matches switch_expression, as defined above for numbers, strings and objects."
},
{
"code": null,
"e": 3401,
"s": 3302,
"text": "When a case is true, MATLAB executes the corresponding statements and then exits the switch block."
},
{
"code": null,
"e": 3473,
"s": 3401,
"text": "The otherwise block is optional and executes only when no case is true."
},
{
"code": null,
"e": 3519,
"s": 3473,
"text": "The syntax of switch statement in MATLAB is −"
},
{
"code": null,
"e": 3693,
"s": 3519,
"text": "switch <switch_expression>\n case <case_expression>\n <statements>\n case <case_expression>\n <statements>\n ...\n ...\n otherwise\n <statements>\nend\n"
},
{
"code": null,
"e": 3750,
"s": 3693,
"text": "Create a script file and type the following code in it −"
},
{
"code": null,
"e": 4063,
"s": 3750,
"text": "grade = 'B';\n switch(grade)\n case 'A' \n fprintf('Excellent!\\n' );\n case 'B' \n fprintf('Well done\\n' );\n case 'C' \n fprintf('Well done\\n' );\n case 'D'\n fprintf('You passed\\n' );\n case 'F' \n fprintf('Better try again\\n' );\n otherwise\n fprintf('Invalid grade\\n' );\n end"
}
] |
Kotlin annotations
|
09 Sep, 2021
Annotations are a feature of Kotlin that allows the programmer to embed supplemental information into the source file. This information, however, does not change the actions of the program; it is used by various tools during both development and deployment. Annotation parameters are most frequently of the following types, and they must be compile-time constants:
primitive types(Int,Long etc)
strings
enumerations
class
other annotations
arrays of the above-mentioned types
We can apply an annotation by putting its name, prefixed with the @ symbol, in front of a code element. For example, to apply an annotation named Positive, we write:
@Positive val i: Int
Parameters can be passed in parentheses to an annotation, similar to a function call.
@Allowedlanguage("Kotlin")
When an annotation is passed as a parameter to another annotation, we should omit the @ symbol. Here we have passed the ReplaceWith() annotation as a parameter.
@Deprecated("This function is deprecated, use === instead", ReplaceWith("this === other"))
When an annotation parameter is a class object, we should add ::class to the class name as:
@Throws(IOException::class)
To declare an annotation, the class keyword is prefixed with the annotation keyword. By their nature, annotation declarations cannot contain any code. While declaring our custom annotations, we should specify which code elements they may apply to and where they should be stored. The simplest annotation contains no parameters –
annotation class MyClass
An annotation that requires parameters is declared much like a class with a primary constructor –
annotation class Suffix(val s: String)
We can also annotate the constructor of a class. It can be done by using the constructor keyword for constructor declaration and placing the annotation before it.
class MyClass@Inject constructor(dependency: MyDependency) {
//. . .
}
We can annotate the properties of a class by adding an annotation to them. In the below example, we assume that a Lang instance is valid if the value of name is either Kotlin or Java.
class Lang(
    @Allowedlanguages(["Java", "Kotlin"]) val name: String
)
Kotlin also provides certain in-built annotations that are used to provide more attributes to user-defined annotations; to be precise, these annotations are used to annotate annotations. @Target – This annotation specifies the places where the annotated annotation can be applied, such as classes, functions, constructors, type parameters, etc. When an annotation is applied to the primary constructor of a class, the constructor keyword is specified before the constructor. Example to demonstrate the @Target annotation:
Kotlin
@Target(AnnotationTarget.CONSTRUCTOR, AnnotationTarget.LOCAL_VARIABLE)annotation class AnnotationDemo2 class ABC @AnnotationDemo2 constructor(val count:Int){ fun display(){ println("Constructor annotated") println("Count is $count") }}fun main(){ val obj = ABC(5) obj.display() @AnnotationDemo2 val message: String message = "Hello" println("Local parameter annotated") println(message)}
Output:
Constructor annotated
Count is 5
Local parameter annotated
Hello
@Retention – This annotation specifies the availability of the annotated annotation, i.e., whether the annotation remains in the source file, is available at runtime, etc. Its required parameter must be an instance of the AnnotationRetention enumeration, which has the following elements:
SOURCE
BINARY
RUNTIME
Example to demonstrate @Retention annotation:
Kotlin
//Specifying an annotation with runtime policy@Retention(AnnotationRetention.RUNTIME)annotation class AnnotationDemo3 @AnnotationDemo3 fun main(){ println("Main function annotated")}
Output:
Main function annotated
@Repeatable – This annotation allows an element to be annotated with the same annotation multiple times. As per the current version of Kotlin 1.3, this annotation can only be used with the Retention Policy set to SOURCE. Example to demonstrate @Repeatable
Kotlin
@Repeatable@Retention(AnnotationRetention.SOURCE)annotation class AnnotationDemo4 (val value: Int) @AnnotationDemo4(4)@AnnotationDemo4(5)fun main(){ println("Repeatable Annotation applied on main")}
Output:
Repeatable Annotation applied on main
rajeev0719singh
Picked
Kotlin
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n09 Sep, 2021"
},
{
"code": null,
"e": 406,
"s": 28,
"text": "Annotations are a feature of Kotlin that allows the programmer to embed supplemental information into the source file. This information, however, does not changes the actions of the program. This information is used by various tools during both development and deployment.Annotations contains the following parameters most frequently and these must be compile-time constants: "
},
{
"code": null,
"e": 436,
"s": 406,
"text": "primitive types(Int,Long etc)"
},
{
"code": null,
"e": 444,
"s": 436,
"text": "strings"
},
{
"code": null,
"e": 457,
"s": 444,
"text": "enumerations"
},
{
"code": null,
"e": 463,
"s": 457,
"text": "class"
},
{
"code": null,
"e": 481,
"s": 463,
"text": "other annotations"
},
{
"code": null,
"e": 517,
"s": 481,
"text": "arrays of the above-mentioned types"
},
{
"code": null,
"e": 747,
"s": 519,
"text": "We can apply annotation by putting its name prefixed with the @ symbol in front of a code element. For example, if we want to apply an annotation named Positive, we should write the following if we want to write annotation Pos "
},
{
"code": null,
"e": 768,
"s": 747,
"text": "@Positive val i: Int"
},
{
"code": null,
"e": 854,
"s": 768,
"text": "A parameter can be passed in parenthesis to an annotation similar to function call. "
},
{
"code": null,
"e": 881,
"s": 854,
"text": "@Allowedlanguage(\"Kotlin\")"
},
{
"code": null,
"e": 1041,
"s": 881,
"text": "When an annotation is passed as parameter in another annotation, then we should omit the @ symbol. Here we have passed Replacewith() annotation as parameter. "
},
{
"code": null,
"e": 1132,
"s": 1041,
"text": "@Deprecated(\"This function is deprecated, use === instead\", ReplaceWith(\"this === other\"))"
},
{
"code": null,
"e": 1226,
"s": 1132,
"text": "When an annotation parameter is a class object, we should add ::class to the class name as: "
},
{
"code": null,
"e": 1254,
"s": 1226,
"text": "@Throws(IOException::class)"
},
{
"code": null,
"e": 1592,
"s": 1256,
"text": "To declare an annotation, the class keyword is prefixed with the annotation keyword. By their nature, declarations of annotation cannot contain any code. While declaring our custom annotations, we should specify to which code elements they might apply and where they should be stored. The simplest annotation contains no parameters – "
},
{
"code": null,
"e": 1617,
"s": 1592,
"text": "annotation class MyClass"
},
{
"code": null,
"e": 1713,
"s": 1617,
"text": "An annotation that requires parameter is much similar to a class with a primary constructor – "
},
{
"code": null,
"e": 1752,
"s": 1713,
"text": "annotation class Suffix(val s: String)"
},
{
"code": null,
"e": 1919,
"s": 1754,
"text": "We can also annotate the constructor of a class. It can be done by using the constructor keyword for constructor declaration and placing the annotation before it. "
},
{
"code": null,
"e": 1995,
"s": 1919,
"text": "class MyClass@Inject constructor(dependency: MyDependency) { \n//. . . \n}"
},
{
"code": null,
"e": 2192,
"s": 1997,
"text": "We can annotate the properties of class by adding an annotation to the properties. In below example, we assume that an Lang instance is valid if the value of the name is either Kotlin or Java. "
},
{
"code": null,
"e": 2266,
"s": 2192,
"text": "class Lang (\n @Allowedlanguages([\"Java\",\"Kotlin\"]) val name: String)\n}"
},
{
"code": null,
"e": 2788,
"s": 2268,
"text": "Kotlin also provides certain in-built annotations, that are used to provide more attributes to user-defined annotations. To be precise, these annotations are used to annotate annotations. @Target – This annotation specifies the places where the annotated annotation can be applied such as classes, functions, constructors, type parameters, etc. When an annotation is applied to the primary constructor for a class, the constructor keyword is specified before the constructor. Example to demonstrate @Target annotation "
},
{
"code": null,
"e": 2793,
"s": 2788,
"text": "Java"
},
{
"code": "@Target(AnnotationTarget.CONSTRUCTOR, AnnotationTarget.LOCAL_VARIABLE)annotation class AnnotationDemo2 class ABC @AnnotationDemo2 constructor(val count:Int){ fun display(){ println(\"Constructor annotated\") println(\"Count is $count\") }}fun main(){ val obj = ABC(5) obj.display() @AnnotationDemo2 val message: String message = \"Hello\" println(\"Local parameter annotated\") println(message)}",
"e": 3220,
"s": 2793,
"text": null
},
{
"code": null,
"e": 3230,
"s": 3220,
"text": "Output: "
},
{
"code": null,
"e": 3295,
"s": 3230,
"text": "Constructor annotated\nCount is 5\nLocal parameter annotated\nHello"
},
{
"code": null,
"e": 3587,
"s": 3295,
"text": "@Retention – This annotation specifies the availability of the annotated annotation i.e whether the annotation remains in the source file, or it is available at runtime, etc. Its required parameter must be an instance of the AnnotationRetention enumeration that has the following elements: "
},
{
"code": null,
"e": 3594,
"s": 3587,
"text": "SOURCE"
},
{
"code": null,
"e": 3601,
"s": 3594,
"text": "BINARY"
},
{
"code": null,
"e": 3609,
"s": 3601,
"text": "RUNTIME"
},
{
"code": null,
"e": 3657,
"s": 3609,
"text": "Example to demonstrate @Retention annotation: "
},
{
"code": null,
"e": 3662,
"s": 3657,
"text": "Java"
},
{
"code": "//Specifying an annotation with runtime policy@Retention(AnnotationRetention.RUNTIME)annotation class AnnotationDemo3 @AnnotationDemo3 fun main(){ println(\"Main function annotated\")}",
"e": 3848,
"s": 3662,
"text": null
},
{
"code": null,
"e": 3858,
"s": 3848,
"text": "Output: "
},
{
"code": null,
"e": 3882,
"s": 3858,
"text": "Main function annotated"
},
{
"code": null,
"e": 4140,
"s": 3882,
"text": "@Repeatable – This annotation allows an element to be annotated with the same annotation multiple times. As per the current version of Kotlin 1.3, this annotation can only be used with the Retention Policy set to SOURCE. Example to demonstrate @Repeatable "
},
{
"code": null,
"e": 4145,
"s": 4140,
"text": "Java"
},
{
"code": "@Repeatable@Retention(AnnotationRetention.SOURCE)annotation class AnnotationDemo4 (val value: Int) @AnnotationDemo4(4)@AnnotationDemo4(5)fun main(){ println(\"Repeatable Annotation applied on main\")}",
"e": 4347,
"s": 4145,
"text": null
},
{
"code": null,
"e": 4357,
"s": 4347,
"text": "Output: "
},
{
"code": null,
"e": 4395,
"s": 4357,
"text": "Repeatable Annotation applied on main"
},
{
"code": null,
"e": 4413,
"s": 4397,
"text": "rajeev0719singh"
},
{
"code": null,
"e": 4420,
"s": 4413,
"text": "Picked"
},
{
"code": null,
"e": 4427,
"s": 4420,
"text": "Kotlin"
}
] |
Java String Class strip() Method With Examples
|
02 Feb, 2021
Java String Class strip() method returns a string with all leading and trailing white space removed. This method is similar to the String.trim() method. There are basically 3 methods by which we can remove white space in different ways.
strip() Method: It returns a string, with all leading and trailing white space removed
Syntax:
String strippedString = string.strip()
stripLeading() Method: It returns a string, with all leading white space removed.
Syntax:
String leadingStripString = string.stripLeading()
stripTrailing() Method: It returns a string, with all trailing white space removed.
Syntax:
String trailedStripString = string.stripTrailing()
Example:
Java
// Java Program to demonstrate the use of the strip(),
// stripLeading() and stripTrailing() methods

public class GFG {
    public static void main(String[] args)
    {
        String str = " Geeks For Geeks Internship ! ";

        // removing leading and trailing white spaces
        System.out.println(str.strip());

        // removing leading white spaces
        System.out.println(str.stripLeading());

        // removing trailing white spaces
        System.out.println(str.stripTrailing());
    }
}
Output:
Geeks For Geeks Internship !
Geeks For Geeks Internship !
Geeks For Geeks Internship !
Java-Strings
Picked
Technical Scripter 2020
Java
Java Programs
Technical Scripter
Java-Strings
Java
Stream In Java
Introduction to Java
Constructors in Java
Exceptions in Java
Generics in Java
Java Programming Examples
Convert Double to Integer in Java
Implementing a Linked List in Java using Class
Factory method design pattern in Java
Java Program to Remove Duplicate Elements From the Array
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n02 Feb, 2021"
},
{
"code": null,
"e": 294,
"s": 28,
"text": "Java String Class strip() method returns a string that provides a string with all leading and trailing white spaces removed. This method is similar to the String.trim() method. There are basically 3 methods by which we can remove white spaces in a different manner."
},
{
"code": null,
"e": 381,
"s": 294,
"text": "strip() Method: It returns a string, with all leading and trailing white space removed"
},
{
"code": null,
"e": 389,
"s": 381,
"text": "Syntax:"
},
{
"code": null,
"e": 428,
"s": 389,
"text": "String strippedString = string.strip()"
},
{
"code": null,
"e": 510,
"s": 428,
"text": "stripLeading() Method: It returns a string, with all leading white space removed."
},
{
"code": null,
"e": 518,
"s": 510,
"text": "Syntax:"
},
{
"code": null,
"e": 568,
"s": 518,
"text": "String leadingStripString = string.stripLeading()"
},
{
"code": null,
"e": 652,
"s": 568,
"text": "stripTrailing() Method: It returns a string, with all trailing white space removed."
},
{
"code": null,
"e": 660,
"s": 652,
"text": "Syntax:"
},
{
"code": null,
"e": 711,
"s": 660,
"text": "String trailedStripString = string.stripTrailing()"
},
{
"code": null,
"e": 720,
"s": 711,
"text": "Example:"
},
{
"code": null,
"e": 725,
"s": 720,
"text": "Java"
},
{
"code": "// Java Program to demonstrate the use of the strip()// method,stripLeading() method,stripTrailing() method public class GFG { public static void main(String[] args) { String str = \" Geeks For Geeks Internship ! \"; // removing leading and // trailing white spaces System.out.println(str.strip()); // removing leading white spaces System.out.println(str.stripLeading()); // removing trailing white spaces System.out.println(str.stripTrailing()); }}",
"e": 1268,
"s": 725,
"text": null
},
{
"code": null,
"e": 1276,
"s": 1268,
"text": "Output:"
},
{
"code": null,
"e": 1383,
"s": 1276,
"text": "Geeks For Geeks Internship !\nGeeks For Geeks Internship ! \n Geeks For Geeks Internship !"
},
{
"code": null,
"e": 1396,
"s": 1383,
"text": "Java-Strings"
},
{
"code": null,
"e": 1403,
"s": 1396,
"text": "Picked"
},
{
"code": null,
"e": 1427,
"s": 1403,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 1432,
"s": 1427,
"text": "Java"
},
{
"code": null,
"e": 1446,
"s": 1432,
"text": "Java Programs"
},
{
"code": null,
"e": 1465,
"s": 1446,
"text": "Technical Scripter"
},
{
"code": null,
"e": 1478,
"s": 1465,
"text": "Java-Strings"
},
{
"code": null,
"e": 1483,
"s": 1478,
"text": "Java"
},
{
"code": null,
"e": 1581,
"s": 1483,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1596,
"s": 1581,
"text": "Stream In Java"
},
{
"code": null,
"e": 1617,
"s": 1596,
"text": "Introduction to Java"
},
{
"code": null,
"e": 1638,
"s": 1617,
"text": "Constructors in Java"
},
{
"code": null,
"e": 1657,
"s": 1638,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 1674,
"s": 1657,
"text": "Generics in Java"
},
{
"code": null,
"e": 1700,
"s": 1674,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 1734,
"s": 1700,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 1781,
"s": 1734,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 1819,
"s": 1781,
"text": "Factory method design pattern in Java"
}
] |
GATE | GATE-CS-2001 | Question 27
|
28 Jun, 2021
Consider the following statements:
S1: There exists infinite sets A, B, C such that
A ∩ (B ∪ C) is finite.
S2: There exists two irrational numbers x and y such
that (x+y) is rational.
Which of the following is true about S1 and S2?
(A) Only S1 is correct
(B) Only S2 is correct
(C) Both S1 and S2 are correct
(D) None of S1 and S2 is correct
Answer: (C)
Explanation:
S1: A ∩ (B ∪ C) can be finite even when A, B and C are infinite. We'll prove this by taking an example.
Let A = {set of all even numbers} = {2, 4, 6, 8, 10, ...}
Let B = {set of all odd numbers} = {1, 3, 5, 7, ...}
Let C = {set of all prime numbers} = {2, 3, 5, 7, 11, 13, ...}
B ∪ C = {1, 2, 3, 5, 7, 9, 11, 13, ...}
A ∩ (B ∪ C) = {2}, which is finite.
That is, with A, B and C as infinite sets, A ∩ (B ∪ C) is finite, so statement S1 is correct.
S2: There exist two irrational numbers x and y such that (x + y) is rational. To prove this statement, we take an example.
Let x = 2 - sqrt(3) and y = 2 + sqrt(3); both x and y are irrational.
x + y = (2 - sqrt(3)) + (2 + sqrt(3)) = 4, which is rational.
So, statement S2 is also correct.
Hence the answer is option (C).
Both Statements S1, S2 are correct.
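For a quick numerical illustration (a bounded check only, not a proof; the limit N and the helper function below are chosen just for this sketch):

#include <cmath>
#include <iostream>

// returns true when n is a prime number
bool isPrime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    const int N = 100; // bounded check up to N only
    // S1: A = even numbers, B = odd numbers, C = prime numbers
    int count = 0;
    for (int n = 1; n <= N; ++n)
        if (n % 2 == 0 && (n % 2 != 0 || isPrime(n))) // n in A and n in (B union C)
            ++count;
    std::cout << "Size of A intersect (B union C) up to " << N << " is " << count << std::endl; // 1, i.e. just {2}

    // S2: x = 2 - sqrt(3) and y = 2 + sqrt(3) are irrational, yet their sum is rational
    double x = 2.0 - std::sqrt(3.0), y = 2.0 + std::sqrt(3.0);
    std::cout << "x + y = " << (x + y) << std::endl; // 4
    return 0;
}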
This solution is contributed by Anil Saikrishna Devarasetty.
GATE-CS-2001
GATE-GATE-CS-2001
GATE
GATE | GATE-CS-2014-(Set-2) | Question 65
GATE | Sudo GATE 2020 Mock I (27 December 2019) | Question 33
GATE | GATE-CS-2014-(Set-3) | Question 20
GATE | GATE CS 2008 | Question 46
GATE | GATE CS 2008 | Question 40
GATE | GATE-CS-2015 (Set 3) | Question 65
GATE | GATE-CS-2014-(Set-3) | Question 65
GATE | GATE CS 2011 | Question 49
GATE | GATE-CS-2004 | Question 31
GATE | GATE CS 1996 | Question 63
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n28 Jun, 2021"
},
{
"code": null,
"e": 63,
"s": 28,
"text": "Consider the following statements:"
},
{
"code": null,
"e": 221,
"s": 63,
"text": "S1: There exists infinite sets A, B, C such that \n A ∩ (B ∪ C) is finite.\nS2: There exists two irrational numbers x and y such\n that (x+y) is rational."
},
{
"code": null,
"e": 1126,
"s": 221,
"text": "Which of the following is true about S1 and S2?(A) Only S1 is correct(B) Only S2 is correct(C) Both S1 and S2 are correct(D) None of S1 and S2 is correctAnswer: (C)Explanation: S1: A ∩ (B ∪ C)Here S1 is finite where A, B, C are infiniteWe’ll prove this by taking an example.Let A = {Set of all even numbers} = {2, 4, 6, 8, 10...}Let B = {Set of all odd numbers} = {1, 3, 5, 7...........}Let C = {Set of all prime numbers} = {2, 3, 5, 7, 11, 13......}B U C = {1, 2, 3, 5, 7, 9, 11, 13......}A ∩ (B ∪ C)Willbe equals to: {2} which is finite.I.e. using A, B, C as infinite sets the statement S1 is finite.So, statement S1 is correct.S2: There exists two irrational numbers x, y such that (x+y) is rationalTo prove this statement as correct, we take an example.Let X = 2-Sqrt (3), Y = 2+Sqrt (3) => X, Y are irrationalX+Y = 2+Sqrt (3) + 2-Sqrt (3) = 2+2 = 4So, statement S2 is also correct.Answer is Option C"
},
{
"code": null,
"e": 1162,
"s": 1126,
"text": "Both Statements S1, S2 are correct."
},
{
"code": null,
"e": 1246,
"s": 1164,
"text": "This solution is contributed by Anil Saikrishna Devarasetty.Quiz of this Question"
},
{
"code": null,
"e": 1259,
"s": 1246,
"text": "GATE-CS-2001"
},
{
"code": null,
"e": 1277,
"s": 1259,
"text": "GATE-GATE-CS-2001"
},
{
"code": null,
"e": 1282,
"s": 1277,
"text": "GATE"
},
{
"code": null,
"e": 1380,
"s": 1282,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1422,
"s": 1380,
"text": "GATE | GATE-CS-2014-(Set-2) | Question 65"
},
{
"code": null,
"e": 1484,
"s": 1422,
"text": "GATE | Sudo GATE 2020 Mock I (27 December 2019) | Question 33"
},
{
"code": null,
"e": 1526,
"s": 1484,
"text": "GATE | GATE-CS-2014-(Set-3) | Question 20"
},
{
"code": null,
"e": 1560,
"s": 1526,
"text": "GATE | GATE CS 2008 | Question 46"
},
{
"code": null,
"e": 1594,
"s": 1560,
"text": "GATE | GATE CS 2008 | Question 40"
},
{
"code": null,
"e": 1636,
"s": 1594,
"text": "GATE | GATE-CS-2015 (Set 3) | Question 65"
},
{
"code": null,
"e": 1678,
"s": 1636,
"text": "GATE | GATE-CS-2014-(Set-3) | Question 65"
},
{
"code": null,
"e": 1712,
"s": 1678,
"text": "GATE | GATE CS 2011 | Question 49"
},
{
"code": null,
"e": 1746,
"s": 1712,
"text": "GATE | GATE-CS-2004 | Question 31"
}
] |
Python | Scramble strings in list
|
29 Nov, 2019
Sometimes, while working with different applications, we may come across a problem in which we need to shuffle all the strings in the input list. This kind of problem can occur particularly in the gaming domain. Let's discuss certain ways in which this problem can be solved.
Method #1 : Using list comprehension + sample() + join()
The combination of the above functions can be used to solve this problem. In this, we break each string into a list of characters, scramble it using sample(), rejoin the characters using join() and then rebuild the list using list comprehension.
# Python3 code to demonstrate working of
# Scramble strings in list
# using list comprehension + sample() + join()
from random import sample

# initialize list
test_list = ['gfg', 'is', 'best', 'for', 'geeks']

# printing original list
print("The original list : " + str(test_list))

# Scramble strings in list
# using list comprehension + sample() + join()
res = [''.join(sample(ele, len(ele))) for ele in test_list]

# printing result
print("Scrambled strings in lists are : " + str(res))
The original list : ['gfg', 'is', 'best', 'for', 'geeks']
Scrambled strings in lists are : ['fgg', 'is', 'btse', 'rof', 'sgeke']
Method #2 : Using list comprehension + shuffle() + join()
This is similar to the above method. The only difference is that shuffle() is used to perform the scramble task instead of sample().
# Python3 code to demonstrate working of
# Scramble strings in list
# using list comprehension + shuffle() + join()
from random import shuffle

# Utility function
def perform_scramble(ele):
    ele = list(ele)
    shuffle(ele)
    return ''.join(ele)

# initialize list
test_list = ['gfg', 'is', 'best', 'for', 'geeks']

# printing original list
print("The original list : " + str(test_list))

# Scramble strings in list
# using list comprehension + shuffle() + join()
res = [perform_scramble(ele) for ele in test_list]

# printing result
print("Scrambled strings in lists are : " + str(res))
The original list : ['gfg', 'is', 'best', 'for', 'geeks']
Scrambled strings in lists are : ['fgg', 'is', 'btse', 'rof', 'sgeke']
Python list-programs
Python
Python Programs
Rotate axis tick labels in Seaborn and Matplotlib
Enumerate() in Python
Deque in Python
Stack in Python
sum() function in Python
Defaultdict in Python
Python | Split string into list of characters
Python | Get dictionary keys as a list
Iterate over characters of a string in Python
Python | Convert set into a list
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Nov, 2019"
},
{
"code": null,
"e": 310,
"s": 28,
"text": "Sometimes, while working with different applications, we can come across a problem in which we require to shuffle all the strings in the list input we get. This kind of problem can particularly occur in gaming domain. Let’s discuss certain ways in which this problem can be solved."
},
{
"code": null,
"e": 594,
"s": 310,
"text": "Method #1 : Using list comprehension + sample() + join()The combination of above functions can be used to solve this problem. In this, we need to disintegrate string into character list, scramble using sample(), rejoin them using join() and then remake list using list comprehension."
},
{
"code": "# Python3 code to demonstrate working of# Scramble strings in list# using list comprehension + sample() + join()from random import sample # initialize list test_list = ['gfg', 'is', 'best', 'for', 'geeks'] # printing original list print(\"The original list : \" + str(test_list)) # Scramble strings in list# using list comprehension + sample() + join()res = [''.join(sample(ele, len(ele))) for ele in test_list] # printing resultprint(\"Scrambled strings in lists are : \" + str(res))",
"e": 1080,
"s": 594,
"text": null
},
{
"code": null,
"e": 1210,
"s": 1080,
"text": "The original list : ['gfg', 'is', 'best', 'for', 'geeks']\nScrambled strings in lists are : ['fgg', 'is', 'btse', 'rof', 'sgeke']\n"
},
{
"code": null,
"e": 1398,
"s": 1212,
"text": "Method #2 : Using list comprehension + shuffle() + join()This is similar to the above method. The only difference is that shuffle() is used to perform scramble task than using sample()."
},
{
"code": "# Python3 code to demonstrate working of# Scramble strings in list# using list comprehension + shuffle() + join()from random import shuffle # Utility function def perform_scramble(ele): ele = list(ele) shuffle(ele) return ''.join(ele) # initialize list test_list = ['gfg', 'is', 'best', 'for', 'geeks'] # printing original list print(\"The original list : \" + str(test_list)) # Scramble strings in list# using list comprehension + shuffle() + join()res = [perform_scramble(ele) for ele in test_list] # printing resultprint(\"Scrambled strings in lists are : \" + str(res))",
"e": 1982,
"s": 1398,
"text": null
},
{
"code": null,
"e": 2112,
"s": 1982,
"text": "The original list : ['gfg', 'is', 'best', 'for', 'geeks']\nScrambled strings in lists are : ['fgg', 'is', 'btse', 'rof', 'sgeke']\n"
},
{
"code": null,
"e": 2133,
"s": 2112,
"text": "Python list-programs"
},
{
"code": null,
"e": 2140,
"s": 2133,
"text": "Python"
},
{
"code": null,
"e": 2156,
"s": 2140,
"text": "Python Programs"
},
{
"code": null,
"e": 2254,
"s": 2156,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2304,
"s": 2254,
"text": "Rotate axis tick labels in Seaborn and Matplotlib"
},
{
"code": null,
"e": 2326,
"s": 2304,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 2342,
"s": 2326,
"text": "Deque in Python"
},
{
"code": null,
"e": 2358,
"s": 2342,
"text": "Stack in Python"
},
{
"code": null,
"e": 2383,
"s": 2358,
"text": "sum() function in Python"
},
{
"code": null,
"e": 2405,
"s": 2383,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 2451,
"s": 2405,
"text": "Python | Split string into list of characters"
},
{
"code": null,
"e": 2490,
"s": 2451,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 2536,
"s": 2490,
"text": "Iterate over characters of a string in Python"
}
] |
How to change the playing speed of videos in HTML5?
|
The playbackRate property sets the current playback speed of the audio/video. Similarly, the defaultPlaybackRate property sets the default playback speed of the audio/video.
You can try to run the following code to change the playing speed of videos −
/* play video thrice as fast */
document.querySelector('video').defaultPlaybackRate = 3.0;
document.querySelector('video').play();
The following is to play the video twice as fast −
/* play video twice as fast */
document.querySelector('video').playbackRate = 2;
|
[
{
"code": null,
"e": 1240,
"s": 1062,
"text": "Use the playbackRate property sets the current playback speed of the audio/video. With that, the defaultPlaybackRate property sets the default playback speed of the audio/video."
},
{
"code": null,
"e": 1318,
"s": 1240,
"text": "You can try to run the following code to change the playing speed of videos −"
},
{
"code": null,
"e": 1449,
"s": 1318,
"text": "/* play video thrice as fast */\ndocument.querySelector('video').defaultPlaybackRate = 3.0;\ndocument.querySelector('video').play();"
},
{
"code": null,
"e": 1500,
"s": 1449,
"text": "The following is to play the video twice as fast −"
},
{
"code": null,
"e": 1581,
"s": 1500,
"text": "/* play video twice as fast */\ndocument.querySelector('video').playbackRate = 2;"
}
] |
Assigning an integer to float and comparison in C/C++
|
An integer is a data type used to define a number that can hold only whole values: positive, negative or zero. It cannot have decimals.
Float is a data type used to define a number that can have a fractional value, i.e. it can have decimals.
Now, we will check what values the float and the integer hold when we assign the same value to both.
Live Demo
#include <iostream>
using namespace std;
int main(){
float f = 23;
unsigned int x = 23;
cout<<"Float f = "<<f<<endl;
cout<<"Integer x = "<<x<<endl;
f = 0xffffffff;
x = 0xffffffff;
cout << "f = " << f << endl;
cout << "x = " << x << endl;
return 0;
}
Float f = 23
Integer x = 23
f = 4.29497e+09
x = 4294967295
Here we can see that when the whole number 23 is assigned to a float, it stores and prints the same value as the integer. For 0xffffffff (4294967295), however, the two outputs differ: a float has only about 24 bits of precision, so the value is rounded to the nearest representable float (4294967296) and printed in scientific notation as 4.29497e+09, while the unsigned int holds the exact value 4294967295.
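As a minimal sketch of the same point, printing the float with fixed notation instead of the default scientific notation makes the rounding visible −

#include <iostream>
#include <iomanip>
using namespace std;
int main(){
   unsigned int x = 0xffffffff; // 4294967295, stored exactly
   float f = x;                 // rounded to the nearest representable float
   cout << fixed << setprecision(0);
   cout << "x = " << x << endl; // prints 4294967295
   cout << "f = " << f << endl; // prints 4294967296
   return 0;
}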
Now, let's see what happens if we initialize an integer variable with a float value.
Live Demo
#include <iostream>
using namespace std;
int main(){
float f = 23.768;
unsigned int x = 23.768;
cout<<"Float f = "<<f<<endl;
cout<<"Integer x = "<<x<<endl;
return 0;
}
Float f = 23.768
Integer x = 23
In this case too, the program compiles and runs. The integer variable discards the fractional part of the initializing float value and is initialized with only its integer part.
Now, let's compare these values. Since the unsigned int operand is converted to float before the comparison, both sides hold the same rounded value, so the comparison succeeds −
Live Demo
#include <iostream>
using namespace std;
int main(){
float f = 0xffffffff;
unsigned int x = 0xffffffff;
if(f == x ){
cout<<"TRUE";
}
else
cout<<"FALSE";
return 0;
}
TRUE
|
[
{
"code": null,
"e": 1209,
"s": 1062,
"text": "The integer is a data type used to define a number that contains all positive, negative or zero non-fractional values. These cannot have decimals."
},
{
"code": null,
"e": 1313,
"s": 1209,
"text": "Float is a data type used to define a number that has a fractional value. These can have decimals also."
},
{
"code": null,
"e": 1438,
"s": 1313,
"text": "Now, we will check what will be the value of float and integer return by the compiler when we input the same value for both."
},
{
"code": null,
"e": 1449,
"s": 1438,
"text": " Live Demo"
},
{
"code": null,
"e": 1726,
"s": 1449,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n float f = 23;\n unsigned int x = 23;\n cout<<\"Float f = \"<<f<<endl;\n cout<<\"Integer x = \"<<x<<endl;\n f = 0xffffffff;\n x = 0xffffffff;\n cout << \"f = \" << f << endl;\n cout << \"x = \" << x << endl;\n return 0;\n}"
},
{
"code": null,
"e": 1785,
"s": 1726,
"text": "Float f = 23\nInteger x = 23\nf = 4.29497e+09\nx = 4294967295"
},
{
"code": null,
"e": 1988,
"s": 1785,
"text": "Here in this code, we can see that if we pass an integer value to a float then it will act as an integer and returns an integer value as output. But the max values for both of them come to be different."
},
{
"code": null,
"e": 2060,
"s": 1988,
"text": "Now, let’s see what if we initialize integer variable with float value."
},
{
"code": null,
"e": 2071,
"s": 2060,
"text": " Live Demo"
},
{
"code": null,
"e": 2254,
"s": 2071,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n float f = 23.768;\n unsigned int x = 23.768;\n cout<<\"Float f = \"<<f<<endl;\n cout<<\"Integer x = \"<<x<<endl;\n return 0;\n}"
},
{
"code": null,
"e": 2286,
"s": 2254,
"text": "Float f = 23.768\nInteger x = 23"
},
{
"code": null,
"e": 2475,
"s": 2286,
"text": "In this condition too the program compiled and run. The integer variable discarded the decimal point values of the initialize float value and gets initialized with the integer value of it."
},
{
"code": null,
"e": 2509,
"s": 2475,
"text": "Now, let's compare these values −"
},
{
"code": null,
"e": 2520,
"s": 2509,
"text": " Live Demo"
},
{
"code": null,
"e": 2715,
"s": 2520,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n float f = 0xffffffff;\n unsigned int x = 0xffffffff;\n if(f == x ){\n cout<<\"TRUE\";\n }\n else\n cout<<\"FALSE\";\n return 0;\n}"
},
{
"code": null,
"e": 2720,
"s": 2715,
"text": "TRUE"
}
] |
Variables in C++ - GeeksforGeeks
|
04 Aug, 2021
A variable is a name given to a memory location. It is the basic unit of storage in a program.
The value stored in a variable can be changed during program execution.
A variable is only a name given to a memory location; all the operations done on the variable affect that memory location.
In C++, all the variables must be declared before use.
How to declare variables?
A typical variable declaration is of the form:
// Declaring a single variable
type variable_name;
// Declaring multiple variables:
type variable1_name, variable2_name, variable3_name;
A variable name can consist of letters (both upper and lower case), digits and the underscore '_' character. However, the name must not start with a digit.
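For example (a small illustrative sketch; the names are made up):

int main()
{
    int total_marks;     // valid name
    int _count2;         // valid: letters, digits and the underscore are allowed
    // int 2ndValue;     // invalid: a name must not start with a digit
    // int total marks;  // invalid: spaces are not allowed in a name
    return 0;
}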
In a declaration of the form datatype variable_name = value; the parts are:
datatype: the type of data that can be stored in this variable.
variable_name: the name given to the variable.
value: the initial value stored in the variable.
Examples:
// Declaring float variable
float simpleInterest;
// Declaring integer variable
int time, speed;
// Declaring character variable
char var;
Difference between variable declaration and definition
The variable declaration refers to the part where a variable is first declared or introduced before its first use. A variable definition is the part where the variable is assigned a memory location and a value. Most of the time, variable declaration and definition are done together. See the following C++ program for better clarification:
CPP
#include <iostream>
using namespace std;

int main()
{
    // declaration and definition
    // of variable 'a123'
    char a123 = 'a';

    // This is also both declaration and definition
    // as 'b' is allocated memory and
    // assigned some garbage value.
    float b;

    // multiple declarations and definitions
    int _c, _d45, e;

    // Let us print a variable
    cout << a123 << endl;

    return 0;
}
a
Types of variables
There are three types of variables based on the scope of variables in C++:
Local Variables
Instance Variables
Static Variables
Let us now learn about each one of these variables in detail.
Local Variables: A variable defined within a block, method or constructor is called a local variable.
These variables are created when the block is entered or the function is called, and destroyed after exiting from the block or when the call returns from the function.
The scope of these variables exists only within the block in which the variable is declared, i.e. we can access these variables only within that block.
Initialisation of a local variable is mandatory.
Instance Variables: Instance variables are non-static variables and are declared in a class outside any method, constructor or block.
As instance variables are declared in a class, these variables are created when an object of the class is created and destroyed when the object is destroyed.
Unlike local variables, we may use access specifiers for instance variables. If we do not specify any access specifier then the default access specifier will be used.
Initialisation of an instance variable is not mandatory.
An instance variable can be accessed only by creating objects.
Static Variables: Static variables are also known as class variables.
These variables are declared similarly to instance variables; the difference is that static variables are declared using the static keyword within a class, outside any method, constructor or block.
Unlike instance variables, we can only have one copy of a static variable per class, irrespective of how many objects we create.
Static variables are created at the start of program execution and destroyed automatically when execution ends.
Initialization of a static variable is not mandatory. Its default value is 0.
If we access a static variable like an instance variable (through an object), the compiler will show a warning message but won't halt the program; it will replace the object name with the class name automatically.
If we access a static variable without the class name, the compiler will automatically prepend the class name.
Instance variable Vs Static variable
Each object has its own copy of an instance variable, whereas we can only have one copy of a static variable per class, irrespective of how many objects we create.
Changes made to an instance variable through one object are not reflected in other objects, as each object has its own copy. In the case of static variables, changes are reflected in all objects because static variables are common to all objects of a class.
We can access instance variables through object references and Static Variables can be accessed directly using class name.
Syntax for static and instance variables:

class Example
{
    static int a; // static variable
    int b;        // instance variable
};
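As a quick illustration (a minimal sketch; the class and member names are only for demonstration), the following program shows that a static member is shared by all objects while an ordinary data member is per-object:

#include <iostream>
using namespace std;

class Example {
public:
    static int a; // static (class) variable: one shared copy
    int b;        // instance (non-static) variable: one copy per object
};

// a static data member must be defined once outside the class
int Example::a = 0;

int main()
{
    Example e1, e2;
    e1.b = 10;       // changes only e1's copy
    e2.b = 20;       // changes only e2's copy
    Example::a = 99; // changes the single shared copy

    cout << e1.b << " " << e2.b << endl; // 10 20
    cout << e1.a << " " << e2.a << endl; // 99 99
    return 0;
}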
ruhelaa48
C-Variable Declaration and Scope
CPP-Basics
C++
CPP
Vector in C++ STL
Initialize a vector in C++ (6 different ways)
std::sort() in C++ STL
Bitwise Operators in C/C++
Socket Programming in C/C++
Virtual Function in C++
Templates in C++ with Examples
rand() and srand() in C/C++
vector erase() and clear() in C++
unordered_map in C++ STL
|
[
{
"code": null,
"e": 26569,
"s": 26541,
"text": "\n04 Aug, 2021"
},
{
"code": null,
"e": 26665,
"s": 26569,
"text": "A variable is a name given to a memory location. It is the basic unit of storage in a program. "
},
{
"code": null,
"e": 26737,
"s": 26665,
"text": "The value stored in a variable can be changed during program execution."
},
{
"code": null,
"e": 26861,
"s": 26737,
"text": "A variable is only a name given to a memory location, all the operations done on the variable effects that memory location."
},
{
"code": null,
"e": 26916,
"s": 26861,
"text": "In C++, all the variables must be declared before use."
},
{
"code": null,
"e": 26944,
"s": 26918,
"text": "How to declare variables?"
},
{
"code": null,
"e": 26993,
"s": 26944,
"text": "A typical variable declaration is of the form: "
},
{
"code": null,
"e": 27131,
"s": 26993,
"text": "// Declaring a single variable\ntype variable_name;\n\n// Declaring multiple variables:\ntype variable1_name, variable2_name, variable3_name;"
},
{
"code": null,
"e": 27292,
"s": 27131,
"text": "A variable name can consist of alphabets (both upper and lower case), numbers and the underscore ‘_’ character. However, the name must not start with a number. "
},
{
"code": null,
"e": 27316,
"s": 27292,
"text": "In the above diagram, "
},
{
"code": null,
"e": 27476,
"s": 27316,
"text": "datatype: Type of data that can be stored in this variable. variable_name: Name given to the variable. value: It is the initial value stored in the variable. "
},
{
"code": null,
"e": 27488,
"s": 27476,
"text": "Examples: "
},
{
"code": null,
"e": 27632,
"s": 27488,
"text": "// Declaring float variable\nfloat simpleInterest; \n\n// Declaring integer variable\nint time, speed; \n\n// Declaring character variable\nchar var; "
},
{
"code": null,
"e": 27689,
"s": 27634,
"text": "Difference between variable declaration and definition"
},
{
"code": null,
"e": 28029,
"s": 27689,
"text": "The variable declaration refers to the part where a variable is first declared or introduced before its first use. A variable definition is a part where the variable is assigned a memory location and a value. Most of the times, variable declaration and definition are done together.See the following C++ program for better clarification: "
},
{
"code": null,
"e": 28033,
"s": 28029,
"text": "CPP"
},
{
"code": "#include <iostream>using namespace std; int main(){ // declaration and definition // of variable 'a123' char a123 = 'a'; // This is also both declaration and definition // as 'b' is allocated memory and // assigned some garbage value. float b; // multiple declarations and definitions int _c, _d45, e; // Let us print a variable cout << a123 << endl; return 0;}",
"e": 28440,
"s": 28033,
"text": null
},
{
"code": null,
"e": 28442,
"s": 28440,
"text": "a"
},
{
"code": null,
"e": 28463,
"s": 28444,
"text": "Types of variables"
},
{
"code": null,
"e": 28540,
"s": 28463,
"text": "There are three types of variables based on the scope of variables in C++: "
},
{
"code": null,
"e": 28556,
"s": 28540,
"text": "Local Variables"
},
{
"code": null,
"e": 28575,
"s": 28556,
"text": "Instance Variables"
},
{
"code": null,
"e": 28592,
"s": 28575,
"text": "Static Variables"
},
{
"code": null,
"e": 28658,
"s": 28594,
"text": "Let us now learn about each one of these variables in detail. "
},
{
"code": null,
"e": 30596,
"s": 28658,
"text": "Local Variables: A variable defined within a block or method or constructor is called local variable. These variable are created when the block in entered or the function is called and destroyed after exiting from the block or when the call returns from the function.The scope of these variables exists only within the block in which the variable is declared. i.e. we can access these variable only within that block.Initialisation of Local Variable is Mandatory.Instance Variables: Instance variables are non-static variables and are declared in a class outside any method, constructor or block. As instance variables are declared in a class, these variables are created when an object of the class is created and destroyed when the object is destroyed.Unlike local variables, we may use access specifiers for instance variables. If we do not specify any access specifier then the default access specifier will be used.Initialisation of Instance Variable is not Mandatory.Instance Variable can be accessed only by creating objects.Static Variables: Static variables are also known as Class variables. These variables are declared similarly as instance variables, the difference is that static variables are declared using the static keyword within a class outside any method constructor or block.Unlike instance variables, we can only have one copy of a static variable per class irrespective of how many objects we create.Static variables are created at the start of program execution and destroyed automatically when execution ends.Initialization of Static Variable is not Mandatory. Its default value is 0If we access the static variable like Instance variable (through an object), the compiler will show the warning message and it won’t halt the program. The compiler will replace the object name to class name automatically.If we access the static variable without the class name, Compiler will automatically append the class name."
},
{
"code": null,
"e": 31060,
"s": 30596,
"text": "Local Variables: A variable defined within a block or method or constructor is called local variable. These variable are created when the block in entered or the function is called and destroyed after exiting from the block or when the call returns from the function.The scope of these variables exists only within the block in which the variable is declared. i.e. we can access these variable only within that block.Initialisation of Local Variable is Mandatory."
},
{
"code": null,
"e": 31226,
"s": 31060,
"text": "These variable are created when the block in entered or the function is called and destroyed after exiting from the block or when the call returns from the function."
},
{
"code": null,
"e": 31377,
"s": 31226,
"text": "The scope of these variables exists only within the block in which the variable is declared. i.e. we can access these variable only within that block."
},
{
"code": null,
"e": 31424,
"s": 31377,
"text": "Initialisation of Local Variable is Mandatory."
},
{
"code": null,
"e": 31994,
"s": 31424,
"text": "Instance Variables: Instance variables are non-static variables and are declared in a class outside any method, constructor or block. As instance variables are declared in a class, these variables are created when an object of the class is created and destroyed when the object is destroyed.Unlike local variables, we may use access specifiers for instance variables. If we do not specify any access specifier then the default access specifier will be used.Initialisation of Instance Variable is not Mandatory.Instance Variable can be accessed only by creating objects."
},
{
"code": null,
"e": 32152,
"s": 31994,
"text": "As instance variables are declared in a class, these variables are created when an object of the class is created and destroyed when the object is destroyed."
},
{
"code": null,
"e": 32319,
"s": 32152,
"text": "Unlike local variables, we may use access specifiers for instance variables. If we do not specify any access specifier then the default access specifier will be used."
},
{
"code": null,
"e": 32373,
"s": 32319,
"text": "Initialisation of Instance Variable is not Mandatory."
},
{
"code": null,
"e": 32433,
"s": 32373,
"text": "Instance Variable can be accessed only by creating objects."
},
{
"code": null,
"e": 33339,
"s": 32433,
"text": "Static Variables: Static variables are also known as Class variables. These variables are declared similarly as instance variables, the difference is that static variables are declared using the static keyword within a class outside any method constructor or block.Unlike instance variables, we can only have one copy of a static variable per class irrespective of how many objects we create.Static variables are created at the start of program execution and destroyed automatically when execution ends.Initialization of Static Variable is not Mandatory. Its default value is 0If we access the static variable like Instance variable (through an object), the compiler will show the warning message and it won’t halt the program. The compiler will replace the object name to class name automatically.If we access the static variable without the class name, Compiler will automatically append the class name."
},
{
"code": null,
"e": 33535,
"s": 33339,
"text": "These variables are declared similarly as instance variables, the difference is that static variables are declared using the static keyword within a class outside any method constructor or block."
},
{
"code": null,
"e": 33663,
"s": 33535,
"text": "Unlike instance variables, we can only have one copy of a static variable per class irrespective of how many objects we create."
},
{
"code": null,
"e": 33775,
"s": 33663,
"text": "Static variables are created at the start of program execution and destroyed automatically when execution ends."
},
{
"code": null,
"e": 33850,
"s": 33775,
"text": "Initialization of Static Variable is not Mandatory. Its default value is 0"
},
{
"code": null,
"e": 34072,
"s": 33850,
"text": "If we access the static variable like Instance variable (through an object), the compiler will show the warning message and it won’t halt the program. The compiler will replace the object name to class name automatically."
},
{
"code": null,
"e": 34180,
"s": 34072,
"text": "If we access the static variable without the class name, Compiler will automatically append the class name."
},
{
"code": null,
"e": 34219,
"s": 34182,
"text": "Instance variable Vs Static variable"
},
{
"code": null,
"e": 34386,
"s": 34221,
"text": "Each object will have its own copy of instance variable whereas We can only have one copy of a static variable per class irrespective of how many objects we create."
},
{
"code": null,
"e": 34652,
"s": 34386,
"text": "Changes made in an instance variable using one object will not be reflected in other objects as each object has its own copy of instance variable. In case of static, changes will be reflected in other objects as static variables are common to all object of a class."
},
{
"code": null,
"e": 34775,
"s": 34652,
"text": "We can access instance variables through object references and Static Variables can be accessed directly using class name."
},
{
"code": null,
"e": 34910,
"s": 34775,
"text": "Syntax for static and instance variables:class Example\n{\n static int a; // static variable\n int b; // instance variable\n}"
},
{
"code": null,
"e": 35004,
"s": 34910,
"text": "class Example\n{\n static int a; // static variable\n int b; // instance variable\n}"
},
{
"code": null,
"e": 35014,
"s": 35004,
"text": "ruhelaa48"
},
{
"code": null,
"e": 35047,
"s": 35014,
"text": "C-Variable Declaration and Scope"
},
{
"code": null,
"e": 35058,
"s": 35047,
"text": "CPP-Basics"
},
{
"code": null,
"e": 35062,
"s": 35058,
"text": "C++"
},
{
"code": null,
"e": 35066,
"s": 35062,
"text": "CPP"
},
{
"code": null,
"e": 35164,
"s": 35066,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35173,
"s": 35164,
"text": "Comments"
},
{
"code": null,
"e": 35186,
"s": 35173,
"text": "Old Comments"
},
{
"code": null,
"e": 35204,
"s": 35186,
"text": "Vector in C++ STL"
},
{
"code": null,
"e": 35250,
"s": 35204,
"text": "Initialize a vector in C++ (6 different ways)"
},
{
"code": null,
"e": 35273,
"s": 35250,
"text": "std::sort() in C++ STL"
},
{
"code": null,
"e": 35300,
"s": 35273,
"text": "Bitwise Operators in C/C++"
},
{
"code": null,
"e": 35328,
"s": 35300,
"text": "Socket Programming in C/C++"
},
{
"code": null,
"e": 35352,
"s": 35328,
"text": "Virtual Function in C++"
},
{
"code": null,
"e": 35383,
"s": 35352,
"text": "Templates in C++ with Examples"
},
{
"code": null,
"e": 35411,
"s": 35383,
"text": "rand() and srand() in C/C++"
},
{
"code": null,
"e": 35445,
"s": 35411,
"text": "vector erase() and clear() in C++"
}
] |
C# - ArrayList Class
|
It represents an ordered collection of objects that can be indexed individually. It is basically an alternative to an array. However, unlike an array, you can add and remove items from the list at a specified position using an index, and the list resizes itself automatically. It also allows dynamic memory allocation and adding, searching and sorting of items in the list.
The following table lists some of the commonly used properties of the ArrayList class −
Capacity − Gets or sets the number of elements that the ArrayList can contain.
Count − Gets the number of elements actually contained in the ArrayList.
IsFixedSize − Gets a value indicating whether the ArrayList has a fixed size.
IsReadOnly − Gets a value indicating whether the ArrayList is read-only.
Item − Gets or sets the element at the specified index.
The following table lists some of the commonly used methods of the ArrayList class −
public virtual int Add(object value); − Adds an object to the end of the ArrayList.
public virtual void AddRange(ICollection c); − Adds the elements of an ICollection to the end of the ArrayList.
public virtual void Clear(); − Removes all elements from the ArrayList.
public virtual bool Contains(object item); − Determines whether an element is in the ArrayList.
public virtual ArrayList GetRange(int index, int count); − Returns an ArrayList which represents a subset of the elements in the source ArrayList.
public virtual int IndexOf(object); − Returns the zero-based index of the first occurrence of a value in the ArrayList or in a portion of it.
public virtual void Insert(int index, object value); − Inserts an element into the ArrayList at the specified index.
public virtual void InsertRange(int index, ICollection c); − Inserts the elements of a collection into the ArrayList at the specified index.
public virtual void Remove(object obj); − Removes the first occurrence of a specific object from the ArrayList.
public virtual void RemoveAt(int index); − Removes the element at the specified index of the ArrayList.
public virtual void RemoveRange(int index, int count); − Removes a range of elements from the ArrayList.
public virtual void Reverse(); − Reverses the order of the elements in the ArrayList.
public virtual void SetRange(int index, ICollection c); − Copies the elements of a collection over a range of elements in the ArrayList.
public virtual void Sort(); − Sorts the elements in the ArrayList.
public virtual void TrimToSize(); − Sets the capacity to the actual number of elements in the ArrayList.
The following example demonstrates the concept −
using System;
using System.Collections;
namespace CollectionApplication {
class Program {
static void Main(string[] args) {
ArrayList al = new ArrayList();
Console.WriteLine("Adding some numbers:");
al.Add(45);
al.Add(78);
al.Add(33);
al.Add(56);
al.Add(12);
al.Add(23);
al.Add(9);
Console.WriteLine("Capacity: {0} ", al.Capacity);
Console.WriteLine("Count: {0}", al.Count);
Console.Write("Content: ");
foreach (int i in al) {
Console.Write(i + " ");
}
Console.WriteLine();
Console.Write("Sorted Content: ");
al.Sort();
foreach (int i in al) {
Console.Write(i + " ");
}
Console.WriteLine();
Console.ReadKey();
}
}
}
When the above code is compiled and executed, it produces the following result −
Adding some numbers:
Capacity: 8
Count: 7
Content: 45 78 33 56 12 23 9
Sorted Content: 9 12 23 33 45 56 78
|
[
{
"code": null,
"e": 2634,
"s": 2270,
"text": "It represents an ordered collection of an object that can be indexed individually. It is basically an alternative to an array. However, unlike array you can add and remove items from a list at a specified position using an index and the array resizes itself automatically. It also allows dynamic memory allocation, adding, searching and sorting items in the list."
},
{
"code": null,
"e": 2722,
"s": 2634,
"text": "The following table lists some of the commonly used properties of the ArrayList class −"
},
{
"code": null,
"e": 2731,
"s": 2722,
"text": "Capacity"
},
{
"code": null,
"e": 2799,
"s": 2731,
"text": "Gets or sets the number of elements that the ArrayList can contain."
},
{
"code": null,
"e": 2805,
"s": 2799,
"text": "Count"
},
{
"code": null,
"e": 2870,
"s": 2805,
"text": "Gets the number of elements actually contained in the ArrayList."
},
{
"code": null,
"e": 2882,
"s": 2870,
"text": "IsFixedSize"
},
{
"code": null,
"e": 2946,
"s": 2882,
"text": "Gets a value indicating whether the ArrayList has a fixed size."
},
{
"code": null,
"e": 2957,
"s": 2946,
"text": "IsReadOnly"
},
{
"code": null,
"e": 3017,
"s": 2957,
"text": "Gets a value indicating whether the ArrayList is read-only."
},
{
"code": null,
"e": 3022,
"s": 3017,
"text": "Item"
},
{
"code": null,
"e": 3071,
"s": 3022,
"text": "Gets or sets the element at the specified index."
},
{
"code": null,
"e": 3156,
"s": 3071,
"text": "The following table lists some of the commonly used methods of the ArrayList class −"
},
{
"code": null,
"e": 3194,
"s": 3156,
"text": "public virtual int Add(object value);"
},
{
"code": null,
"e": 3238,
"s": 3194,
"text": "Adds an object to the end of the ArrayList."
},
{
"code": null,
"e": 3283,
"s": 3238,
"text": "public virtual void AddRange(ICollection c);"
},
{
"code": null,
"e": 3348,
"s": 3283,
"text": "Adds the elements of an ICollection to the end of the ArrayList."
},
{
"code": null,
"e": 3377,
"s": 3348,
"text": "public virtual void Clear();"
},
{
"code": null,
"e": 3418,
"s": 3377,
"text": "Removes all elements from the ArrayList."
},
{
"code": null,
"e": 3461,
"s": 3418,
"text": "public virtual bool Contains(object item);"
},
{
"code": null,
"e": 3512,
"s": 3461,
"text": "Determines whether an element is in the ArrayList."
},
{
"code": null,
"e": 3569,
"s": 3512,
"text": "public virtual ArrayList GetRange(int index, int count);"
},
{
"code": null,
"e": 3657,
"s": 3569,
"text": "Returns an ArrayList which represents a subset of the elements in the source ArrayList."
},
{
"code": null,
"e": 3693,
"s": 3657,
"text": "public virtual int IndexOf(object);"
},
{
"code": null,
"e": 3797,
"s": 3693,
"text": "Returns the zero-based index of the first occurrence of a value in the ArrayList or in a portion of it."
},
{
"code": null,
"e": 3850,
"s": 3797,
"text": "public virtual void Insert(int index, object value);"
},
{
"code": null,
"e": 3912,
"s": 3850,
"text": "Inserts an element into the ArrayList at the specified index."
},
{
"code": null,
"e": 3971,
"s": 3912,
"text": "public virtual void InsertRange(int index, ICollection c);"
},
{
"code": null,
"e": 4051,
"s": 3971,
"text": "Inserts the elements of a collection into the ArrayList at the specified index."
},
{
"code": null,
"e": 4091,
"s": 4051,
"text": "public virtual void Remove(object obj);"
},
{
"code": null,
"e": 4161,
"s": 4091,
"text": "Removes the first occurrence of a specific object from the ArrayList."
},
{
"code": null,
"e": 4202,
"s": 4161,
"text": "public virtual void RemoveAt(int index);"
},
{
"code": null,
"e": 4263,
"s": 4202,
"text": "Removes the element at the specified index of the ArrayList."
},
{
"code": null,
"e": 4318,
"s": 4263,
"text": "public virtual void RemoveRange(int index, int count);"
},
{
"code": null,
"e": 4366,
"s": 4318,
"text": "Removes a range of elements from the ArrayList."
},
{
"code": null,
"e": 4397,
"s": 4366,
"text": "public virtual void Reverse();"
},
{
"code": null,
"e": 4450,
"s": 4397,
"text": "Reverses the order of the elements in the ArrayList."
},
{
"code": null,
"e": 4506,
"s": 4450,
"text": "public virtual void SetRange(int index, ICollection c);"
},
{
"code": null,
"e": 4585,
"s": 4506,
"text": "Copies the elements of a collection over a range of elements in the ArrayList."
},
{
"code": null,
"e": 4613,
"s": 4585,
"text": "public virtual void Sort();"
},
{
"code": null,
"e": 4650,
"s": 4613,
"text": "Sorts the elements in the ArrayList."
},
{
"code": null,
"e": 4684,
"s": 4650,
"text": "public virtual void TrimToSize();"
},
{
"code": null,
"e": 4753,
"s": 4684,
"text": "Sets the capacity to the actual number of elements in the ArrayList."
},
{
"code": null,
"e": 4802,
"s": 4753,
"text": "The following example demonstrates the concept −"
},
{
"code": null,
"e": 5690,
"s": 4802,
"text": "using System;\nusing System.Collections;\n\nnamespace CollectionApplication {\n class Program {\n static void Main(string[] args) {\n ArrayList al = new ArrayList();\n \n Console.WriteLine(\"Adding some numbers:\");\n al.Add(45);\n al.Add(78);\n al.Add(33);\n al.Add(56);\n al.Add(12);\n al.Add(23);\n al.Add(9);\n \n Console.WriteLine(\"Capacity: {0} \", al.Capacity);\n Console.WriteLine(\"Count: {0}\", al.Count);\n \n Console.Write(\"Content: \");\n foreach (int i in al) {\n Console.Write(i + \" \");\n }\n \n Console.WriteLine();\n Console.Write(\"Sorted Content: \");\n al.Sort();\n foreach (int i in al) {\n Console.Write(i + \" \");\n }\n Console.WriteLine();\n Console.ReadKey();\n }\n }\n}"
},
{
"code": null,
"e": 5771,
"s": 5690,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 5876,
"s": 5771,
"text": "Adding some numbers:\nCapacity: 8\nCount: 7\nContent: 45 78 33 56 12 23 9\nContent: 9 12 23 33 45 56 78 \n"
},
{
"code": null,
"e": 5913,
"s": 5876,
"text": "\n 119 Lectures \n 23.5 hours \n"
},
{
"code": null,
"e": 5926,
"s": 5913,
"text": " Raja Biswas"
},
{
"code": null,
"e": 5960,
"s": 5926,
"text": "\n 37 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 5978,
"s": 5960,
"text": " Trevoir Williams"
},
{
"code": null,
"e": 6011,
"s": 5978,
"text": "\n 16 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 6025,
"s": 6011,
"text": " Peter Jepson"
},
{
"code": null,
"e": 6062,
"s": 6025,
"text": "\n 159 Lectures \n 21.5 hours \n"
},
{
"code": null,
"e": 6077,
"s": 6062,
"text": " Ebenezer Ogbu"
},
{
"code": null,
"e": 6112,
"s": 6077,
"text": "\n 193 Lectures \n 17 hours \n"
},
{
"code": null,
"e": 6127,
"s": 6112,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 6162,
"s": 6127,
"text": "\n 24 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 6174,
"s": 6162,
"text": " Eric Frick"
},
{
"code": null,
"e": 6181,
"s": 6174,
"text": " Print"
},
{
"code": null,
"e": 6192,
"s": 6181,
"text": " Add Notes"
}
] |
Program to Convert Centimeter to Feet and Inches in C
|
Given a length in centimeters as input, the task is to convert the given length into feet and inches.
We can use the length conversion formula for this −
1 foot = 30.48 cm
1 inch = 2.54 cm
So a length in centimeters is multiplied by 1/2.54 ≈ 0.3937 to get inches and by 1/30.48 ≈ 0.0328 to get feet, which are the constants used in the program below.
Input-: centimeter = 100
Output-: Inches is: 39.37
         Feet is: 3.28
Start
Step 1 -> Declare function to perform conversion
   double convert(int centimeter)
      set double inch = 0.3937 * centimeter
      set double feet = 0.0328 * centimeter
      print inch and feet
Step 2 -> In main()
   Declare and set int centimeter = 20
   Call convert(centimeter)
Stop
#include <stdio.h>
// Function to perform conversion
double convert(int centimeter){
double inch = 0.3937 * centimeter;
double feet = 0.0328 * centimeter;
printf ("Inches is: %.2f \n", inch);
printf ("Feet is: %.2f", feet);
return 0;
}
// Driver Code
int main() {
int centimeter = 20;
convert(centimeter);
return 0;
}
Inches is: 7.87
Feet is: 0.66
|
[
{
"code": null,
"e": 1174,
"s": 1062,
"text": "Given with the length into centimeter as an input, the task is to convert the given length into feet and inches"
},
{
"code": null,
"e": 1222,
"s": 1174,
"text": "We can use length conversion formula for this −"
},
{
"code": null,
"e": 1258,
"s": 1222,
"text": "1 feet = 30.48 cm\n1 inche = 2.54 cm"
},
{
"code": null,
"e": 1347,
"s": 1258,
"text": "Input-: centimetre = 100\nOutput -: Length in meter = 3m\n Length in Kilometer = 0.003km"
},
{
"code": null,
"e": 1640,
"s": 1347,
"text": "Start\nStep 1 -> Declare function to perform conversion\n double convert(int centimeter)\n set double inch = 0.3937 * centimetre\n set double feet = 0.0328 * centimetre\n print inch and feet\nStep 2 -> In main()\n Declare and set int centimetre=20\n Call convert(centimetre)\nStop"
},
{
"code": null,
"e": 1982,
"s": 1640,
"text": "#include <stdio.h>\n// Function to perform conversion\ndouble convert(int centimeter){\n double inch = 0.3937 * centimeter;\n double feet = 0.0328 * centimeter;\n printf (\"Inches is: %.2f \\n\", inch);\n printf (\"Feet is: %.2f\", feet);\n return 0;\n}\n// Driver Code\nint main() {\n int centimeter = 20;\n convert(centimeter);\n return 0;\n}"
},
{
"code": null,
"e": 2012,
"s": 1982,
"text": "Inches is: 7.87\nFeet is: 0.66"
}
] |
Ford Fulkerson Algorithm
|
The Ford-Fulkerson algorithm is used to find the maximum flow from a start (source) vertex to a sink vertex in a given graph. In this graph, every edge has a capacity. Two vertices are designated as the source and the sink: the source vertex has only outward edges and no inward edge, while the sink has only inward edges and no outward edge.
There are some constraints:
The flow on an edge does not exceed the given capacity of that edge.
Incoming flow equals outgoing flow for every vertex, except the source and the sink.
Input:
The adjacency matrix:
0 10 0 10 0 0
0 0 4 2 8 0
0 0 0 0 0 10
0 0 0 0 9 0
0 0 6 0 0 10
0 0 0 0 0 0
Output:
Maximum flow is: 19
bfs(vert, start, sink)
Input: The vertices list, the start node, and the sink node.
Output − True when the sink is visited.
Begin
initially mark all nodes as unvisited
state of start as visited
predecessor of start node is φ
insert start into the queue qu
while qu is not empty, do
delete element from queue and set to vertex u
for all vertices i, in the residual graph, do
if u and i are connected, and i is unvisited, then
add vertex i into the queue
predecessor of i is u
mark i as visited
done
done
return true if state of sink vertex is visited
End
fordFulkerson(vert, source, sink)
Input: The vertices list, the source vertex, and the sink vertex.
Output − The maximum flow from start to sink.
Begin
create a residual graph and copy given graph into it
while bfs(vert, source, sink) is true, do
pathFlow := ∞
v := sink vertex
while v ≠ start vertex, do
u := predecessor of v
pathFlow := minimum of pathFlow and residualGraph[u, v]
v := predecessor of v
done
v := sink vertex
while v ≠ start vertex, do
u := predecessor of v
residualGraph[u,v] := residualGraph[u,v] – pathFlow
residualGraph[v,u] := residualGraph[v,u] + pathFlow
v := predecessor of v
done
maxFlow := maxFlow + pathFlow
done
return maxFlow
End
#include<iostream>
#include<queue>
#define NODE 6
using namespace std;
typedef struct node {
int val;
int state; //status
int pred; //predecessor
}node;
int minimum(int a, int b) {
return (a<b)?a:b;
}
int resGraph[NODE][NODE];
/* int graph[NODE][NODE] = {
{0, 16, 13, 0, 0, 0},
{0, 0, 10, 12, 0, 0},
{0, 4, 0, 0, 14, 0},
{0, 0, 9, 0, 0, 20},
{0, 0, 0, 7, 0, 4},
{0, 0, 0, 0, 0, 0}
}; */
int graph[NODE][NODE] = {
{0, 10, 0, 10, 0, 0},
{0, 0, 4, 2, 8, 0},
{0, 0, 0, 0, 0, 10},
{0, 0, 0, 0, 9, 0},
{0, 0, 6, 0, 0, 10},
{0, 0, 0, 0, 0, 0}
};
int bfs(node *vert, node start, node sink) {
node u;
int i, j;
queue<node> que;
for(i = 0; i<NODE; i++) {
vert[i].state = 0; //not visited
}
vert[start.val].state = 1; //visited
vert[start.val].pred = -1; //no parent node
que.push(start); //insert starting node
while(!que.empty()) {
//delete from queue and print
u = que.front();
que.pop();
for(i = 0; i<NODE; i++) {
if(resGraph[u.val][i] > 0 && vert[i].state == 0) {
que.push(vert[i]);
vert[i].pred = u.val;
vert[i].state = 1;
}
}
}
return (vert[sink.val].state == 1);
}
int fordFulkerson(node *vert, node source, node sink) {
int maxFlow = 0;
int u, v;
for(int i = 0; i<NODE; i++) {
for(int j = 0; j<NODE; j++) {
resGraph[i][j] = graph[i][j]; //initially residual graph is main graph
}
}
while(bfs(vert, source, sink)) { //find augmented path using bfs algorithm
int pathFlow = 999;//as infinity
for(v = sink.val; v != source.val; v=vert[v].pred) {
u = vert[v].pred;
pathFlow = minimum(pathFlow, resGraph[u][v]);
}
for(v = sink.val; v != source.val; v=vert[v].pred) {
u = vert[v].pred;
resGraph[u][v] -= pathFlow; //update residual capacity of edges
resGraph[v][u] += pathFlow; //update residual capacity of reverse edges
}
maxFlow += pathFlow;
}
return maxFlow; //the overall max flow
}
int main() {
node vertices[NODE];
node source, sink;
for(int i = 0; i<NODE; i++) {
vertices[i].val = i;
}
source.val = 0;
sink.val = 5;
int maxFlow = fordFulkerson(vertices, source, sink);
cout << "Maximum flow is: " << maxFlow << endl;
}
Maximum flow is: 19
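The same algorithm can be expressed quite compactly in Python. The sketch below is a minimal illustration, not the article's implementation: it runs BFS over an adjacency-matrix residual graph, exactly as in the pseudocode above, and the function and variable names are made up for this example.

from collections import deque

# Minimal sketch: Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp)
# on a capacity matrix; returns the maximum flow from source to sink.
def max_flow(capacity, source, sink):
    n = len(capacity)
    residual = [row[:] for row in capacity]       # residual capacities
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:                    # no augmenting path left
            return flow
        # Bottleneck capacity along the path found by BFS
        path_flow = float('inf')
        v = sink
        while v != source:
            u = parent[v]
            path_flow = min(path_flow, residual[u][v])
            v = u
        # Update residual capacities of forward and reverse edges
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        flow += path_flow

graph = [[0, 10, 0, 10, 0, 0],
         [0, 0, 4, 2, 8, 0],
         [0, 0, 0, 0, 0, 10],
         [0, 0, 0, 0, 9, 0],
         [0, 0, 6, 0, 0, 10],
         [0, 0, 0, 0, 0, 0]]
print(max_flow(graph, 0, 5))                      # 19, matching the output above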
|
[
{
"code": null,
"e": 1378,
"s": 1062,
"text": "The Ford-Fulkerson algorithm is used to detect maximum flow from start vertex to sink vertex in a given graph. In this graph, every edge has the capacity. Two vertices are provided named Source and Sink. The source vertex has all outward edge, no inward edge, and the sink will have all inward edge no outward edge."
},
{
"code": null,
"e": 1406,
"s": 1378,
"text": "There are some constraints:"
},
{
"code": null,
"e": 1471,
"s": 1406,
"text": "Flow on an edge doesn’t exceed the given capacity of that graph."
},
{
"code": null,
"e": 1567,
"s": 1471,
"text": "Incoming flow and outgoing flow will also equal for every edge, except the source and the sink."
},
{
"code": null,
"e": 1715,
"s": 1567,
"text": "Input:\nThe adjacency matrix:\n0 10 0 10 0 0\n0 0 4 2 8 0\n0 0 0 0 0 10\n0 0 0 0 9 0\n0 0 6 0 0 10\n0 0 0 0 0 0\n\nOutput:\nMaximum flow is: 19"
},
{
"code": null,
"e": 1738,
"s": 1715,
"text": "bfs(vert, start, sink)"
},
{
"code": null,
"e": 1799,
"s": 1738,
"text": "Input: The vertices list, the start node, and the sink node."
},
{
"code": null,
"e": 1839,
"s": 1799,
"text": "Output − True when the sink is visited."
},
{
"code": null,
"e": 2353,
"s": 1839,
"text": "Begin\n initially mark all nodes as unvisited\n state of start as visited\n predecessor of start node is φ\n insert start into the queue qu\n while qu is not empty, do\n delete element from queue and set to vertex u\n for all vertices i, in the residual graph, do\n if u and i are connected, and i is unvisited, then\n add vertex i into the queue\n predecessor of i is u\n mark i as visited\n done\n done\n return true if state of sink vertex is visited\nEnd"
},
{
"code": null,
"e": 2387,
"s": 2353,
"text": "fordFulkerson(vert, source, sink)"
},
{
"code": null,
"e": 2453,
"s": 2387,
"text": "Input: The vertices list, the source vertex, and the sink vertex."
},
{
"code": null,
"e": 2499,
"s": 2453,
"text": "Output − The maximum flow from start to sink."
},
{
"code": null,
"e": 3140,
"s": 2499,
"text": "Begin\n create a residual graph and copy given graph into it\n while bfs(vert, source, sink) is true, do\n pathFlow := ∞\n v := sink vertex\n while v ≠ start vertex, do\n u := predecessor of v\n pathFlow := minimum of pathFlow and residualGraph[u, v]\n v := predecessor of v\n done\n\n v := sink vertex\n while v ≠ start vertex, do\n u := predecessor of v\n residualGraph[u,v] := residualGraph[u,v] – pathFlow\n residualGraph[v,u] := residualGraph[v,u] – pathFlow\n v := predecessor of v\n done\n\n maFlow := maxFlow + pathFlow\n done\n return maxFlow\nEnd"
},
{
"code": null,
"e": 5542,
"s": 3140,
"text": "#include<iostream>\n#include<queue>\n#define NODE 6\nusing namespace std;\n\ntypedef struct node {\n int val;\n int state; //status\n int pred; //predecessor\n}node;\n\nint minimum(int a, int b) {\n return (a<b)?a:b;\n}\n\nint resGraph[NODE][NODE];\n\n/* int graph[NODE][NODE] = {\n {0, 16, 13, 0, 0, 0},\n {0, 0, 10, 12, 0, 0},\n {0, 4, 0, 0, 14, 0},\n {0, 0, 9, 0, 0, 20},\n {0, 0, 0, 7, 0, 4},\n {0, 0, 0, 0, 0, 0}\n}; */\n\nint graph[NODE][NODE] = {\n {0, 10, 0, 10, 0, 0},\n {0, 0, 4, 2, 8, 0},\n {0, 0, 0, 0, 0, 10},\n {0, 0, 0, 0, 9, 0},\n {0, 0, 6, 0, 0, 10},\n {0, 0, 0, 0, 0, 0}\n};\n \nint bfs(node *vert, node start, node sink) {\n node u;\n int i, j;\n queue<node> que;\n\n for(i = 0; i<NODE; i++) {\n vert[i].state = 0; //not visited\n }\n\n vert[start.val].state = 1; //visited\n vert[start.val].pred = -1; //no parent node\n que.push(start); //insert starting node\n\n while(!que.empty()) {\n //delete from queue and print\n u = que.front();\n que.pop();\n\n for(i = 0; i<NODE; i++) {\n if(resGraph[u.val][i] > 0 && vert[i].state == 0) {\n que.push(vert[i]);\n vert[i].pred = u.val;\n vert[i].state = 1;\n }\n }\n }\n return (vert[sink.val].state == 1);\n}\n\nint fordFulkerson(node *vert, node source, node sink) {\n int maxFlow = 0;\n int u, v;\n\n for(int i = 0; i<NODE; i++) {\n for(int j = 0; j<NODE; j++) {\n resGraph[i][j] = graph[i][j]; //initially residual graph is main graph\n }\n }\n\n while(bfs(vert, source, sink)) { //find augmented path using bfs algorithm\n int pathFlow = 999;//as infinity\n for(v = sink.val; v != source.val; v=vert[v].pred) {\n u = vert[v].pred;\n pathFlow = minimum(pathFlow, resGraph[u][v]);\n }\n\n for(v = sink.val; v != source.val; v=vert[v].pred) {\n u = vert[v].pred;\n resGraph[u][v] -= pathFlow; //update residual capacity of edges\n resGraph[v][u] += pathFlow; //update residual capacity of reverse edges\n }\n\n maxFlow += pathFlow;\n }\n return maxFlow; //the overall max flow\n}\n\nint main() {\n node vertices[NODE];\n node source, sink;\n\n for(int i = 0; i<NODE; i++) {\n vertices[i].val = i;\n }\n\n source.val = 0;\n sink.val = 5;\n int maxFlow = fordFulkerson(vertices, source, sink);\n cout << \"Maximum flow is: \" << maxFlow << endl;\n}"
},
{
"code": null,
"e": 5562,
"s": 5542,
"text": "Maximum flow is: 19"
}
] |
Get specific value of cell in MySQL
|
Let us first create a table −
mysql> create table DemoTable
(
Name varchar(40),
Score int
);
Query OK, 0 rows affected (0.72 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values('Chris Brown',78);
Query OK, 1 row affected (0.13 sec)
mysql> insert into DemoTable values('John Doe',88);
Query OK, 1 row affected (0.11 sec)
mysql> insert into DemoTable values('Carol Taylor',98);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable values('David Miller',80);
Query OK, 1 row affected (0.68 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+--------------+-------+
| Name | Score |
+--------------+-------+
| Chris Brown | 78 |
| John Doe | 88 |
| Carol Taylor | 98 |
| David Miller | 80 |
+--------------+-------+
4 rows in set (0.00 sec)
Following is the query to get specific value of cell −
mysql> select Score from DemoTable where Name='Carol Taylor';
This will produce the following output −
+-------+
| Score |
+-------+
| 98 |
+-------+
1 row in set (0.00 sec)
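The same single-cell lookup is often done from application code. The sketch below is a minimal illustration using Python and the mysql-connector-python package with a parameterized query; the connection parameters (host, user, password, database name) are placeholders, not values from this article.

# Minimal sketch: fetch a single cell value with a parameterized query.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="test")
cursor = conn.cursor()
cursor.execute("SELECT Score FROM DemoTable WHERE Name = %s", ("Carol Taylor",))
row = cursor.fetchone()              # None if no matching row exists
score = row[0] if row else None
print(score)                         # 98 for the sample data above
conn.close()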
|
[
{
"code": null,
"e": 1092,
"s": 1062,
"text": "Let us first create a table −"
},
{
"code": null,
"e": 1198,
"s": 1092,
"text": "mysql> create table DemoTable\n(\n Name varchar(40),\n Score int\n);\nQuery OK, 0 rows affected (0.72 sec)"
},
{
"code": null,
"e": 1254,
"s": 1198,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1617,
"s": 1254,
"text": "mysql> insert into DemoTable values('Chris Brown',78);\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into DemoTable values('John Doe',88);\nQuery OK, 1 row affected (0.11 sec)\nmysql> insert into DemoTable values('Carol Taylor',98);\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable values('David Miller',80);\nQuery OK, 1 row affected (0.68 sec)"
},
{
"code": null,
"e": 1677,
"s": 1617,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1708,
"s": 1677,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 1749,
"s": 1708,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1974,
"s": 1749,
"text": "+--------------+-------+\n| Name | Score |\n+--------------+-------+\n| Chris Brown | 78 |\n| John Doe | 88 |\n| Carol Taylor | 98 |\n| David Miller | 80 |\n+--------------+-------+\n4 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2029,
"s": 1974,
"text": "Following is the query to get specific value of cell −"
},
{
"code": null,
"e": 2091,
"s": 2029,
"text": "mysql> select Score from DemoTable where Name='Carol Taylor';"
},
{
"code": null,
"e": 2132,
"s": 2091,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2206,
"s": 2132,
"text": "+-------+\n| Score |\n+-------+\n| 98 |\n+-------+\n1 row in set (0.00 sec)"
}
] |
Introduction to Graphs (Part 1). Main concepts, properties, and... | by Maël Fabien | Towards Data Science
|
Graphs are becoming central to machine learning these days, whether you’d like to understand the structure of a social network, predict potential connections, detect fraud, understand the behavior of a car rental service’s customers, or make real-time recommendations, for example.
In this article, we’ll cover the following topics:
What is a graph?
How to store a graph?
Types and properties of graphs
Examples in Python
This is the first article of a series of three articles dedicated to Graph Theory, Graph Algorithms and Graph Learning.
This article was originally published on my personal blog: https://maelfabien.github.io/ml/#
I publish all my articles and the corresponding code on this repository :
github.com
NB: Part 2 and 3 are out! :)
towardsdatascience.com
towardsdatascience.com
For what comes next, open a Jupyter Notebook and import the following packages :
import numpy as np
import random
import networkx as nx
from IPython.display import Image
import matplotlib.pyplot as plt
The following articles will be using the latest version 2.x of networkx. NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
I’ll try to keep a practical approach and illustrate each concept.
A graph is a collection of interconnected nodes.
For example, a very simple graph could be :
Graphs can be used to represent :
social networks
web pages
biological networks
...
What kind of analysis can we perform on a graph?
study topology and connectivity
community detection
identification of central nodes
predict missing nodes
predict missing edges
...
All those concepts will become clearer in a few minutes.
In our notebook, let’s import our first pre-built graph :
# Load the graph
G_karate = nx.karate_club_graph()

# Find key-values for the graph
pos = nx.spring_layout(G_karate)

# Plot the graph
nx.draw(G_karate, cmap = plt.get_cmap('rainbow'), with_labels=True, pos=pos)
What does this “Karate” graph represent? “A social network of a karate club was studied by Wayne W. Zachary for a period of three years from 1970 to 1972. The network captures 34 members of a karate club, documenting links between pairs of members who interacted outside the club. During the study, a conflict arose between the administrator “John A” and instructor “Mr. Hi” (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. Based on collected data Zachary correctly assigned all but one member of the club to the groups they actually joined after the split.”
A graph G=(V, E) is made of a set of :
Nodes (also called vertices) V=1,...,n
Edges E⊆V×V
An edge (i,j) ∈ E links nodes i and j
i and j are said to be neighbors
The degree of a node is its number of neighbors
A graph is complete if all nodes have n−1 neighbors. This would mean that all nodes are connected in every possible way.
A path from i to j is a sequence of edges that goes from i to j. This path has a length equal to the number of edges it goes through.
The diameter of a graph is the length of the longest path among all the shortest paths that link any two nodes.
For example, in this case, we can compute some of the shortest paths linking any two nodes. The diameter would be 3, since there is no pair of nodes such that the shortest way to link them is longer than 3.
The geodesic path is the shortest path between 2 nodes.
If all the nodes can be reached from each other by a given path, they form a connected component. A graph is connected if it has a single connected component.
For example, here is a graph with 2 different connected components :
A graph is directed if edges are ordered pairs. In this case, the “in-degree” of i is the number of incoming edges to i, and the “out-degree” is the number of outgoing edges from i.
A graph is cyclic if you can return to a given node. On the other hand, it is acyclic if there’s at least one node to which you can’t return.
A graph can be weighted if we put weights on either nodes or relationships.
A graph is sparse if the number of edges is small compared to the number of nodes. On the other hand, it is said to be dense if there are many edges between the nodes.
Neo4J’s book on graph algorithms provides a clear summary :
Let’s now see how to retrieve this information from a graph in Python :
n = 34
G_karate.degree()
The attribute .degree() returns the list of the number of degrees (neighbors) for each node of the graph :
DegreeView({0: 16, 1: 9, 2: 10, 3: 6, 4: 3, 5: 4, 6: 4, 7: 4, 8: 5, 9: 2, 10: 3, 11: 1, 12: 2, 13: 5, 14: 2, 15: 2, 16: 2, 17: 2, 18: 2, 19: 3, 20: 2, 21: 2, 22: 2, 23: 5, 24: 3, 25: 3, 26: 2, 27: 4, 28: 3, 29: 4, 30: 4, 31: 6, 32: 12, 33: 17})
Then, isolate the values of the degrees :
# Isolate the sequence of degrees
degree_sequence = list(G_karate.degree())
Compute the number of edges, but also metrics on the degree sequence :
nb_nodes = n
nb_arr = len(G_karate.edges())
avg_degree = np.mean(np.array(degree_sequence)[:,1])
med_degree = np.median(np.array(degree_sequence)[:,1])
max_degree = max(np.array(degree_sequence)[:,1])
min_degree = np.min(np.array(degree_sequence)[:,1])
Finally, print all this information :
print("Number of nodes : " + str(nb_nodes))print("Number of edges : " + str(nb_arr))print("Maximum degree : " + str(max_degree))print("Minimum degree : " + str(min_degree))print("Average degree : " + str(avg_degree))print("Median degree : " + str(med_degree))
This yields:
Number of nodes : 34
Number of edges : 78
Maximum degree : 17
Minimum degree : 1
Average degree : 4.588235294117647
Median degree : 3.0
On average, each person in the graph is connected to 4.6 persons.
We can also plot the histogram of the degrees :
degree_freq = np.array(nx.degree_histogram(G_karate)).astype('float')
plt.figure(figsize=(12, 8))
plt.stem(degree_freq)
plt.ylabel("Frequency")
plt.xlabel("Degree")
plt.show()
We will, later on, see that the histograms of degrees are quite important to determine the kind of graph we are looking at.
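The other properties defined above (connectivity, diameter, geodesic paths, density) can be queried in the same way. Here is a short sketch, assuming the G_karate graph loaded earlier:

# Query the properties defined above on the karate graph
print(nx.is_connected(G_karate))                        # True: a single connected component
print(nx.number_connected_components(G_karate))         # 1
print(nx.diameter(G_karate))                            # longest among all shortest paths
print(nx.shortest_path(G_karate, source=0, target=33))  # one geodesic path
print(nx.density(G_karate))                             # how sparse or dense the graph is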
You might now wonder how we can store complex graph structures?
There are 3 ways to store graphs, depending on the usage we want to make of it :
In an edge list :
1 2
1 3
1 4
2 3
3 4
...
We store the ID of each pair of nodes linked by an edge.
Using the adjacency matrix, usually loaded in memory :
For each possible pair in the graph, set it to 1 if the 2 nodes are linked by an edge. A is symmetric if the graph is undirected.
Using adjacency lists :
1 : [2, 3, 4]
2 : [1, 3]
3 : [2, 4]
...
The best representation will depend on the usage and available memory. Graphs can usually be stored as .txt files.
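To make the three formats concrete, here is a short sketch that builds a tiny illustrative graph with networkx and prints it in each representation:

# A tiny illustrative graph in the three storage formats
G = nx.Graph([(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)])

# Edge list: one pair of node IDs per edge
print(list(G.edges()))

# Adjacency matrix: symmetric because the graph is undirected
print(nx.to_numpy_array(G))

# Adjacency list: neighbors of each node
print({node: list(G.neighbors(node)) for node in G.nodes()})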
Some extensions of graphs might include :
weighted edges
label on nodes/edges
feature vectors associated with nodes/edges
In this section, we’ll cover the two major types of graphs :
Erdos-Rényi
Barabasi-Albert
a. Definition
In an Erdos-Rényi model, we build a random graph with n nodes. The graph is generated by drawing an edge between each pair of nodes (i,j) independently with probability p. We therefore have 2 parameters: the number of nodes n and the probability p.
In Python, the networkx package has a built-in function to generate Erdos-Rényi graphs.
# Generate the graph
n = 50
p = 0.2
G_erdos = nx.erdos_renyi_graph(n, p, seed=100)

# Plot the graph
plt.figure(figsize=(12,8))
nx.draw(G_erdos, node_size=10)
You’ll get a result pretty similar to this one :
b. Degree distribution
Let pk be the probability that a randomly selected node has degree k. Due to the random way the graphs are built, the distribution of the degrees of the graph is binomial:
The degrees of the nodes should be really close to the mean. The probability of observing a node with a very high degree decreases exponentially.
degree_freq = np.array(nx.degree_histogram(G_erdos)).astype('float')
plt.figure(figsize=(12, 8))
plt.stem(degree_freq)
plt.ylabel("Frequency")
plt.xlabel("Degree")
plt.show()
To visualize the distribution, I have increased n to 200 in the generated graph.
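The binomial claim can also be checked numerically by overlaying the theoretical pmf of a Binomial(n−1, p) distribution on the normalized empirical histogram. A short sketch, assuming scipy is installed; the figure styling mirrors the plots above:

# Compare the empirical degree histogram with the binomial pmf B(n-1, p)
from scipy.stats import binom

n, p = 200, 0.2
G = nx.erdos_renyi_graph(n, p, seed=100)
degree_freq = np.array(nx.degree_histogram(G), dtype=float) / n   # empirical frequencies
ks = np.arange(len(degree_freq))

plt.figure(figsize=(12, 8))
plt.stem(ks, degree_freq, label="empirical")
plt.plot(ks, binom.pmf(ks, n - 1, p), 'r-', label="binomial pmf")
plt.xlabel("Degree")
plt.ylabel("Frequency")
plt.legend()
plt.show()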
c. Descriptive statistics
The average degree is given by n×p. With p=0.2 and n=200, we are centered around 40.
The degree expectation is given by (n−1)×p
The maximum degree is concentrated around the average
Let’s retrieve those values in Python :
# Get the list of the degrees
degree_sequence_erdos = list(G_erdos.degree())

nb_nodes = n
nb_arr = len(G_erdos.edges())
avg_degree = np.mean(np.array(degree_sequence_erdos)[:,1])
med_degree = np.median(np.array(degree_sequence_erdos)[:,1])
max_degree = max(np.array(degree_sequence_erdos)[:,1])
min_degree = np.min(np.array(degree_sequence_erdos)[:,1])
esp_degree = (n-1)*p

print("Number of nodes : " + str(nb_nodes))
print("Number of edges : " + str(nb_arr))
print("Maximum degree : " + str(max_degree))
print("Minimum degree : " + str(min_degree))
print("Average degree : " + str(avg_degree))
print("Expected degree : " + str(esp_degree))
print("Median degree : " + str(med_degree))
This should give you something similar to :
Number of nodes : 200
Number of edges : 3949
Maximum degree : 56
Minimum degree : 25
Average degree : 39.49
Expected degree : 39.800000000000004
Median degree : 39.5
The average and the expected degrees are really close since there is only a small factor between the two.
a. Definition
In a Barabasi-Albert model, we build a random graph model with n nodes with a preferential attachment component. The graph is generated by the following algorithm :
Step 1: With a probability p, move to the second step. Else, move to the third step.
Step 2: Connect a new node to existing nodes chosen uniformly at random
Step 3: Connect the new node to n existing nodes with a probability proportional to their degree
The aim of such a graph is to model preferential attachment, which is often observed in real networks.
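To see what preferential attachment means in code, here is a minimal hand-rolled sketch of the attachment step (simplified: it ignores the probability-p branch of the steps above, and the function name is made up for this example); the built-in networkx generator shown below is what you would normally use.

import random

# One simplified preferential-attachment step: a new node connects to m
# existing nodes with probability proportional to their current degree.
def add_node_preferentially(edges, new_node, m=3):
    # each node appears in `endpoints` once per incident edge,
    # so uniform sampling from it is degree-proportional sampling
    endpoints = [v for e in edges for v in e]
    targets = set()
    while len(targets) < m:
        targets.add(random.choice(endpoints))
    edges.extend((new_node, t) for t in targets)
    return edges

edges = [(0, 1), (0, 2), (1, 2)]       # small seed graph
edges = add_node_preferentially(edges, 3)
print(edges)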
In Python, the networkx package has also a built-in function to generate Barabasi-Albert graphs.
# Generate the graph
n = 150
m = 3
G_barabasi = nx.barabasi_albert_graph(n, m)

# Plot the graph
plt.figure(figsize=(12,8))
nx.draw(G_barabasi, node_size=10)
You’ll get a result pretty similar to this one :
You can easily notice how some nodes appear to have a much larger degree than others now!
b. Degree Distribution
Let pk be the probability that a randomly selected node has degree k. The degree distribution follows a power law:
The distribution is now heavy-tailed. There is a large number of nodes that have a small degree, but a significant number of nodes have a high degree.
degree_freq = np.array(nx.degree_histogram(G_barabasi)).astype('float')
plt.figure(figsize=(12, 8))
plt.stem(degree_freq)
plt.ylabel("Frequency")
plt.xlabel("Degree")
plt.show()
The distribution is said to be scale-free, in the sense that the average degree is not informative.
c. Descriptive statistics
The average degree is finite if α > 2; if α ≤ 2, it diverges
The maximum degree is of the following order :
# Get the list of the degrees
degree_sequence_erdos = list(G_erdos.degree())

nb_nodes = n
nb_arr = len(G_erdos.edges())
avg_degree = np.mean(np.array(degree_sequence_erdos)[:,1])
med_degree = np.median(np.array(degree_sequence_erdos)[:,1])
max_degree = max(np.array(degree_sequence_erdos)[:,1])
min_degree = np.min(np.array(degree_sequence_erdos)[:,1])
esp_degree = (n-1)*p

print("Number of nodes : " + str(nb_nodes))
print("Number of edges : " + str(nb_arr))
print("Maximum degree : " + str(max_degree))
print("Minimum degree : " + str(min_degree))
print("Average degree : " + str(avg_degree))
print("Expected degree : " + str(esp_degree))
print("Median degree : " + str(med_degree))
This should give you something similar to :
Number of nodes : 200
Number of edges : 3949
Maximum degree : 56
Minimum degree : 25
Average degree : 39.49
Expected degree : 39.800000000000004
Median degree : 39.5
So far, we have covered the main kinds of graphs and the most basic characteristics used to describe a graph. In the next article, we’ll dive into graph analysis/algorithms and the different ways a graph can be analyzed and used for:
Real-time fraud detection
Real-time recommendations
Streamline regulatory compliance
Management and monitoring of complex networks
Identity and access management
Social applications/features
...
Feel free to comment if you have any question or remark.
The next article can be found here :
towardsdatascience.com
A Comprehensive Guide to Graph Algorithms in Neo4j, Mark Needham & Amy E. Hodler
Networkx documentation, https://networkx.github.io/documentation/stable/
If you’d like to read more from me, my previous articles can be found here:
|
[
{
"code": null,
"e": 456,
"s": 171,
"text": "Graphs are becoming central to machine learning these days, whether you’d like to understand the structure of a social network by predicting potential connections, detecting fraud, understand customer’s behavior of a car rental service or making real-time recommendations for example."
},
{
"code": null,
"e": 507,
"s": 456,
"text": "In this article, we’ll cover the following topics:"
},
{
"code": null,
"e": 524,
"s": 507,
"text": "What is a graph?"
},
{
"code": null,
"e": 546,
"s": 524,
"text": "How to store a graph?"
},
{
"code": null,
"e": 580,
"s": 546,
"text": "Types of and properties of graphs"
},
{
"code": null,
"e": 599,
"s": 580,
"text": "Examples in Python"
},
{
"code": null,
"e": 719,
"s": 599,
"text": "This is the first article of a series of three articles dedicated to Graph Theory, Graph Algorithms and Graph Learning."
},
{
"code": null,
"e": 812,
"s": 719,
"text": "This article was originally published on my personal blog: https://maelfabien.github.io/ml/#"
},
{
"code": null,
"e": 886,
"s": 812,
"text": "I publish all my articles and the corresponding code on this repository :"
},
{
"code": null,
"e": 897,
"s": 886,
"text": "github.com"
},
{
"code": null,
"e": 926,
"s": 897,
"text": "NB: Part 2 and 3 are out! :)"
},
{
"code": null,
"e": 949,
"s": 926,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 972,
"s": 949,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1053,
"s": 972,
"text": "For what comes next, open a Jupyter Notebook and import the following packages :"
},
{
"code": null,
"e": 1170,
"s": 1053,
"text": "import numpy as npimport randomimport networkx as nxfrom IPython.display import Imageimport matplotlib.pyplot as plt"
},
{
"code": null,
"e": 1376,
"s": 1170,
"text": "The following articles will be using the latest version 2.x ofnetworkx. NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks."
},
{
"code": null,
"e": 1443,
"s": 1376,
"text": "I’ll try to keep a practical approach and illustrate each concept."
},
{
"code": null,
"e": 1492,
"s": 1443,
"text": "A graph is a collection of interconnected nodes."
},
{
"code": null,
"e": 1536,
"s": 1492,
"text": "For example, a very simple graph could be :"
},
{
"code": null,
"e": 1570,
"s": 1536,
"text": "Graphs can be used to represent :"
},
{
"code": null,
"e": 1586,
"s": 1570,
"text": "social networks"
},
{
"code": null,
"e": 1596,
"s": 1586,
"text": "web pages"
},
{
"code": null,
"e": 1616,
"s": 1596,
"text": "biological networks"
},
{
"code": null,
"e": 1620,
"s": 1616,
"text": "..."
},
{
"code": null,
"e": 1669,
"s": 1620,
"text": "What kind of analysis can we perform on a graph?"
},
{
"code": null,
"e": 1701,
"s": 1669,
"text": "study topology and connectivity"
},
{
"code": null,
"e": 1721,
"s": 1701,
"text": "community detection"
},
{
"code": null,
"e": 1753,
"s": 1721,
"text": "identification of central nodes"
},
{
"code": null,
"e": 1775,
"s": 1753,
"text": "predict missing nodes"
},
{
"code": null,
"e": 1797,
"s": 1775,
"text": "predict missing edges"
},
{
"code": null,
"e": 1801,
"s": 1797,
"text": "..."
},
{
"code": null,
"e": 1858,
"s": 1801,
"text": "All those concepts will become clearer in a few minutes."
},
{
"code": null,
"e": 1916,
"s": 1858,
"text": "In our notebook, let’s import our first pre-built graph :"
},
{
"code": null,
"e": 2121,
"s": 1916,
"text": "# Load the graphG_karate = nx.karate_club_graph()# Find key-values for the graphpos = nx.spring_layout(G_karate)# Plot the graphnx.draw(G_karate, cmap = plt.get_cmap('rainbow'), with_labels=True, pos=pos)"
},
{
"code": null,
"e": 2813,
"s": 2121,
"text": "What does this “Karate” graph represent? “A social network of a karate club was studied by Wayne W. Zachary for a period of three years from 1970 to 1972. The network captures 34 members of a karate club, documenting links between pairs of members who interacted outside the club. During the study, a conflict arose between the administrator “John A” and instructor “Mr. Hi” (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. Based on collected data Zachary correctly assigned all but one member of the club to the groups they actually joined after the split.”"
},
{
"code": null,
"e": 2852,
"s": 2813,
"text": "A graph G=(V, E) is made of a set of :"
},
{
"code": null,
"e": 2891,
"s": 2852,
"text": "Nodes (also called vertices) V=1,...,n"
},
{
"code": null,
"e": 2903,
"s": 2891,
"text": "Edges E⊆V×V"
},
{
"code": null,
"e": 2941,
"s": 2903,
"text": "An edge (i,j) ∈ E links nodes i and j"
},
{
"code": null,
"e": 2974,
"s": 2941,
"text": "i and j are said to be neighbors"
},
{
"code": null,
"e": 3020,
"s": 2974,
"text": "A degree of a node is its number of neighbors"
},
{
"code": null,
"e": 3141,
"s": 3020,
"text": "A graph is complete if all nodes have n−1 neighbors. This would mean that all nodes are connected in every possible way."
},
{
"code": null,
"e": 3275,
"s": 3141,
"text": "A path from i to j is a sequence of edges that goes from i to j. This path has a length equal to the number of edges it goes through."
},
{
"code": null,
"e": 3386,
"s": 3275,
"text": "The diameter of a graph is the length of the longest path among all the shortest path that link any two nodes."
},
{
"code": null,
"e": 3600,
"s": 3386,
"text": "For example, in this case, we can compute some of the shortest paths to link any two nodes. The diameter would typically be 3 since the is no pair of nodes such that the shortest way to link them is longer than 3."
},
{
"code": null,
"e": 3656,
"s": 3600,
"text": "The geodesic path is the shortest path between 2 nodes."
},
{
"code": null,
"e": 3814,
"s": 3656,
"text": "If all the nodes can be reached from each other by a given path, they form a connected component. A graph is connected is it has a single connected component"
},
{
"code": null,
"e": 3883,
"s": 3814,
"text": "For example, here is a graph with 2 different connected components :"
},
{
"code": null,
"e": 4065,
"s": 3883,
"text": "A graph is directed if edges are ordered pairs. In this case, the “in-degree” of i is the number of incoming edges to i, and the “out-degree” is the number of outgoing edges from i."
},
{
"code": null,
"e": 4207,
"s": 4065,
"text": "A graph is cyclic if you can return to a given node. On the other hand, it is acyclic if there’s at least one node to which you can’t return."
},
{
"code": null,
"e": 4283,
"s": 4207,
"text": "A graph can be weighted if we put weights on either nodes or relationships."
},
{
"code": null,
"e": 4451,
"s": 4283,
"text": "A graph is sparse if the number of edges is large compared to the number of nodes. On the other hand, it is said to be dense if there are many edges between the nodes."
},
{
"code": null,
"e": 4511,
"s": 4451,
"text": "Neo4J’s book on graph algorithms provides a clear summary :"
},
{
"code": null,
"e": 4583,
"s": 4511,
"text": "Let’s now see how to retrieve this information from a graph in Python :"
},
{
"code": null,
"e": 4605,
"s": 4583,
"text": "n=34G_karate.degree()"
},
{
"code": null,
"e": 4712,
"s": 4605,
"text": "The attribute .degree() returns the list of the number of degrees (neighbors) for each node of the graph :"
},
{
"code": null,
"e": 4957,
"s": 4712,
"text": "DegreeView({0: 16, 1: 9, 2: 10, 3: 6, 4: 3, 5: 4, 6: 4, 7: 4, 8: 5, 9: 2, 10: 3, 11: 1, 12: 2, 13: 5, 14: 2, 15: 2, 16: 2, 17: 2, 18: 2, 19: 3, 20: 2, 21: 2, 22: 2, 23: 5, 24: 3, 25: 3, 26: 2, 27: 4, 28: 3, 29: 4, 30: 4, 31: 6, 32: 12, 33: 17})"
},
{
"code": null,
"e": 4999,
"s": 4957,
"text": "Then, isolate the values of the degrees :"
},
{
"code": null,
"e": 5074,
"s": 4999,
"text": "# Isolate the sequence of degreesdegree_sequence = list(G_karate.degree())"
},
{
"code": null,
"e": 5145,
"s": 5074,
"text": "Compute the number of edges, but also metrics on the degree sequence :"
},
{
"code": null,
"e": 5393,
"s": 5145,
"text": "nb_nodes = nnb_arr = len(G_karate.edges())avg_degree = np.mean(np.array(degree_sequence)[:,1])med_degree = np.median(np.array(degree_sequence)[:,1])max_degree = max(np.array(degree_sequence)[:,1])min_degree = np.min(np.array(degree_sequence)[:,1])"
},
{
"code": null,
"e": 5431,
"s": 5393,
"text": "Finally, print all this information :"
},
{
"code": null,
"e": 5691,
"s": 5431,
"text": "print(\"Number of nodes : \" + str(nb_nodes))print(\"Number of edges : \" + str(nb_arr))print(\"Maximum degree : \" + str(max_degree))print(\"Minimum degree : \" + str(min_degree))print(\"Average degree : \" + str(avg_degree))print(\"Median degree : \" + str(med_degree))"
},
{
"code": null,
"e": 5704,
"s": 5691,
"text": "This heads :"
},
{
"code": null,
"e": 5835,
"s": 5704,
"text": "Number of nodes : 34Number of edges : 78Maximum degree : 17Minimum degree : 1Average degree : 4.588235294117647Median degree : 3.0"
},
{
"code": null,
"e": 5901,
"s": 5835,
"text": "On average, each person in the graph is connected to 4.6 persons."
},
{
"code": null,
"e": 5949,
"s": 5901,
"text": "We can also plot the histogram of the degrees :"
},
{
"code": null,
"e": 6119,
"s": 5949,
"text": "degree_freq = np.array(nx.degree_histogram(G_karate)).astype('float')plt.figure(figsize=(12, 8))plt.stem(degree_freq)plt.ylabel(\"Frequence\")plt.xlabel(\"Degre\")plt.show()"
},
{
"code": null,
"e": 6243,
"s": 6119,
"text": "We will, later on, see that the histograms of degrees are quite important to determine the kind of graph we are looking at."
},
{
"code": null,
"e": 6307,
"s": 6243,
"text": "You might now wonder how we can store complex graph structures?"
},
{
"code": null,
"e": 6388,
"s": 6307,
"text": "There are 3 ways to store graphs, depending on the usage we want to make of it :"
},
{
"code": null,
"e": 6406,
"s": 6388,
"text": "In an edge list :"
},
{
"code": null,
"e": 6435,
"s": 6406,
"text": "1 21 31 42 33 4..."
},
{
"code": null,
"e": 6492,
"s": 6435,
"text": "We store the ID of each pair of nodes linked by an edge."
},
{
"code": null,
"e": 6547,
"s": 6492,
"text": "Using the adjacency matrix, usually loaded in memory :"
},
{
"code": null,
"e": 6677,
"s": 6547,
"text": "For each possible pair in the graph, set it to 1 if the 2 nodes are linked by an edge. A is symmetric if the graph is undirected."
},
{
"code": null,
"e": 6701,
"s": 6677,
"text": "Using adjacency lists :"
},
{
"code": null,
"e": 6736,
"s": 6701,
"text": "1 : [2,3, 4]2 : [1,3]3: [2, 4]..."
},
{
"code": null,
"e": 6851,
"s": 6736,
"text": "The best representation will depend on the usage and available memory. Graphs can usually be stored as .txt files."
},
{
"code": null,
"e": 6893,
"s": 6851,
"text": "Some extensions of graphs might include :"
},
{
"code": null,
"e": 6908,
"s": 6893,
"text": "weighted edges"
},
{
"code": null,
"e": 6929,
"s": 6908,
"text": "label on nodes/edges"
},
{
"code": null,
"e": 6973,
"s": 6929,
"text": "feature vectors associated with nodes/edges"
},
{
"code": null,
"e": 7034,
"s": 6973,
"text": "In this section, we’ll cover the two major types of graphs :"
},
{
"code": null,
"e": 7047,
"s": 7034,
"text": "Erdos-Rényi"
},
{
"code": null,
"e": 7063,
"s": 7047,
"text": "Barabasi-Albert"
},
{
"code": null,
"e": 7077,
"s": 7063,
"text": "a. Definition"
},
{
"code": null,
"e": 7335,
"s": 7077,
"text": "In an Erdos-Rényi model, we build a random graph model with n nodes. The graph is generated by drawing an edge between a pair of nodes (i,j) independently with probability p. We therefore have 2 parameters : the number of nodes : n and the probability : p."
},
{
"code": null,
"e": 7424,
"s": 7335,
"text": "In Python, the networkx package has a built-in function to generate Erdos-Rényi graphs."
},
{
"code": null,
"e": 7576,
"s": 7424,
"text": "# Generate the graphn = 50p = 0.2G_erdos = nx.erdos_renyi_graph(n,p, seed =100)# Plot the graphplt.figure(figsize=(12,8))nx.draw(G_erdos, node_size=10)"
},
{
"code": null,
"e": 7625,
"s": 7576,
"text": "You’ll get a result pretty similar to this one :"
},
{
"code": null,
"e": 7648,
"s": 7625,
"text": "b. Degree distribution"
},
{
"code": null,
"e": 7820,
"s": 7648,
"text": "Let pk the probability that a randomly selected node has a degree k. Due to the random way the graphs are built, the distribution of the degrees of the graph is binomial :"
},
{
"code": null,
"e": 7982,
"s": 7820,
"text": "The distribution of the number of degrees per node should be really close to the mean. The probability to observe a high number of nodes decreases exponentially."
},
{
"code": null,
"e": 8152,
"s": 7982,
"text": "degree_freq = np.array(nx.degree_histogram(G_erdos)).astype('float')plt.figure(figsize=(12, 8))plt.stem(degree_freq)plt.ylabel(\"Frequence\")plt.xlabel(\"Degree\")plt.show()"
},
{
"code": null,
"e": 8233,
"s": 8152,
"text": "To visualize the distribution, I have increased n to 200 in the generated graph."
},
{
"code": null,
"e": 8259,
"s": 8233,
"text": "c. Descriptive statistics"
},
{
"code": null,
"e": 8344,
"s": 8259,
"text": "The average degree is given by n×p. With p=0.2 and n=200, we are centered around 40."
},
{
"code": null,
"e": 8387,
"s": 8344,
"text": "The degree expectation is given by (n−1)×p"
},
{
"code": null,
"e": 8441,
"s": 8387,
"text": "The maximum degree is concentrated around the average"
},
{
"code": null,
"e": 8481,
"s": 8441,
"text": "Let’s retrieve those values in Python :"
},
{
"code": null,
"e": 9151,
"s": 8481,
"text": "# Get the list of the degreesdegree_sequence_erdos = list(G_erdos.degree())nb_nodes = nnb_arr = len(G_erdos.edges())avg_degree = np.mean(np.array(degree_sequence_erdos)[:,1])med_degree = np.median(np.array(degree_sequence_erdos)[:,1])max_degree = max(np.array(degree_sequence_erdos)[:,1])min_degree = np.min(np.array(degree_sequence_erdos)[:,1])esp_degree = (n-1)*pprint(\"Number of nodes : \" + str(nb_nodes))print(\"Number of edges : \" + str(nb_arr))print(\"Maximum degree : \" + str(max_degree))print(\"Minimum degree : \" + str(min_degree))print(\"Average degree : \" + str(avg_degree))print(\"Expected degree : \" + str(esp_degree))print(\"Median degree : \" + str(med_degree))"
},
{
"code": null,
"e": 9195,
"s": 9151,
"text": "This should give you something similar to :"
},
{
"code": null,
"e": 9355,
"s": 9195,
"text": "Number of nodes : 200Number of edges : 3949Maximum degree : 56Minimum degree : 25Average degree : 39.49Expected degree : 39.800000000000004Median degree : 39.5"
},
{
"code": null,
"e": 9461,
"s": 9355,
"text": "The average and the expected degrees are really close since there is only a small factor between the two."
},
{
"code": null,
"e": 9475,
"s": 9461,
"text": "a. Definition"
},
{
"code": null,
"e": 9640,
"s": 9475,
"text": "In a Barabasi-Albert model, we build a random graph model with n nodes with a preferential attachment component. The graph is generated by the following algorithm :"
},
{
"code": null,
"e": 9725,
"s": 9640,
"text": "Step 1: With a probability p, move to the second step. Else, move to the third step."
},
{
"code": null,
"e": 9797,
"s": 9725,
"text": "Step 2: Connect a new node to existing nodes chosen uniformly at random"
},
{
"code": null,
"e": 9894,
"s": 9797,
"text": "Step 3: Connect the new node to n existing nodes with a probability proportional to their degree"
},
{
"code": null,
"e": 9995,
"s": 9894,
"text": "The aim of such graph is to model preferential attachment, which is often observed in real networks."
},
{
"code": null,
"e": 10092,
"s": 9995,
"text": "In Python, the networkx package has also a built-in function to generate Barabasi-Albert graphs."
},
{
"code": null,
"e": 10242,
"s": 10092,
"text": "# Generate the graphn = 150m = 3G_barabasi = nx.barabasi_albert_graph(n,m)# Plot the graphplt.figure(figsize=(12,8))nx.draw(G_barabasi, node_size=10)"
},
{
"code": null,
"e": 10291,
"s": 10242,
"text": "You’ll get a result pretty similar to this one :"
},
{
"code": null,
"e": 10381,
"s": 10291,
"text": "You can easily notice how some nodes appear to have a much larger degree than others now!"
},
{
"code": null,
"e": 10404,
"s": 10381,
"text": "b. Degree Distribution"
},
{
"code": null,
"e": 10519,
"s": 10404,
"text": "Let pk the probability that a randomly selected node has a degree k. The degree distribution follows a power-law :"
},
{
"code": null,
"e": 10670,
"s": 10519,
"text": "The distribution is now heavy-tailed. There is a large number of nodes that have a small degree, but a significant number of nodes have a high degree."
},
{
"code": null,
"e": 10843,
"s": 10670,
"text": "degree_freq = np.array(nx.degree_histogram(G_barabasi)).astype('float')plt.figure(figsize=(12, 8))plt.stem(degree_freq)plt.ylabel(\"Frequence\")plt.xlabel(\"Degree\")plt.show()"
},
{
"code": null,
"e": 10943,
"s": 10843,
"text": "The distribution is said to be scale-free, in the sense that the average degree is not informative."
},
{
"code": null,
"e": 10969,
"s": 10943,
"text": "c. Descriptive statistics"
},
{
"code": null,
"e": 11026,
"s": 10969,
"text": "The average degree is constant if α≤2, else, it diverges"
},
{
"code": null,
"e": 11073,
"s": 11026,
"text": "The maximum degree is of the following order :"
},
{
"code": null,
"e": 11743,
"s": 11073,
"text": "# Get the list of the degreesdegree_sequence_erdos = list(G_erdos.degree())nb_nodes = nnb_arr = len(G_erdos.edges())avg_degree = np.mean(np.array(degree_sequence_erdos)[:,1])med_degree = np.median(np.array(degree_sequence_erdos)[:,1])max_degree = max(np.array(degree_sequence_erdos)[:,1])min_degree = np.min(np.array(degree_sequence_erdos)[:,1])esp_degree = (n-1)*pprint(\"Number of nodes : \" + str(nb_nodes))print(\"Number of edges : \" + str(nb_arr))print(\"Maximum degree : \" + str(max_degree))print(\"Minimum degree : \" + str(min_degree))print(\"Average degree : \" + str(avg_degree))print(\"Expected degree : \" + str(esp_degree))print(\"Median degree : \" + str(med_degree))"
},
{
"code": null,
"e": 11787,
"s": 11743,
"text": "This should give you something similar to :"
},
{
"code": null,
"e": 11947,
"s": 11787,
"text": "Number of nodes : 200Number of edges : 3949Maximum degree : 56Minimum degree : 25Average degree : 39.49Expected degree : 39.800000000000004Median degree : 39.5"
},
{
"code": null,
"e": 12169,
"s": 11947,
"text": "So far, we covered the main kind of graphs, and the most basic characteristics to describe a graph. In the next article, we’ll dive into graph analysis/algorithms and the different ways a graph can be analyzed, used for :"
},
{
"code": null,
"e": 12195,
"s": 12169,
"text": "Real-time fraud detection"
},
{
"code": null,
"e": 12221,
"s": 12195,
"text": "Real-time recommendations"
},
{
"code": null,
"e": 12254,
"s": 12221,
"text": "Streamline regulatory compliance"
},
{
"code": null,
"e": 12300,
"s": 12254,
"text": "Management and monitoring of complex networks"
},
{
"code": null,
"e": 12331,
"s": 12300,
"text": "Identity and access management"
},
{
"code": null,
"e": 12360,
"s": 12331,
"text": "Social applications/features"
},
{
"code": null,
"e": 12364,
"s": 12360,
"text": "..."
},
{
"code": null,
"e": 12421,
"s": 12364,
"text": "Feel free to comment if you have any question or remark."
},
{
"code": null,
"e": 12458,
"s": 12421,
"text": "The next article can be found here :"
},
{
"code": null,
"e": 12481,
"s": 12458,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 12562,
"s": 12481,
"text": "A Comprehensive Guide to Graph Algorithms in Neo4j, Mark Needham & Amy E. Hodler"
},
{
"code": null,
"e": 12635,
"s": 12562,
"text": "Networkx documentation, https://networkx.github.io/documentation/stable/"
}
] |
Make multiple checkboxes appear on the same line with Bootstrap
|
Use the .checkbox-inline class in Bootstrap to make multiple checkboxes appear on the same line:
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>Bootstrap Example</title>
<link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
<script src = "/scripts/jquery.min.js"></script>
<script src = "/bootstrap/js/bootstrap.min.js"></script>
</head>
<body>
<div class = "container">
<h2>Programming Survey</h2>
<p>Which programming languages do you like?</p>
<form>
<label class = "checkbox-inline">
<input type = "checkbox" value = "">Java
</label>
<label class = "checkbox-inline">
<input type="checkbox" value="">C++
</label>
<label class = "checkbox-inline">
<input type = "checkbox" value = "">C
</label>
</form>
</div>
</body>
</html>
|
[
{
"code": null,
"e": 1157,
"s": 1062,
"text": "Use the .checkbox-line class in Bootstrap to make multiple checkboxes appear on the same line:"
},
{
"code": null,
"e": 1167,
"s": 1157,
"text": "Live Demo"
},
{
"code": null,
"e": 2004,
"s": 1167,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <div class = \"container\">\n <h2>Programming Survey</h2>\n <p>Which programming languages do you like?</p>\n <form>\n <label class = \"checkbox-inline\">\n <input type = \"checkbox\" value = \"\">Java\n </label>\n <label class = \"checkbox-inline\">\n <input type=\"checkbox\" value=\"\">C++\n </label>\n <label class = \"checkbox-inline\">\n <input type = \"checkbox\" value = \"\">C\n </label>\n </form>\n </div>\n </body>\n</html>"
}
] |
How to remove lines starting with any prefix using Python?
|
29 Dec, 2020
Given a text file, read its content line by line and print only those lines which do not start with a defined prefix. Also store those printed lines in another text file. The following are the ways in which this task can be done:
Method 1: Using loop and startswith().
In this method, we read the contents of the file line by line. While reading, we check whether the line begins with the given prefix; if it does, we simply skip that line, otherwise we print it and also store it in another text file.
Example 1:
Suppose the text file from which lines should be read is given below:
Original Text File
Python3
# defining object file1 to
# open GeeksforGeeks file in
# read mode
file1 = open('GeeksforGeeks.txt', 'r')

# defining object file2 to
# open GeeksforGeeksUpdated file
# in write mode
file2 = open('GeeksforGeeksUpdated.txt', 'w')

# reading each line from original
# text file
for line in file1.readlines():

    # reading all lines that do not
    # begin with "TextGenerator"
    if not (line.startswith('TextGenerator')):

        # printing those lines
        print(line)

        # storing only those lines that
        # do not begin with "TextGenerator"
        file2.write(line)

# close and save the files
file2.close()
file1.close()
Output:
It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged.
It was popularised in the 1960s with the release of Letraset sheets containing TextGenerator passages, and more recently with desktop publishing software like Albus Potter including versions of TextGenerator.
Updated Text File after removing lines starting with “TextGenerator”
In the above example, we open a file and read its content line by line. We check if that line begins with given prefix using startswith() method. If that line begins with “TextGenerator” we skip that line, else we print the line and store it in another file. In this way, we could remove lines starting with specified prefix.
Method 2: Using Regex.
In this method, we use the re module of Python, which offers a set of metacharacters. Metacharacters are characters with special meaning. To remove lines starting with a specified prefix, we use the “^” (starts with) metacharacter.
We also make use of the re.findall() function, which returns a list containing all matches.
Original Text File
Python3
# importing regex module
import re

# defining object file1 to open
# GeeksforGeeks file in read mode
file1 = open('GeeksforGeeks.txt', 'r')

# defining object file2 to open
# GeeksforGeeksUpdated file in
# write mode
file2 = open('GeeksforGeeksUpdated.txt', 'w')

# reading each line from original
# text file
for line in file1.readlines():

    # checking whether the line
    # begins with "Geeks"
    x = re.findall("^Geeks", line)

    if not x:

        # printing those lines
        print(line)

        # storing only those lines that
        # do not begin with "Geeks"
        file2.write(line)

# close and save the files
file1.close()
file2.close()
Output:
forGeeksGeeks
Updated Text File
In the above example, we open a file and read its content line by line. We check whether each line begins with “Geeks” using a regular expression. If a line begins with “Geeks” we skip it; the remaining lines are printed and stored in another file.
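As a side note, startswith() also accepts a tuple of prefixes, so several prefixes can be filtered out in a single pass over the file. A small sketch (the prefixes and file names below are just illustrative):

# str.startswith() accepts a tuple: skip lines starting with any listed prefix
prefixes = ('TextGenerator', 'Geeks')

with open('GeeksforGeeks.txt', 'r') as src, \
     open('GeeksforGeeksUpdated.txt', 'w') as dst:
    for line in src:
        if not line.startswith(prefixes):
            print(line, end='')
            dst.write(line)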
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Dec, 2020"
},
{
"code": null,
"e": 272,
"s": 28,
"text": "Given a text file, read the content of that text file line by line and print only those lines which do not start with defined prefix. Also store those printed lines in another text file. There are following ways in which this task can be done:"
},
{
"code": null,
"e": 311,
"s": 272,
"text": "Method 1: Using loop and startswith()."
},
{
"code": null,
"e": 518,
"s": 311,
"text": "In this method, we read the contents of file line by line. While reading, we check if the line begins with the given prefix, we simply skip that line and print it. Also store that line in another text file."
},
{
"code": null,
"e": 529,
"s": 518,
"text": "Example 1:"
},
{
"code": null,
"e": 599,
"s": 529,
"text": "Suppose the text file from which lines should be read is given below:"
},
{
"code": null,
"e": 620,
"s": 599,
"text": "Original Text File "
},
{
"code": null,
"e": 628,
"s": 620,
"text": "Python3"
},
{
"code": "# defining object file1 to# open GeeksforGeeks file in # read modefile1 = open('GeeksforGeeks.txt', 'r') # defining object file2 to # open GeeksforGeeksUpdated file# in write modefile2 = open('GeeksforGeeksUpdated.txt', 'w') # reading each line from original # text filefor line in file1.readlines(): # reading all lines that do not # begin with \"TextGenerator\" if not (line.startswith('TextGenerator')): # printing those lines print(line) # storing only those lines that # do not begin with \"TextGenerator\" file2.write(line) # close and save the filesfile2.close()file1.close()",
"e": 1303,
"s": 628,
"text": null
},
{
"code": null,
"e": 1311,
"s": 1303,
"text": "Output:"
},
{
"code": null,
"e": 1432,
"s": 1311,
"text": "It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged."
},
{
"code": null,
"e": 1641,
"s": 1432,
"text": "It was popularised in the 1960s with the release of Letraset sheets containing TextGenerator passages, and more recently with desktop publishing software like Albus Potter including versions of TextGenerator."
},
{
"code": null,
"e": 1710,
"s": 1641,
"text": "Updated Text File after removing lines starting with “TextGenerator”"
},
{
"code": null,
"e": 2036,
"s": 1710,
"text": "In the above example, we open a file and read its content line by line. We check if that line begins with given prefix using startswith() method. If that line begins with “TextGenerator” we skip that line, else we print the line and store it in another file. In this way, we could remove lines starting with specified prefix."
},
{
"code": null,
"e": 2059,
"s": 2036,
"text": "Method 2: Using Regex."
},
{
"code": null,
"e": 2280,
"s": 2059,
"text": "In this method we use re module of python which offers a set of metacharacters. Metacharacters are characters with special meaning. To remove lines starting with specified prefix, we use “^” (Starts with) metacharacter. "
},
{
"code": null,
"e": 2367,
"s": 2280,
"text": "We also make use of re.findall() function which returns a list containing all matches."
},
{
"code": null,
"e": 2386,
"s": 2367,
"text": "Original Text File"
},
{
"code": null,
"e": 2394,
"s": 2386,
"text": "Python3"
},
{
"code": "# importing regex moduleimport re # defining object file1 to open# GeeksforGeeks file in read modefile1 = open('GeeksforGeeks.txt', 'r') # defining object file2 to open # GeeksforGeeksUpdated file in# write modefile2 = open('GeeksforGeeksUpdated.txt','w') # reading each line from original# text filefor line in file1.readlines(): # reading all lines that begin # with \"TextGenerator\" x = re.findall(\"^Geeks\", line) if not x: # printing those lines print(line) # storing only those lines that # do not begin with \"TextGenerator\" file2.write(line) # close and save the filesfile1.close()file2.close() ",
"e": 3098,
"s": 2394,
"text": null
},
{
"code": null,
"e": 3106,
"s": 3098,
"text": "Output:"
},
{
"code": null,
"e": 3121,
"s": 3106,
"text": "forGeeksGeeks\n"
},
{
"code": null,
"e": 3139,
"s": 3121,
"text": "Updated Text File"
},
{
"code": null,
"e": 3407,
"s": 3139,
"text": "In the above example, we open a file and read its content line by line. We check if that line begins with “Geeks” using regular exapression. If that line begins with “Geeks” we skip that line, and we print the rest of the lines and store those lines in another file. "
},
{
"code": null,
"e": 3437,
"s": 3407,
"text": "Python file-handling-programs"
},
{
"code": null,
"e": 3458,
"s": 3437,
"text": "python-file-handling"
},
{
"code": null,
"e": 3465,
"s": 3458,
"text": "Python"
},
{
"code": null,
"e": 3563,
"s": 3465,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3595,
"s": 3563,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 3622,
"s": 3595,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 3643,
"s": 3622,
"text": "Python OOPs Concepts"
},
{
"code": null,
"e": 3666,
"s": 3643,
"text": "Introduction To PYTHON"
},
{
"code": null,
"e": 3722,
"s": 3666,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 3753,
"s": 3722,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 3795,
"s": 3753,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 3837,
"s": 3795,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 3876,
"s": 3837,
"text": "Python | datetime.timedelta() function"
}
] |
Deploying an Image Classifier using JavaScript | by Sushrut Ashtikar | Towards Data Science
|
Yet another way of running inference with an image classification model: reduce inference time by deploying it directly in a static website or a Node.js web application.
Deploying a machine learning model can be fun. You set up a hardware- and software-optimized pipeline and, boom, your model is ready for production. But sometimes making an HTTP call to the backend with an image and then returning the results on the frontend is a tedious job, with restrictions such as slow inference when the model is heavy and the server configuration is modest. That's why they say that model deployment is 80% of the work in the field of data science.
TensorFlow.js is an open-source library you can use to define, train, and run machine learning models entirely in the browser, using JavaScript and a high-level layers API. If you're a JavaScript developer who's new to ML, TensorFlow.js is a great way to begin learning. JavaScript is a client-side scripting language; running machine learning programs entirely client-side in the browser unlocks new opportunities, like interactive ML.
I have already trained a simple cat vs dog classifier in Keras. You can refer to my notebook file if you want to train a model yourself. We are going to use the tensorflowjs_converter tool to convert the Keras (.h5) weights to the tfjs format. tensorflowjs_converter ships with the tensorflowjs Python library, so let's install it.
pip install tensorflowjs
Use the tensorflow.js converter to convert the saved Keras model into JSON format (assuming the weights file is named model.h5).
Create a directory to store the converted weights and pass its path (here, jsweights).
tensorflowjs_converter --input_format=keras ./model.h5 ./jsweights
If you did things correctly, you should now have a JSON file named model.json and various .bin files, such as group1-shard1of10.bin. The number of .bin files will depend on the size of your model: the larger your model, the greater the number of .bin files. The model.json file contains the architecture of your model and the .bin files will contain the weights of your model.
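As a side note, the same conversion can be run from Python instead of the command line, using the converter that ships with the tensorflowjs package. A minimal sketch, assuming model is the trained Keras model object from the training notebook:

# Convert a Keras model to TensorFlow.js format from Python
import tensorflowjs as tfjs

tfjs.converters.save_keras_model(model, './jsweights')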
We need to create a js file that contains code for loading a model and making inferences from the given image.
Make sure you set the correct location for loading model.json in your file. Also your prediction function and model loading function should be asynchronous.
// classifier.js
var model;
var predResult = document.getElementById("result");

async function initialize() {
    model = await tf.loadLayersModel('/weights/catsvsdogs/model.json');
}

async function predict() {
    // action for the submit button
    let image = document.getElementById("img");
    let tensorImg = tf.browser.fromPixels(image)
        .resizeNearestNeighbor([150, 150])
        .toFloat()
        .expandDims();
    prediction = await model.predict(tensorImg).data();
    if (prediction[0] === 0) {
        predResult.innerHTML = "I think it's a cat";
    } else if (prediction[0] === 1) {
        predResult.innerHTML = "I think it's a dog";
    } else {
        predResult.innerHTML = "This is Something else";
    }
}

initialize();
Set up an image path inside your HTML file. Just replace the path with the image you want to predict. You need to give only a relative image path.
<img src="./images/courage.jpg">
Add this file and the TensorFlow.js CDN link to your HTML file after the </body> tag.
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"> </script><script src="/classifier.js"></script>
Once our setup is ready, we can load our HTML page and check the predictions.
Point to note
Even though all the files are static, our HTML file still requires a server in order to load the model.json file, just like hosting a webpage. The easiest way to do this is with Python's built-in HTTP server module, which can act as a hosting server for our static webpages. The best part is that we don't need to install any extra library to use the HTTP module. Alternatively, you can use hosting servers like Apache or Nginx; for Windows users, the WAMP server can be useful too.
The Python HTTP server can be started inside the HTML file's directory with the following command.
python3 -m http.server
Once you run the above command, you will get the following output:
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
Open the URL in the browser to see the prediction for your image, produced by the converted js weights.
Here is an output from my model
Here is my GitHub repo link for this project. Try training and converting weights then deploying on the website all by yourself.
[1] Tensorflow.js
[2] Tensorflowjs python library for a converter tool
[3] Cat Dog Image Classification Tutorial from official TensorFlow website
[4] Garfield character reference as a cat.
[5] Courage the Cowardly Dog (My Favourite)
[6] Money Heist characters
and a few cups of coffee 😉
|
[
{
"code": null,
"e": 330,
"s": 172,
"text": "Yet another way of inferencing an image classification model for reducing inference time by directly deploying in a static website or nodejs web application."
},
{
"code": null,
"e": 832,
"s": 330,
"text": "Deploying a machine learning model can be fun. You have to make sure you have setup with hardware and software optimized pipeline and boom your model is ready for production. But sometimes making an HTTP call to the backend with image and then returning results on frontend can be a tedious job. There are so many restrictions like late inference if you are using a heavy model and your server configuration is low.That’s why they say that model deployment is 80% of work in the field of data science."
},
{
"code": null,
"e": 1267,
"s": 832,
"text": "TensorFlow.js, an open-source library you can use to define, train, and run machine learning models entirely in the browser, using Javascript and a high-level layers API. If you’re a Javascript developer who’s new to ML, TensorFlow.js is a great way to begin learning. JavaScript is a client-side scripting language, running machine learning programs entirely client-side in the browser unlocks new opportunities, like interactive ML."
},
{
"code": null,
"e": 1582,
"s": 1267,
"text": "I have already trained a simple cat vs dog classifier on Keras. You can refer to my notebook file if you want to train a model yourself. We are going to use the tensorflowjs_converter tool to convert Keras (.h5) weights to tfjs format.tensorflowjs_converter comes with tensorflowjs python library so let’s install."
},
{
"code": null,
"e": 1607,
"s": 1582,
"text": "pip install tensorflowjs"
},
{
"code": null,
"e": 1725,
"s": 1607,
"text": "Use the tensorflow.js converter to convert the saved Keras model into JSON format. (Assuming name of weight model.h5)"
},
{
"code": null,
"e": 1804,
"s": 1725,
"text": "Create a directory to store converted weights and mention its path (jsweights)"
},
{
"code": null,
"e": 1871,
"s": 1804,
"text": "tensorflowjs_converter --input_format=keras ./model.h5 ./jsweights"
},
{
"code": null,
"e": 2248,
"s": 1871,
"text": "If you did things correctly, you should now have a JSON file named model.json and various .bin files, such as group1-shard1of10.bin. The number of .bin files will depend on the size of your model: the larger your model, the greater the number of .bin files. The model.json file contains the architecture of your model and the .bin files will contain the weights of your model."
},
{
"code": null,
"e": 2359,
"s": 2248,
"text": "We need to create a js file that contains code for loading a model and making inferences from the given image."
},
{
"code": null,
"e": 2516,
"s": 2359,
"text": "Make sure you set the correct location for loading model.json in your file. Also your prediction function and model loading function should be asynchronous."
},
{
"code": null,
"e": 3192,
"s": 2516,
"text": "//classifier.jsvar model;var predResult = document.getElementById(\"result\");async function initialize() { model = await tf.loadLayersModel('/weights/catsvsdogs/model.json');}async function predict() { // action for the submit buttonlet image = document.getElementById(\"img\") let tensorImg = tf.browser.fromPixels(image).resizeNearestNeighbor([150, 150]).toFloat().expandDims(); prediction = await model.predict(tensorImg).data();if (prediction[0] === 0) { predResult.innerHTML = \"I think it's a cat\";} else if (prediction[0] === 1) { predResult.innerHTML = \"I think it's a dog\";} else { predResult.innerHTML = \"This is Something else\"; }}initialize();"
},
{
"code": null,
"e": 3338,
"s": 3192,
"text": "Setup an image path inside your Html file. Just replace the path with the image you want to predict. You need to give only a relative image path."
},
{
"code": null,
"e": 3371,
"s": 3338,
"text": "<img src=\"./images/courage.jpg\">"
},
{
"code": null,
"e": 3448,
"s": 3371,
"text": "Add this file and tensorflowjs CDN link in your HTML file after </body> tag."
},
{
"code": null,
"e": 3572,
"s": 3448,
"text": "<script src=\"https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js\"> </script><script src=\"/classifier.js\"></script>"
},
{
"code": null,
"e": 3661,
"s": 3572,
"text": "Once our setup is completely ready, we can load our Html page and check the predictions."
},
{
"code": null,
"e": 3679,
"s": 3661,
"text": "Point to be noted"
},
{
"code": null,
"e": 4154,
"s": 3679,
"text": "Even though all the files are static, still our Html file will require a server to load the model.json file. Just like hosting a webpage. The easiest way to do this is with a python HTTP server module that can act as a hosting server for our static webpages. The best part is that we don’t need to install any extra library for loading the HTTP module.Alternatively you can also use hosting servers like apache or Nginx. For windows users, the WAMP server can be useful too."
},
{
"code": null,
"e": 4248,
"s": 4154,
"text": "Python HTTP server can be started inside the Html files directory with the following command."
},
{
"code": null,
"e": 4271,
"s": 4248,
"text": "python3 -m http.server"
},
{
"code": null,
"e": 4338,
"s": 4271,
"text": "Once you run the above command, you will get the following output:"
},
{
"code": null,
"e": 4399,
"s": 4338,
"text": "Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ..."
},
{
"code": null,
"e": 4485,
"s": 4399,
"text": "Open URL in the browser to see the output of your image predicted by your js weights."
},
{
"code": null,
"e": 4517,
"s": 4485,
"text": "Here is an output from my model"
},
{
"code": null,
"e": 4646,
"s": 4517,
"text": "Here is my GitHub repo link for this project. Try training and converting weights then deploying on the website all by yourself."
},
{
"code": null,
"e": 4657,
"s": 4646,
"text": "github.com"
},
{
"code": null,
"e": 4675,
"s": 4657,
"text": "[1] Tensorflow.js"
},
{
"code": null,
"e": 4728,
"s": 4675,
"text": "[2] Tensorflowjs python library for a converter tool"
},
{
"code": null,
"e": 4803,
"s": 4728,
"text": "[3] Cat Dog Image Classification Tutorial from official TensorFlow website"
},
{
"code": null,
"e": 4846,
"s": 4803,
"text": "[4] Garfield character reference as a cat."
},
{
"code": null,
"e": 4890,
"s": 4846,
"text": "[5] Courage the Cowardly Dog (My Favourite)"
},
{
"code": null,
"e": 4917,
"s": 4890,
"text": "[5] Money Heist characters"
}
] |
C | Storage Classes and Type Qualifiers | Question 9 - GeeksforGeeks
|
28 Jun, 2021
Output?
#include <stdio.h>
int fun()
{
    static int num = 16;
    return num--;
}

int main()
{
    for (fun(); fun(); fun())
        printf("%d ", fun());
    return 0;
}
(A) Infinite loop
(B) 13 10 7 4 1
(C) 14 11 8 5 2
(D) 15 12 8 5 2
Answer: (C)
Explanation: Since num is static in fun(), the old value of num is preserved across subsequent function calls. Also, since the statement return num-- uses the postfix operator, it returns the old value of num and then updates it for the next function call.
fun() called first time: num = 16 // for loop initialization done;
In test condition, compiler checks for non zero value
fun() called again : num = 15
printf("%d \n", fun());:num=14 ->printed
Increment/decrement condition check
fun(); called again : num = 13
----------------
fun() called second time: num: 13
In test condition,compiler checks for non zero value
fun() called again : num = 12
printf("%d \n", fun());:num=11 ->printed
fun(); called again : num = 10
--------
fun() called second time : num = 10
In test condition,compiler checks for non zero value
fun() called again : num = 9
printf("%d \n", fun());:num=8 ->printed
fun(); called again : num = 7
--------------------------------
fun() called second time: num = 7
In test condition,compiler checks for non zero value
fun() called again : num = 6
printf("%d \n", fun());:num=5 ->printed
fun(); called again : num = 4
-----------
fun() called second time: num: 4
In test condition,compiler checks for non zero value
fun() called again : num = 3
printf("%d \n", fun());:num=2 ->printed
fun(); called again : num = 1
----------
fun() called second time: num: 1
In test condition,compiler checks for non zero value
fun() called again : num = 0 => STOP
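The same behaviour can be reproduced with a short Python simulation (an added illustration, not part of the original question); running it prints 14 11 8 5 2, confirming option (C):

num = 16

def fun():
    # Mimics the C function: postfix num-- returns the old value,
    # then decrements the static variable.
    global num
    old = num
    num -= 1
    return old

out = []
fun()                   # for-loop initialisation
while fun():            # test condition (non-zero means true in C)
    out.append(fun())   # printf("%d ", fun());
    fun()               # increment/decrement expression

print(" ".join(map(str, out)))   # 14 11 8 5 2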
|
[
{
"code": null,
"e": 25014,
"s": 24986,
"text": "\n28 Jun, 2021"
},
{
"code": null,
"e": 25022,
"s": 25014,
"text": "Output?"
},
{
"code": "#include <stdio.h>int fun(){ static int num = 16; return num--;} int main(){ for(fun(); fun(); fun()) printf(\"%d \", fun()); return 0;}",
"e": 25165,
"s": 25022,
"text": null
},
{
"code": null,
"e": 25476,
"s": 25165,
"text": "(A) Infinite loop(B) 13 10 7 4 1(C) 14 11 8 5 2(D) 15 12 8 5 2Answer: (C)Explanation: Since num is static in fun(), the old value of num is preserved for subsequent functions calls. Also, since the statement return num– is postfix, it returns the old value of num, and updates the value for next function call."
},
{
"code": null,
"e": 26733,
"s": 25476,
"text": "fun() called first time: num = 16 // for loop initialization done;\n\n\nIn test condition, compiler checks for non zero value\n\nfun() called again : num = 15\n\nprintf(\"%d \\n\", fun());:num=14 ->printed\n\nIncrement/decrement condition check\n\nfun(); called again : num = 13\n\n----------------\n\nfun() called second time: num: 13 \n\nIn test condition,compiler checks for non zero value\n\nfun() called again : num = 12\n\nprintf(\"%d \\n\", fun());:num=11 ->printed\n\nfun(); called again : num = 10\n\n--------\n\nfun() called second time : num = 10 \n\nIn test condition,compiler checks for non zero value\n\nfun() called again : num = 9\n\nprintf(\"%d \\n\", fun());:num=8 ->printed\n\nfun(); called again : num = 7\n\n--------------------------------\n\nfun() called second time: num = 7\n\nIn test condition,compiler checks for non zero value\n\nfun() called again : num = 6\n\nprintf(\"%d \\n\", fun());:num=5 ->printed\n\nfun(); called again : num = 4\n\n-----------\n\nfun() called second time: num: 4 \n\nIn test condition,compiler checks for non zero value\n\nfun() called again : num = 3\n\nprintf(\"%d \\n\", fun());:num=2 ->printed\n\nfun(); called again : num = 1\n\n----------\n\nfun() called second time: num: 1 \n\nIn test condition,compiler checks for non zero value\n\nfun() called again : num = 0 => STOP "
},
{
"code": null,
"e": 26755,
"s": 26733,
"text": "Quiz of this Question"
},
{
"code": null,
"e": 26793,
"s": 26755,
"text": "C-Storage Classes and Type Qualifiers"
},
{
"code": null,
"e": 26829,
"s": 26793,
"text": "Storage Classes and Type Qualifiers"
},
{
"code": null,
"e": 26840,
"s": 26829,
"text": "C Language"
},
{
"code": null,
"e": 26847,
"s": 26840,
"text": "C Quiz"
},
{
"code": null,
"e": 26945,
"s": 26847,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26954,
"s": 26945,
"text": "Comments"
},
{
"code": null,
"e": 26967,
"s": 26954,
"text": "Old Comments"
},
{
"code": null,
"e": 27002,
"s": 26967,
"text": "Multidimensional Arrays in C / C++"
},
{
"code": null,
"e": 27030,
"s": 27002,
"text": "rand() and srand() in C/C++"
},
{
"code": null,
"e": 27042,
"s": 27030,
"text": "fork() in C"
},
{
"code": null,
"e": 27082,
"s": 27042,
"text": "Core Dump (Segmentation fault) in C/C++"
},
{
"code": null,
"e": 27128,
"s": 27082,
"text": "Left Shift and Right Shift Operators in C/C++"
},
{
"code": null,
"e": 27170,
"s": 27128,
"text": "Compiling a C program:- Behind the Scenes"
},
{
"code": null,
"e": 27213,
"s": 27170,
"text": "Operator Precedence and Associativity in C"
},
{
"code": null,
"e": 27246,
"s": 27213,
"text": "C | Pointer Basics | Question 15"
},
{
"code": null,
"e": 27281,
"s": 27246,
"text": "C | Structure & Union | Question 4"
}
] |
Find length of Diagonal of Hexagon - GeeksforGeeks
|
08 Mar, 2022
Given a regular hexagon of side length a, the task is to find the length of its diagonal. Examples:
Input : a = 4
Output : 8
Input : a = 7
Output : 14
From the diagram, it is clear that triangle ABC is an equilateral triangle, so AB = AC = BC = a. It follows that the (long) diagonal = 2*AC or 2*BC, so the length of the diagonal of the hexagon = 2*a. Below is the implementation of the above approach:
C++
C
Java
Python3
C#
PHP
Javascript
// C++ Program to find the diagonal
// of the hexagon
#include <bits/stdc++.h>
using namespace std;

// Function to find the diagonal
// of the hexagon
float hexadiagonal(float a)
{
    // side cannot be negative
    if (a < 0)
        return -1;

    // diagonal of the hexagon
    return 2 * a;
}

// Driver code
int main()
{
    float a = 4;
    cout << hexadiagonal(a) << endl;
    return 0;
}

// C Program to find the diagonal
// of the hexagon
#include <stdio.h>

// Function to find the diagonal
// of the hexagon
float hexadiagonal(float a)
{
    // side cannot be negative
    if (a < 0)
        return -1;

    // diagonal of the hexagon
    return 2 * a;
}

// Driver code
int main()
{
    float a = 4;
    printf("%f\n", hexadiagonal(a));
    return 0;
}

// Java Program to find the diagonal
// of the hexagon
import java.io.*;

class GFG {

    // Function to find the diagonal
    // of the hexagon
    static float hexadiagonal(float a)
    {
        // side cannot be negative
        if (a < 0)
            return -1;

        // diagonal of the hexagon
        return 2 * a;
    }

    // Driver code
    public static void main(String[] args)
    {
        float a = 4;
        System.out.print(hexadiagonal(a));
    }
}
// This code is contributed
// by shs

# Python3 Program to find the diagonal
# of the hexagon

# Function to find the diagonal
# of the hexagon
def hexadiagonal(a):

    # side cannot be negative
    if (a < 0):
        return -1

    # diagonal of the hexagon
    return 2 * a

# Driver code
if __name__=='__main__':
    a = 4
    print(hexadiagonal(a))

# This code is contributed by
# Shivi_Aggarwal

// C# Program to find the diagonal
// of the hexagon
using System;

class GFG {

    // Function to find the diagonal
    // of the hexagon
    static float hexadiagonal(float a)
    {
        // side cannot be negative
        if (a < 0)
            return -1;

        // diagonal of the hexagon
        return 2 * a;
    }

    // Driver code
    public static void Main()
    {
        float a = 4;
        Console.WriteLine(hexadiagonal(a));
    }
}
// This code is contributed
// by anuj_67..

<?php
// PHP Program to find the diagonal
// of the hexagon

// Function to find the diagonal
// of the hexagon
function hexadiagonal($a)
{
    // side cannot be negative
    if ($a < 0)
        return -1;

    // diagonal of the hexagon
    return 2 * $a;
}

// Driver code
$a = 4;
echo hexadiagonal($a);

// This code is contributed
// by anuj_67..
?>

<script>
// javascript Program to find the diagonal
// of the hexagon

// Function to find the diagonal
// of the hexagon
function hexadiagonal(a)
{
    // side cannot be negative
    if (a < 0)
        return -1;

    // diagonal of the hexagon
    return 2 * a;
}

// Driver code
var a = 4;
document.write(hexadiagonal(a));

// This code is contributed by 29AjayKumar
</script>
8
|
[
{
"code": null,
"e": 24844,
"s": 24816,
"text": "\n08 Mar, 2022"
},
{
"code": null,
"e": 24946,
"s": 24844,
"text": "Given a regular hexagon of side length a, the task is to find the length of it’s diagonal.Examples: "
},
{
"code": null,
"e": 24998,
"s": 24946,
"text": "Input : a = 4\nOutput : 8\n\nInput : a = 7\nOutput : 14"
},
{
"code": null,
"e": 25245,
"s": 25002,
"text": "From the diagram, it is clear that the triangle ABC is an equilateral triangle, so AB = AC = BC = a.also it is obvious, diagonal = 2*AC or 2*BC So the length of diagonal of the hexagon = 2*aBelow is the implementation of the above approach: "
},
{
"code": null,
"e": 25249,
"s": 25245,
"text": "C++"
},
{
"code": null,
"e": 25251,
"s": 25249,
"text": "C"
},
{
"code": null,
"e": 25256,
"s": 25251,
"text": "Java"
},
{
"code": null,
"e": 25264,
"s": 25256,
"text": "Python3"
},
{
"code": null,
"e": 25267,
"s": 25264,
"text": "C#"
},
{
"code": null,
"e": 25271,
"s": 25267,
"text": "PHP"
},
{
"code": null,
"e": 25282,
"s": 25271,
"text": "Javascript"
},
{
"code": "// C++ Program to find the diagonal// of the hexagon#include <bits/stdc++.h>using namespace std; // Function to find the diagonal// of the hexagonfloat hexadiagonal(float a){ // side cannot be negative if (a < 0) return -1; // diagonal of the hexagon return 2 * a;} // Driver codeint main(){ float a = 4; cout << hexadiagonal(a) << endl; return 0;}",
"e": 25662,
"s": 25282,
"text": null
},
{
"code": "// C Program to find the diagonal// of the hexagon#include <stdio.h> // Function to find the diagonal// of the hexagonfloat hexadiagonal(float a){ // side cannot be negative if (a < 0) return -1; // diagonal of the hexagon return 2 * a;} // Driver codeint main(){ float a = 4; printf(\"%f\\n\",hexadiagonal(a)); return 0;}",
"e": 26013,
"s": 25662,
"text": null
},
{
"code": "// Java Program to find the diagonal// of the hexagon import java.io.*; class GFG { // Function to find the diagonal// of the hexagonstatic float hexadiagonal(float a){ // side cannot be negative if (a < 0) return -1; // diagonal of the hexagon return 2 * a;} // Driver code public static void main (String[] args) { float a = 4; System.out.print( hexadiagonal(a));}} // This code is contributed// by shs",
"e": 26458,
"s": 26013,
"text": null
},
{
"code": "# Python3 Program to find the diagonal# of the hexagon # Function to find the diagonal# of the hexagondef hexadiagonal(a): # side cannot be negative if (a < 0): return -1 # diagonal of the hexagon return 2 * a # Driver codeif __name__=='__main__': a = 4 print(hexadiagonal(a)) # This code is contributed by# Shivi_Aggarwal",
"e": 26809,
"s": 26458,
"text": null
},
{
"code": "// C# Program to find the diagonal// of the hexagonusing System; class GFG { // Function to find the diagonal // of the hexagon static float hexadiagonal(float a) { // side cannot be negative if (a < 0) return -1; // diagonal of the hexagon return 2 * a; } // Driver codepublic static void Main(){ float a = 4; Console.WriteLine( hexadiagonal(a));}} // This code is contributed// by anuj_67..",
"e": 27270,
"s": 26809,
"text": null
},
{
"code": "<?php// PHP Program to find the diagonal// of the hexagon // Function to find the diagonal// of the hexagonfunction hexadiagonal($a){ // side cannot be negative if ($a < 0) return -1; // diagonal of the hexagon return 2 * $a;} // Driver code $a = 4; echo hexadiagonal($a); // This code is contributed// by anuj_67.. ?>",
"e": 27619,
"s": 27270,
"text": null
},
{
"code": "<script>// javascript Program to find the diagonal// of the hexagon // Function to find the diagonal// of the hexagonfunction hexadiagonal(a){ // side cannot be negative if (a < 0) return -1; // diagonal of the hexagon return 2 * a;} // Driver code var a = 4;document.write( hexadiagonal(a)); // This code is contributed by 29AjayKumar</script>",
"e": 27985,
"s": 27619,
"text": null
},
{
"code": null,
"e": 27987,
"s": 27985,
"text": "8"
},
{
"code": null,
"e": 28000,
"s": 27989,
"text": "Shashank12"
},
{
"code": null,
"e": 28005,
"s": 28000,
"text": "vt_m"
},
{
"code": null,
"e": 28020,
"s": 28005,
"text": "Shivi_Aggarwal"
},
{
"code": null,
"e": 28032,
"s": 28020,
"text": "29AjayKumar"
},
{
"code": null,
"e": 28047,
"s": 28032,
"text": "kothavvsaakash"
},
{
"code": null,
"e": 28060,
"s": 28047,
"text": "C++ Programs"
},
{
"code": null,
"e": 28070,
"s": 28060,
"text": "Geometric"
},
{
"code": null,
"e": 28083,
"s": 28070,
"text": "Mathematical"
},
{
"code": null,
"e": 28096,
"s": 28083,
"text": "Mathematical"
},
{
"code": null,
"e": 28106,
"s": 28096,
"text": "Geometric"
},
{
"code": null,
"e": 28204,
"s": 28106,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28230,
"s": 28204,
"text": "C++ Program for QuickSort"
},
{
"code": null,
"e": 28260,
"s": 28230,
"text": "CSV file management using C++"
},
{
"code": null,
"e": 28282,
"s": 28260,
"text": "delete keyword in C++"
},
{
"code": null,
"e": 28293,
"s": 28282,
"text": "cin in C++"
},
{
"code": null,
"e": 28327,
"s": 28293,
"text": "Shallow Copy and Deep Copy in C++"
},
{
"code": null,
"e": 28385,
"s": 28327,
"text": "Closest Pair of Points using Divide and Conquer algorithm"
},
{
"code": null,
"e": 28436,
"s": 28385,
"text": "How to check if two given line segments intersect?"
},
{
"code": null,
"e": 28500,
"s": 28436,
"text": "How to check if a given point lies inside or outside a polygon?"
},
{
"code": null,
"e": 28553,
"s": 28500,
"text": "Convex Hull | Set 1 (Jarvis's Algorithm or Wrapping)"
}
] |
How to enable Pandas to access BigQuery from a service account | by Lak Lakshmanan | Towards Data Science
|
Service accounts are a way to keep a tight leash on what your applications in Google Cloud Platform are doing. Instead of running your applications with all the permissions your user account has, you typically want the application to have extremely constrained access to your organization’s resources.
Let’s say that you’d like Pandas to run a query against BigQuery. You can use the read_gbq function of Pandas (available in the pandas-gbq package):
import pandas as pd

query = """
SELECT year, COUNT(1) as num_babies
FROM publicdata.samples.natality
WHERE year > 2000
GROUP BY year
"""

df = pd.read_gbq(query, project_id='MY_PROJECT_ID', dialect='standard')
print(df.head())
This will work if you run it locally (because it uses your identity, and presumably you have the ability to run queries in your project).
But if you try running the above code from a bare-bones account, it won’t work. You will get asked to go through an OAuth2 workflow to specifically authorize the application. Why? Because you don’t want an arbitrary application running up a BigQuery bill (or worse, accessing your corporate data), so you have to provide it the permissions it needs.
Let’s try it out.
To follow along with me, use the GCP web console and create a Google Compute Engine instance with the default access (this typically includes only the bare basics and doesn’t include access to BigQuery):
From the GCP web console, create a new service account. This is the account for which you will be generating a private key. For extra security and auditing, I recommend creating a brand new service account for each application and not reusing service accounts between applications.
Go to IAM & Admin, select “Service accounts” and click on +Create Service Account. Fill out the form as follows:
The roles above allow the service account to run queries and have those be billed to the project. However, the dataset owner still needs to allow the service account to view their datasets.
If you are using a non-public BigQuery dataset, give the service account the appropriate (typically just View) access to it by going to the BigQuery console and sharing the dataset with the service account’s email address. For the purposes of this tutorial, I will use a public BigQuery dataset, so we can skip this step.
SSH to the GCE instance and on the command-line, type in the following commands:
sudo apt-get install -y git python-pip
git clone https://github.com/GoogleCloudPlatform/training-data-analyst
sudo pip install pandas-gbq==0.4.1
Change the project id in nokey_query.py and then run it:
cd training-data-analyst/blogs/pandas-pvtkey
# EDIT nokey_query.py to set the PROJECT ID
python nokey_query.py
It will ask you to go through an OAuth2 workflow. Hit Ctrl-C to exit. You don’t want this application running with *your* credentials.
A common suggestion you will hear when you run into such an error is to run:
# BAD IDEA! DO NOT DO THIS
gcloud auth application-default login
and go through the interactive OAuth 2 workflow in the shell before launching the application. Be careful about doing this: (1) the above command is a bazooka and allows the application to do anything you can do; (2) it is not scriptable, so you will have to do it every time you run the application.
The best approach is to create the Compute Engine VM, not with the bare-bones service account, but with the service account to which you have given BigQuery viewer permissions. In other words, swap steps 1 and 2, and when you create the GCE instance, use the newly created service account.
But what if you are not on GCP (so your machine isn’t created by the service account auth) or you are using a managed service such as Cloud Dataflow or Cloud ML Engine (and so the VMs are created by that service’s service account)? In that case, a better approach is to change the Pandas code as follows:
df = pd.read_gbq(query,
                 project_id='MY_PROJECT_ID',
                 private_key='path_to/privatekey.json',
                 dialect='standard',
                 verbose=False)
The application will now use the permissions that are associated with this private key.
Change the project id in query.py and then run it:
cd training-data-analyst/blogs/pandas-pvtkey
# EDIT query.py to set the PROJECT ID
python query.py
It will fail saying the private key wasn’t found
Remember that you created a JSON private key when you created the service account? You need to upload it to the GCE VM. (In case you didn’t create the key file, navigate to IAM > Service Accounts and create a private key for the pandas service account. This will create a JSON file and download it to your local computer. You can also revoke keys from here.)
In the top-right of the SSH window, there is a way to upload files. Upload the generated JSON file to the GCE instance and move it into place:
mv ~/*.json trainer/privatekey.json
python query.py
It will work now and you will get back the results of the query.
What if you are not using GCE, but are using a managed service (Cloud Dataflow, Cloud ML Engine, etc.) instead?
In that case, you will be submitting a Python package. Mark the private key as a resource in the setup.py using the package_data attribute:
from setuptools import setup

setup(name='trainer',
      version='1.0',
      description='Showing how to use private key',
      url='http://github.com/GoogleCloudPlatform/training-data-analyst',
      author='Google',
      author_email='[email protected]',
      license='Apache2',
      packages=['trainer'],
      ## WARNING! Do not upload this package to PyPI
      ## BECAUSE it contains a private key
      package_data={'trainer': ['privatekey.json']},
      install_requires=[
          'pandas-gbq==0.4.1',
          'urllib3',
          'google-cloud-bigquery'
      ],
      zip_safe=False)
Then, instead of hardcoding the path to the privatekey.json file, do this:
private_key = pkgutil.get_data('trainer', 'privatekey.json')
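Putting the pieces together, a minimal sketch of how the packaged key could be handed to read_gbq (pandas-gbq accepts private_key either as a file path or as the key's JSON contents; the decode call and module layout are my assumptions, not the repository's exact code):

import pkgutil
import pandas as pd

query = """
SELECT year, COUNT(1) as num_babies
FROM publicdata.samples.natality
WHERE year > 2000
GROUP BY year
"""

# Read the packaged key and pass its contents (as a str) straight to read_gbq.
private_key = pkgutil.get_data('trainer', 'privatekey.json').decode('utf-8')

df = pd.read_gbq(query,
                 project_id='MY_PROJECT_ID',
                 private_key=private_key,
                 dialect='standard')
print(df.head())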
Here is a more complete example. Note that Python doesn’t have a way of marking packages as private, so you should be careful not to mistakenly publish this package in a public repository such as PyPI.
The private key essentially unlocks, for whoever presents it, the resources that have been made available. In this case, anyone who presents the private key will be able to make BigQuery queries. No second form of authentication (IP address, user login, hardware token, etc.) is required. So, be careful about how/where you store the private key. What I recommend:
(1) Make sure to have a .gitignore in the Python package that explicitly ignores the private key:
$cat trainer/.gitignore
privatekey.json
(2) Use git secrets to help safeguard against key leakage
(3) Rotate your keys. See this blog for details.
(4) Make sure to not publish the Python package to any repository of Python packages, as yours contains a private key.
Use the JSON private_key attribute to restrict the access of your Pandas code to BigQuery.
Create a service account with barebones permissions
Share specific BigQuery datasets with the service account
Generate a private key for the service account
Upload the private key to the GCE instance or add the private key to the submittable Python package
Make sure you don’t check in the private key into code or package repositories. Use key rotator and git secrets.
Here is the example code of this article in GitHub. Happy coding!
Acknowledgment: Thanks to my colleagues Tim Swast and Grace Mollison for their help and suggestions. Any errors in the article are my own.
|
[
{
"code": null,
"e": 474,
"s": 172,
"text": "Service accounts are a way to keep a tight leash on what your applications in Google Cloud Platform are doing. Instead of running your applications with all the permissions your user account has, you typically want the application to have extremely constrained access to your organization’s resources."
},
{
"code": null,
"e": 618,
"s": 474,
"text": "Let’s say that you’d like Pandas to run a query against BigQuery. You can use the the read_gbq of Pandas (available in the pandas-gbq package):"
},
{
"code": null,
"e": 881,
"s": 618,
"text": "import pandas as pdquery = \"\"\"SELECT year, COUNT(1) as num_babiesFROM publicdata.samples.natalityWHERE year > 2000GROUP BY year\"\"\"df = pd.read_gbq(query, project_id='MY_PROJECT_ID', dialect='standard')print(df.head())"
},
{
"code": null,
"e": 1019,
"s": 881,
"text": "This will work if you run it locally (because it uses your identity, and presumably you have the ability to run queries in your project)."
},
{
"code": null,
"e": 1369,
"s": 1019,
"text": "But if you try running the above code from a bare-bones account, it won’t work. You will get asked to go through an OAuth2 workflow to specifically authorize the application. Why? Because you don’t want an arbitrary application running up a BigQuery bill (or worse, accessing your corporate data), so you have to provide it the permissions it needs."
},
{
"code": null,
"e": 1387,
"s": 1369,
"text": "Let’s try it out."
},
{
"code": null,
"e": 1591,
"s": 1387,
"text": "To follow along with me, use the GCP web console and create a Google Compute Engine instance with the default access (this typically includes only the bare basics and doesn’t include access to BigQuery):"
},
{
"code": null,
"e": 1873,
"s": 1591,
"text": "From the GCP web console, create a new service account. This is the account for which you will be generating a private key. For extra security and auditing, I recommend creating a brand new service account for each application and not reusing service accounts between applications."
},
{
"code": null,
"e": 1986,
"s": 1873,
"text": "Go to IAM & Admin, select “Service accounts” and click on +Create Service Account. Fill out the form as follows:"
},
{
"code": null,
"e": 2176,
"s": 1986,
"text": "The roles above allow the service account to run queries and have those be billed to the project. However, the dataset owner still needs to allow the service account to view their datasets."
},
{
"code": null,
"e": 2498,
"s": 2176,
"text": "If you are using a non-public BigQuery dataset, give the service account the appropriate (typically just View) access to it by going to the BigQuery console and sharing the dataset with the service account’s email address. For the purposes of this tutorial, I will use a public BigQuery dataset, so we can skip this step."
},
{
"code": null,
"e": 2579,
"s": 2498,
"text": "SSH to the GCE instance and on the command-line, type in the following commands:"
},
{
"code": null,
"e": 2722,
"s": 2579,
"text": "sudo apt-get install -y git python-pipgit clone https://github.com/GoogleCloudPlatform/training-data-analystsudo pip install pandas-gbq==0.4.1"
},
{
"code": null,
"e": 2779,
"s": 2722,
"text": "Change the project id in nokey_query.py and then run it:"
},
{
"code": null,
"e": 2888,
"s": 2779,
"text": "cd training-data-analyst/blogs/pandas-pvtkey# EDIT nokey_query.py to set the PROJECT IDpython nokey_query.py"
},
{
"code": null,
"e": 3023,
"s": 2888,
"text": "It will ask you to go through an OAuth2 workflow. Hit Ctrl-C to exit. You don’t want this application running with *your* credentials."
},
{
"code": null,
"e": 3100,
"s": 3023,
"text": "A common suggestion you will hear when you run into such an error is to run:"
},
{
"code": null,
"e": 3164,
"s": 3100,
"text": "# BAD IDEA! DO NOT DO THISgcloud auth application-default login"
},
{
"code": null,
"e": 3461,
"s": 3164,
"text": "and go through the interactive OAuth 2 workflow in the shell before launching the application. Be careful about doing this: (1) the above command is a bazooka and allow the application to do anything you can do. (2) It is not scriptable. You will have to do it every time you run the application."
},
{
"code": null,
"e": 3751,
"s": 3461,
"text": "The best approach is to create the Compute Engine VM, not with the bare-bones service account, but with the service account to which you have given BigQuery viewer permissions. In other words, swap steps 1 and 2, and when you create the GCE instance, use the newly created service account."
},
{
"code": null,
"e": 4056,
"s": 3751,
"text": "But what if you are not on GCP (so your machine isn’t created by the service account auth) or you are using a managed service such as Cloud Dataflow or Cloud ML Engine (and so the VMs are created by that service’s service account)? In that case, a better approach is to change the Pandas code as follows:"
},
{
"code": null,
"e": 4266,
"s": 4056,
"text": " df = pd.read_gbq(query, project_id='MY_PROJECT_ID', private_key='path_to/privatekey.json', dialect='standard', verbose=False)"
},
{
"code": null,
"e": 4354,
"s": 4266,
"text": "The application will now use the permissions that are associated with this private key."
},
{
"code": null,
"e": 4405,
"s": 4354,
"text": "Change the project id in query.py and then run it:"
},
{
"code": null,
"e": 4503,
"s": 4405,
"text": "cd training-data-analyst/blogs/pandas-pvtkey# EDIT query.py to set the PROJECT IDpython query.py "
},
{
"code": null,
"e": 4552,
"s": 4503,
"text": "It will fail saying the private key wasn’t found"
},
{
"code": null,
"e": 4911,
"s": 4552,
"text": "Remember that you created a JSON private key when you created the service account? You need to upload it to the GCE VM. (In case you didn’t create the key file, navigate to IAM > Service Accounts and create a private key for the pandas service account. This will create a JSON file and download it to your local computer. You can also revoke keys from here.)"
},
{
"code": null,
"e": 5054,
"s": 4911,
"text": "In the top-right of the SSH window, there is a way to upload files. Upload the generated JSON file to the GCE instance and move it into place:"
},
{
"code": null,
"e": 5090,
"s": 5054,
"text": "mv ~/*.json trainer/privatekey.json"
},
{
"code": null,
"e": 5106,
"s": 5090,
"text": "python query.py"
},
{
"code": null,
"e": 5171,
"s": 5106,
"text": "It will work now and you will get back the results of the query."
},
{
"code": null,
"e": 5283,
"s": 5171,
"text": "What if you are not using GCE, but are using a managed service (Cloud Dataflow, Cloud ML Engine, etc.) instead?"
},
{
"code": null,
"e": 5423,
"s": 5283,
"text": "In that case, you will be submitting a Python package. Mark the private key as a resource in the setup.py using the package_data attribute:"
},
{
"code": null,
"e": 6010,
"s": 5423,
"text": "from setuptools import setupsetup(name='trainer', version='1.0', description='Showing how to use private key', url='http://github.com/GoogleCloudPlatform/training-data-analyst', author='Google', author_email='[email protected]', license='Apache2', packages=['trainer'], ## WARNING! Do not upload this package to PyPI ## BECAUSE it contains a private key package_data={'trainer': ['privatekey.json']}, install_requires=[ 'pandas-gbq==0.4.1', 'urllib3', 'google-cloud-bigquery' ], zip_safe=False)"
},
{
"code": null,
"e": 6085,
"s": 6010,
"text": "Then, instead of hardcoding the path to the privatekey.json file, do this:"
},
{
"code": null,
"e": 6146,
"s": 6085,
"text": "private_key = pkgutil.get_data('trainer', 'privatekey.json')"
},
{
"code": null,
"e": 6349,
"s": 6146,
"text": "Here is a more complete example . Note that Python doesn’t have a way of marking packages as private, so you should be careful not to mistakenly publish this package in a public repository such as PyPI."
},
{
"code": null,
"e": 6714,
"s": 6349,
"text": "The private key essentially unlocks, for whoever presents it, the resources that have been made available. In this case, anyone who presents the private key will be able to make BigQuery queries. No second form of authentication (IP address, user login, hardware token, etc.) is required. So, be careful about how/where you store the private key. What I recommend:"
},
{
"code": null,
"e": 6812,
"s": 6714,
"text": "(1) Make sure to have a .gitignore in the Python package that explicitly ignores the private key:"
},
{
"code": null,
"e": 6851,
"s": 6812,
"text": "$cat trainer/.gitignoreprivatekey.json"
},
{
"code": null,
"e": 6909,
"s": 6851,
"text": "(2) Use git secrets to help safeguard against key leakage"
},
{
"code": null,
"e": 6958,
"s": 6909,
"text": "(3) Rotate your keys. See this blog for details."
},
{
"code": null,
"e": 7077,
"s": 6958,
"text": "(4) Make sure to not publish the Python package to any repository of Python packages, as yours contains a private key."
},
{
"code": null,
"e": 7168,
"s": 7077,
"text": "Use the JSON private_key attribute to restrict the access of your Pandas code to BigQuery."
},
{
"code": null,
"e": 7220,
"s": 7168,
"text": "Create a service account with barebones permissions"
},
{
"code": null,
"e": 7278,
"s": 7220,
"text": "Share specific BigQuery datasets with the service account"
},
{
"code": null,
"e": 7325,
"s": 7278,
"text": "Generate a private key for the service account"
},
{
"code": null,
"e": 7425,
"s": 7325,
"text": "Upload the private key to the GCE instance or add the private key to the submittable Python package"
},
{
"code": null,
"e": 7538,
"s": 7425,
"text": "Make sure you don’t check in the private key into code or package repositories. Use key rotator and git secrets."
},
{
"code": null,
"e": 7604,
"s": 7538,
"text": "Here is the example code of this article in GitHub. Happy coding!"
}
] |
Building a Biomedical Knowledge Graph | by Daniel Crowe | Towards Data Science
|
Debrief from a Vaticle Community talk — featuring Konrad Myśliwiec, Scientist, Systems Biology, at Roche. This talk was delivered virtually at Orbit 2021 in April.
Konrad, like so many TypeDB community members, comes from a diverse engineering background. Knowledge graphs have been part of his scope since working on an enterprise knowledge graph for GSK. He’s been a part of the TypeDB community for roughly 3 years. While most of his career has been spent in the biomedical industry, he’s spent time working on business intelligence applications, developing mobile apps and currently as a data science engineer in the RGITSC (Roche Global IT Solutions Centre) for Roche Pharmaceuticals.
Over the last year, Konrad started to notice a trend online, whether it was articles and posts on LinkedIn or amongst his biomedical network; knowledge graphs were everywhere. Given that the world was in the midst of grappling with the COVID-19 pandemic, he wanted to know if a knowledge graph could aid in the efforts of his bio-peers.
In developing BioGrakn Covid, he sought to bring these two topics together to provide a more digestible and useful way to support the research and biomedical fight during the pandemic. In this talk, he describes how he and a group of TypeDB community members approached some of the technical development areas.
BioGrakn Covid — what is it, and what is the goal of the project
Publically available data sources and ontologies — what data was loaded into BioGrakn Covid
Knowledge extraction — working with unstructured data
Analytical reasoning — how to perform some analysis on a knowledge graph
BioGrakn Covid is an open-source project started by Konrad, Tomás Sabat from Vaticle, and Kim Wager from GSK. This is a database centred around COVID-19.
Modelling biomedical data as a graph becomes the obvious choice once we realise that the natural representation of biomedical data tends to be graph-like. Graph databases traditionally represent data as binary nodes and edges. Think labelled property graphs, where two nodes are connected via a directional edge. What Konrad and co. quickly realised was that it becomes much simpler and ultimately more natural to represent biomedical data as a hypergraph, such as that found in TypeDB.
Hypergraphs generalise the common notion of graphs by relaxing the definition of edges. An edge in a graph is simply a pair of vertices. Instead, a hyperedge in a hypergraph is a set of vertices. Such sets of vertices can be further structured, following some additional restrictions involved in different possible definitions of hypergraphs.
In TypeDB, the model uses rectangular shapes to denote entities (nodes) and diamonds representing relations or hyper-relations (n-ary role players in one relation).
Currently, there are quite a few publicly available datasets in BioGrakn Covid. Briefly, here are some of the datasets and mappings included in the database:
Uniprot — entities (transcript, gene, protein), relations (translation, transcription, gene-protein-encoding)
Human Protein Atlas — entities (tissue), relations (expression)
Reactome — entities (pathways), relations (pathway-participation)
DGIdb — entities (drug), relations (drug-gene-interactions)
DisGeNet — entities (disease), relations (gene-disease association)
Coronaviruses — entities (virus), relations (organism-virus-hosting, gene-virus-association, protein-virus-association)
ClinicalTrials.gov — entities (clinical-trial, organisation), relations (clinical-trial-collaboration, clinical-trial-drug-association)
Maintaining Data Quality
As many of us know, it is not as simple as taking the data from one of the above sources and loading it into a database. There first needs to be some consideration of data quality. Maintaining data quality, as Konrad notes in his talk, is an important aspect of the work.
You don’t want to have any data discrepancies, you don’t want to have several nodes representing the same entities, as this will affect and bias your analysis in the future.
Their approach to addressing data quality issues was to use a data source like UMLS (Unified Medical Language System). UMLS provides some structure and context that helps to maintain data quality. In the talk, Konrad focused on two subsets of UMLS. The first, UMLS’ Metathesaurus, contains the biomedical entities across different levels of various types, such as proteins, genes, diseases, drugs, and the relations between them.
As an example of what is available, we have two proteins, protein-1 and protein-2, and they are connected via an interacts-with edge.
The second subset is the Semantic Network data, which is focused on the taxonomy. Each concept is placed in a taxonomy; e.g. we have a concept compound, and protein is a subtype of compound. The benefits of this subset will become more obvious later on.
Using UMLS as the initial or baseline ontology gives us confidence that we can move forward without data quality issues. Obviously, this is not the only way to approach the challenge of data quality, but UMLS can be effective in the biomedical domain.
The schema in TypeDB provides the structure, the model for our data, and the safety of knowing that any data ingested into the database will adhere to this schema.
In Konrad's case, they created a biomedical schema with entities: protein, transcript, gene, pathway, virus, tissue, drug, disease; and the relations between them. It should be noted that many of these decisions can be derived from the data sources you use.
Here is an excerpt from this schema:
define

gene sub fully-formed-anatomical-structure,
    owns gene-symbol,
    owns gene-name,
    plays gene-disease-association:associated-gene;

disease sub pathological-function,
    owns disease-name,
    owns disease-id,
    owns disease-type,
    plays gene-disease-association:associated-disease;

protein sub chemical,
    owns uniprot-id,
    owns uniprot-entry-name,
    owns uniprot-symbol,
    owns uniprot-name,
    owns ensembl-protein-stable-id,
    owns function-description,
    plays protein-disease-association:associated-protein;

protein-disease-association sub relation,
    relates associated-protein,
    relates associated-disease;

gene-disease-association sub relation,
    owns disgenet-score,
    relates associated-gene,
    relates associated-disease;
Now that we have our schema, we can start to load our data.
In the talk, Konrad walked through loading the data from UniProt, which also contains data from Ensembl. The first thing to do is to identify the relevant columns and then, based on our schema, identify the relevant entities to populate with the data. From there, it is fairly simple to add the attributes for each concept.
Loading the data is then trivial — using a client API in Python, Java, or Node.js. Konrad built this migrator for the UniProt data — available via the BioGrakn Covid repo.
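As an illustration of what such a migrator does, here is a hedged Python sketch that turns rows of a UniProt-style TSV into TypeQL insert statements; the column names, file name, and the send_insert placeholder (standing in for the client's write-transaction query call) are all assumptions, not the actual BioGrakn Covid code:

import csv

def build_insert(row):
    # Map a UniProt row onto the protein entity of the schema shown above.
    # The column names "Entry" and "Entry name" are assumptions based on the
    # standard UniProt TSV export, not taken from the BioGrakn Covid migrator.
    return ('insert $p isa protein, '
            f'has uniprot-id "{row["Entry"]}", '
            f'has uniprot-entry-name "{row["Entry name"]}";')

def send_insert(typeql_query):
    # Placeholder: the real migrator would run this query inside a write
    # transaction of the TypeDB/Grakn Python client and then commit it.
    print(typeql_query)

with open('uniprot.tsv') as f:   # hypothetical input file
    for row in csv.DictReader(f, delimiter='\t'):
        send_insert(build_insert(row))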
Those actively working with these types of publicly available datasets know that they are not updated as often as we might like. Some of them are updated yearly, so we need to supplement these data sources with relevant, current data. The trouble here is that these data usually come from papers, articles, and unstructured text. To use this data, a sub-domain model is needed. This allows us to work with the text more expressively and ultimately connect this to our biomedical model. The two models are shown below:
With the schema set and publications identified, some challenges will need to be addressed. Konrad highlights two of them: extracting biomedical entities from text and linking different ontologies within a central knowledge graph.
Extracting Biomedical Entities from Text
When approaching this challenge, Konrad reminds us not to reinvent the wheel, but to make use of existing named entity recognition corpora. For this project, Konrad and co. used CORD-NER and SemMed.
Named entity recognition is the NLP task of identifying named entities within text.
CORD NER
CORD NER is a data source of pre-computed Named Entity Recognition output. The nice thing about CORD NER is that the output of the NLP work, the entities extracted from text, are mapped to concepts in UMLS. This helps to provide consistency and data quality in the knowledge graph. With the concepts and their types, we can now map to the schema in TypeQL.
SemMed
SemMed contains entities derived from publications available in PubMed, and these entities are stored in semantic triples. The problem with CORD NER is that it doesn’t contain any links between named entities, while SemMed does provide these. These semantic triples are made up of a subject, predicate, and object. All of them are mapped against UMLS Metathesaurus, which once again helps us to keep consistent naming conventions in the knowledge graph.
Below you can see an example of how Konrad worked with the data in SemMed. This is a simple join of two tables, and we see the subject, predicate, and object: NDUFS8 (typed as a gene or genome), NDUFS7 (also a gene or genome), and interacts-with as the predicate, the relation between them. SemMed naturally derives these linkages from text, and when it comes to mapping these entities to the defined model, this provides us with much of what is needed. In fact, Konrad notes that there are around 18 different predicates that can be mapped to relations in the schema.
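As a simplified, illustrative sketch of that mapping step (the predicate names, role names, and relation labels below are examples chosen to line up with the query shown later in the article, not the project's actual 18 mappings):

# Illustrative subset of a SemMed predicate -> TypeQL relation mapping.
PREDICATE_TO_RELATION = {
    'STIMULATES': 'gene-gene-interaction',
    'INTERACTS_WITH': 'gene-gene-interaction',
}

def triple_to_typeql(subject, predicate, obj):
    relation = PREDICATE_TO_RELATION.get(predicate)
    if relation is None:
        return None   # predicate not modelled in the schema
    return (f'match $s isa gene, has gene-symbol "{subject}"; '
            f'$o isa gene, has gene-symbol "{obj}"; '
            f'insert (stimulating: $s, stimulated: $o) isa {relation};')

print(triple_to_typeql('NDUFS8', 'INTERACTS_WITH', 'NDUFS7'))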
Text Processing with SciSpacy
Many times we need to extract structured data out of unstructured text in papers on our own. The challenge here, for Konrad, was working with clinical trials data, which usually comes in the form of XML data.
The problem is that the output is not always straightforward. It may provide more than one drug, compound, or chemical along with other information. To perform the NER against this data, Konrad recommended using SciSpacy. SciSpacy is a Python library, built on spaCy, and it uses a transformer model that has been trained on publicly available publications to perform NER. Using it against the example above, it identifies two named entities and, even better, provides the mapping against UMLS.
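A minimal SciSpacy sketch of that step (en_core_sci_sm is one of the published SciSpacy models; the sentence is an invented example, and the UMLS entity linker is left out for brevity):

import spacy

# Requires: pip install scispacy plus the en_core_sci_sm model package.
nlp = spacy.load("en_core_sci_sm")

text = ("The trial evaluates hydroxychloroquine and azithromycin "
        "in patients with COVID-19.")
doc = nlp(text)

# Print the recognised biomedical entities; mapping them onto UMLS concepts
# would additionally use SciSpacy's UMLS entity-linker pipeline component.
for ent in doc.ents:
    print(ent.text, ent.label_)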
ID resolution in biological data
Another situation that may present itself is when an ID from an ontology doesn't exist in the data that you are working with. To resolve this, we can take the canonical name of an entity and use an API like RxNorm to get back an RxNorm ID, which can then be used to fill in the missing ID.
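A hedged sketch of such a lookup against the public RxNav/RxNorm REST API (the endpoint path and response structure are written from memory of the RxNav documentation, so verify them before relying on this):

import requests

def rxnorm_id(drug_name):
    # Query the public RxNav service for the RxNorm concept id of a drug name.
    url = "https://rxnav.nlm.nih.gov/REST/rxcui.json"
    resp = requests.get(url, params={"name": drug_name}, timeout=10)
    resp.raise_for_status()
    ids = resp.json().get("idGroup", {}).get("rxnormId", [])
    return ids[0] if ids else None

print(rxnorm_id("aspirin"))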
Once data is loaded, how can we begin to traverse the graph to generate some insights and/or new information? In the screenshot below, Konrad walked through the visualised result of a query.
The higher-level question being asked is:
Find any gene, which plays the role of host in a relation with the virus SARs, and that is stimulated by another gene; as well as a drug which interacts with the stimulating gene; and finally, the publications that mention the relations between the genes.
In TypeQL this query looks like:
match
$gene isa gene;
$virus isa virus, has name "SARs";
$r (host: $gene, virus: $virus) isa gene-virus-host;
$gene2 isa gene;
$r2 (stimulating: $gene2, stimulated: $gene) isa gene-gene-interaction;
$r3 (mentioned: $r2, publication: $pub) isa mentioning;
$drug isa drug;
$r4 (interacted: $gene2, interacting: $drug) isa gene-drug-interaction;
get $gene, $gene2, $pub;
This question becomes quite easy to query in TypeDB and is representative of how simple it is to traverse the graph while limiting the exploration space.
Other queries can help us learn more from the clinical trials data. We are able to identify potential targets for a clinical trial, identifiers from sponsoring organisations, and link this information to what we know about drugs, drug-gene relations, and drug-disease relations.
Additional query examples can be found within the BioGrakn Covid repo on Github.
As BioGrakn Covid is an open-source database, available for anyone to fork and play with on their own; we would be remiss to not mention the additional ways to derive value.
It is commonly understood that nodes in a given network — especially in a biomedical network — tend to cluster, creating communities of shared function within a specific sub-network or the whole system. This function of a node can be derived from its n-hop egonet, or neighbourhood. For example, in a protein-protein interaction network, proteins tend to cluster with other nodes that share similar functions, such as metabolic processes or information on immunity.
We might have a hypothesis for a relation, but we need to confirm whether this relation should exist in the database. We can do so using Graph Neural Networks.
In order to create or identify these types of clusters, there are a few tasks we can perform: node classification, link prediction, and graph classification. Konrad chose to focus on link prediction for this talk, as he felt it is the most exciting avenue for continued exploration of BioGrakn Covid.
In BioGrakn Covid, there are existing relations we have instantiated; however, Konrad wanted to identify implicit, undiscovered relations. This is motivated by the fact that biomedical knowledge graphs are notoriously incomplete, so it is very exciting to find methods to complete them automatically. Having done so, new insights can be drawn straight from the graph.
The approach he uses is to hypothesise that a relation exists and confirm this using a Graph Neural Network (a new hot topic in ML).
Using as features the eigenvectors of the graph adjacency matrix plus structural features from the graph, Konrad was able to train a plain Graph Convolutional Neural Network. New relations to be predicted are hypothetical `disease-gene` and `disease-drug` relations between existing nodes, and negative examples are taken via negative sampling from all possible and impossible relation types.
In machine learning, the term negative sampling describes the action of drawing a random sample from the data, that doesn’t exist, to act as a negative example. In an incomplete graph, the negative samples could truly be positive, but empirically Konrad finds this approach works well.
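A toy illustration of negative sampling over a known edge list (the node identifiers are invented for the example):

import random

nodes = ["gene-A", "gene-B", "drug-X", "disease-Y"]
positive_edges = {("gene-A", "disease-Y"), ("drug-X", "disease-Y")}

def sample_negative_edges(k):
    # Draw random node pairs that are not currently edges in the graph; in an
    # incomplete knowledge graph some of these may in fact be true relations.
    negatives = set()
    while len(negatives) < k:
        u, v = random.sample(nodes, 2)
        if (u, v) not in positive_edges and (v, u) not in positive_edges:
            negatives.add((u, v))
    return negatives

print(sample_negative_edges(2))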
Konrad uses a final Softmax layer on the model to generate probabilities of relation existence.
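For completeness, that last step can be pictured as a softmax over two output logits per candidate relation (the numbers here are invented):

import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Two logits per candidate edge: [relation absent, relation present].
p_no, p_yes = softmax(np.array([0.3, 2.1]))
print(f"P(relation exists) = {p_yes:.3f}")   # then thresholded to accept/reject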
He finds that precision and recall behave as shown in the plot below as the threshold is varied. He takes interest in a recall value of 85%, which is marked with a red dot on both curves. Happily, this operating point is roughly where precision and recall balance each other.
Looking at the results another way, we can see the confusion matrix for a fixed recall of 85%. At this point, he presents a precision of 91.16%. We can also note that the matrix is quite well balanced. We see that there are 50,160 false negatives, slightly more than the 31,007 false positives. For comparison, we see 303,393 true negatives and 264,240 true positives.
Doing an ad-hoc analysis of the gene relation predictions made, Konrad expressed some doubt as to whether all of them could be correctly classified. To investigate further, Konrad did a spot check of the top 5 relation predictions for genes targeting another node (either a compound or another gene). He found that 3 of those relations predicted have each been investigated in one or more papers in the literature.
Doing a spot check of the top 5 drug-disease interactions predicted by the network, he found in the literature that 2 of these have been established to exist.
Konrad is naturally very excited by what he’s been able to achieve using these methods and has plenty of thoughts on how to expand the scope and accuracy of the method, as outlined in the video of his talk.
On June 30th, we are hosting a webinar on accelerating drug discovery and modelling biomedical data in TypeDB. You can join in via Zoom.
This is an ongoing project and we need your help! If you want to contribute, here are some of the ways you can get involved:
Migrate more data sources (e.g. clinical trials, DrugBank, Excelra)
Extend the schema by adding relevant rules
Create a website
Write tutorials and articles for researchers to get started
If you wish to get in touch, please reach out to us on the #biograkn channel on our Discord (link here).
A special thank you to Konrad for his hard work, enthusiasm, inspiration and thorough explanations.
You can find the full presentation on the Vaticle YouTube channel here.
Categorical Representation of Data in Julia - GeeksforGeeks
15 Sep, 2021
Julia is a high-performance, dynamic programming language that is easy to learn as it is a high-level language. Sometimes, when dealing with data in languages like Julia, we encounter collections whose values are drawn from a small number of distinct levels, as in the array below.
Julia
# Creating an array of stringsa = ["Geeks", "For", "Geeks", "Useful", "For", "Everybody"]
As you can see, the elements of the array are simply categorized as full strings.
By converting the array to the CategoricalArray type (provided by the CategoricalArrays.jl package, which must first be loaded with 'using CategoricalArrays'), we can represent the elements in a way that makes some later tasks easier. The CategoricalArray type stores the strings as indices into a set of levels.
Julia
# Creating an array of the # CategoricalArray type of the array 'a'cat = CategoricalArray(a)
In the example mentioned above, up to 2^32 levels can be represented (UInt32).
CategoricalArray type can also classify a missing value as shown below:
Julia
# Creating array of CategoricalArray # type with some missing valuescat = CategoricalArray(["Geeks", "For", "Geeks", missing, missing, "Everybody"])
The CategoricalArray type lets us see which levels are valid, even when there is repeated data, by using the levels() function, which takes the array as its argument.
Julia
# Determining levels of the arraylevels(cat)
We can change the placement or order of the levels by using the levels!() function, as it might be useful later on.
Julia
# Changing the order of the levels displayedlevels!(cat, ["Geeks", "For", "Everybody"]);levels(cat)
And we can sort the array according to the changed order of the levels.
Julia
# Sorting array according to the levelssort(cat)
The CategoricalArray type can have up to 2^32 levels, as shown in the description of the array in the outputs. If that many levels are not required, we can decrease the limit by using the compress() function. The following example shows the reduction to 2^8 levels.
Julia
# Decreasing the number of levels for the array 'cat'cat = compress(cat)
We can use the categorical() function directly instead of the CategoricalArray constructor, which allows us to pass keyword arguments such as compress; when compress is set to true, the compression is applied to the elements.
Julia
# Creating a categorical array and # applying the compress functioncat2 = categorical(["Geeks", "For", "Geeks"], compress = true)
In the same way that we used the compress keyword, the ordered keyword can be set to true, which gives an order to the levels of the array.
Julia
# Creating an ordered categorical arraycat3 = categorical(["Geeks", "For", "Geeks"], ordered=true)
We can compare the elements of such arrays by their level order; when the array is not ordered, the comparison produces an error, as shown below.
Julia
# Testing levels of unordered arraycat2[1] < cat2[2]
When the array is ordered, it results in either true or false based on the order of the levels.
Julia
# Testing levels of the ordered arraycat3[1] < cat3[2]
We can check whether an array is ordered with the isordered() function.
Julia
# Checking if array is orderedisordered(cat2)
We can change an unordered array to ordered and vice-versa by using the ordered!() function.
Julia
# Changing unordered array to an ordered arrayordered!(cat2, true)
Now that we have ordered the array, we can test it.
Julia
# Testing levels of the arraycat2[1] < cat2[2]
We can apply the categorical conversion to one or more columns of a DataFrame by using the categorical!() function, where the first argument is the DataFrame and the second argument is the column (or columns) we want to convert, along with optional keyword arguments.
Julia
# Creating a DataFrame with String elementsusing DataFrames df = DataFrame(A = ["A", "A", "A", "B", "B", "C"], B = ["D", "E", "E", "F", "G", "G"])
We can change the type of a specific column of the DataFrame to categorical type.
Julia
# Changing the column A to categoricalcategorical!(df, :A)
If we don't specify the column, the columns with an AbstractString element type will be changed to categorical. By setting the compress keyword argument to true we can apply the conversion to all of those columns.
Julia
# Changing columns to categorical typecategorical!(df, compress=true)
We can check the types of the columns of the DataFrame with eltype() function.
Julia
# Displaying column typeseltype.(eachcol(df))
Sieve of Eratosthenes - Finding prime numbers less than a given number - onlinetutorialspoint
In this tutorial, we will see how to find all the prime numbers less than a given number using Sieve of Eratosthenes algorithm.
The Sieve of Eratosthenes is one of the most classic and efficient algorithms for finding all the prime numbers up to a given limit. Say you’re given a number ‘n’ and asked to find all the prime numbers less than ‘n’; how would you do that?
Take the list of all integers from 2 to n, i.e., [2,3,4,...,n]
Now, starting with i=2 in the list, mark all the multiples of 2 in the list as composite.
Assign the value of i to the next unmarked element in the list and repeat the above step.
Continue this procedure until all the elements in the list are processed.
At the end, the set of elements that have not been marked as composite are the prime numbers that are less than ‘n’.
At each step, the next unmarked number to which we assign the value of i is a prime number because it has not been divisible by any of the numbers that are less than that in the previous steps.
So, the set of unmarked numbers obtained at the end is the set of prime numbers that are less than ‘n’.
Finding all the prime numbers that are less than 18 :
Taking the list of all integers from 2 to 18: [2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18]
i=2, marking all the multiples of 2 in the list as composite: [2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18]
The next unmarked element is 3, so take i as 3 now and mark all multiples of 3 as composite : [2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18]
The next unmarked element is 5, so, i=5 and all the multiples of 5 are already marked. Same goes with i=7,11,13,17.
So, the list of prime numbers that are less than 18 is 2,3,5,7,11,13,17.
def SieveOfEratosthenes(n):
prime = [True for i in range(n+1)]
for i in range(2,n+1):
if(prime[i] == True):
j=2
while i*j<=n :
prime[i*j] = False
j = j+1
#printing all the prime numbers less than n
for i in range(2,n+1):
if(prime[i] == True):
print (i, end=" ")
print()
if __name__ == "__main__":
SieveOfEratosthenes(18)
2 3 5 7 11 13 17
If any integer n is composite, then n has a prime divisor less than or equal to √n. So, we can further reduce the number of steps and optimize the above implementation. For optimizing the above solution,
At each step, for the unmarked integer i, mark all the integers that are divisible by i and are greater than or equal to the square of it as composite.
Check for the next unmarked integer and assign it to i, until i<= √n.
The implementation in python after the above optimization:
import math
def SieveOfEratosthenes(n):
prime = [True for i in range(n+1)]
for i in range(2,(int)(math.sqrt(n))+1):
if(prime[i] == True):
j=i*i
while j<=n :
prime[j] = False
j = j+i
#printing all the prime numbers less than n
for i in range(2,n+1):
if(prime[i] == True):
print (i, end=" ")
print()
if __name__ == "__main__":
SieveOfEratosthenes(18)
The time complexity of this algorithm is O(n log log n) and the space complexity is O(n).
To find all the prime numbers in a given range of [L,R], generate all the prime numbers up to √R using the above method. Then, use those primes to mark all the composite numbers in the range of [L,R]. The list of unmarked elements at the end is the set of prime numbers in the range of [L,R].
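As an illustration of that idea, here is a small Python sketch of such a segmented sieve for a range [L, R]; the function and variable names are our own, not part of the article above.

```python
import math

# Sketch of a segmented sieve: find all primes in [L, R] using the base
# primes up to sqrt(R).
def primes_in_range(L, R):
    limit = int(math.sqrt(R)) + 1

    # Step 1: ordinary sieve up to sqrt(R)
    base = [True] * (limit + 1)
    base[0] = base[1] = False
    for i in range(2, int(math.sqrt(limit)) + 1):
        if base[i]:
            for j in range(i * i, limit + 1, i):
                base[j] = False
    base_primes = [i for i in range(2, limit + 1) if base[i]]

    # Step 2: mark composites in [L, R] using the base primes
    is_prime = [True] * (R - L + 1)
    for p in base_primes:
        start = max(p * p, ((L + p - 1) // p) * p)  # first multiple of p >= L
        for m in range(start, R + 1, p):
            is_prime[m - L] = False

    if L == 1:
        is_prime[0] = False  # 1 is not prime
    return [L + i for i, alive in enumerate(is_prime) if alive]

print(primes_in_range(10, 30))  # [11, 13, 17, 19, 23, 29]
```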
This algorithm is based on the simple idea that, if a number is prime, then none of the numbers less than that can divide it. So, we iterate over all the numbers from 2 to n and mark all the multiples of each number in the list as composite. The numbers that are left unmarked, are the ones that were not divisible by any integer. So, the list of unmarked numbers is the list of prime numbers that are less than n.
More on Sieve of Eratosthenes
Three way partitioning
Happy Learning 🙂
Which is the fastest algorithm to find prime numbers using C++?
The Sieve of Eratosthenes is one of the most efficient ways to find the prime numbers smaller than n when n is smaller than around 10 million.
A program that demonstrates the Sieve of Eratosthenes is given as follows.
#include <bits/stdc++.h>
using namespace std;
void SieveOfEratosthenes(int num) {
bool pno[num+1];
memset(pno, true, sizeof(pno));
for (int i = 2; i*i <= num; i++) {
   if (pno[i] == true) {
      for (int j = i*2; j <= num; j += i)
         pno[j] = false;
   }
}
for (int i = 2; i <= num; i++)
if (pno[i])
cout << i << " ";
}
int main() {
int num = 15;
cout << "The prime numbers smaller or equal to "<< num <<" are: ";
SieveOfEratosthenes(num);
return 0;
}
The output of the above program is as follows.
The prime numbers smaller or equal to 15 are: 2 3 5 7 11 13
Now, let us understand the above program.
The function SieveOfEratosthenes() finds all the prime numbers that occur before num, which is provided as an argument. The code snippet for this is given as follows.
void SieveOfEratosthenes(int num) {
bool pno[num+1];
memset(pno, true, sizeof(pno));
for (int i = 2; i*i <= num; i++) {
   if (pno[i] == true) {
      for (int j = i*2; j <= num; j += i)
         pno[j] = false;
   }
}
for (int i = 2; i <= num; i++)
if (pno[i])
cout << i << " ";
}
The function main() sets the value of num and then prints all the prime numbers that are smaller or equal to num. This is done by calling the function SieveOfEratosthenes(). The code snippet for this is given as follows.
int main() {
int num = 15;
cout << "The prime numbers smaller or equal to "<< num <<" are: ";
SieveOfEratosthenes(num);
return 0;
}
Minimum replacements with any positive integer to make the array K-increasing - GeeksforGeeks
15 Feb, 2022
Given an array arr[] of N positive integers and an integer K, the task is to replace the minimum number of elements with any positive integer to make the array K-increasing. An array is K-increasing if for every index i in the range [K, N), arr[i] ≥ arr[i-K].
Examples:
Input: arr[] = {4, 1, 5, 2, 6, 2}, k = 2
Output: 0
Explanation: Here, for every index i where 2 <= i <= 5, arr[i-2] <= arr[i]. Since the given array is already K-increasing, there is no need to perform any operations.
Input: arr[] = {4, 1, 5, 2, 6, 2}, k = 3
Output: 2
Explanation: Indices 3 and 5 are the only ones not satisfying arr[i-3] <= arr[i] for 3 <= i <= 5. One of the ways we can make the array K-increasing is by changing arr[3] to 4 and arr[5] to 5. The array will now be [4,1,5,4,6,5].
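The K-increasing condition itself can be checked directly with a short Python helper (ours, separate from the reference solutions further below):

```python
# Check whether arr is K-increasing: every element must be >= the element K places before it.
def is_k_increasing(arr, k):
    return all(arr[i] >= arr[i - k] for i in range(k, len(arr)))

print(is_k_increasing([4, 1, 5, 2, 6, 2], 2))  # True  (first example)
print(is_k_increasing([4, 1, 5, 2, 6, 2], 3))  # False (second example)
```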
Approach: This solution is based on finding the longest non-decreasing subsequence. Since the question requires that arr[i-K] ≤ arr[i] holds for every index i with K ≤ i ≤ N-1, the key is to compare elements that are K places away from each other. So the task is to confirm that the sequences formed by elements K places apart are all non-decreasing; if they are not, perform replacements to make them so. Follow the steps mentioned below:
Traverse the array and form sequence(seq[]) by picking elements K places away from each other in the given array.
Check if all the elements in seq[] is non-decreasing or not.
If not then find the length of longest non-decreasing subsequence of seq[].
Replace the remaining elements to minimize the total number of operations.
Sum of replacement operations for all such sequences is the final answer.
Follow the below illustration for better understanding.
For example: arr[] = {4, 1, 5, 2, 6, 0, 1}, K = 2
Indices: 0 1 2 3 4 5 6
Values:  4 1 5 2 6 0 1
So the work is to ensure that the following sequences
arr[0], arr[2], arr[4], arr[6] => {4, 5, 6, 1}
arr[1], arr[3], arr[5] => {1, 2, 0}
obey arr[i-K] <= arr[i]. For the first sequence it can be seen that {4, 5, 6} is the longest non-decreasing subsequence, whereas 1 is not part of it, so one operation is needed for it. Similarly, for the second sequence {1, 2} is the longest non-decreasing subsequence, whereas 0 is not, so one operation is needed for it.
So in total, a minimum of 2 operations is required.
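Before the full implementations, here is a condensed Python sketch of the same counting logic, using bisect to get the longest non-decreasing subsequence of each K-spaced sequence; it is our own summary, separate from the reference code below.

```python
from bisect import bisect_right

# Length of the longest non-decreasing subsequence, via patience-sorting tails.
def longest_non_decreasing(seq):
    tails = []
    for x in seq:
        pos = bisect_right(tails, x)      # first tail strictly greater than x
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)

def min_replacements(arr, k):
    ops = 0
    for start in range(k):
        seq = arr[start::k]               # elements K places apart
        ops += len(seq) - longest_non_decreasing(seq)
    return ops

print(min_replacements([4, 1, 5, 2, 6, 0, 1], 2))  # 2
print(min_replacements([4, 1, 5, 2, 6, 2], 3))     # 2
```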
Below is the implementation of the above approach.
C++
Python3
C#
Javascript
// C++ code to implement above approach#include <bits/stdc++.h>using namespace std; // Functions finds the// longest non decreasing subsequence.int utility(vector<int>& arr, int& n){ vector<int> tail; int len = 1; tail.push_back(arr[0]); for (int i = 1; i < n; i++) { if (tail[len - 1] <= arr[i]) { len++; tail.push_back(arr[i]); } else { auto it = upper_bound(tail.begin(), tail.end(), arr[i]); *it = arr[i]; } } return len;} // Function to find the minimum operations// to make array K-increasingint kIncreasing(vector<int>& a, int K){ int ans = 0; // Size of array int N = a.size(); for (int i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence vector<int> v; for (int j = i; j < N; j += K) { v.push_back(a[j]); } // Size of each sequence int k = v.size(); // Store least operations // for this sequence ans += k - utility(v, k); } return ans;} // Driver codeint main(){ vector<int> arr{ 4, 1, 5, 2, 6, 0, 1 }; int K = 2; cout << kIncreasing(arr, K); return 0;}
# Python code for the above approach
from bisect import bisect_right

# Returns the length of the longest
# non-decreasing subsequence of v
def utility(v):
    if len(v) == 0:  # boundary case
        return 0

    # tail[i] holds the smallest possible tail value of a
    # non-decreasing subsequence of length i + 1 seen so far
    tail = [v[0]]

    for i in range(1, len(v)):
        if v[i] >= tail[-1]:
            # v[i] extends the longest subsequence found so far
            tail.append(v[i])
        else:
            # Replace the first tail entry strictly greater than v[i]
            # (upper bound) so future elements can still extend it
            idx = bisect_right(tail, v[i])
            tail[idx] = v[i]

    return len(tail)

# Function to find the minimum operations
# to make array K-increasing
def kIncreasing(a, K):
    ans = 0

    # Size of array
    N = len(a)

    for i in range(K):
        # Consider all elements K places away
        # as a sequence
        v = []
        for j in range(i, N, K):
            v.append(a[j])

        # Size of each sequence
        k = len(v)

        # Store least operations
        # for this sequence
        ans += k - utility(v)

    return ans

# Driver code
arr = [4, 1, 5, 2, 6, 0, 1]
K = 2
print(kIncreasing(arr, K))

# This code is contributed by gfgking
// C# code for the above approachusing System;using System.Collections.Generic;class GFG { static int lowerBound(List<int> a, int low, int high, int element) { while (low < high) { int middle = low + (high - low) / 2; if (element > a[middle]) low = middle + 1; else high = middle; } return low; } static int utility(List<int> v, int n) { if (v.Count == 0) // boundary case return 0; int[] tail = new int[v.Count]; int length = 1; // always points empty slot in tail tail[0] = v[0]; for (int i = 1; i < v.Count; i++) { if (v[i] > tail[length - 1]) { // v[i] extends the largest subsequence tail[length++] = v[i]; } else { // v[i] will extend a subsequence and // discard older subsequence // find the largest value just smaller than // v[i] in tail // to find that value do binary search for // the v[i] in the range from begin to 0 + // length var idx = lowerBound(v, 1, v.Count, v[i]); // binarySearch in C# returns negative // value if searched element is not found in // array // this negative value stores the // appropriate place where the element is // supposed to be stored if (idx < 0) idx = -1 * idx - 1; // replacing the existing subsequence with // new end value tail[idx] = v[i]; } } return length; } // Function to find the minimum operations // to make array K-increasing static int kIncreasing(int[] a, int K) { int ans = 0; // Size of array int N = a.Length; for (int i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence List<int> v = new List<int>(); for (int j = i; j < N; j += K) { v.Add(a[j]); } // Size of each sequence int k = v.Count; // Store least operations // for this sequence ans += k - utility(v, k); } return ans; } // Driver code public static void Main() { int[] arr = { 4, 1, 5, 2, 6, 0, 1 }; int K = 2; Console.Write(kIncreasing(arr, K)); }} // This code is contributed by ukasp.
<script> // JavaScript code for the above approach function lowerBound(a, low, high, element) { while (low < high) { var middle = low + (high - low) / 2; if (element > a[middle]) low = middle + 1; else high = middle; } return low; } function utility(v) { if (v.length == 0) // boundary case return 0; var tail = Array(v.length).fill(0); var length = 1; // always points empty slot in tail tail[0] = v[0]; for (i = 1; i < v.length; i++) { if (v[i] > tail[length - 1]) { // v[i] extends the largest subsequence tail[length++] = v[i]; } else { // v[i] will extend a subsequence and // discard older subsequence // find the largest value just smaller than // v[i] in tail // to find that value do binary search for // the v[i] in the range from begin to 0 + // length var idx = lowerBound(v, 1, v.length, v[i]); // binarySearch in C# returns negative // value if searched element is not found in // array // this negative value stores the // appropriate place where the element is // supposed to be stored if (idx < 0) idx = -1 * idx - 1; // replacing the existing subsequence with // new end value tail[idx] = v[i]; } } return length; } // Function to find the minimum operations // to make array K-increasing function kIncreasing(a, K) { let ans = 0; // Size of array let N = a.length; for (let i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence let v = []; for (let j = i; j < N; j += K) { v.push(a[j]); } // Size of each sequence let k = v.length; // Store least operations // for this sequence ans += k - utility(v, k); } return ans; } // Driver code let arr = [4, 1, 5, 2, 6, 0, 1]; let K = 2; document.write(kIncreasing(arr, K)); // This code is contributed by Potta Lokesh </script>
2
Time Complexity: O(K * N * logN)
Auxiliary Space: O(N)
|
[
{
"code": null,
"e": 25600,
"s": 25572,
"text": "\n15 Feb, 2022"
},
{
"code": null,
"e": 25852,
"s": 25600,
"text": "Given an array arr[] of N positive integers and an integer K, The task is to replace minimum number of elements with any positive integer to make the array K-increasing. An array is K-increasing if for every index i in range [K, N), arr[i] ≥ arr[i-K] "
},
{
"code": null,
"e": 25862,
"s": 25852,
"text": "Examples:"
},
{
"code": null,
"e": 26076,
"s": 25862,
"text": "Input: arr[] = {4, 1, 5, 2, 6, 2}, k = 2Output: 0Explanation: Here, for every index i where 2 <= i <= 5, arr[i-2] <= arr[i]Since the given array is already K-increasing, there is no need to perform any operations."
},
{
"code": null,
"e": 26353,
"s": 26076,
"text": "Input: arr[] = {4, 1, 5, 2, 6, 2}, k = 3Output: 2Explanation: Indices 3 and 5 are the only ones not satisfying arr[i-3] <= arr[i] for 3 <= i <= 5.One of the ways we can make the array K-increasing is by changing arr[3] to 4 and arr[5] to 5.The array will now be [4,1,5,4,6,5]."
},
{
"code": null,
"e": 26858,
"s": 26353,
"text": "Approach: This solution is based on finding the longest increasing subsequence. Since the above question requires that arr[i-K] ≤ arr[i] should hold for every index i, where K ≤ i ≤ N-1, here the importance is given to compare the elements which are K places away from each other.So the task is to confirm that the sequences formed by the elements K places away are all non decreasing in nature. If they are not then perform the replacements to make them non-decreasing.Follow the steps mentioned below:"
},
{
"code": null,
"e": 26972,
"s": 26858,
"text": "Traverse the array and form sequence(seq[]) by picking elements K places away from each other in the given array."
},
{
"code": null,
"e": 27033,
"s": 26972,
"text": "Check if all the elements in seq[] is non-decreasing or not."
},
{
"code": null,
"e": 27109,
"s": 27033,
"text": "If not then find the length of longest non-decreasing subsequence of seq[]."
},
{
"code": null,
"e": 27184,
"s": 27109,
"text": "Replace the remaining elements to minimize the total number of operations."
},
{
"code": null,
"e": 27258,
"s": 27184,
"text": "Sum of replacement operations for all such sequences is the final answer."
},
{
"code": null,
"e": 27314,
"s": 27258,
"text": "Follow the below illustration for better understanding."
},
{
"code": null,
"e": 27366,
"s": 27314,
"text": "• For example: arr[] = {4, 1, 5, 2, 6, 0, 1} ,K = 2"
},
{
"code": null,
"e": 27422,
"s": 27366,
"text": "Indices: 0 1 2 3 4 5 6values: 4 1 5 2 6 0 1"
},
{
"code": null,
"e": 27472,
"s": 27422,
"text": "So the work is to ensure that following sequences"
},
{
"code": null,
"e": 27554,
"s": 27472,
"text": "arr[0], arr[2], arr[4], arr[6] => {4, 5, 6, 1}arr[1], arr[3], arr[5] => {1, 2, 0}"
},
{
"code": null,
"e": 27601,
"s": 27554,
"text": "arr[0], arr[2], arr[4], arr[6] => {4, 5, 6, 1}"
},
{
"code": null,
"e": 27637,
"s": 27601,
"text": "arr[1], arr[3], arr[5] => {1, 2, 0}"
},
{
"code": null,
"e": 27937,
"s": 27637,
"text": "Obey arr[i-k] <= arr[i]So for first sequence it can be seen that {4, 5, 6} are K increasing and it is the longest non-decreasing subsequence, whereas 1 is not, so one operation is needed for it.Similarly, for 2nd {1, 2} are longest non-decreasing whereas 0 is not, so one operation is needed for it."
},
{
"code": null,
"e": 27983,
"s": 27937,
"text": "So total 2 minimum operations are required. "
},
{
"code": null,
"e": 28035,
"s": 27983,
"text": "Below is the implementation of the above approach. "
},
{
"code": null,
"e": 28039,
"s": 28035,
"text": "C++"
},
{
"code": null,
"e": 28047,
"s": 28039,
"text": "Python3"
},
{
"code": null,
"e": 28050,
"s": 28047,
"text": "C#"
},
{
"code": null,
"e": 28061,
"s": 28050,
"text": "Javascript"
},
{
"code": "// C++ code to implement above approach#include <bits/stdc++.h>using namespace std; // Functions finds the// longest non decreasing subsequence.int utility(vector<int>& arr, int& n){ vector<int> tail; int len = 1; tail.push_back(arr[0]); for (int i = 1; i < n; i++) { if (tail[len - 1] <= arr[i]) { len++; tail.push_back(arr[i]); } else { auto it = upper_bound(tail.begin(), tail.end(), arr[i]); *it = arr[i]; } } return len;} // Function to find the minimum operations// to make array K-increasingint kIncreasing(vector<int>& a, int K){ int ans = 0; // Size of array int N = a.size(); for (int i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence vector<int> v; for (int j = i; j < N; j += K) { v.push_back(a[j]); } // Size of each sequence int k = v.size(); // Store least operations // for this sequence ans += k - utility(v, k); } return ans;} // Driver codeint main(){ vector<int> arr{ 4, 1, 5, 2, 6, 0, 1 }; int K = 2; cout << kIncreasing(arr, K); return 0;}",
"e": 29356,
"s": 28061,
"text": null
},
{
"code": "# Python code for the above approach def lowerBound(a, low, high, element): while (low < high): middle = low + (high - low) // 2; if (element > a[middle]): low = middle + 1; else: high = middle; return low; def utility(v): if (len(v) == 0): # boundary case return 0; tail = [0] * len(v) length = 1; # always points empty slot in tail tail[0] = v[0]; for i in range(1, len(v)): if (v[i] > tail[length - 1]): # v[i] extends the largest subsequence length += 1 tail[length] = v[i]; else: # v[i] will extend a subsequence and # discard older subsequence # find the largest value just smaller than # v[i] in tail # to find that value do binary search for # the v[i] in the range from begin to 0 + # length idx = lowerBound(v, 1, len(v), v[i]); # binarySearch in C# returns negative # value if searched element is not found in # array # this negative value stores the # appropriate place where the element is # supposed to be stored if (idx < 0): idx = -1 * idx - 1; # replacing the existing subsequence with # new end value tail[idx] = v[i]; return length; # Function to find the minimum operations# to make array K-increasingdef kIncreasing(a, K): ans = 0; # Size of array N = len(a) for i in range(K): # Consider all elements K-places away # as a sequence v = []; for j in range(i, N, K): v.append(a[j]); # Size of each sequence k = len(v); # Store least operations # for this sequence ans += k - utility(v); return ans; # Driver codearr = [4, 1, 5, 2, 6, 0, 1];K = 2;print(kIncreasing(arr, K)); # This code is contributed by gfgking",
"e": 31331,
"s": 29356,
"text": null
},
{
"code": "// C# code for the above approachusing System;using System.Collections.Generic;class GFG { static int lowerBound(List<int> a, int low, int high, int element) { while (low < high) { int middle = low + (high - low) / 2; if (element > a[middle]) low = middle + 1; else high = middle; } return low; } static int utility(List<int> v, int n) { if (v.Count == 0) // boundary case return 0; int[] tail = new int[v.Count]; int length = 1; // always points empty slot in tail tail[0] = v[0]; for (int i = 1; i < v.Count; i++) { if (v[i] > tail[length - 1]) { // v[i] extends the largest subsequence tail[length++] = v[i]; } else { // v[i] will extend a subsequence and // discard older subsequence // find the largest value just smaller than // v[i] in tail // to find that value do binary search for // the v[i] in the range from begin to 0 + // length var idx = lowerBound(v, 1, v.Count, v[i]); // binarySearch in C# returns negative // value if searched element is not found in // array // this negative value stores the // appropriate place where the element is // supposed to be stored if (idx < 0) idx = -1 * idx - 1; // replacing the existing subsequence with // new end value tail[idx] = v[i]; } } return length; } // Function to find the minimum operations // to make array K-increasing static int kIncreasing(int[] a, int K) { int ans = 0; // Size of array int N = a.Length; for (int i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence List<int> v = new List<int>(); for (int j = i; j < N; j += K) { v.Add(a[j]); } // Size of each sequence int k = v.Count; // Store least operations // for this sequence ans += k - utility(v, k); } return ans; } // Driver code public static void Main() { int[] arr = { 4, 1, 5, 2, 6, 0, 1 }; int K = 2; Console.Write(kIncreasing(arr, K)); }} // This code is contributed by ukasp.",
"e": 33551,
"s": 31331,
"text": null
},
{
"code": "<script> // JavaScript code for the above approach function lowerBound(a, low, high, element) { while (low < high) { var middle = low + (high - low) / 2; if (element > a[middle]) low = middle + 1; else high = middle; } return low; } function utility(v) { if (v.length == 0) // boundary case return 0; var tail = Array(v.length).fill(0); var length = 1; // always points empty slot in tail tail[0] = v[0]; for (i = 1; i < v.length; i++) { if (v[i] > tail[length - 1]) { // v[i] extends the largest subsequence tail[length++] = v[i]; } else { // v[i] will extend a subsequence and // discard older subsequence // find the largest value just smaller than // v[i] in tail // to find that value do binary search for // the v[i] in the range from begin to 0 + // length var idx = lowerBound(v, 1, v.length, v[i]); // binarySearch in C# returns negative // value if searched element is not found in // array // this negative value stores the // appropriate place where the element is // supposed to be stored if (idx < 0) idx = -1 * idx - 1; // replacing the existing subsequence with // new end value tail[idx] = v[i]; } } return length; } // Function to find the minimum operations // to make array K-increasing function kIncreasing(a, K) { let ans = 0; // Size of array let N = a.length; for (let i = 0; i < K; i++) { // Consider all elements K-places away // as a sequence let v = []; for (let j = i; j < N; j += K) { v.push(a[j]); } // Size of each sequence let k = v.length; // Store least operations // for this sequence ans += k - utility(v, k); } return ans; } // Driver code let arr = [4, 1, 5, 2, 6, 0, 1]; let K = 2; document.write(kIncreasing(arr, K)); // This code is contributed by Potta Lokesh </script>",
"e": 36290,
"s": 33551,
"text": null
},
{
"code": null,
"e": 36295,
"s": 36293,
"text": "2"
},
{
"code": null,
"e": 36351,
"s": 36297,
"text": "Time Complexity: O(K * N * logN)Auxiliary Space: O(N)"
},
{
"code": null,
"e": 36367,
"s": 36353,
"text": "lokeshpotta20"
},
{
"code": null,
"e": 36375,
"s": 36367,
"text": "gfgking"
},
{
"code": null,
"e": 36381,
"s": 36375,
"text": "ukasp"
},
{
"code": null,
"e": 36399,
"s": 36381,
"text": "germanshephered48"
},
{
"code": null,
"e": 36414,
"s": 36399,
"text": "Algo-Geek 2021"
},
{
"code": null,
"e": 36424,
"s": 36414,
"text": "Algo Geek"
},
{
"code": null,
"e": 36431,
"s": 36424,
"text": "Arrays"
},
{
"code": null,
"e": 36438,
"s": 36431,
"text": "Arrays"
},
{
"code": null,
"e": 36536,
"s": 36438,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 36602,
"s": 36536,
"text": "Find sum of factorials till N factorial (1! + 2! + 3! + ... + N!)"
},
{
"code": null,
"e": 36678,
"s": 36602,
"text": "Generate string after adding spaces at specific positions in a given String"
},
{
"code": null,
"e": 36724,
"s": 36678,
"text": "Create a balanced BST using vector in C++ STL"
},
{
"code": null,
"e": 36794,
"s": 36724,
"text": "Maximum count of adjacent pairs with even sum in given Circular Array"
},
{
"code": null,
"e": 36848,
"s": 36794,
"text": "Find all Palindrome Strings in given Array of strings"
},
{
"code": null,
"e": 36863,
"s": 36848,
"text": "Arrays in Java"
},
{
"code": null,
"e": 36931,
"s": 36863,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 36977,
"s": 36931,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 36993,
"s": 36977,
"text": "Arrays in C/C++"
}
] |
Find a specific pair in Matrix - GeeksforGeeks
|
12 Apr, 2021
Given an n x n matrix mat[n][n] of integers, find the maximum value of mat(c, d) – mat(a, b) over all choices of indices such that both c > a and d > b. Example:
Input:
mat[N][N] = {{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }};
Output: 18
The maximum value is 18 as mat[4][2]
- mat[1][0] = 18 has maximum difference.
The program should do only ONE traversal of the matrix, i.e. the expected time complexity is O(n^2). A simple solution would be to apply brute force: for every value mat(a, b) in the matrix, find the mat(c, d) with the maximum value such that c > a and d > b, and keep updating the maximum difference found so far. We finally return the maximum value. Below is its implementation.
C++
Java
Python 3
C#
PHP
Javascript
// A Naive method to find maximum value of mat[d][e]
// - mat[a][b] such that d > a and e > b
#include <bits/stdc++.h>
using namespace std;
#define N 5
// The function returns maximum value A(d,e) - A(a,b)
// over all choices of indexes such that both d > a
// and e > b.
int findMaxValue(int mat[][N])
{
// stores maximum value
int maxValue = INT_MIN;
// Consider all possible pairs mat[a][b] and
// mat[d][e]
for (int a = 0; a < N - 1; a++)
for (int b = 0; b < N - 1; b++)
for (int d = a + 1; d < N; d++)
for (int e = b + 1; e < N; e++)
if (maxValue < (mat[d][e] - mat[a][b]))
maxValue = mat[d][e] - mat[a][b];
return maxValue;
}
// Driver program to test above function
int main()
{
int mat[N][N] = {
{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }
};
cout << "Maximum Value is "
<< findMaxValue(mat);
return 0;
}
// A Naive method to find maximum value of mat1[d][e]
// - ma[a][b] such that d > a and e > b
import java.io.*;
import java.util.*;
class GFG
{
// The function returns maximum value A(d,e) - A(a,b)
// over all choices of indexes such that both d > a
// and e > b.
static int findMaxValue(int N,int mat[][])
{
// stores maximum value
int maxValue = Integer.MIN_VALUE;
// Consider all possible pairs mat[a][b] and
// mat1[d][e]
for (int a = 0; a < N - 1; a++)
for (int b = 0; b < N - 1; b++)
for (int d = a + 1; d < N; d++)
for (int e = b + 1; e < N; e++)
if (maxValue < (mat[d][e] - mat[a][b]))
maxValue = mat[d][e] - mat[a][b];
return maxValue;
}
// Driver code
public static void main (String[] args)
{
int N = 5;
int mat[][] = {
{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }
};
System.out.print("Maximum Value is " +
findMaxValue(N,mat));
}
}
// This code is contributed
// by Prakriti Gupta
# A Naive method to find maximum
# value of mat[d][e] - mat[a][b]
# such that d > a and e > b
N = 5

# The function returns maximum
# value A(d,e) - A(a,b) over
# all choices of indexes such
# that both d > a and e > b.
def findMaxValue(mat):

    # stores maximum value
    # (start from -infinity so that negative
    # differences are also handled correctly)
    maxValue = float('-inf')

    # Consider all possible pairs
    # mat[a][b] and mat[d][e]
    for a in range(N - 1):
        for b in range(N - 1):
            for d in range(a + 1, N):
                for e in range(b + 1, N):
                    if maxValue < mat[d][e] - mat[a][b]:
                        maxValue = mat[d][e] - mat[a][b]

    return maxValue

# Driver Code
mat = [[ 1, 2, -1, -4, -20 ],
       [ -8, -3, 4, 2, 1 ],
       [ 3, 8, 6, 1, 3 ],
       [ -4, -1, 1, 7, -6 ],
       [ 0, -4, 10, -5, 1 ]]

print("Maximum Value is " +
      str(findMaxValue(mat)))

# This code is contributed
# by ChitraNayal
// A Naive method to find maximum
// value of mat[d][e] - mat[a][b]
// such that d > a and e > b
using System;
class GFG
{
// The function returns
// maximum value A(d,e) - A(a,b)
// over all choices of indexes
// such that both d > a
// and e > b.
static int findMaxValue(int N,
int [,]mat)
{
//stores maximum value
int maxValue = int.MinValue;
// Consider all possible pairs
// mat[a][b] and mat[d][e]
for (int a = 0; a< N - 1; a++)
for (int b = 0; b < N - 1; b++)
for (int d = a + 1; d < N; d++)
for (int e = b + 1; e < N; e++)
if (maxValue < (mat[d, e] -
mat[a, b]))
maxValue = mat[d, e] -
mat[a, b];
return maxValue;
}
// Driver code
public static void Main ()
{
int N = 5;
int [,]mat = {{1, 2, -1, -4, -20},
{-8, -3, 4, 2, 1},
{3, 8, 6, 1, 3},
{-4, -1, 1, 7, -6},
{0, -4, 10, -5, 1}};
Console.Write("Maximum Value is " +
findMaxValue(N,mat));
}
}
// This code is contributed
// by ChitraNayal
<?php
// A Naive method to find maximum
// value of $mat[d][e] - ma[a][b]
// such that $d > $a and $e > $b
$N = 5;
// The function returns maximum
// value A(d,e) - A(a,b) over
// all choices of indexes such
// that both $d > $a and $e > $b.
function findMaxValue(&$mat)
{
global $N;
// stores maximum value
$maxValue = PHP_INT_MIN;
// Consider all possible
// pairs $mat[$a][$b] and
// $mat[$d][$e]
for ($a = 0; $a < $N - 1; $a++)
for ($b = 0; $b < $N - 1; $b++)
for ($d = $a + 1; $d < $N; $d++)
for ($e = $b + 1; $e < $N; $e++)
if ($maxValue < ($mat[$d][$e] -
$mat[$a][$b]))
$maxValue = $mat[$d][$e] -
$mat[$a][$b];
return $maxValue;
}
// Driver Code
$mat = array(array(1, 2, -1, -4, -20),
array(-8, -3, 4, 2, 1),
array(3, 8, 6, 1, 3),
array(-4, -1, 1, 7, -6),
array(0, -4, 10, -5, 1));
echo "Maximum Value is " .
findMaxValue($mat);
// This code is contributed
// by ChitraNayal
?>
<script>
// A Naive method to find maximum value of mat1[d][e]
// - ma[a][b] such that d > a and e > b
// The function returns maximum value A(d,e) - A(a,b)
// over all choices of indexes such that both d > a
// and e > b.
function findMaxValue(N,mat)
{
// stores maximum value
let maxValue = -Infinity; // Number.MIN_VALUE is the smallest positive number, not the most negative
// Consider all possible pairs mat[a][b] and
// mat1[d][e]
for (let a = 0; a < N - 1; a++)
for (let b = 0; b < N - 1; b++)
for (let d = a + 1; d < N; d++)
for (let e = b + 1; e < N; e++)
if (maxValue < (mat[d][e] - mat[a][b]))
maxValue = mat[d][e] - mat[a][b];
return maxValue;
}
// Driver code
let N = 5;
let mat=[[ 1, 2, -1, -4, -20],[-8, -3, 4, 2, 1],[3, 8, 6, 1, 3],[ -4, -1, 1, 7, -6 ],[ 0, -4, 10, -5, 1 ]];
document.write("Maximum Value is " +findMaxValue(N,mat));
// This code is contributed by rag2127
</script>
Output:
Maximum Value is 18
The above program runs in O(n^4) time, which is nowhere close to the expected time complexity of O(n^2). An efficient solution uses extra space. We pre-process the matrix so that maxArr(i, j) stores the maximum of the elements in the matrix from (i, j) to (N-1, N-1), and in the process we keep updating the maximum value found so far. We finally return the maximum value.
C++
Java
Python3
C#
PHP
Javascript
// An efficient method to find maximum value of mat[c][d]
// - mat[a][b] such that c > a and d > b
#include <bits/stdc++.h>
using namespace std;
#define N 5
// The function returns maximum value A(c,d) - A(a,b)
// over all choices of indexes such that both c > a
// and d > b.
int findMaxValue(int mat[][N])
{
//stores maximum value
int maxValue = INT_MIN;
// maxArr[i][j] stores max of elements in matrix
// from (i, j) to (N-1, N-1)
int maxArr[N][N];
// last element of maxArr will be same's as of
// the input matrix
maxArr[N-1][N-1] = mat[N-1][N-1];
// preprocess last row
int maxv = mat[N-1][N-1]; // Initialize max
for (int j = N - 2; j >= 0; j--)
{
if (mat[N-1][j] > maxv)
maxv = mat[N - 1][j];
maxArr[N-1][j] = maxv;
}
// preprocess last column
maxv = mat[N - 1][N - 1]; // Initialize max
for (int i = N - 2; i >= 0; i--)
{
if (mat[i][N - 1] > maxv)
maxv = mat[i][N - 1];
maxArr[i][N - 1] = maxv;
}
// preprocess rest of the matrix from bottom
for (int i = N-2; i >= 0; i--)
{
for (int j = N-2; j >= 0; j--)
{
// Update maxValue
if (maxArr[i+1][j+1] - mat[i][j] >
maxValue)
maxValue = maxArr[i + 1][j + 1] - mat[i][j];
// set maxArr (i, j)
maxArr[i][j] = max(mat[i][j],
max(maxArr[i][j + 1],
maxArr[i + 1][j]) );
}
}
return maxValue;
}
// Driver program to test above function
int main()
{
int mat[N][N] = {
{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }
};
cout << "Maximum Value is "
<< findMaxValue(mat);
return 0;
}
// An efficient method to find maximum value of mat1[d]
// - ma[a][b] such that c > a and d > b
import java.io.*;
import java.util.*;
class GFG
{
// The function returns maximum value A(c,d) - A(a,b)
// over all choices of indexes such that both c > a
// and d > b.
static int findMaxValue(int N,int mat[][])
{
//stores maximum value
int maxValue = Integer.MIN_VALUE;
// maxArr[i][j] stores max of elements in matrix
// from (i, j) to (N-1, N-1)
int maxArr[][] = new int[N][N];
// last element of maxArr will be same's as of
// the input matrix
maxArr[N-1][N-1] = mat[N-1][N-1];
// preprocess last row
int maxv = mat[N-1][N-1]; // Initialize max
for (int j = N - 2; j >= 0; j--)
{
if (mat[N-1][j] > maxv)
maxv = mat[N - 1][j];
maxArr[N-1][j] = maxv;
}
// preprocess last column
maxv = mat[N - 1][N - 1]; // Initialize max
for (int i = N - 2; i >= 0; i--)
{
if (mat[i][N - 1] > maxv)
maxv = mat[i][N - 1];
maxArr[i][N - 1] = maxv;
}
// preprocess rest of the matrix from bottom
for (int i = N-2; i >= 0; i--)
{
for (int j = N-2; j >= 0; j--)
{
// Update maxValue
if (maxArr[i+1][j+1] - mat[i][j] > maxValue)
maxValue = maxArr[i + 1][j + 1] - mat[i][j];
// set maxArr (i, j)
maxArr[i][j] = Math.max(mat[i][j],
Math.max(maxArr[i][j + 1],
maxArr[i + 1][j]) );
}
}
return maxValue;
}
// Driver code
public static void main (String[] args)
{
int N = 5;
int mat[][] = {
{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }
};
System.out.print("Maximum Value is " +
findMaxValue(N,mat));
}
}
// Contributed by Prakriti Gupta
# An efficient method to find maximum value
# of mat[c][d] - mat[a][b] such that c > a and d > b

import sys
N = 5

# The function returns maximum value
# A(c,d) - A(a,b) over all choices of
# indexes such that both c > a and d > b.
def findMaxValue(mat):

    # stores maximum value
    maxValue = -sys.maxsize - 1

    # maxArr[i][j] stores max of elements
    # in matrix from (i, j) to (N-1, N-1)
    maxArr = [[0 for x in range(N)]
                 for y in range(N)]

    # last element of maxArr will be same
    # as that of the input matrix
    maxArr[N - 1][N - 1] = mat[N - 1][N - 1]

    # preprocess last row
    maxv = mat[N - 1][N - 1]  # Initialize max
    for j in range(N - 2, -1, -1):
        if (mat[N - 1][j] > maxv):
            maxv = mat[N - 1][j]
        maxArr[N - 1][j] = maxv

    # preprocess last column
    maxv = mat[N - 1][N - 1]  # Initialize max
    for i in range(N - 2, -1, -1):
        if (mat[i][N - 1] > maxv):
            maxv = mat[i][N - 1]
        maxArr[i][N - 1] = maxv

    # preprocess rest of the matrix
    # from bottom
    for i in range(N - 2, -1, -1):
        for j in range(N - 2, -1, -1):

            # Update maxValue
            if (maxArr[i + 1][j + 1] -
                    mat[i][j] > maxValue):
                maxValue = (maxArr[i + 1][j + 1] -
                            mat[i][j])

            # set maxArr (i, j)
            maxArr[i][j] = max(mat[i][j],
                               max(maxArr[i][j + 1],
                                   maxArr[i + 1][j]))

    return maxValue

# Driver Code
mat = [[ 1, 2, -1, -4, -20 ],
       [ -8, -3, 4, 2, 1 ],
       [ 3, 8, 6, 1, 3 ],
       [ -4, -1, 1, 7, -6 ],
       [ 0, -4, 10, -5, 1 ]]

print("Maximum Value is",
      findMaxValue(mat))

# This code is contributed by iAyushRaj
// An efficient method to find
// maximum value of mat1[d]
// - ma[a][b] such that c > a
// and d > b
using System;
class GFG {
// The function returns
// maximum value A(c,d) - A(a,b)
// over all choices of indexes
// such that both c > a
// and d > b.
static int findMaxValue(int N, int [,]mat)
{
//stores maximum value
int maxValue = int.MinValue;
// maxArr[i][j] stores max
// of elements in matrix
// from (i, j) to (N-1, N-1)
int [,]maxArr = new int[N, N];
// last element of maxArr
// will be same's as of
// the input matrix
maxArr[N - 1, N - 1] = mat[N - 1,N - 1];
// preprocess last row
// Initialize max
int maxv = mat[N - 1, N - 1];
for (int j = N - 2; j >= 0; j--)
{
if (mat[N - 1, j] > maxv)
maxv = mat[N - 1, j];
maxArr[N - 1, j] = maxv;
}
// preprocess last column
// Initialize max
maxv = mat[N - 1,N - 1];
for (int i = N - 2; i >= 0; i--)
{
if (mat[i, N - 1] > maxv)
maxv = mat[i,N - 1];
maxArr[i,N - 1] = maxv;
}
// preprocess rest of the
// matrix from bottom
for (int i = N - 2; i >= 0; i--)
{
for (int j = N - 2; j >= 0; j--)
{
// Update maxValue
if (maxArr[i + 1,j + 1] -
mat[i, j] > maxValue)
maxValue = maxArr[i + 1,j + 1] -
mat[i, j];
// set maxArr (i, j)
maxArr[i,j] = Math.Max(mat[i, j],
Math.Max(maxArr[i, j + 1],
maxArr[i + 1, j]) );
}
}
return maxValue;
}
// Driver code
public static void Main ()
{
int N = 5;
int [,]mat = {{ 1, 2, -1, -4, -20 },
{ -8, -3, 4, 2, 1 },
{ 3, 8, 6, 1, 3 },
{ -4, -1, 1, 7, -6 },
{ 0, -4, 10, -5, 1 }};
Console.Write("Maximum Value is " +
findMaxValue(N,mat));
}
}
// This code is contributed by nitin mittal.
<?php
// An efficient method to find
// maximum value of mat[d] - ma[a][b]
// such that c > a and d > b
$N = 5;
// The function returns maximum
// value A(c,d) - A(a,b) over
// all choices of indexes such
// that both c > a and d > b.
function findMaxValue($mat)
{
global $N;
// stores maximum value
$maxValue = PHP_INT_MIN;
// maxArr[i][j] stores max
// of elements in matrix
// from (i, j) to (N-1, N-1)
$maxArr[$N][$N] = array();
// last element of maxArr
// will be same's as of
// the input matrix
$maxArr[$N - 1][$N - 1] = $mat[$N - 1][$N - 1];
// preprocess last row
$maxv = $mat[$N - 1][$N - 1]; // Initialize max
for ($j = $N - 2; $j >= 0; $j--)
{
if ($mat[$N - 1][$j] > $maxv)
$maxv = $mat[$N - 1][$j];
$maxArr[$N - 1][$j] = $maxv;
}
// preprocess last column
$maxv = $mat[$N - 1][$N - 1]; // Initialize max
for ($i = $N - 2; $i >= 0; $i--)
{
if ($mat[$i][$N - 1] > $maxv)
$maxv = $mat[$i][$N - 1];
$maxArr[$i][$N - 1] = $maxv;
}
// preprocess rest of the
// matrix from bottom
for ($i = $N - 2; $i >= 0; $i--)
{
for ($j = $N - 2; $j >= 0; $j--)
{
// Update maxValue
if ($maxArr[$i + 1][$j + 1] -
$mat[$i][$j] > $maxValue)
$maxValue = $maxArr[$i + 1][$j + 1] -
$mat[$i][$j];
// set maxArr (i, j)
$maxArr[$i][$j] = max($mat[$i][$j],
max($maxArr[$i][$j + 1],
$maxArr[$i + 1][$j]));
}
}
return $maxValue;
}
// Driver Code
$mat = array(array(1, 2, -1, -4, -20),
array(-8, -3, 4, 2, 1),
array(3, 8, 6, 1, 3),
array(-4, -1, 1, 7, -6),
array(0, -4, 10, -5, 1)
);
echo "Maximum Value is ".
findMaxValue($mat);
// This code is contributed
// by ChitraNayal
?>
<script>
// An efficient method to find maximum value of mat1[d]
// - ma[a][b] such that c > a and d > b
// The function returns maximum value A(c,d) - A(a,b)
// over all choices of indexes such that both c > a
// and d > b.
function findMaxValue(N,mat)
{
// stores maximum value
let maxValue = -Infinity; // Number.MIN_VALUE is the smallest positive number, not the most negative
// maxArr[i][j] stores max of elements in matrix
// from (i, j) to (N-1, N-1)
let maxArr=new Array(N);
for(let i = 0; i < N; i++)
{
maxArr[i]=new Array(N);
}
// last element of maxArr will be same's as of
// the input matrix
maxArr[N - 1][N - 1] = mat[N - 1][N - 1];
// preprocess last row
let maxv = mat[N-1][N-1]; // Initialize max
for (let j = N - 2; j >= 0; j--)
{
if (mat[N - 1][j] > maxv)
maxv = mat[N - 1][j];
maxArr[N - 1][j] = maxv;
}
// preprocess last column
maxv = mat[N - 1][N - 1]; // Initialize max
for (let i = N - 2; i >= 0; i--)
{
if (mat[i][N - 1] > maxv)
maxv = mat[i][N - 1];
maxArr[i][N - 1] = maxv;
}
// preprocess rest of the matrix from bottom
for (let i = N-2; i >= 0; i--)
{
for (let j = N-2; j >= 0; j--)
{
// Update maxValue
if (maxArr[i+1][j+1] - mat[i][j] > maxValue)
maxValue = maxArr[i + 1][j + 1] - mat[i][j];
// set maxArr (i, j)
maxArr[i][j] = Math.max(mat[i][j],
Math.max(maxArr[i][j + 1],
maxArr[i + 1][j]) );
}
}
return maxValue;
}
// Driver code
let N = 5;
let mat = [[ 1, 2, -1, -4, -20 ],
[-8, -3, 4, 2, 1 ],
[ 3, 8, 6, 1, 3 ],
[ -4, -1, 1, 7, -6] ,
[0, -4, 10, -5, 1 ]];
document.write("Maximum Value is " +
findMaxValue(N,mat));
// This code is contributed by avanitrachhadiya2155
</script>
Output:
Maximum Value is 18
If we are allowed to modify the matrix, we can avoid using extra space and use the input matrix instead. Exercise: Print the indices (a, b) and (c, d) as well. This article is contributed by Aditya Goel. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
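One possible way to tackle the exercise, sketched in Python (the function and variable names here are our own, not part of the article's code): alongside the running maximum, remember the position at which the maximum of each bottom-right submatrix occurs.

Python3

import sys

# Returns the maximum difference together with one optimal pair of indices
def findMaxValueWithIndices(mat):
    n = len(mat)
    best = -sys.maxsize - 1
    bestPair = None

    # maxVal[i][j] / maxPos[i][j]: maximum element in mat[i..n-1][j..n-1] and its position
    maxVal = [[0] * n for _ in range(n)]
    maxPos = [[(0, 0)] * n for _ in range(n)]

    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            maxVal[i][j], maxPos[i][j] = mat[i][j], (i, j)
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < n and nj < n and maxVal[ni][nj] > maxVal[i][j]:
                    maxVal[i][j], maxPos[i][j] = maxVal[ni][nj], maxPos[ni][nj]

    for a in range(n - 1):
        for b in range(n - 1):
            # the best mat[c][d] with c > a and d > b lies in the submatrix at (a+1, b+1)
            if maxVal[a + 1][b + 1] - mat[a][b] > best:
                best = maxVal[a + 1][b + 1] - mat[a][b]
                bestPair = ((a, b), maxPos[a + 1][b + 1])

    return best, bestPair

mat = [[1, 2, -1, -4, -20],
       [-8, -3, 4, 2, 1],
       [3, 8, 6, 1, 3],
       [-4, -1, 1, 7, -6],
       [0, -4, 10, -5, 1]]

print(findMaxValueWithIndices(mat))  # prints 18 together with one optimal pair of indices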
|
[
{
"code": null,
"e": 24996,
"s": 24965,
"text": " \n12 Apr, 2021\n"
},
{
"code": null,
"e": 25158,
"s": 24996,
"text": "Given an n x n matrix mat[n][n] of integers, find the maximum value of mat(c, d) – mat(a, b) over all choices of indexes such that both c > a and d > b.Example: "
},
{
"code": null,
"e": 25430,
"s": 25158,
"text": "Input:\nmat[N][N] = {{ 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 }, \n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }};\nOutput: 18\nThe maximum value is 18 as mat[4][2] \n- mat[1][0] = 18 has maximum difference. "
},
{
"code": null,
"e": 25797,
"s": 25430,
"text": "The program should do only ONE traversal of the matrix. i.e. expected time complexity is O(n2)A simple solution would be to apply Brute-Force. For all values mat(a, b) in the matrix, we find mat(c, d) that has maximum value such that c > a and d > b and keeps on updating maximum value found so far. We finally return the maximum value.Below is its implementation. "
},
{
"code": null,
"e": 25801,
"s": 25797,
"text": "C++"
},
{
"code": null,
"e": 25806,
"s": 25801,
"text": "Java"
},
{
"code": null,
"e": 25815,
"s": 25806,
"text": "Python 3"
},
{
"code": null,
"e": 25818,
"s": 25815,
"text": "C#"
},
{
"code": null,
"e": 25822,
"s": 25818,
"text": "PHP"
},
{
"code": null,
"e": 25833,
"s": 25822,
"text": "Javascript"
},
{
"code": "\n\n\n\n\n\n\n// A Naive method to find maximum value of mat[d][e]\n// - ma[a][b] such that d > a and e > b\n#include <bits/stdc++.h>\nusing namespace std;\n#define N 5\n \n// The function returns maximum value A(d,e) - A(a,b)\n// over all choices of indexes such that both d > a\n// and e > b.\nint findMaxValue(int mat[][N])\n{\n // stores maximum value\n int maxValue = INT_MIN;\n \n // Consider all possible pairs mat[a][b] and\n // mat[d][e]\n for (int a = 0; a < N - 1; a++)\n for (int b = 0; b < N - 1; b++)\n for (int d = a + 1; d < N; d++)\n for (int e = b + 1; e < N; e++)\n if (maxValue < (mat[d][e] - mat[a][b]))\n maxValue = mat[d][e] - mat[a][b];\n \n return maxValue;\n}\n \n// Driver program to test above function\nint main()\n{\nint mat[N][N] = {\n { 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 },\n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }\n };\n cout << \"Maximum Value is \"\n << findMaxValue(mat);\n \n return 0;\n}\n\n\n\n\n\n",
"e": 26920,
"s": 25843,
"text": null
},
{
"code": "\n\n\n\n\n\n\n// A Naive method to find maximum value of mat1[d][e]\n// - ma[a][b] such that d > a and e > b\nimport java.io.*;\nimport java.util.*;\n \nclass GFG \n{\n // The function returns maximum value A(d,e) - A(a,b)\n // over all choices of indexes such that both d > a\n // and e > b.\n static int findMaxValue(int N,int mat[][])\n {\n // stores maximum value\n int maxValue = Integer.MIN_VALUE;\n \n // Consider all possible pairs mat[a][b] and\n // mat1[d][e]\n for (int a = 0; a < N - 1; a++)\n for (int b = 0; b < N - 1; b++)\n for (int d = a + 1; d < N; d++)\n for (int e = b + 1; e < N; e++)\n if (maxValue < (mat[d][e] - mat[a][b]))\n maxValue = mat[d][e] - mat[a][b];\n \n return maxValue;\n }\n \n // Driver code\n public static void main (String[] args) \n {\n int N = 5;\n \n int mat[][] = {\n { 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 },\n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }\n };\n \n System.out.print(\"Maximum Value is \" + \n findMaxValue(N,mat));\n }\n}\n \n// This code is contributed\n// by Prakriti Gupta\n\n\n\n\n\n",
"e": 28275,
"s": 26930,
"text": null
},
{
"code": "\n\n\n\n\n\n\n# A Naive method to find maximum \n# value of mat[d][e] - mat[a][b]\n# such that d > a and e > b\nN = 5\n \n# The function returns maximum \n# value A(d,e) - A(a,b) over \n# all choices of indexes such \n# that both d > a and e > b.\ndef findMaxValue(mat):\n \n # stores maximum value\n maxValue = 0\n \n # Consider all possible pairs \n # mat[a][b] and mat[d][e]\n for a in range(N - 1):\n for b in range(N - 1):\n for d in range(a + 1, N):\n for e in range(b + 1, N):\n if maxValue < int (mat[d][e] -\n mat[a][b]):\n maxValue = int(mat[d][e] -\n mat[a][b]);\n \n return maxValue;\n \n# Driver Code\nmat = [[ 1, 2, -1, -4, -20 ],\n [ -8, -3, 4, 2, 1 ],\n [ 3, 8, 6, 1, 3 ],\n [ -4, -1, 1, 7, -6 ],\n [ 0, -4, 10, -5, 1 ]];\n \nprint(\"Maximum Value is \" +\n str(findMaxValue(mat)))\n \n# This code is contributed \n# by ChitraNayal\n\n\n\n\n\n",
"e": 29308,
"s": 28285,
"text": null
},
{
"code": "\n\n\n\n\n\n\n// A Naive method to find maximum \n// value of mat[d][e] - mat[a][b]\n// such that d > a and e > b\nusing System;\nclass GFG\n{\n \n // The function returns\n // maximum value A(d,e) - A(a,b)\n // over all choices of indexes \n // such that both d > a\n // and e > b.\n static int findMaxValue(int N, \n int [,]mat)\n {\n \n //stores maximum value\n int maxValue = int.MinValue;\n \n // Consider all possible pairs \n // mat[a][b] and mat[d][e]\n for (int a = 0; a< N - 1; a++)\n for (int b = 0; b < N - 1; b++)\n for (int d = a + 1; d < N; d++)\n for (int e = b + 1; e < N; e++)\n if (maxValue < (mat[d, e] - \n mat[a, b]))\n maxValue = mat[d, e] - \n mat[a, b];\n \n return maxValue;\n }\n \n // Driver code\n public static void Main () \n {\n int N = 5;\n \n int [,]mat = {{1, 2, -1, -4, -20},\n {-8, -3, 4, 2, 1},\n {3, 8, 6, 1, 3},\n {-4, -1, 1, 7, -6},\n {0, -4, 10, -5, 1}};\n Console.Write(\"Maximum Value is \" + \n findMaxValue(N,mat));\n }\n}\n \n// This code is contributed \n// by ChitraNayal\n\n\n\n\n\n",
"e": 30662,
"s": 29318,
"text": null
},
{
"code": "\n\n\n\n\n\n\n<?php\n// A Naive method to find maximum \n// value of $mat[d][e] - ma[a][b]\n// such that $d > $a and $e > $b\n$N = 5;\n \n// The function returns maximum \n// value A(d,e) - A(a,b) over \n// all choices of indexes such \n// that both $d > $a and $e > $b.\nfunction findMaxValue(&$mat)\n{\n global $N;\n \n // stores maximum value\n $maxValue = PHP_INT_MIN;\n \n // Consider all possible \n // pairs $mat[$a][$b] and\n // $mat[$d][$e]\n for ($a = 0; $a < $N - 1; $a++)\n for ($b = 0; $b < $N - 1; $b++)\n for ($d = $a + 1; $d < $N; $d++)\n for ($e = $b + 1; $e < $N; $e++)\n if ($maxValue < ($mat[$d][$e] - \n $mat[$a][$b]))\n $maxValue = $mat[$d][$e] - \n $mat[$a][$b];\n \n return $maxValue;\n}\n \n// Driver Code\n$mat = array(array(1, 2, -1, -4, -20),\n array(-8, -3, 4, 2, 1),\n array(3, 8, 6, 1, 3),\n array(-4, -1, 1, 7, -6),\n array(0, -4, 10, -5, 1));\n \necho \"Maximum Value is \" . \n findMaxValue($mat);\n \n// This code is contributed \n// by ChitraNayal\n?>\n\n\n\n\n\n",
"e": 31805,
"s": 30672,
"text": null
},
{
"code": "\n\n\n\n\n\n\n<script>\n// A Naive method to find maximum value of mat1[d][e]\n// - ma[a][b] such that d > a and e > b \n \n // The function returns maximum value A(d,e) - A(a,b)\n // over all choices of indexes such that both d > a\n // and e > b.\n function findMaxValue(N,mat)\n {\n \n // stores maximum value\n let maxValue = Number.MIN_VALUE;\n \n // Consider all possible pairs mat[a][b] and\n // mat1[d][e]\n for (let a = 0; a < N - 1; a++)\n for (let b = 0; b < N - 1; b++)\n for (let d = a + 1; d < N; d++)\n for (let e = b + 1; e < N; e++)\n if (maxValue < (mat[d][e] - mat[a][b]))\n maxValue = mat[d][e] - mat[a][b];\n \n return maxValue;\n }\n \n // Driver code\n let N = 5;\n let mat=[[ 1, 2, -1, -4, -20],[-8, -3, 4, 2, 1],[3, 8, 6, 1, 3],[ -4, -1, 1, 7, -6 ],[ 0, -4, 10, -5, 1 ]];\n document.write(\"Maximum Value is \" +findMaxValue(N,mat));\n \n // This code is contributed by rag2127\n</script>\n\n\n\n\n\n",
"e": 32877,
"s": 31815,
"text": null
},
{
"code": null,
"e": 32886,
"s": 32877,
"text": "Output: "
},
{
"code": null,
"e": 32906,
"s": 32886,
"text": "Maximum Value is 18"
},
{
"code": null,
"e": 33254,
"s": 32906,
"text": "The above program runs in O(n^4) time which is nowhere close to expected time complexity of O(n^2)An efficient solution uses extra space. We pre-process the matrix such that index(i, j) stores max of elements in matrix from (i, j) to (N-1, N-1) and in the process keeps on updating maximum value found so far. We finally return the maximum value. "
},
{
"code": null,
"e": 33258,
"s": 33254,
"text": "C++"
},
{
"code": null,
"e": 33263,
"s": 33258,
"text": "Java"
},
{
"code": null,
"e": 33271,
"s": 33263,
"text": "Python3"
},
{
"code": null,
"e": 33274,
"s": 33271,
"text": "C#"
},
{
"code": null,
"e": 33278,
"s": 33274,
"text": "PHP"
},
{
"code": null,
"e": 33289,
"s": 33278,
"text": "Javascript"
},
{
"code": "\n\n\n\n\n\n\n// An efficient method to find maximum value of mat[d]\n// - ma[a][b] such that c > a and d > b\n#include <bits/stdc++.h>\nusing namespace std;\n#define N 5\n \n// The function returns maximum value A(c,d) - A(a,b)\n// over all choices of indexes such that both c > a\n// and d > b.\nint findMaxValue(int mat[][N])\n{\n //stores maximum value\n int maxValue = INT_MIN;\n \n // maxArr[i][j] stores max of elements in matrix\n // from (i, j) to (N-1, N-1)\n int maxArr[N][N];\n \n // last element of maxArr will be same's as of\n // the input matrix\n maxArr[N-1][N-1] = mat[N-1][N-1];\n \n // preprocess last row\n int maxv = mat[N-1][N-1]; // Initialize max\n for (int j = N - 2; j >= 0; j--)\n {\n if (mat[N-1][j] > maxv)\n maxv = mat[N - 1][j];\n maxArr[N-1][j] = maxv;\n }\n \n // preprocess last column\n maxv = mat[N - 1][N - 1]; // Initialize max\n for (int i = N - 2; i >= 0; i--)\n {\n if (mat[i][N - 1] > maxv)\n maxv = mat[i][N - 1];\n maxArr[i][N - 1] = maxv;\n }\n \n // preprocess rest of the matrix from bottom\n for (int i = N-2; i >= 0; i--)\n {\n for (int j = N-2; j >= 0; j--)\n {\n // Update maxValue\n if (maxArr[i+1][j+1] - mat[i][j] >\n maxValue)\n maxValue = maxArr[i + 1][j + 1] - mat[i][j];\n \n // set maxArr (i, j)\n maxArr[i][j] = max(mat[i][j],\n max(maxArr[i][j + 1],\n maxArr[i + 1][j]) );\n }\n }\n \n return maxValue;\n}\n \n// Driver program to test above function\nint main()\n{\n int mat[N][N] = {\n { 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 },\n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }\n };\n cout << \"Maximum Value is \"\n << findMaxValue(mat);\n \n return 0;\n}\n\n\n\n\n\n",
"e": 35314,
"s": 33299,
"text": null
},
{
"code": "\n\n\n\n\n\n\n// An efficient method to find maximum value of mat1[d]\n// - ma[a][b] such that c > a and d > b\nimport java.io.*;\nimport java.util.*;\n \nclass GFG \n{\n // The function returns maximum value A(c,d) - A(a,b)\n // over all choices of indexes such that both c > a\n // and d > b.\n static int findMaxValue(int N,int mat[][])\n {\n //stores maximum value\n int maxValue = Integer.MIN_VALUE;\n \n // maxArr[i][j] stores max of elements in matrix\n // from (i, j) to (N-1, N-1)\n int maxArr[][] = new int[N][N];\n \n // last element of maxArr will be same's as of\n // the input matrix\n maxArr[N-1][N-1] = mat[N-1][N-1];\n \n // preprocess last row\n int maxv = mat[N-1][N-1]; // Initialize max\n for (int j = N - 2; j >= 0; j--)\n {\n if (mat[N-1][j] > maxv)\n maxv = mat[N - 1][j];\n maxArr[N-1][j] = maxv;\n }\n \n // preprocess last column\n maxv = mat[N - 1][N - 1]; // Initialize max\n for (int i = N - 2; i >= 0; i--)\n {\n if (mat[i][N - 1] > maxv)\n maxv = mat[i][N - 1];\n maxArr[i][N - 1] = maxv;\n }\n \n // preprocess rest of the matrix from bottom\n for (int i = N-2; i >= 0; i--)\n {\n for (int j = N-2; j >= 0; j--)\n {\n // Update maxValue\n if (maxArr[i+1][j+1] - mat[i][j] > maxValue)\n maxValue = maxArr[i + 1][j + 1] - mat[i][j];\n \n // set maxArr (i, j)\n maxArr[i][j] = Math.max(mat[i][j],\n Math.max(maxArr[i][j + 1],\n maxArr[i + 1][j]) );\n }\n }\n \n return maxValue;\n }\n \n // Driver code\n public static void main (String[] args) \n {\n int N = 5;\n \n int mat[][] = {\n { 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 },\n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }\n };\n \n System.out.print(\"Maximum Value is \" + \n findMaxValue(N,mat));\n }\n}\n \n// Contributed by Prakriti Gupta\n\n\n\n\n\n",
"e": 37655,
"s": 35324,
"text": null
},
{
"code": "\n\n\n\n\n\n\n# An efficient method to find maximum value \n# of mat[d] - ma[a][b] such that c > a and d > b\n \nimport sys\nN = 5\n \n# The function returns maximum value \n# A(c,d) - A(a,b) over all choices of \n# indexes such that both c > a and d > b.\ndef findMaxValue(mat):\n \n # stores maximum value\n maxValue = -sys.maxsize -1\n \n # maxArr[i][j] stores max of elements \n # in matrix from (i, j) to (N-1, N-1)\n maxArr = [[0 for x in range(N)]\n for y in range(N)]\n \n # last element of maxArr will be \n # same's as of the input matrix\n maxArr[N - 1][N - 1] = mat[N - 1][N - 1]\n \n # preprocess last row\n maxv = mat[N - 1][N - 1]; # Initialize max\n for j in range (N - 2, -1, -1):\n \n if (mat[N - 1][j] > maxv):\n maxv = mat[N - 1][j]\n maxArr[N - 1][j] = maxv\n \n # preprocess last column\n maxv = mat[N - 1][N - 1] # Initialize max\n for i in range (N - 2, -1, -1):\n \n if (mat[i][N - 1] > maxv):\n maxv = mat[i][N - 1]\n maxArr[i][N - 1] = maxv\n \n # preprocess rest of the matrix\n # from bottom\n for i in range (N - 2, -1, -1):\n \n for j in range (N - 2, -1, -1):\n \n # Update maxValue\n if (maxArr[i + 1][j + 1] -\n mat[i][j] > maxValue):\n maxValue = (maxArr[i + 1][j + 1] -\n mat[i][j])\n \n # set maxArr (i, j)\n maxArr[i][j] = max(mat[i][j], \n max(maxArr[i][j + 1], \n maxArr[i + 1][j]))\n \n return maxValue\n \n# Driver Code\nmat = [[ 1, 2, -1, -4, -20 ],\n [-8, -3, 4, 2, 1 ],\n [ 3, 8, 6, 1, 3 ],\n [ -4, -1, 1, 7, -6] ,\n [0, -4, 10, -5, 1 ]]\n \nprint (\"Maximum Value is\",\n findMaxValue(mat))\n \n# This code is contributed by iAyushRaj\n\n\n\n\n\n",
"e": 39553,
"s": 37665,
"text": null
},
{
"code": "\n\n\n\n\n\n\n// An efficient method to find \n// maximum value of mat1[d]\n// - ma[a][b] such that c > a \n// and d > b\nusing System;\nclass GFG {\n \n // The function returns\n // maximum value A(c,d) - A(a,b)\n // over all choices of indexes \n // such that both c > a\n // and d > b.\n static int findMaxValue(int N, int [,]mat)\n {\n \n //stores maximum value\n int maxValue = int.MinValue;\n \n // maxArr[i][j] stores max \n // of elements in matrix\n // from (i, j) to (N-1, N-1)\n int [,]maxArr = new int[N, N];\n \n // last element of maxArr \n // will be same's as of\n // the input matrix\n maxArr[N - 1, N - 1] = mat[N - 1,N - 1];\n \n // preprocess last row\n // Initialize max\n int maxv = mat[N - 1, N - 1];\n for (int j = N - 2; j >= 0; j--)\n {\n if (mat[N - 1, j] > maxv)\n maxv = mat[N - 1, j];\n maxArr[N - 1, j] = maxv;\n }\n \n // preprocess last column\n // Initialize max\n maxv = mat[N - 1,N - 1]; \n for (int i = N - 2; i >= 0; i--)\n {\n if (mat[i, N - 1] > maxv)\n maxv = mat[i,N - 1];\n maxArr[i,N - 1] = maxv;\n }\n \n // preprocess rest of the\n // matrix from bottom\n for (int i = N - 2; i >= 0; i--)\n {\n for (int j = N - 2; j >= 0; j--)\n {\n \n // Update maxValue\n if (maxArr[i + 1,j + 1] - \n mat[i, j] > maxValue)\n maxValue = maxArr[i + 1,j + 1] - \n mat[i, j];\n \n // set maxArr (i, j)\n maxArr[i,j] = Math.Max(mat[i, j],\n Math.Max(maxArr[i, j + 1],\n maxArr[i + 1, j]) );\n }\n }\n \n return maxValue;\n }\n \n // Driver code\n public static void Main () \n {\n int N = 5;\n \n int [,]mat = {{ 1, 2, -1, -4, -20 },\n { -8, -3, 4, 2, 1 },\n { 3, 8, 6, 1, 3 },\n { -4, -1, 1, 7, -6 },\n { 0, -4, 10, -5, 1 }};\n Console.Write(\"Maximum Value is \" + \n findMaxValue(N,mat));\n }\n}\n \n// This code is contributed by nitin mittal.\n\n\n\n\n\n",
"e": 41975,
"s": 39563,
"text": null
},
{
"code": "\n\n\n\n\n\n\n<?php \n// An efficient method to find \n// maximum value of mat[d] - ma[a][b] \n// such that c > a and d > b\n$N = 5;\n \n// The function returns maximum \n// value A(c,d) - A(a,b) over\n// all choices of indexes such \n// that both c > a and d > b.\nfunction findMaxValue($mat)\n{\n global $N;\n \n // stores maximum value\n $maxValue = PHP_INT_MIN;\n \n // maxArr[i][j] stores max \n // of elements in matrix\n // from (i, j) to (N-1, N-1)\n $maxArr[$N][$N] = array();\n \n // last element of maxArr \n // will be same's as of\n // the input matrix\n $maxArr[$N - 1][$N - 1] = $mat[$N - 1][$N - 1];\n \n // preprocess last row\n $maxv = $mat[$N - 1][$N - 1]; // Initialize max\n for ($j = $N - 2; $j >= 0; $j--)\n {\n if ($mat[$N - 1][$j] > $maxv)\n $maxv = $mat[$N - 1][$j];\n $maxArr[$N - 1][$j] = $maxv;\n }\n \n // preprocess last column\n $maxv = $mat[$N - 1][$N - 1]; // Initialize max\n for ($i = $N - 2; $i >= 0; $i--)\n {\n if ($mat[$i][$N - 1] > $maxv)\n $maxv = $mat[$i][$N - 1];\n $maxArr[$i][$N - 1] = $maxv;\n }\n \n // preprocess rest of the\n // matrix from bottom\n for ($i = $N - 2; $i >= 0; $i--)\n {\n for ($j = $N - 2; $j >= 0; $j--)\n {\n // Update maxValue\n if ($maxArr[$i + 1][$j + 1] - \n $mat[$i][$j] > $maxValue)\n $maxValue = $maxArr[$i + 1][$j + 1] - \n $mat[$i][$j];\n \n // set maxArr (i, j)\n $maxArr[$i][$j] = max($mat[$i][$j],\n max($maxArr[$i][$j + 1],\n $maxArr[$i + 1][$j]));\n }\n }\n \n return $maxValue;\n}\n \n// Driver Code\n$mat = array(array(1, 2, -1, -4, -20),\n array(-8, -3, 4, 2, 1),\n array(3, 8, 6, 1, 3),\n array(-4, -1, 1, 7, -6),\n array(0, -4, 10, -5, 1)\n );\necho \"Maximum Value is \". \n findMaxValue($mat);\n \n// This code is contributed \n// by ChitraNayal\n?>\n\n\n\n\n\n",
"e": 44034,
"s": 41985,
"text": null
},
{
"code": "\n\n\n\n\n\n\n<script>\n// An efficient method to find maximum value of mat1[d]\n// - ma[a][b] such that c > a and d > b\n \n // The function returns maximum value A(c,d) - A(a,b)\n // over all choices of indexes such that both c > a\n // and d > b.\n function findMaxValue(N,mat)\n {\n \n // stores maximum value\n let maxValue = Number.MIN_VALUE;\n \n // maxArr[i][j] stores max of elements in matrix\n // from (i, j) to (N-1, N-1)\n let maxArr=new Array(N);\n for(let i = 0; i < N; i++)\n {\n maxArr[i]=new Array(N);\n }\n \n // last element of maxArr will be same's as of\n // the input matrix\n maxArr[N - 1][N - 1] = mat[N - 1][N - 1];\n \n // preprocess last row\n let maxv = mat[N-1][N-1]; // Initialize max\n for (let j = N - 2; j >= 0; j--)\n {\n if (mat[N - 1][j] > maxv)\n maxv = mat[N - 1][j];\n maxArr[N - 1][j] = maxv;\n }\n \n // preprocess last column\n maxv = mat[N - 1][N - 1]; // Initialize max\n for (let i = N - 2; i >= 0; i--)\n {\n if (mat[i][N - 1] > maxv)\n maxv = mat[i][N - 1];\n maxArr[i][N - 1] = maxv;\n }\n \n // preprocess rest of the matrix from bottom\n for (let i = N-2; i >= 0; i--)\n {\n for (let j = N-2; j >= 0; j--)\n {\n \n // Update maxValue\n if (maxArr[i+1][j+1] - mat[i][j] > maxValue)\n maxValue = maxArr[i + 1][j + 1] - mat[i][j];\n \n // set maxArr (i, j)\n maxArr[i][j] = Math.max(mat[i][j],\n Math.max(maxArr[i][j + 1],\n maxArr[i + 1][j]) );\n }\n }\n \n return maxValue;\n }\n \n // Driver code\n let N = 5;\n let mat = [[ 1, 2, -1, -4, -20 ],\n [-8, -3, 4, 2, 1 ],\n [ 3, 8, 6, 1, 3 ],\n [ -4, -1, 1, 7, -6] ,\n [0, -4, 10, -5, 1 ]];\n document.write(\"Maximum Value is \" +\n findMaxValue(N,mat));\n \n \n // This code is contributed by avanitrachhadiya2155\n</script>\n\n\n\n\n\n",
"e": 46324,
"s": 44044,
"text": null
},
{
"code": null,
"e": 46333,
"s": 46324,
"text": "Output: "
},
{
"code": null,
"e": 46353,
"s": 46333,
"text": "Maximum Value is 18"
},
{
"code": null,
"e": 46893,
"s": 46353,
"text": "If we are allowed to modify of the matrix, we can avoid using extra space and use input matrix instead.Exercise: Print index (a, b) and (c, d) as well.This article is contributed by Aditya Goel. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above "
},
{
"code": null,
"e": 46906,
"s": 46893,
"text": "nitin mittal"
},
{
"code": null,
"e": 46912,
"s": 46906,
"text": "ukasp"
},
{
"code": null,
"e": 46922,
"s": 46912,
"text": "iAyushRaj"
},
{
"code": null,
"e": 46930,
"s": 46922,
"text": "rag2127"
},
{
"code": null,
"e": 46951,
"s": 46930,
"text": "avanitrachhadiya2155"
},
{
"code": null,
"e": 46960,
"s": 46951,
"text": "\nMatrix\n"
},
{
"code": null,
"e": 47165,
"s": 46960,
"text": "Writing code in comment? \n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n "
},
{
"code": null,
"e": 47229,
"s": 47165,
"text": "Program to find the Sum of each Row and each Column of a Matrix"
},
{
"code": null,
"e": 47286,
"s": 47229,
"text": "Flood fill Algorithm - how to implement fill() in paint?"
},
{
"code": null,
"e": 47321,
"s": 47286,
"text": "Python program to add two Matrices"
},
{
"code": null,
"e": 47385,
"s": 47321,
"text": "Mathematics | L U Decomposition of a System of Linear Equations"
},
{
"code": null,
"e": 47431,
"s": 47385,
"text": "Breadth First Traversal ( BFS ) on a 2D array"
},
{
"code": null,
"e": 47470,
"s": 47431,
"text": "Multiplication of Matrix using threads"
},
{
"code": null,
"e": 47520,
"s": 47470,
"text": "Efficiently compute sums of diagonals of a matrix"
},
{
"code": null,
"e": 47546,
"s": 47520,
"text": "A Boolean Matrix Question"
},
{
"code": null,
"e": 47583,
"s": 47546,
"text": "Check for possible path in 2D matrix"
}
] |
ML | Unsupervised Face Clustering Pipeline - GeeksforGeeks
|
29 Nov, 2021
Live face recognition is a problem that automated security divisions still face. With the advancements in Convolutional Neural Networks, and specifically creative ways of using Region-CNNs, it is already confirmed that with current technology we can opt for supervised learning options such as FaceNet or YOLO for fast, live face recognition in a real-world environment. To train a supervised model, however, we need datasets of our target labels, which is still a tedious task. We need an efficient and automated solution for dataset generation, with minimal labeling effort through user intervention.
Introduction: We are proposing a dataset generation pipeline that takes a video clip as its source, extracts all the faces, and clusters them into limited and accurate sets of images, each representing a distinct person. Each set can then easily be labeled with minimal human input.

Technical Details: We are going to use the opencv library to extract frames from the input video clip at a rate of one per second; 1 second seems appropriate for covering the relevant data with a limited number of frames to process. We will use the face_recognition library (backed by dlib) for extracting the faces from the frames and aligning them for feature extraction. Then we will extract the human-observable features and cluster them using the DBSCAN clustering provided by scikit-learn. For the solution, we will crop out all the faces, create labels, and group them in folders for users to adapt as a dataset for their own training use-cases.

Challenges in implementation: To reach a larger audience, we plan to implement the solution to execute on a CPU rather than an NVIDIA GPU. Using an NVIDIA GPU may increase the efficiency of the pipeline. The CPU implementation of facial embedding extraction is very slow (30+ sec per image). To cope with the problem, we implement it with parallel pipeline execution (resulting in ~13 sec per image) and later merge the results for the further clustering tasks. We introduce tqdm along with PyPiper for progress updates, and resize the frames extracted from the input video for smooth execution of the pipeline.
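As a rough illustration of the clustering step described above (the function name and the eps/min_samples values are our own assumptions, not the article's code), the 128-dimensional face embeddings produced by face_recognition can be grouped with scikit-learn's DBSCAN like this:

Python3

import numpy as np
from sklearn.cluster import DBSCAN

# Cluster a list of 128-d face embeddings; label -1 marks noise,
# every other label corresponds to one distinct person
def cluster_face_embeddings(encodings, eps=0.5, min_samples=5):
    X = np.asarray(encodings)
    clusterer = DBSCAN(eps=eps, min_samples=min_samples, metric="euclidean")
    return clusterer.fit(X).labels_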
Input: Footage.mp4
Output:
Required Python3 modules: os, cv2, numpy, tensorflow, json, re, shutil, time, pickle, pyPiper, tqdm, imutils, face_recognition, dlib, warnings, sklearn
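Most of the standard-library modules above ship with Python. The third-party ones can usually be installed with pip; the package names below are the common PyPI names (an assumption that may vary by platform), and dlib in particular generally needs CMake and a C++ build toolchain available before it will install:

pip install numpy opencv-python imutils tqdm pyPiper scikit-learn tensorflow
pip install dlib face_recognition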
Snippets Section: The file FaceClusteringLibrary.py contains all the class definitions; the snippets below walk through them with an explanation of how they work. The ResizeUtils class provides the functions rescale_by_height and rescale_by_width. “rescale_by_width” takes ‘image’ and ‘target_width’ as input and upscales/downscales the image so that its width meets target_width; the height is calculated automatically so that the aspect ratio stays the same. rescale_by_height does the same, but targets the height instead of the width.
Python3
'''The ResizeUtils provides resizing function to keep the aspect ratio intact
Credits: AndyP at StackOverflow'''
class ResizeUtils:
    # Given a target height, adjust the image
    # by calculating the width and resize
    def rescale_by_height(self, image, target_height,
                          method = cv2.INTER_LANCZOS4):
        # Rescale `image` to `target_height`
        # (preserving aspect ratio)
        w = int(round(target_height * image.shape[1] / image.shape[0]))
        return (cv2.resize(image, (w, target_height),
                           interpolation = method))

    # Given a target width, adjust the image
    # by calculating the height and resize
    def rescale_by_width(self, image, target_width,
                         method = cv2.INTER_LANCZOS4):
        # Rescale `image` to `target_width`
        # (preserving aspect ratio)
        h = int(round(target_width * image.shape[0] / image.shape[1]))
        return (cv2.resize(image, (target_width, h),
                           interpolation = method))
Following is the definition of the FramesGenerator class. This class extracts jpg images by reading the video sequentially. A typical input video file has a framerate of around 30 fps, so one second of video yields 30 images, and even a 2-minute clip would produce 2 * 60 * 30 = 3600 images. That is far too many images to process and could keep the pipeline busy for hours. However, faces and people rarely change within a single second, so generating 30 images for every second of a 2-minute video is cumbersome and repetitive; instead, we can take just one snapshot per second. The implementation of “FramesGenerator” therefore dumps only 1 image per second from a video clip.
Since the dumped images are fed to face_recognition/dlib for face extraction, we keep the height no greater than 500 pixels and cap the width at 700 pixels. This limit is imposed by the “AutoResize” function, which calls rescale_by_height or rescale_by_width to shrink the image whenever the limits are exceeded, while still preserving the aspect ratio.
In the following snippet, the AutoResize function imposes a limit on the given image’s dimensions: if the width is greater than 700 we downscale the image to a width of 700 while maintaining the aspect ratio, and likewise the height must not be greater than 500.
Python3
# The FramesGenerator extracts image
# frames from the given video file
# The image frames are resized for
# face_recognition / dlib processing
class FramesGenerator:
    def __init__(self, VideoFootageSource):
        self.VideoFootageSource = VideoFootageSource

    # Resize the given input to fit in a specified
    # size for face embeddings extraction
    def AutoResize(self, frame):
        resizeUtils = ResizeUtils()

        height, width, _ = frame.shape

        # Keep the result of the recursive call so that both the
        # height and the width limits end up being respected
        if height > 500:
            frame = resizeUtils.rescale_by_height(frame, 500)
            frame = self.AutoResize(frame)

        if width > 700:
            frame = resizeUtils.rescale_by_width(frame, 700)
            frame = self.AutoResize(frame)

        return frame
Following is the snippet for the GenerateFrames function. It queries the fps of the video to decide how many frames to skip between dumped images, so that roughly one image is written per second. We clear the output directory and then iterate through the frames; before dumping any image, we resize it with AutoResize if it exceeds the size limits.
Python3
# Extract 1 frame from each second from video footage
# and save the frames to a specific folder
def GenerateFrames(self, OutputDirectoryName):
    cap = cv2.VideoCapture(self.VideoFootageSource)
    _, frame = cap.read()

    fps = cap.get(cv2.CAP_PROP_FPS)
    TotalFrames = cap.get(cv2.CAP_PROP_FRAME_COUNT)

    print("[INFO] Total Frames ", TotalFrames, " @ ", fps, " fps")
    print("[INFO] Calculating number of frames per second")

    CurrentDirectory = os.path.curdir
    OutputDirectoryPath = os.path.join(
        CurrentDirectory, OutputDirectoryName)

    if os.path.exists(OutputDirectoryPath):
        shutil.rmtree(OutputDirectoryPath)
        time.sleep(0.5)
    os.mkdir(OutputDirectoryPath)

    CurrentFrame = 1
    fpsCounter = 0
    FrameWrittenCount = 1

    while CurrentFrame < TotalFrames:
        _, frame = cap.read()
        if (frame is None):
            continue

        if fpsCounter > fps:
            fpsCounter = 0
            frame = self.AutoResize(frame)
            filename = "frame_" + str(FrameWrittenCount) + ".jpg"
            cv2.imwrite(os.path.join(
                OutputDirectoryPath, filename), frame)
            FrameWrittenCount += 1

        fpsCounter += 1
        CurrentFrame += 1

    print('[INFO] Frames extracted')
Following is the snippet for the FramesProvider class. It inherits from “Node”, PyPiper’s building block for constructing the image processing pipeline, and implements the “setup” and “run” functions. Any argument declared in “setup” becomes a keyword parameter of the node’s constructor, so here we can pass the sourcePath parameter when creating the FramesProvider object. The “setup” function runs only once; the “run” function is called repeatedly and keeps emitting data into the pipeline via the emit function until close is called. In “setup”, we accept sourcePath as an argument and iterate through all the files in the given frames directory; every file with a .jpg extension (the frames produced by FramesGenerator) is added to the “filesList” list. During the calls to run, each jpg image path from “filesList” is packed into an object with a unique “id” and its “imagePath”, and emitted to the pipeline for processing.
Python3
# Following are nodes for pipeline constructions.
# It will create and asynchronously execute threads
# for reading images, extracting facial features and
# storing them independently in different threads

# Keep emitting the filenames into
# the pipeline for processing
class FramesProvider(Node):
    def setup(self, sourcePath):
        self.sourcePath = sourcePath
        self.filesList = []
        for item in os.listdir(self.sourcePath):
            _, fileExt = os.path.splitext(item)
            if fileExt == '.jpg':
                self.filesList.append(os.path.join(item))
        self.TotalFilesCount = self.size = len(self.filesList)
        self.ProcessedFilesCount = self.pos = 0

    # Emit each filename in the pipeline for parallel processing
    def run(self, data):
        if self.ProcessedFilesCount < self.TotalFilesCount:
            self.emit({'id': self.ProcessedFilesCount,
                       'imagePath': os.path.join(
                           self.sourcePath,
                           self.filesList[self.ProcessedFilesCount])})
            self.ProcessedFilesCount += 1
            self.pos = self.ProcessedFilesCount
        else:
            self.close()
Following is the class implementation of “FaceEncoder”, which inherits from “Node” and can be pushed into the image processing pipeline. In the “setup” function, we accept the “detection_method” value for the “face_recognition/dlib” face detector to use; it can be the “cnn”-based detector or the “hog”-based one. The “run” function unpacks the incoming data into “id” and “imagePath”. It then reads the image from “imagePath” and runs “face_locations” from the “face_recognition/dlib” library to find the aligned face regions, which are our regions of interest. An aligned face image is a rectangular crop in which the eyes and lips sit at roughly fixed positions within the image (note: the implementation may differ in other libraries, e.g. opencv). Next, we call the “face_encodings” function from “face_recognition/dlib” to extract the facial embedding for each box; these floating-point values numerically describe the face and are what the later clustering step operates on. We define the variable “d” as an array of boxes and their respective embeddings, then pack the “id” together with this array under the “encodings” key of an object and emit it to the image processing pipeline.
Python3
# Encode the face embedding, reference path
# and location and emit to pipeline
class FaceEncoder(Node):
    def setup(self, detection_method = 'cnn'):
        self.detection_method = detection_method
        # detection_method can be cnn or hog

    def run(self, data):
        id = data['id']
        imagePath = data['imagePath']
        image = cv2.imread(imagePath)
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

        boxes = face_recognition.face_locations(
            rgb, model = self.detection_method)

        encodings = face_recognition.face_encodings(rgb, boxes)
        d = [{"imagePath": imagePath, "loc": box, "encoding": enc}
             for (box, enc) in zip(boxes, encodings)]

        self.emit({'id': id, 'encodings': d})
Following is an implementation of DatastoreManager, which again inherits from “Node” and can be plugged into the image processing pipeline. The aim of the class is to dump the “encodings” array as a pickle file, using the “id” parameter to give each pickle file a unique name. We want the pipeline to run multithreaded, and to exploit multithreading for performance we need to separate out the asynchronous tasks properly and avoid any need for synchronization. So, for maximum performance, we let each thread in the pipeline write its data out to its own separate file without interfering with any other thread’s work. To give a sense of how much time this saves: on the development hardware used here, the average embedding-extraction time without multithreading was ~30 seconds per image; with the multithreaded pipeline (4 threads) it dropped to ~10 seconds per image, at the cost of high CPU usage. Since each thread takes around 10 seconds per item, disk writes are infrequent and do not hamper the multithreaded performance. You may also wonder why pickle is used instead of JSON. In general, JSON is the better alternative: pickle is unsafe for data storage and communication, since pickle files can be maliciously modified to embed executable code, whereas JSON files are human readable and fast to encode and decode. The one thing pickle is good at is error-free dumping of Python objects and their contents into binary files. Since we are not planning to store or distribute these pickle files, and we want error-free execution, we use pickle here; otherwise, JSON or another alternative is strongly recommended.
Python3
# Receive the face embeddings for clustering and
# id for naming the distinct filename
class DatastoreManager(Node):
    def setup(self, encodingsOutputPath):
        self.encodingsOutputPath = encodingsOutputPath

    def run(self, data):
        encodings = data['encodings']
        id = data['id']
        with open(os.path.join(self.encodingsOutputPath,
                               'encodings_' + str(id) + '.pickle'),
                  'wb') as f:
            f.write(pickle.dumps(encodings))
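As noted above, JSON is generally the safer format for anything that will be stored or shared. The following is a hypothetical JSON-based variant of the node above, not part of the original library; it is only a sketch that assumes the same pyPiper Node base class and the same incoming data layout, and it converts the bounding boxes and NumPy embeddings into plain lists because those are not JSON serializable as-is.

Python3

import json
import os
import numpy as np
from pyPiper import Node

# Hypothetical JSON-based alternative to DatastoreManager (a sketch)
class JsonDatastoreManager(Node):
    def setup(self, encodingsOutputPath):
        self.encodingsOutputPath = encodingsOutputPath

    def run(self, data):
        # Convert tuples / NumPy arrays into JSON-serializable lists
        serializable = [{"imagePath": e["imagePath"],
                         "loc": [int(v) for v in e["loc"]],
                         "encoding": np.asarray(e["encoding"]).tolist()}
                        for e in data['encodings']]
        outPath = os.path.join(self.encodingsOutputPath,
                               'encodings_' + str(data['id']) + '.json')
        with open(outPath, 'w') as f:
            json.dump(serializable, f)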
Following is the implementation of the PicklesListCollator class. It reads the arrays of objects stored in multiple pickle files, merges them into one array, and dumps the combined array into a single pickle file. It has a single function, GeneratePickle, which accepts outputFilepath, the path of the single output pickle file that will contain the merged array.
Python3
# PicklesListCollator takes multiple pickle
# files as input and merges them together
# It is made specifically to support use-case
# of merging distinct pickle files into one
class PicklesListCollator:
    def __init__(self, picklesInputDirectory):
        self.picklesInputDirectory = picklesInputDirectory

    # Here we will list down all the pickles
    # files generated from multiple threads,
    # read the list of results append them to a
    # common list and create another pickle
    # with combined list as content
    def GeneratePickle(self, outputFilepath):
        datastore = []

        ListOfPickleFiles = []
        for item in os.listdir(self.picklesInputDirectory):
            _, fileExt = os.path.splitext(item)
            if fileExt == '.pickle':
                ListOfPickleFiles.append(os.path.join(
                    self.picklesInputDirectory, item))

        for picklePath in ListOfPickleFiles:
            with open(picklePath, "rb") as f:
                data = pickle.loads(f.read())
                datastore.extend(data)

        with open(outputFilepath, 'wb') as f:
            f.write(pickle.dumps(datastore))
The following is the implementation of the FaceClusterUtility class. The constructor takes “EncodingFilePath”, the path to the merged pickle file. We read the array from the pickle file and cluster it using the “DBSCAN” implementation in the scikit-learn library. Unlike k-means, DBSCAN does not require the number of clusters up front; the number of clusters depends on the eps threshold parameter and is determined automatically. The scikit-learn implementation also accepts the number of parallel jobs to use for the computation. The “Cluster” function reads the array data from the pickle file, runs “DBSCAN”, prints the number of unique clusters (i.e. unique faces) and returns the labels. The labels are unique values representing categories, which can be used to identify the category of each face present in the array (the array contents come from the pickle file).
Python3
# Face clustering functionality
class FaceClusterUtility:

    def __init__(self, EncodingFilePath):
        self.EncodingFilePath = EncodingFilePath

    # Credits: Adrian's pyimagesearch for the clustering code
    # Here we are using the sklearn.DBSCAN functionality
    # cluster all the facial embeddings to get clusters
    # representing distinct people
    def Cluster(self):
        InputEncodingFile = self.EncodingFilePath
        if not (os.path.isfile(InputEncodingFile) and
                os.access(InputEncodingFile, os.R_OK)):
            print('The input encoding file, ' +
                  str(InputEncodingFile) +
                  ' does not exist or is unreadable')
            exit()

        NumberOfParallelJobs = -1

        # load the serialized face encodings
        # + bounding box locations from disk,
        # then extract the set of encodings
        # so we can cluster on them
        print("[INFO] Loading encodings")
        data = pickle.loads(open(InputEncodingFile, "rb").read())
        data = np.array(data)

        encodings = [d["encoding"] for d in data]

        # cluster the embeddings
        print("[INFO] Clustering")
        clt = DBSCAN(eps = 0.5, metric = "euclidean",
                     n_jobs = NumberOfParallelJobs)

        clt.fit(encodings)

        # determine the total number of
        # unique faces found in the dataset
        labelIDs = np.unique(clt.labels_)
        numUniqueFaces = len(np.where(labelIDs > -1)[0])
        print("[INFO] # unique faces: {}".format(numUniqueFaces))

        return clt.labels_
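To build intuition for how the eps threshold above drives the number of clusters DBSCAN discovers, the short standalone sketch below uses random synthetic 128-dimensional vectors in place of real face embeddings (the three "identities", the noise scale and the eps values are illustrative assumptions only). A very small eps marks everything as noise, a moderate one recovers the three groups, and a very large one merges them all; the eps of 0.5 used in the pipeline is a common starting point for face_recognition embeddings but may need tuning for your footage.

Python3

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Three synthetic "identities": tight groups of 128-d vectors
centers = rng.normal(size = (3, 128))
embeddings = np.vstack([c + 0.01 * rng.normal(size = (40, 128))
                        for c in centers])

for eps in (0.05, 0.5, 25.0):
    labels = DBSCAN(eps = eps, metric = "euclidean").fit(embeddings).labels_
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print("eps =", eps, "-> clusters:", n_clusters,
          ", noise points:", int(np.sum(labels == -1)))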
Following is the implementation of the TqdmUpdate class, which inherits from “tqdm”. tqdm is a Python library that visualizes a progress bar in the console. The variables “n” and “total” are recognized by “tqdm” and are used to calculate the progress made. The “done” and “total_size” parameters of the “update” function receive their values when the function is bound as the update callback in the pipeline framework “PyPiper”. The super().refresh() call invokes the “refresh” implementation in the “tqdm” class, which redraws the progress bar in the console.
Python3
# Inherit class tqdm for visualization of progress
class TqdmUpdate(tqdm):

    # This function will be passed as progress
    # callback function. Setting the predefined
    # variables for auto-updates in visualization
    def update(self, done, total_size = None):
        if total_size is not None:
            self.total = total_size

        self.n = done
        super().refresh()
Following is the implementation of the FaceImageGenerator class. From the labels produced by clustering, this class generates a montage, cropped portrait images, and an annotation file per face for future training purposes (e.g. Darknet YOLO). The constructor expects EncodingFilePath, the path to the merged pickle file; it is used to load all the face encodings, from which we need the “imagePath” and the face coordinates to generate the images. The call to “GenerateImages” does the intended job. We load the array from the merged pickle file, take the unique labels, and loop over them. For each unique label, we list all the array indexes that carry that label and iterate over these indexes to process each face. For each face, we use the index to obtain the image file path and the face coordinates, and load the image from that path. The face coordinates are expanded to a portrait shape (while making sure they do not extend beyond the image boundaries), and the crop is written out to a file as a portrait image. We then start again from the original coordinates and expand them a little to create an annotation for future supervised training, which can improve recognition capabilities. The annotation here is written for “Darknet YOLO”, but it can be adapted to any other framework. Finally, we build a montage and write it out to an image file.
Python3
class FaceImageGenerator:
    def __init__(self, EncodingFilePath):
        self.EncodingFilePath = EncodingFilePath

    # Here we are creating montages for
    # first 25 faces for each distinct face.
    # We will also generate images for all
    # the distinct faces by using the labels
    # from clusters and image url from the
    # encodings pickle file.

    # The face bounding box is increased a
    # little more for training purposes and
    # we also created the exact annotation for
    # each face image (similar to darknet YOLO)
    # to easily adapt the annotation for future
    # use in supervised training
    def GenerateImages(self, labels, OutputFolderName = "ClusteredFaces",
                       MontageOutputFolder = "Montage"):
        output_directory = os.getcwd()

        OutputFolder = os.path.join(output_directory, OutputFolderName)
        if not os.path.exists(OutputFolder):
            os.makedirs(OutputFolder)
        else:
            shutil.rmtree(OutputFolder)
            time.sleep(0.5)
            os.makedirs(OutputFolder)

        MontageFolderPath = os.path.join(OutputFolder, MontageOutputFolder)
        os.makedirs(MontageFolderPath)

        data = pickle.loads(open(self.EncodingFilePath, "rb").read())
        data = np.array(data)

        labelIDs = np.unique(labels)

        # loop over the unique face integers
        for labelID in labelIDs:
            # find all indexes into the `data` array
            # that belong to the current label ID, then
            # randomly sample a maximum of 25 indexes
            # from the set
            print("[INFO] faces for face ID: {}".format(labelID))

            FaceFolder = os.path.join(OutputFolder, "Face_" + str(labelID))
            os.makedirs(FaceFolder)

            idxs = np.where(labels == labelID)[0]

            # initialize the list of faces to
            # include in the montage
            portraits = []

            # loop over the sampled indexes
            counter = 1
            for i in idxs:
                # load the input image and extract the face ROI
                image = cv2.imread(data[i]["imagePath"])
                (o_top, o_right, o_bottom, o_left) = data[i]["loc"]

                height, width, channel = image.shape

                widthMargin = 100
                heightMargin = 150

                top = o_top - heightMargin
                if top < 0:
                    top = 0

                bottom = o_bottom + heightMargin
                if bottom > height:
                    bottom = height

                left = o_left - widthMargin
                if left < 0:
                    left = 0

                right = o_right + widthMargin
                if right > width:
                    right = width

                portrait = image[top:bottom, left:right]

                if len(portraits) < 25:
                    portraits.append(portrait)

                resizeUtils = ResizeUtils()
                portrait = resizeUtils.rescale_by_width(portrait, 400)

                FaceFilename = "face_" + str(counter) + ".jpg"
                FaceImagePath = os.path.join(FaceFolder, FaceFilename)
                cv2.imwrite(FaceImagePath, portrait)

                widthMargin = 20
                heightMargin = 20

                top = o_top - heightMargin
                if top < 0:
                    top = 0

                bottom = o_bottom + heightMargin
                if bottom > height:
                    bottom = height

                left = o_left - widthMargin
                if left < 0:
                    left = 0

                right = o_right + widthMargin
                if right > width:
                    right = width

                AnnotationFilename = "face_" + str(counter) + ".txt"
                AnnotationFilePath = os.path.join(FaceFolder,
                                                  AnnotationFilename)

                f = open(AnnotationFilePath, 'w')
                f.write(str(labelID) + ' ' +
                        str(left) + ' ' + str(top) + ' ' +
                        str(right) + ' ' + str(bottom) + "\n")
                f.close()

                counter += 1

            montage = build_montages(portraits, (96, 120), (5, 5))[0]

            MontageFilenamePath = os.path.join(
                MontageFolderPath, "Face_" + str(labelID) + ".jpg")
            cv2.imwrite(MontageFilenamePath, montage)
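The annotation line written by GenerateImages above stores the class ID together with absolute left/top/right/bottom pixel coordinates. Darknet YOLO label files normally expect normalized center-x, center-y, width and height values instead, so if you intend to train with Darknet directly, a small conversion along the following lines may be needed. This is only a hedged sketch, not part of the original code, and the function name and example numbers are hypothetical.

Python3

# Convert an absolute (left, top, right, bottom) box into the
# normalized (x_center, y_center, width, height) format that
# Darknet YOLO label files expect
def to_yolo_annotation(labelID, left, top, right, bottom,
                       image_width, image_height):
    x_center = ((left + right) / 2.0) / image_width
    y_center = ((top + bottom) / 2.0) / image_height
    box_width = (right - left) / float(image_width)
    box_height = (bottom - top) / float(image_height)
    return "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
        labelID, x_center, y_center, box_width, box_height)

# Example: a 200x300 face box inside a 1280x720 frame for class 0
print(to_yolo_annotation(0, 400, 100, 600, 400, 1280, 720))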
Save the file as FaceClusteringLibrary.py; it contains all the class definitions. Following is the file Driver.py, which invokes these classes to build and run the pipeline.
Python3
# importing all classes from above Python file
from FaceClusteringLibrary import *

if __name__ == "__main__":

    # Generate the frames from given video footage
    framesGenerator = FramesGenerator("Footage.mp4")
    framesGenerator.GenerateFrames("Frames")

    # Design and run the face clustering pipeline
    CurrentPath = os.getcwd()
    FramesDirectory = "Frames"
    FramesDirectoryPath = os.path.join(CurrentPath, FramesDirectory)
    EncodingsFolder = "Encodings"
    EncodingsFolderPath = os.path.join(CurrentPath, EncodingsFolder)

    if os.path.exists(EncodingsFolderPath):
        shutil.rmtree(EncodingsFolderPath, ignore_errors = True)
        time.sleep(0.5)
    os.makedirs(EncodingsFolderPath)

    pipeline = Pipeline(
        FramesProvider("Files source", sourcePath = FramesDirectoryPath) |
        FaceEncoder("Encode faces") |
        DatastoreManager("Store encoding",
                         encodingsOutputPath = EncodingsFolderPath),
        n_threads = 3, quiet = True)

    pbar = TqdmUpdate()
    pipeline.run(update_callback = pbar.update)

    print()
    print('[INFO] Encodings extracted')

    # Merge all the encodings pickle files into one
    CurrentPath = os.getcwd()
    EncodingsInputDirectory = "Encodings"
    EncodingsInputDirectoryPath = os.path.join(
        CurrentPath, EncodingsInputDirectory)

    OutputEncodingPickleFilename = "encodings.pickle"

    if os.path.exists(OutputEncodingPickleFilename):
        os.remove(OutputEncodingPickleFilename)

    picklesListCollator = PicklesListCollator(
        EncodingsInputDirectoryPath)
    picklesListCollator.GeneratePickle(
        OutputEncodingPickleFilename)

    # To manage any delay in file writing
    time.sleep(0.5)

    # Start clustering process and generate
    # output images with annotations
    EncodingPickleFilePath = "encodings.pickle"

    faceClusterUtility = FaceClusterUtility(EncodingPickleFilePath)
    faceImageGenerator = FaceImageGenerator(EncodingPickleFilePath)

    labelIDs = faceClusterUtility.Cluster()
    faceImageGenerator.GenerateImages(
        labelIDs, "ClusteredFaces", "Montage")
Montage Output:
Troubleshooting:
Question 1: The whole PC freezes while extracting facial embeddings.
Solution: Decrease the values used in the frame-resize function when extracting frames from the input video clip. Remember that decreasing them too much will result in improper face clustering. Instead of resizing the whole frame, we could also introduce frontal-face detection and clip out only the frontal faces for improved accuracy.
Question 2: The PC becomes slow while running the pipeline.
Solution: The CPU will be used at its maximum level. To cap the usage, decrease the number of threads specified in the pipeline constructor.
Question 3: The output clustering is very inaccurate.
Solution: This usually means the frames extracted from the input video clip contain faces at a very small resolution, or that the number of frames is very small (around 7-8). Use a video clip with bright, clear images of faces, or, in the latter case, use a roughly 2-minute video or modify the frame-extraction part of the source code.
Refer to the Github link for the complete code and the additional files used: https://github.com/cppxaxa/FaceRecognitionPipeline_GeeksForGeeks
References: 1. Adrian’s blog post for face clustering 2. PyPiper guide 3. OpenCV manual 4. StackOverflow
singghakshay
rajeev0719singh
OpenCV
Technical Scripter 2018
Machine Learning
Project
Python
Technical Scripter
Machine Learning
|
How to install Elasticsearch on Windows 10 - onlinetutorialspoint
|
In this tutorial, we will show how to install Elasticsearch on the Windows 10 operating system.
Windows 10 Pro, 64 bit
Elasticsearch 6.7.1
Elasticsearch requires Java 8 (or later) to run. If you haven’t installed Java yet, follow our previous article to install Java on the Windows operating system.
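You can quickly confirm from a command prompt that a suitable Java runtime is available on your PATH; it should report version 1.8 or later:

java -version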
Follow the below steps to download and install Elasticsearch.
Download the latest Elasticsearch from the official website. Usually, it will be available as a .zip file.
The elasticsearch-x.x.x.zip file will be downloaded upon clicking the above-highlighted URL.
Extract the downloaded file; then you would see the below folder structure.
/bin folder contains all binaries required to run Elasticsearch.
/config folder contains all configuration properties related to Elasticsearch, Java and user settings.
Once the download has completed, we can start Elasticsearch from the command line.
D:\softwares\elasticsearch-6.7.1\bin>elasticsearch.exe
If everything went well, the command line terminal would display output similar to the following:
Elasticsearch loads its configuration from the elasticsearch.yml file at startup. If you want to change the configuration, you are free to edit elasticsearch.yml, or you can pass settings as parameters when executing the elasticsearch.exe file.
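For example, a minimal config/elasticsearch.yml for a local single-node setup might look like the following; the values shown are purely illustrative assumptions rather than required settings, and any paths you set must exist on your machine:

cluster.name: my-local-cluster
node.name: node-1
network.host: 127.0.0.1
http.port: 9200
path.data: D:\elasticsearch\data
path.logs: D:\elasticsearch\logs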
You can test that your Elasticsearch node is running by sending an HTTP request to port 9200 on localhost:
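For example, from a second command prompt (or simply by opening the URL in a browser) you can query the root endpoint; a running node typically answers with a small JSON document that includes fields such as name, cluster_name and version:

curl http://localhost:9200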
Elasticsearch Installation guide
Install Java on Windows 10
Happy Learning 🙂
How to use an Image as a button in Tkinter?
In this example, we will create a rounded button in a window that can be used in many other applications like forms, games, dialogue boxes, etc.
The best way to create rounded buttons in Tkinter is to take the desired button image and turn it into a clickable button in the frame. This is possible using the PhotoImage() function, which loads the desired image for the button.
So, the following steps make the desired image a button:
First, we will create a dummy button which can be used to make the image clickable.
Grab the image from the source using the PhotoImage(file) function.
Pass the image file as the value in the Button function.
Remove the button border by setting borderwidth=0.
Now, we have a rounded button.
For this example we will use this image and will make it clickable.
#Import all the necessary libraries
from tkinter import *

#Define the tkinter instance
win = Tk()
win.title("Rounded Button")

#Define the size of the tkinter frame
win.geometry("700x300")

#Define the working of the button
def my_command():
   text.config(text="You have clicked Me...")

#Import the image using PhotoImage function
#(clickme.png must be present in the working directory)
click_btn = PhotoImage(file='clickme.png')

#Let us create a label for the button event
img_label = Label(image=click_btn)

#Let us create a dummy button, pass the image and
#remove the border with borderwidth=0
button = Button(win, image=click_btn, command=my_command, borderwidth=0)
button.pack(pady=30)

text = Label(win, text="")
text.pack(pady=30)

win.mainloop()
Running the above code will produce the following output −
Constructor-based Dependency Injection
Constructor-based DI is accomplished when the container invokes a class constructor with a number of arguments, each representing a dependency on the other class.
The following example shows a class TextEditor that can only be dependency-injected with constructor injection.
Let us have a working Eclipse IDE in place and take the following steps to create a Spring application −
Here is the content of TextEditor.java file −
package com.tutorialspoint;

public class TextEditor {
   private SpellChecker spellChecker;

   public TextEditor(SpellChecker spellChecker) {
      System.out.println("Inside TextEditor constructor." );
      this.spellChecker = spellChecker;
   }
   public void spellCheck() {
      spellChecker.checkSpelling();
   }
}
Following is the content of another dependent class file SpellChecker.java
package com.tutorialspoint;

public class SpellChecker {
   public SpellChecker(){
      System.out.println("Inside SpellChecker constructor." );
   }
   public void checkSpelling() {
      System.out.println("Inside checkSpelling." );
   }
}
Following is the content of the MainApp.java file
package com.tutorialspoint;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
   public static void main(String[] args) {
      ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");

      TextEditor te = (TextEditor) context.getBean("textEditor");
      te.spellCheck();
   }
}
Following is the configuration file Beans.xml which has configuration for the constructor-based injection −
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<!-- Definition for textEditor bean -->
<bean id = "textEditor" class = "com.tutorialspoint.TextEditor">
<constructor-arg ref = "spellChecker"/>
</bean>
<!-- Definition for spellChecker bean -->
<bean id = "spellChecker" class = "com.tutorialspoint.SpellChecker"></bean>
</beans>
Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message −
Inside SpellChecker constructor.
Inside TextEditor constructor.
Inside checkSpelling.
There may be an ambiguity while passing arguments to the constructor, in case there are more than one parameters. To resolve this ambiguity, the order in which the constructor arguments are defined in a bean definition is the order in which those arguments are supplied to the appropriate constructor. Consider the following class −
package x.y;
public class Foo {
   public Foo(Bar bar, Baz baz) {
      // ...
   }
}
The following configuration works fine −
<beans>
   <bean id = "foo" class = "x.y.Foo">
      <constructor-arg ref = "bar"/>
      <constructor-arg ref = "baz"/>
   </bean>

   <bean id = "bar" class = "x.y.Bar"/>
   <bean id = "baz" class = "x.y.Baz"/>
</beans>
Let us check one more case where we pass different types to the constructor. Consider the following class −
package x.y;
public class Foo {
   public Foo(int year, String name) {
      // ...
   }
}
The container can also use type matching with simple types, if you explicitly specify the type of the constructor argument using the type attribute. For example −
<beans>
   <bean id = "exampleBean" class = "examples.ExampleBean">
      <constructor-arg type = "int" value = "2001"/>
      <constructor-arg type = "java.lang.String" value = "Zara"/>
   </bean>
</beans>
Finally, the best way to pass constructor arguments is to use the index attribute to specify explicitly the index of each constructor argument. Here, the index is 0-based. For example −
<beans>
   <bean id = "exampleBean" class = "examples.ExampleBean">
      <constructor-arg index = "0" value = "2001"/>
      <constructor-arg index = "1" value = "Zara"/>
   </bean>
</beans>
A final note: if you are passing a reference to an object, you need to use the ref attribute of the <constructor-arg> tag, and if you are passing a value directly, you should use the value attribute as shown above.
p5.js | vertex() Function - GeeksforGeeks
31 May, 2020
The vertex() function in p5.js is used to specify the coordinates of the vertices used to draw a shape. It can only be used with the beginShape() and endShape() functions to make various shapes and curves like points, lines, triangles, quads and polygons.
Syntax:
vertex( x, y )
OR
vertex( x, y, z, [u], [v] )
Parameters: This function accepts five parameters as mentioned above and described below:
x: It is a number that specifies the x-coordinate of the vertex.
y: It is a number that specifies the y-coordinate of the vertex.
z: It is a number that specifies the z-coordinate of the vertex.
u: It is a number that specifies the u-coordinate of the texture of the vertex. It is an optional parameter.
v: It is a number that specifies the v-coordinate of the texture of the vertex. It is an optional parameter.
The examples below illustrate the vertex() function in p5.js:
Example 1:
let currMode;

function setup() {
    createCanvas(400, 300);
    textSize(18);

    let shapeModes = [LINES, TRIANGLES, TRIANGLE_FAN,
                      TRIANGLE_STRIP, QUADS];
    let index = 0;
    currMode = shapeModes[index];

    let helpText = createP(
        `Click on the button to change the shape
         drawing mode. The red circles represent
         the vertices of the shape`
    );
    helpText.position(20, 0);

    let closeBtn = createButton("Change mode");
    closeBtn.position(20, 60);
    closeBtn.mouseClicked(() => {
        // Cycle through the available shape modes
        if (index < shapeModes.length - 1) index++;
        else index = 0;
        currMode = shapeModes[index];
    });
}

function draw() {
    clear();

    // Starting the shape using beginShape()
    beginShape(currMode);

    // Specifying all the vertices
    vertex(145, 245);
    vertex(50, 105);
    vertex(25, 235);
    vertex(115, 120);
    vertex(250, 125);

    // Ending the shape using endShape()
    endShape();

    // Points for demonstration
    fill("red");
    circle(145, 245, 10);
    circle(50, 105, 10);
    circle(25, 235, 10);
    circle(115, 120, 10);
    circle(250, 125, 10);
    noFill();
}
Output:
Example 2:
let vertices = [];

function setup() {
    createCanvas(400, 300);
    textSize(18);
    text("Click anywhere to place a vertex " +
         "at that point", 10, 20);
}

function mouseClicked() {
    // Update the vertices array with
    // the current mouse position
    vertices.push({ x: mouseX, y: mouseY });
    clear();

    fill("black");
    text("Click anywhere to place a vertex " +
         "at that point", 10, 20);
    noFill();

    // Draw the shape using the current vertices array
    beginShape();
    for (let i = 0; i < vertices.length; i++)
        vertex(vertices[i].x, vertices[i].y);
    endShape(CLOSE);

    fill("red");
    // Draw a circle at each of the vertices
    for (let i = 0; i < vertices.length; i++)
        circle(vertices[i].x, vertices[i].y, 15);
}
Output:
Online editor: https://editor.p5js.org/
Environment Setup: https://www.geeksforgeeks.org/p5-js-soundfile-object-installation-and-methods/
Reference: https://p5js.org/reference/#/p5/vertex
Apache POI PPT - Images
In this chapter, you will learn how to add an image to a PPT and how to read an image from it.
You can add images to a presentation using the createPicture() method of XSLFSlide. This method accepts the image in the form of a byte array. Therefore, you have to create a byte array of the image that is to be added to the presentation.
Follow the given procedure to add an image to a presentation. Create an empty slideshow using XMLSlideShow as shown below −
XMLSlideShow ppt = new XMLSlideShow();
Create an empty slide in it using createSlide().
XSLFSlide slide = ppt.createSlide();
Read the image file that is to be added and convert it into byte array using IOUtils.toByteArray() of the IOUtils class as shown below −
//reading an image
File image = new File("C://POIPPT//boy.jpg");
//converting it into a byte array
byte[] picture = IOUtils.toByteArray(new FileInputStream(image));
Add the image to the presentation using addPicture(). This method accepts two variables: byte array format of the image that is to be added and the static variable representing the file format of the image. The usage of the addPicture() method is shown below −
XSLFPictureData idx = ppt.addPicture(picture, PictureType.PNG);
Embed the image to the slide using createPicture() as shown below −
XSLFPictureShape pic = slide.createPicture(idx);
Given below is the complete program to add an image to the slide in a presentation −
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.poi.sl.usermodel.PictureData.PictureType;
import org.apache.poi.util.IOUtils;
import org.apache.poi.xslf.usermodel.XMLSlideShow;
import org.apache.poi.xslf.usermodel.XSLFPictureData;
import org.apache.poi.xslf.usermodel.XSLFPictureShape;
import org.apache.poi.xslf.usermodel.XSLFSlide;

public class AddingImage {
   public static void main(String args[]) throws IOException {
      //creating a presentation
      XMLSlideShow ppt = new XMLSlideShow();

      //creating a slide in it
      XSLFSlide slide = ppt.createSlide();

      //reading an image
      File image = new File("C://POIPPT//boy.jpg");

      //converting it into a byte array
      byte[] picture = IOUtils.toByteArray(new FileInputStream(image));

      //adding the image to the presentation
      XSLFPictureData idx = ppt.addPicture(picture, PictureType.PNG);

      //creating a slide with given picture on it
      XSLFPictureShape pic = slide.createPicture(idx);

      //creating a file object
      File file = new File("addingimage.pptx");
      FileOutputStream out = new FileOutputStream(file);

      //saving the changes to a file
      ppt.write(out);
      System.out.println("image added successfully");
      out.close();
   }
}
Save the above Java code as AddingImage.java, and then compile and execute it from the command prompt as follows −
$javac AddingImage.java
$java AddingImage
It will compile and execute to generate the following output −
image added successfully
The presentation with the newly added slide with image appears as follows −
You can get the data of all the pictures using the getPictureData() method of the XMLSlideShow class. The following program reads the images from a presentation −
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.poi.sl.usermodel.PictureData.PictureType;
import org.apache.poi.xslf.usermodel.XMLSlideShow;
import org.apache.poi.xslf.usermodel.XSLFPictureData;

public class Readingimage {
   public static void main(String args[]) throws IOException {
      //open an existing presentation
      File file = new File("addingimage.pptx");
      XMLSlideShow ppt = new XMLSlideShow(new FileInputStream(file));

      //reading all the pictures in the presentation
      for(XSLFPictureData data : ppt.getPictureData()){
         byte[] bytes = data.getData();
         String fileName = data.getFileName();
         PictureType pictureFormat = data.getType();
         System.out.println("picture name: " + fileName);
         System.out.println("picture format: " + pictureFormat);
      }

      //saving the changes to a file
      FileOutputStream out = new FileOutputStream(file);
      ppt.write(out);
      out.close();
   }
}
Save the above Java code as Readingimage.java, and then compile and execute it from the command prompt as follows −
$javac Readingimage.java
$java Readingimage
It will compile and execute to generate the following output −
picture name: image1.png
picture format: 6
PHP Throwing Exceptions
The Throwable interface is implemented by the Error and Exception classes. All predefined error classes inherit from the Error class. An instance of the corresponding class is thrown inside a try block and processed inside the appropriate catch block.
Normal execution (when no exception is thrown within the try block) continues after the last catch block defined in the sequence.
<?php
function div($x, $y) {
   if (!$y) {
      throw new Exception('Division by zero.');
   }
   return $x/$y;
}
try {
   echo div(10,5) . "\n";
   echo div(10,0) . "\n";
} catch (Exception $e) {
   echo 'Caught exception: ', $e->getMessage(), "\n";
}
// Continue execution
echo "Execution continues\n";
?>
Following output is displayed
2
Caught exception: Division by zero.
Execution continues
In following example, TypeError is thrown while executing a function because appropriate arguments are not passed to it. Corresponding error message is displayed
<?php
function add(int $num1, int $num2){
   return $num1 + $num2;
}
try {
   $value = add(1, 'one');
} catch (TypeError $e) {
   echo $e->getMessage(). "\n";
}
?>
Following output is displayed
Argument 2 passed to add() must be of the type integer, string given
The Standard PHP Library (SPL) contains a set of predefined exceptions.
Following example shows OutOfBoundsException thrown when a key in PHP array is not found
<?php
$arr=array("one"=>1, "two"=>2,"three"=>3,"four"=>4);
$key="ten";
try{
   if (array_key_exists($key, $arr)==FALSE){
      throw new OutOfBoundsException("key not found");
   } else {
      echo $arr[$key];
   }
} catch (OutOfBoundsException $e){
   echo $e->getMessage(). "\n";
}
?>
Following output is displayed
key not found
Difference Between Cellpadding and Cellspacing
In this post, we will understand the difference between cellpadding and cellspacing. A combined example follows the comparison below.

Cellpadding

It is associated with a single cell.
It helps control the white space that is present between the border of the cell and the contents within the cell.
The default value of cellpadding is 1.
It is generally considered the more effective of the two attributes.
It is set using the HTML <table> tag.
The attribute is named 'cellpadding'.
<table cellpadding="value" >.....</table>

Cellspacing

It is associated with more than a single cell.
It helps set the space between individual cells.
It is generally considered less effective than cellpadding.
The default cellspacing value is 2.
It is set using the HTML <table> tag.
The attribute is named 'cellspacing'.
<table cellspacing="value" >.....</table>
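To make the two attributes concrete, here is a minimal, hypothetical HTML snippet that uses both on the same table; the values 10 and 20 (and the border attribute) are arbitrary choices for illustration.

<table border="1" cellpadding="10" cellspacing="20">
   <tr>
      <td>Cell 1</td>
      <td>Cell 2</td>
   </tr>
</table>

Here cellpadding="10" leaves 10 pixels between each cell's border and its content, while cellspacing="20" leaves 20 pixels between neighbouring cells and between the cells and the table border.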
How to find a replace a word in a string in C#?
Firstly, set the string to be replaced.
string str = "Demo text!";
Now use the Replace() method to replace the above string.
string res = str.Replace("Demo ", "New ");
The following is the complete code to replace a word in a string.
using System;
public class Demo {
   public static void Main() {
      string str = "Demo text!";
      Console.WriteLine(str);

      string res = str.Replace("Demo ", "New ");
      Console.WriteLine("After replacing...");
      Console.WriteLine(res);
   }
}
Demo text!
After replacing...
New text!
What is a static polymorphism in C#?
Static polymorphism is the linking of a function with an object at compile time; it is also called static binding. C# provides two techniques to implement static polymorphism: function overloading and operator overloading.
Let us first look at function overloading. You can have multiple definitions for the same function name in the same scope. The definitions must differ from each other by the types and/or the number of arguments in the argument list. You cannot overload function declarations that differ only by return type.
The following is the complete example −
using System;

namespace PolymorphismApplication {
   class Printdata {
      void print(int i) {
         Console.WriteLine("Printing int: {0}", i );
      }

      void print(double f) {
         Console.WriteLine("Printing float: {0}" , f);
      }

      void print(string s) {
         Console.WriteLine("Printing string: {0}", s);
      }

      static void Main(string[] args) {
         Printdata p = new Printdata();

         // Call print to print integer
         p.print(5);

         // Call print to print float
         p.print(500.263);

         // Call print to print string
         p.print("Hello C++");
         Console.ReadKey();
      }
   }
}
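The program above demonstrates function overloading. The second technique mentioned earlier, operator overloading, is resolved at compile time in the same way. Below is a minimal, hypothetical sketch; the Box class and its Width/Height properties are invented purely for illustration.

using System;

namespace PolymorphismApplication {
   class Box {
      public int Width { get; set; }
      public int Height { get; set; }

      // Overload the + operator so that two Box objects can be added
      public static Box operator +(Box a, Box b) {
         return new Box { Width = a.Width + b.Width, Height = a.Height + b.Height };
      }

      static void Main(string[] args) {
         Box b1 = new Box { Width = 2, Height = 3 };
         Box b2 = new Box { Width = 4, Height = 5 };

         // The compiler binds this call to the overloaded + operator at compile time
         Box b3 = b1 + b2;
         Console.WriteLine("Width: {0}, Height: {1}", b3.Width, b3.Height);
         Console.ReadKey();
      }
   }
}

Running this sketch prints Width: 6, Height: 8, because the overloaded operator adds the corresponding properties of the two objects.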
How to use Try/catch blocks in C#?
Exceptions provide a way to transfer control from one part of a program to another. C# exception handling is built upon four keywords: try, catch, finally, and throw.
try − A try block identifies a block of code for which particular exceptions are activated. It is followed by one or more catch blocks.
catch − A program catches an exception with an exception handler at the place in a program where you want to handle the problem. The catch keyword indicates the catching of an exception.
The following is an example showing how to use the try, catch, and finally in C#.
using System;
namespace Demo {
   class DivNumbers {
      int result;
      DivNumbers() {
         result = 0;
      }
      public void division(int num1, int num2) {
         try {
            result = num1 / num2;
         } catch (DivideByZeroException e) {
            Console.WriteLine("Exception caught: {0}", e);
         } finally {
            Console.WriteLine("Result: {0}", result);
         }
      }
      static void Main(string[] args) {
         DivNumbers d = new DivNumbers();
         d.division(25, 0);
         Console.ReadKey();
      }
   }
}
Result: 0
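The example above covers try, catch, and finally; the remaining keyword from the list, throw, is used to raise an exception yourself. The following is a minimal, hypothetical sketch in which the age check is an invented scenario.

using System;
namespace Demo {
   class ThrowDemo {
      static void CheckAge(int age) {
         if (age < 18) {
            // Raise an exception explicitly with the throw keyword
            throw new ArgumentException("Age must be at least 18.");
         }
         Console.WriteLine("Age accepted: {0}", age);
      }
      static void Main(string[] args) {
         try {
            CheckAge(15);
         } catch (ArgumentException e) {
            Console.WriteLine("Exception caught: {0}", e.Message);
         }
         Console.ReadKey();
      }
   }
}

Here the thrown ArgumentException is caught by the matching catch block, which prints the message passed to the exception's constructor.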
GATE | GATE-CS-2004 | Question 55 - GeeksforGeeks
28 Jun, 2021
The routing table of a router is shown below:
Destination Sub net mask Interface
128.75.43.0 255.255.255.0 Eth0
128.75.43.0 255.255.255.128 Eth1
192.12.17.5 255.255.255.255 Eth3
default Eth2
On which interfaces will the router forward packets addressed to destinations 128.75.43.16 and 192.12.17.10 respectively?(A) Eth1 and Eth2(B) Eth0 and Eth2(C) Eth0 and Eth3(D) Eth1 and Eth3Answer: (A)Explanation: To find the interface, we need to do AND of incoming IP address and Subnet mask. Compare the result of AND with the destination. Note that if there is a match between multiple Destinations, then we need to select the destination with longest length subnet mask.
128.75.43.16 matches both 128.75.43.0 (mask 255.255.255.0) and 128.75.43.0 (mask 255.255.255.128), so the packet is forwarded to Eth1, as the subnet mask on Eth1 is longer.
If the result does not match any of the given destinations, then the packet is forwarded to the default interface (here Eth2). Therefore, the packets addressed to 192.12.17.10 will be forwarded to Eth2.
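As a quick check of this reasoning, the following is a minimal Python sketch (not part of the original question) that applies each subnet mask with the ipaddress module and picks the longest matching prefix; the routing table values are the ones given above, and Eth2 is assumed as the default interface.

# Hypothetical verification sketch using Python's ipaddress module
import ipaddress

# Routing table from the question: (destination, subnet mask, interface)
routing_table = [
    ("128.75.43.0", "255.255.255.0",   "Eth0"),
    ("128.75.43.0", "255.255.255.128", "Eth1"),
    ("192.12.17.5", "255.255.255.255", "Eth3"),
]

def forward(ip, table, default="Eth2"):
    addr = ipaddress.ip_address(ip)
    best = None
    for dest, mask, iface in table:
        net = ipaddress.ip_network(f"{dest}/{mask}")
        # Longest-prefix match: keep the matching entry with the longest mask
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, iface)
    return best[1] if best else default

print(forward("128.75.43.16", routing_table))   # Eth1 (the /25 mask is longer than /24)
print(forward("192.12.17.10", routing_table))   # Eth2 (no match, so the default interface)

Running this prints Eth1 and Eth2, matching option (A).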
How to add current date to an existing MySQL table?
|
To update an existing table, use UPDATE. With that, to set the current date, use the CURDATE() function −
update yourTableName set yourColumnName=CURDATE();
Let us first create a table −
mysql> create table DemoTable
-> (
-> DueDate datetime
-> );
Query OK, 0 rows affected (0.58 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values('2019-01-10') ;
Query OK, 1 row affected (0.21 sec)
mysql> insert into DemoTable values('2019-03-31');
Query OK, 1 row affected (0.18 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+---------------------+
| DueDate |
+---------------------+
| 2019-01-10 00:00:00 |
| 2019-03-31 00:00:00 |
+---------------------+
2 rows in set (0.00 sec)
Following is the query to add current date to existing table −
mysql> update DemoTable set DueDate=CURDATE();
Query OK, 2 rows affected (0.13 sec)
Rows matched: 2 Changed: 2 Warnings: 0
Let us check the table records once again −
mysql> select *from DemoTable;
This will produce the following output −
+---------------------+
| DueDate |
+---------------------+
| 2019-06-22 00:00:00 |
| 2019-06-22 00:00:00 |
+---------------------+
2 rows in set (0.00 sec)
Set the image path with CSS
|
The border-image-source property is used in CSS to set the image path. You can try to run the following code to set the image path −
<html>
   <head>
      <style>
         #borderimg1 {
            border: 15px solid transparent;
            padding: 15px;
            border-image-source: url(https://tutorialspoint.com/css/images/border.png);
            border-image-repeat: round;
         }
      </style>
   </head>

   <body>
      <p id = "borderimg1">This is image border example.</p>
   </body>
</html>
What is .htaccess in PHP?
|
.htaccess is a configuration file for use on web servers running the Apache web server software. When a .htaccess file is placed in a directory that is in turn loaded via the Apache web server, the .htaccess file is detected and executed by the Apache server software.
.htaccess files can be utilized to modify the setup of the Apache server software, enabling additional functionality and features that the Apache web server software brings to the table. We can use the .htaccess file to alter various configurations of the Apache web server software. Some of them are listed below:
Creating custom error pages is very useful; it allows us to show website visitors a friendly error message in case a URL on your website does not work.
ErrorDocument 404 /error_pages/404.html
We can very easily password-protect a directory of an application so that a username and password are required to access it.
AuthName "Admin Area"
AuthUserFile /path/to/password/file/.htpasswd
AuthType Basic
require valid-user
The first line tells the Apache Web Server that the secure directory is called 'Admin Area'; this name will be displayed when the pop-up login prompt appears. The second line indicates the location of the password file. The third line determines the authentication type; in this example we are using 'Basic' because we are using basic HTTP authentication. Lastly, the fourth line indicates that we require valid login credentials.
Redirects enable us to direct web site visitors from one document within your web site to another.
Redirect /old_dir/ http://www.test.com(your domain)/new_dir/index.html
order allow,deny
deny from 155.0.2.0
deny from 123.45.6.1
allow from all
The above lines tell the Apache Web Server to block visitors from the IP addresses '155.0.2.0' and '123.45.6.1' and allow all other IP addresses.
To set up a MIME type, create a .htaccess file following the main instructions and guidance which includes the following text:
AddType text/html htm0
'AddType' indicates that you are adding a MIME type. The subsequent part is the MIME type, in this case text/html, and the last part is the file extension, in this example 'htm0'.
Bash Scripting - Split String - GeeksforGeeks
|
04 Jan, 2022
In this article, we will discuss how to split strings in a bash script.
Dividing a single string into multiple strings is called string splitting. Many programming languages have a built-in function to perform string splitting, but bash has no such built-in function. However, there are various ways to split a string in bash. Let’s see all the methods one by one with examples.
$IFS(Internal Field Separator) is a special shell variable. It is used to assign the delimiter(a sequence of one or more characters based on which we want to split the string). Any value or character like ‘\t’, ‘\n’, ‘-‘ etc. can be the delimiter. After assigning the value to the $IFS variable, the string value needs to be read. We can read string using ‘-r’ and ‘-a’ options.
‘-r’: It reads a backslash (\) as a literal character instead of an escape character.
‘-a’: It is used to store the split words into an array variable that is declared after the -a option.
Example 1: Split the string by space
Code:
#!/bin/bash
# String
text="Welcome to GeeksforGeeks"
# Set space as the delimiter
IFS=' '
# Read the split words into an array
# based on space delimiter
read -ra newarr <<< "$text"
# Print each value of the array by using
# the loop
for val in "${newarr[@]}";
do
echo "$val"
done
Output:
Welcome
to
GeeksforGeeks
Example 2: Split string by a symbol
String split using @ symbol.
Code:
#!/bin/bash
#String
text="Welcome@to@GeeksforGeeks@!!"
# Set @ as the delimiter
IFS='@'
# Read the split words into an array
# based on space delimiter
read -ra newarr <<< "$text"
# Print each value of the array by
# using the loop
for val in "${newarr[@]}";
do
echo "$val"
done
Output:
Welcome
to
GeeksforGeeks
!!
In this method, the readarray command with the -d option is used to split the string data. ‘-d’: this option acts like the IFS variable, defining the delimiter.
Example 1: Split string by space
Code:
#!/bin/bash
# Read the main string
text="Welcome to GeeksforGeeks"
# Split the string by space
readarray -d " " -t strarr <<< "$text"
# Print each value of the array by
# using loop
for (( n=0; n < ${#strarr[*]}; n++))
do
echo "${strarr[n]}"
done
Output:
Welcome
to
GeeksforGeeks
Example 2: Split using a colon as a delimiter
Code:
#!/bin/bash
# Read the main string
text="Welcome:to:GeeksforGeeks"
# Split the string based on the delimiter, ':'
readarray -d : -t strarr <<< "$text"
# Print each value of the array by using
# loop
for (( n=0; n < ${#strarr[*]}; n++))
do
echo "${strarr[n]}"
done
Output:
Welcome
to
GeeksforGeeks
In this method, a variable is used to store the string data and another variable is used to store the multi-character delimiter. An array variable is also declared to store the split string.
Code:
# Define the string to split
text="HelloRomy HelloPushkar HelloNikhil HelloRinkle"
# store multi-character delimiter
delimiter="Hello"
# Concatenate the delimiter with the
# main string
string=$text$delimiter
# Split the text based on the delimiter
newarray=()
while [[ $string ]]; do
newarray+=( "${string%%"$delimiter"*}" )
string=${string#*"$delimiter"}
done
# Print the words after the split
for value in ${newarray[@]}
do
echo "$value "
done
Output:
Romy
Pushkar
Nikhil
Rinkle
HTML DOM Form submit() method
|
The HTML DOM Form submit() method is used for submitting the form data to the address specified by the action attribute. It acts as a submit button to submit form data and it doesn’t take any kind of parameters.
Following is the syntax for Form submit() method −
formObject.submit()
Let us look at an example for the Form submit() method −
<!DOCTYPE html>
<html>
<head>
<style>
   form {
      border:2px solid blue;
      margin:2px;
      padding:4px;
   }
</style>
<script>
   function SubmitForm() {
      document.getElementById("FORM1").submit();
   }
</script>
</head>
<body>
<h1>Form submit() method example</h1>
<form id="FORM1" method="post" action="/sample_page.php">
<label>User Name <input type="text" name="usrN"></label> <br><br>
<label>Age <input type="text" name="Age"></label> <br><br>
<input type="button" onclick="SubmitForm()" value="SUBMIT">
</form>
<p id="Sample"></p>
</body>
</html>
This will produce the following output −
On clicking the SUBMIT button the form will be submitted and the “sample_page.php” will display this −
In the above example −
We have first created a form with id=”FORM1”, method=”post” and action=”/sample_page.php”. It contains two input fields with type text and a button named SUBMIT −
<form id="FORM1" method="post" action="/sample_page.php">
<label>User Name <input type="text" name="usrN"></label> <br><br>
<label>Age <input type="text" name="Age"></label> <br><br>
<input type="button" onclick="SubmitForm()" value="SUBMIT">
</form>
The SUBMIT button executes the SubmitForm() method on being clicked by the user −
<input type="button" onclick="SubmitForm()" value="SUBMIT">
The SubmitForm() method gets the <form> element using the document object's getElementById() method and calls the submit() method on it. This submits the form data to the address specified by the action attribute −
function SubmitForm() {
document.getElementById("FORM1").submit();
}
Data Visualization of Uber Rides with Tableau | by Vanessa Leung | Towards Data Science
|
This piece of work is inspired by the plotly demo sample New York Uber Rides. But we are going to do it in Tableau without writing a single line of code.
You can find my work on my Tableau page here.
This visualization creates a view of Uber rides in New York City from April to May in 2014. It aims to provide consumers insights on how the number of Uber rides changed within different day periods on different days in a month, in different NYC neighborhoods.
We use a map to display the distribution of rides geographically and a histogram to display the rides distribution among hours of the day.
We will use the data provided by FiveThirtyEight.
You can find how to link data sources to Tableau here.
Drag Lat and Lon from Measures to Columns and Rows to draw a map.
We want to create a dark theme for our map like most map visualizations do.
2.1. Select Map -> Map Layers
2.2. Change Background style to Dark.
2.3. Select Streets, Highways, Routes, Cities in Map Layer.
The map now shows more info about the geography.
3.1. Drag Date/Time from Dimensions to Marks -> Color
3.2. Click the dropdown button on the right side of Date/Time in the Marks section, select More -> Hour
3.3. Click Color -> Edit Color, select Color Palette Green-Gold, click Assign Palette
3.4. Click Color -> Effects -> Border -> None
Now the map is clearer with high contrast color.
We want to add filters to our map for users to choose specific dates and times.
4.1. Drag Date/Time from Dimensions to Filters, select Month/Day/Year, click All
4.2. Click the dropdown button on the right side of MDY(Date/Time) in the Filters section, click Show Filter
4.3. Click the dropdown button on the right side of MDY(Date/Time) on the sheet, select Single Value (drop down)
Now, we’re done with the map chart.
Open a new sheet to create the histogram.
1.1. Drag Date/Time(Hours) from Dimensions to Columns, drag Number of Records from Measures to Rows.
1.2. Click on the axis name Number of Records, deselect Show Header
1.3. Click on the axis name Date/Time on the sheet, select Hide Labels
We will use the same color theme as the map.
2.1. Drag Date/Time(Hours) from Dimensions to Marks -> Color
2.2. Drag Number of Records from Measures to Marks -> Label
2.3. Right-click on the histogram, select Format -> Lines -> Grid Lines -> Select None
2.4. Right-click on the histogram, select Format -> Shading -> Worksheet -> Select Black
We’re done with the histogram.
Now we want to create a dashboard to display the map and histogram together.
Click on the dashboard, click Format -> Dashboard -> Dashboard Shading -> Default -> Select Black
Drag Sheet 1 (Map) and Sheet 2 (Histogram) to the dashboard, with the histogram at the bottom.
Right-click on the sheet titles respectively, click Hide Title
1.1. Click on the Month, Day, Year of Date/Time filter, click Apply to Worksheets, select All Using Related Data Sources. So that we can delete the Date/Time (Hours) legend for Sheet 2 (Histogram).
1.2. Click on the Month, Day, Year of Date/Time filter, deselect Show Title
1.3. Click on the Hour of Date/Time legend for Sheet 1 (Map), deselect Show Title
1.4. Click on the Hour of Date/Time legend for Sheet 1 (Map), select Floating
1.5. Move the Hour of Date/Time legend on the right side of the map, resize the map to fit the legend
We want to add some descriptions to let users better understand the viz.
2.1. Title: Drag a text box to the dashboard, type
UBER RIDES IN NYC
April - May 2014
2.2. Instructions: Drag a text box to the dashboard, type Select the legends on the map or bars on the histogram to section data by time.
2.3. Descriptions: Drag a text box to the dashboard, type
TOTAL RIDES:
SOURCE: FIVETHIRTYEIGHT
2.4. Total Rides
To display dynamic TOTAL RIDES text based on the date and time filter, we need to add a new sheet.
2.4.1. Drag Number of Records from Measures to Marks -> Text. A number should be displayed on the sheet.
2.4.2. Right-click the number, click Format... -> Font -> Worksheet -> White
2.4.3. Right-click the number, click Format... -> Shading -> Worksheet -> Black
2.4.4. Drag the new sheet to the dashboard
2.4.5. Click on the new sheet -> select Hide Title
2.4.6. Click on the new sheet -> select Floating. Drag the number right next to the TOTAL RIDES text.
2.5. Add Blank
Add blanks to the dashboard where necessary, so that the text is vertically centered.
Now we’re done with the visualization.
New York Uber Rides
Understanding the Seasonal Order of the SARIMA Model | by Angelica Lo Duca | Towards Data Science
|
Some months ago, I wrote an article, which described the full process to build a SARIMA model for time series forecasting. In that article, I explained how to tune the p, d and q order of a SARIMA model and I evaluated the performance of the trained model in terms of NRMSE.
One comment about that article was that the proposed model was basically an ARIMA model, since it did not consider the seasonal order. I thanked the comment’s author and I investigated this aspect.
And now I am here to explain an interesting aspect that I discovered and that, unfortunately, is not well explained in the articles that I found on the Web.
A SARIMA model can be tuned with two kinds of orders:
(p,d,q) order, which refers to the order of the time series. This order is also used in the ARIMA model (which does not consider seasonality);
(P,D,Q,M) seasonal order, which refers to the order of the seasonal component of the time series.
In this article, I focus on the importance of the seasonal order.
Firstly, I import the dataset related to tourist arrivals to Italy from 1990 to 2019 as a pandas dataframe. Data are extracted from the European Statistics: Annual Data on Tourism Industries.
import pandas as pd

df = pd.read_csv('source/tourist_arrivals.csv')
df.head()
Now I convert the dataset into a time series. This can be done in three steps:
I convert the date column to the datetime type
I set the date column as index of the dataframe
I assign the value column of the dataframe to a new variable, called ts.
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
ts = df['value']
I explore the time series by plotting it. I exploit the matplotlib library.
from matplotlib import pyplot as plt

plt.plot(ts, label='prediction seasonal')
plt.grid()
plt.xticks(rotation=90)
plt.show()
I note that the time series presents a seasonality thus a SARIMA model is appropriate. In addition, the time series also presents an increasing trend.
Now I split the time series into two parts: train and test sets. The training set will be used to fit the model, while the test set will be used to evaluate it.
ts_train = ts[:'2019-03-01']
ts_test = ts['2019-04-01':]
In another notebook, I have already explained how to calculate the (p,d,q) order of the SARIMA model, thus I set it manually to the discovered values:
d = 1
p = 10
q = 7
The (P,D,Q,M) Order refers to the seasonal component of the model for the Auto Regressive parameters, differences, Moving Average parameters, and periodicity:
D indicates the integration order of the seasonal process (the number of transformations needed to make the time series stationary)
P indicates the Auto Regressive order for the seasonal component
Q indicates the Moving Average order for the seasonal component
M indicates the periodicity, i.e. the number of periods in season, such as 12 for monthly data.
In order to evaluate the seasonal order, we must extract the seasonal component from the time series. For this reason, we exploit the seasonal_decompose() function provided by the statsmodels library. Among the input parameters, we can specify the decomposition model (additive or multiplicative) and if we want to extrapolate the trend or not. The function returns the trend, the seasonal and the resid components.
from statsmodels.tsa.seasonal import seasonal_decompose

result = seasonal_decompose(ts, model='additive', extrapolate_trend='freq')
result.plot()
plt.show()
In order to extract D, we have to check whether the seasonal component is stationary or not. We define a function, which exploits the Adfuller test to check for stationarity.
from statsmodels.tsa.stattools import adfuller
import numpy as np

def check_stationarity(ts):
    dftest = adfuller(ts)
    adf = dftest[0]
    pvalue = dftest[1]
    critical_value = dftest[4]['5%']
    if (pvalue < 0.05) and (adf < critical_value):
        print('The series is stationary')
    else:
        print('The series is NOT stationary')
Then, we invoke the check_stationarity() function on the seasonal component of the time series:
seasonal = result.seasonal
check_stationarity(seasonal)
The series is stationary, thus we do not need any additional transformation to make it stationary. We can set D = 0.
The value of P can be extracted by looking at the Partial Autocorrelation (PACF) graph of the seasonal component. PACF can be imagined as the correlation between the series and its lag, after excluding the contributions from the intermediate lags.
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

plot_pacf(seasonal, lags=12)
plt.show()
In the PACF graph, the maximum lag with a value outside the confidence intervals (in light blue) is 12, thus we can set P = 12.
The Q order can be calculated from the Autocorrelation (ACF) plot. Autocorrelation is the correlation of a single time series with a lagged copy of itself.
plot_acf(seasonal, lags=12)
plt.show()
From the above graph, we note that the maximum lag with a value outside the confidence intervals is 8, thus Q = 8.
In order to show the difference between a SARIMA model with and without the tuning of the (P,D,Q,M) order, we fit two models, the first without the seasonality order and second with it.
We exploit the SARIMAX() class of the statsmodels package and we configure it to work only with the (p,d,q) order.
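The fitting code for this non-seasonal model is not shown in the excerpt, so here is a minimal sketch of the assumed step; the variable names n_test, model_fit and ts_pred are assumptions chosen so that the NRMSE computation below runs as written.

# Assumed fitting/forecasting step for the model WITHOUT the seasonal order
from statsmodels.tsa.statespace.sarimax import SARIMAX

n_test = len(ts_test)                        # forecast horizon = size of the test set
model = SARIMAX(ts_train, order=(p, d, q))   # only the (p,d,q) order is passed
model_fit = model.fit()
ts_pred = model_fit.forecast(steps=n_test)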
from statsmodels.tools.eval_measures import rmse

nrmse = rmse(ts_pred, ts_test)/(np.max(ts_test)-np.min(ts_test))
nrmse
which gives the following output:
0.0602505102281063
We fit the model by also passing the seasonal_order parameter to it. This operation requires more time than the previous one.
model_seasonal = SARIMAX(ts_train, order=(p,d,q), seasonal_order=(P,D,Q,12))
model_fit_seasonal = model_seasonal.fit()
We calculate forecasts and NRMSE:
ts_pred_seasonal = model_fit_seasonal.forecast(steps=n_test)
nrmse_seasonal = rmse(ts_pred_seasonal, ts_test)/(np.max(ts_test)-np.min(ts_test))
nrmse_seasonal
which gives the following output:
0.08154124683514104
We note that the model without the seasonal order outperforms the other model, in terms of NRMSE.
Is the (P,D,Q,M) order useless? Indeed it is not. For short-term predictions, the two models behave almost in the same way. However, for long-term predictions, the model with the (P,D,Q,M) order is more realistic, since it reflects the increasing trend.
To explain this concept, we can calculate a long-term forecasting in both cases:
N = 300
ts_pred = model_fit.forecast(steps=n_test+N)
ts_pred_seasonal = model_fit_seasonal.forecast(steps=n_test+N)
and we can plot the results. We note that there is a big difference between the two models. While the model without the (P,D,Q,M) order tends to decrease over time, the other reflects the increasing trend.
plt.figure(figsize=(10,5))
plt.plot(ts_pred_seasonal, label='prediction with P,D,Q')
plt.plot(ts_pred, label='prediction without P,D,Q')
#plt.plot(ts_test, label='actual')
plt.title('Multi-step Forecasting (manual parameters)')
plt.legend()
plt.grid()
plt.xticks(rotation=90)
plt.show()
In this tutorial, I have explained the importance of the seasonal order in the SARIMA model for a long-term prediction.
All the code explained in this article can be downloaded from my Github repository as a Jupyter Notebook.
If you wanted to be updated on my research and other activities, you can follow me on Twitter, Youtube and and Github.
C# Enum Equals Method
|
To find the equality between enums, use the Equals() method.
Let’s say we have the following Enum.
enum Products { HardDrive, PenDrive, Keyboard};
Create two Products objects and assign the same values.
Products prod1 = Products.HardDrive;
Products prod2 = Products.HardDrive;
Now check for equality using the Equals() method. The result would be True since both have the same underlying value.
using System;
class Program {
enum Products {HardDrive, PenDrive, Keyboard};
enum ProductsNew { Mouse, HeadPhone, Speakers};
static void Main() {
Products prod1 = Products.HardDrive;
Products prod2 = Products.HardDrive;
ProductsNew newProd1 = ProductsNew.HeadPhone;
ProductsNew newProd2 = ProductsNew.Speakers;
Console.WriteLine("Both are same products = {0}", prod1.Equals(prod2) ? "Yes" : "No");
Console.WriteLine("Both are same products = {0}", newProd1.Equals(newProd2) ? "Yes" : "No");
}
}
Both are same products = Yes
Both are same products = No
|
[
{
"code": null,
"e": 1123,
"s": 1062,
"text": "To find the equality between enums, use the Equals() method."
},
{
"code": null,
"e": 1161,
"s": 1123,
"text": "Let’s say we have the following Enum."
},
{
"code": null,
"e": 1209,
"s": 1161,
"text": "enum Products { HardDrive, PenDrive, Keyboard};"
},
{
"code": null,
"e": 1265,
"s": 1209,
"text": "Create two Products objects and assign the same values."
},
{
"code": null,
"e": 1339,
"s": 1265,
"text": "Products prod1 = Products.HardDrive;\nProducts prod2 = Products.HardDrive;"
},
{
"code": null,
"e": 1445,
"s": 1339,
"text": "Now check for equality using Equals() method. It would be True since both have the same underlying value."
},
{
"code": null,
"e": 1999,
"s": 1456,
"text": "using System;\nclass Program {\n enum Products {HardDrive, PenDrive, Keyboard};\n enum ProductsNew { Mouse, HeadPhone, Speakers};\n static void Main() {\n Products prod1 = Products.HardDrive;\n Products prod2 = Products.HardDrive;\n ProductsNew newProd1 = ProductsNew.HeadPhone;\n ProductsNew newProd2 = ProductsNew.Speakers;\n Console.WriteLine(\"Both are same products = {0}\", prod1.Equals(prod2) ? \"Yes\" : \"No\");\n Console.WriteLine(\"Both are same products = {0}\", newProd1.Equals(newProd2) ? \"Yes\" : \"No\");\n }\n}"
},
{
"code": null,
"e": 2056,
"s": 1999,
"text": "Both are same products = Yes\nBoth are same products = No"
}
] |
less - Unix, Linux Command
|
Commands are based on both
more and
vi. Commands may be preceded by a decimal number,
called N in the descriptions below.
The number is used by some commands, as indicated.
Certain characters are special
if entered at the beginning of the pattern;
they modify the type of search rather than become part of the pattern:
Certain characters are special as in the / command:
Certain characters are special as in the / command:
Most options may be given in one of two forms:
either a dash followed by a single letter,
or two dashes followed by a long option name.
A long option name may be abbreviated as long as
the abbreviation is unambiguous.
For example, --quit-at-eof may be abbreviated --quit, but not
--qui, since both --quit-at-eof and --quiet begin with --qui.
Some long option names are in uppercase, such as --QUIT-AT-EOF, as
distinct from --quit-at-eof.
Such option names need only have their first letter capitalized;
the remainder of the name may be in either case.
For example, --Quit-at-eof is equivalent to --QUIT-AT-EOF.
Options are also taken from the environment variable "LESS".
For example,
to avoid typing "less -options ..." each time
less is invoked, you might tell
csh:
setenv LESS "-options"
or if you use
sh:
LESS="-options"; export LESS
On MS-DOS, you don’t need the quotes, but you should replace any
percent signs in the options string by double percent signs.
The environment variable is parsed before the command line,
so command line options override the LESS environment variable.
If an option appears in the LESS variable, it can be reset
to its default value on the command line by beginning the command
line option with "-+".
For options like -P or -D which take a following string,
a dollar sign ($) must be used to signal the end of the string.
For example, to set two -D options on MS-DOS, you must have
a dollar sign between them, like this:
LESS="-Dn9.1$-Ds4.1"
If no log file has been specified,
the -o and -O options can be used from within
less to specify a log file.
Without a file name, they will simply report the name of the log file.
The "s" command is equivalent to specifying -o from within
less.
ESC [ ... m
where the "..." is zero or more color specification characters
For the purpose of keeping track of screen appearance,
ANSI color escape sequences are assumed to not move the cursor.
You can make
less think that characters other than "m" can end ANSI color escape sequences
by setting the environment variable LESSANSIENDCHARS to the list of
characters which can end a color escape sequence.
And you can make
less think that characters other than the standard ones may appear between
the ESC and the m by setting the environment variable LESSANSIMIDCHARS
to the list of characters which can appear.
By default, if neither -u nor -U is given,
backspaces which appear adjacent to an underscore character
are treated specially:
the underlined text is displayed
using the terminal’s hardware underlining capability.
Also, backspaces which appear between two identical characters
are treated specially:
the overstruck text is printed
using the terminal’s hardware boldface capability.
Other backspaces are deleted, along with the preceding character.
Carriage returns immediately followed by a newline are deleted.
Other carriage returns are handled as specified by the -r option.
Text which is overstruck or underlined can be searched for
if neither -u nor -U is in effect.
A system-wide lesskey file may also be set up to provide key bindings.
If a key is defined in both a local lesskey file and in the
system-wide file, key bindings in the local file take precedence over
those in the system-wide file.
If the environment variable LESSKEY_SYSTEM is set,
less uses that as the name of the system-wide lesskey file.
Otherwise,
less looks in a standard place for the system-wide lesskey file:
On Unix systems, the system-wide lesskey file is /usr/local/etc/sysless.
(However, if
less was built with a different sysconf directory than /usr/local/etc,
that directory is where the sysless file is found.)
On MS-DOS and Windows systems, the system-wide lesskey file is c:\_sysless.
On OS/2 systems, the system-wide lesskey file is c:\sysless.ini.
An input preprocessor receives one command line argument, the original filename,
as entered by the user.
It should create the replacement file, and when finished,
print the name of the replacement file to its standard output.
If the input preprocessor does not output a replacement filename,
less uses the original file, as normal.
The input preprocessor is not called when viewing standard input.
To set up an input preprocessor, set the LESSOPEN environment variable
to a command line which will invoke your input preprocessor.
This command line should include one occurrence of the string "%s",
which will be replaced by the filename
when the input preprocessor command is invoked.
When
less closes a file opened in such a way, it will call another program,
called the input postprocessor,
which may perform any desired clean-up action (such as deleting the
replacement file created by LESSOPEN).
This program receives two command line arguments, the original filename
as entered by the user, and the name of the replacement file.
To set up an input postprocessor, set the LESSCLOSE environment variable
to a command line which will invoke your input postprocessor.
It may include two occurrences of the string "%s";
the first is replaced with the original name of the file and
the second with the name of the replacement file,
which was output by LESSOPEN.
For example, on many Unix systems, these two scripts will allow you
to keep files in compressed format, but still let
less view them directly:
lessopen.sh:
#! /bin/sh
case "$1" in
*.Z) uncompress - $1 >/tmp/less.$$ 2>/dev/null
if [ -s /tmp/less.$$ ]; then
echo /tmp/less.$$
else
rm -f /tmp/less.$$
fi
;;
esac
lessclose.sh:
#! /bin/sh
rm $2
To use these scripts, put them both where they can be executed and
set LESSOPEN="lessopen.sh %s", and
LESSCLOSE="lessclose.sh %s %s".
More complex LESSOPEN and LESSCLOSE scripts may be written
to accept other types of compressed files, and so on.
It is also possible to set up an input preprocessor to
pipe the file data directly to
less, rather than putting the data into a replacement file.
This avoids the need to decompress the entire file before
starting to view it.
An input preprocessor that works this way is called an input pipe.
An input pipe, instead of writing the name of a replacement file on
its standard output,
writes the entire contents of the replacement file on its standard output.
If the input pipe does not write any characters on its standard output,
then there is no replacement file and
less uses the original file, as normal.
To use an input pipe,
make the first character in the LESSOPEN environment variable a
vertical bar (|) to signify that the input preprocessor is an input pipe.
For example, on many Unix systems, this script will work like the
previous example scripts:
lesspipe.sh:
#! /bin/sh
case "$1" in
*.Z) uncompress -c $1 2>/dev/null
;;
esac
To use this script, put it where it can be executed and set
LESSOPEN="|lesspipe.sh %s".
When an input pipe is used, a LESSCLOSE postprocessor can be used,
but it is usually not necessary since there is no replacement file
to clean up.
In this case, the replacement file name passed to the LESSCLOSE
postprocessor is "-".
For compatibility with previous versions of
less, the input preprocessor or pipe is not used if
less is viewing standard input.
However, if the first character of LESSOPEN is a dash (-),
the input preprocessor is used on standard input as well as other files.
In this case, the dash is not considered to be part of
the preprocessor command.
If standard input is being viewed, the input preprocessor is passed
a file name consisting of a single dash.
Similarly, if the first two characters of LESSOPEN are vertical bar and dash
(|-), the input pipe is used on standard input as well as other files.
Again, in this case the dash is not considered to be part of
the input pipe command.
This table shows the value of LESSCHARDEF which is equivalent
to each of the possible values for LESSCHARSET:
ascii 8bcccbcc18b95.b
dos 8bcccbcc12bc5b95.b.
ebcdic 5bc6bcc7bcc41b.9b7.9b5.b..8b6.10b6.b9.7b
9.8b8.17b3.3b9.7b9.8b8.6b10.b.b.b.
IBM-1047 4cbcbc3b9cbccbccbb4c6bcc5b3cbbc4bc4bccbc
191.b
iso8859 8bcccbcc18b95.33b.
koi8-r 8bcccbcc18b95.b128.
latin1 8bcccbcc18b95.33b.
next 8bcccbcc18b95.bb125.bb
If neither LESSCHARSET nor LESSCHARDEF is set,
but any of the strings "UTF-8", "UTF8", "utf-8" or "utf8"
is found in the LC_ALL, LC_CTYPE or LANG
environment variables, then the default character set is utf-8.
If that string is not found, but your system supports the
setlocale interface,
less will use setlocale to determine the character set.
setlocale is controlled by setting the LANG or LC_CTYPE environment
variables.
Finally, if the
setlocale interface is also not available, the default character set is latin1.
Control and binary characters are displayed in standout (reverse video).
Each such character is displayed in caret notation if possible
(e.g. ^A for control-A). Caret notation is used only if
inverting the 0100 bit results in a normal printable character.
Otherwise, the character is displayed as a hex number in angle brackets.
This format can be changed by
setting the LESSBINFMT environment variable.
LESSBINFMT may begin with a "*" and one character to select
the display attribute:
"*k" is blinking, "*d" is bold, "*u" is underlined, "*s" is standout,
and "*n" is normal.
If LESSBINFMT does not begin with a "*", normal attribute is assumed.
The remainder of LESSBINFMT is a string which may include one
printf-style escape sequence (a % followed by x, X, o, d, etc.).
For example, if LESSBINFMT is "*u[%x]", binary characters
are displayed in underlined hexadecimal surrounded by brackets.
The default if no LESSBINFMT is specified is "*s<%02X>".
Warning: the result of expanding the character via LESSBINFMT must
be less than 31 characters.
When the character set is utf-8, the LESSUTFBINFMT environment variable
acts similarly to LESSBINFMT but it applies to Unicode code points
that were successfully decoded but are unsuitable for display (e.g.,
unassigned code points).
Its default value is "<U+%04lX>".
Note that LESSUTFBINFMT and LESSBINFMT share their display attribute
setting ("*x") so specifying one will affect both;
LESSUTFBINFMT is read after LESSBINFMT so its setting, if any,
will have priority.
Problematic octets in a UTF-8 file (octets of a truncated sequence,
octets of a complete but non-shortest form sequence, illegal octets,
and stray trailing octets)
are displayed individually using LESSBINFMT so as to facilitate diagnostic
of how the UTF-8 file is ill-formed.
A percent sign followed by a single character is expanded
according to what the following character is:
Some examples:
?f%f:Standard input.
This prompt prints the filename, if known;
otherwise the string "Standard input".
?f%f .?ltLine %lt:?pt%pt\%:?btByte %bt:-...
This prompt would print the filename, if known.
The filename is followed by the line number, if known,
otherwise the percent if known, otherwise the byte offset if known.
Otherwise, a dash is printed.
Notice how each question mark has a matching period,
and how the % after the %pt
is included literally by escaping it with a backslash.
?n?f%f .?m(file %i of %m) ..?e(END) ?x- Next\: %x..%t
This prints the filename if this is the first prompt in a file,
followed by the "file N of N" message if there is more
than one input file.
Then, if we are at end-of-file, the string "(END)" is printed
followed by the name of the next file, if there is one.
Finally, any trailing spaces are truncated.
This is the default prompt.
For reference, here are the defaults for
the other two prompts (-m and -M respectively).
Each is broken into two lines here for readability only.
?n?f%f .?m(file %i of %m) ..?e(END) ?x- Next\: %x.:
?pB%pB\%:byte %bB?s/%s...%t
?f%f .?n?m(file %i of %m) ..?ltlines %lt-%lb?L/%L. :
byte %bB?s/%s. .?e(END) ?x- Next\: %x.:?pB%pB\%..%t
?f%f .?m(file %i of %m) .?ltlines %lt-%lb?L/%L. .
byte %bB?s/%s. ?e(END) :?pB%pB\%..%t
The prompt expansion features are also used for another purpose:
if an environment variable LESSEDIT is defined, it is used
as the command to be executed when the v command is invoked.
The LESSEDIT string is expanded in the same way as the prompt strings.
The default value for LESSEDIT is:
%E ?lm+%lm. %f
Less can also be compiled to be permanently in "secure" mode.
The -e option works differently.
If the -e option is not set,
less behaves as if the -E option were set.
If the -e option is set,
less behaves as if the -e and -F options were set.
The -m option works differently.
If the -m option is not set, the medium prompt is used,
and it is prefixed with the string "--More--".
If the -m option is set, the short prompt is used.
The -n option acts like the -z option.
The normal behavior of the -n option is unavailable in this mode.
The parameter to the -p option is taken to be a
less command rather than a search pattern.
The LESS environment variable is ignored,
and the MORE environment variable is used in its place.
less is part of the GNU project and is free software.
You can redistribute it and/or modify it
under the terms of either
(1) the GNU General Public License as published by
the Free Software Foundation; or (2) the Less License.
See the file README in the less distribution for more details
regarding redistribution.
You should have received a copy of the GNU General Public License
along with the source for less; see the file COPYING.
If not, write to the Free Software Foundation, 59 Temple Place,
Suite 330, Boston, MA 02111-1307, USA.
You should also have received a copy of the Less License;
see the file LICENSE.
less is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
Mark Nudelman <[email protected]>
See http://www.greenwoodsoftware.com/less/bugs.html for the latest list of known bugs in less.
Send bug reports or comments to the above address or to
[email protected].
For more information, see the less homepage at
http://www.greenwoodsoftware.com/less.
|
[
{
"code": null,
"e": 10754,
"s": 10579,
"text": "\nCommands are based on both\nmore and\nvi. Commands may be preceded by a decimal number,\ncalled N in the descriptions below.\nThe number is used by some commands, as indicated.\n"
},
{
"code": null,
"e": 10904,
"s": 10756,
"text": "\nCertain characters are special\nif entered at the beginning of the pattern;\nthey modify the type of search rather than become part of the pattern:\n"
},
{
"code": null,
"e": 10958,
"s": 10904,
"text": "\nCertain characters are special as in the / command:\n"
},
{
"code": null,
"e": 11012,
"s": 10958,
"text": "\nCertain characters are special as in the / command:\n"
},
{
"code": null,
"e": 11625,
"s": 11012,
"text": "\nMost options may be given in one of two forms:\neither a dash followed by a single letter,\nor two dashes followed by a long option name.\nA long option name may be abbreviated as long as\nthe abbreviation is unambiguous.\nFor example, --quit-at-eof may be abbreviated --quit, but not\n--qui, since both --quit-at-eof and --quiet begin with --qui.\nSome long option names are in uppercase, such as --QUIT-AT-EOF, as\ndistinct from --quit-at-eof.\nSuch option names need only have their first letter capitalized;\nthe remainder of the name may be in either case.\nFor example, --Quit-at-eof is equivalent to --QUIT-AT-EOF.\n"
},
{
"code": null,
"e": 11784,
"s": 11625,
"text": "\nOptions are also taken from the environment variable \"LESS\".\nFor example,\nto avoid typing \"less -options ...\" each time\nless is invoked, you might tell\ncsh: "
},
{
"code": null,
"e": 11809,
"s": 11784,
"text": "\nsetenv LESS \"-options\"\n"
},
{
"code": null,
"e": 11829,
"s": 11809,
"text": "\nor if you use\nsh: "
},
{
"code": null,
"e": 11860,
"s": 11829,
"text": "\nLESS=\"-options\"; export LESS\n"
},
{
"code": null,
"e": 11988,
"s": 11860,
"text": "\nOn MS-DOS, you don’t need the quotes, but you should replace any\npercent signs in the options string by double percent signs.\n"
},
{
"code": null,
"e": 12262,
"s": 11988,
"text": "\nThe environment variable is parsed before the command line,\nso command line options override the LESS environment variable.\nIf an option appears in the LESS variable, it can be reset\nto its default value on the command line by beginning the command\nline option with \"-+\".\n"
},
{
"code": null,
"e": 12484,
"s": 12262,
"text": "\nFor options like -P or -D which take a following string,\na dollar sign ($) must be used to signal the end of the string.\nFor example, to set two -D options on MS-DOS, you must have\na dollar sign between them, like this:\n"
},
{
"code": null,
"e": 12507,
"s": 12484,
"text": "\nLESS=\"-Dn9.1$-Ds4.1\"\n"
},
{
"code": null,
"e": 12758,
"s": 12511,
"text": "\nIf no log file has been specified,\nthe -o and -O options can be used from within\nless to specify a log file.\nWithout a file name, they will simply report the name of the log file.\nThe \"s\" command is equivalent to specifying -o from within\nless. "
},
{
"code": null,
"e": 12780,
"s": 12758,
"text": "\n ESC [ ... m\n"
},
{
"code": null,
"e": 13380,
"s": 12780,
"text": "\nwhere the \"...\" is zero or more color specification characters\nFor the purpose of keeping track of screen appearance,\nANSI color escape sequences are assumed to not move the cursor.\nYou can make\nless think that characters other than \"m\" can end ANSI color escape sequences\nby setting the environment variable LESSANSIENDCHARS to the list of\ncharacters which can end a color escape sequence.\nAnd you can make\nless think that characters other than the standard ones may appear between\nthe ESC and the m by setting the environment variable LESSANSIMIDCHARS\nto the list of characters which can appear.\n"
},
{
"code": null,
"e": 14053,
"s": 13380,
"text": "\nBy default, if neither -u nor -U is given,\nbackspaces which appear adjacent to an underscore character\nare treated specially:\nthe underlined text is displayed\nusing the terminal’s hardware underlining capability.\nAlso, backspaces which appear between two identical characters\nare treated specially:\nthe overstruck text is printed\nusing the terminal’s hardware boldface capability.\nOther backspaces are deleted, along with the preceding character.\nCarriage returns immediately followed by a newline are deleted.\nother carriage returns are handled as specified by the -r option.\nText which is overstruck or underlined can be searched for\nif neither -u nor -U is in effect.\n"
},
{
"code": null,
"e": 14828,
"s": 14057,
"text": "\nA system-wide lesskey file may also be set up to provide key bindings.\nIf a key is defined in both a local lesskey file and in the\nsystem-wide file, key bindings in the local file take precedence over\nthose in the system-wide file.\nIf the environment variable LESSKEY_SYSTEM is set,\nless uses that as the name of the system-wide lesskey file.\nOtherwise,\nless looks in a standard place for the system-wide lesskey file:\nOn Unix systems, the system-wide lesskey file is /usr/local/etc/sysless.\n(However, if\nless was built with a different sysconf directory than /usr/local/etc,\nthat directory is where the sysless file is found.)\nOn MS-DOS and Windows systems, the system-wide lesskey file is c:\\_sysless.\nOn OS/2 systems, the system-wide lesskey file is c:\\sysless.ini.\n"
},
{
"code": null,
"e": 15517,
"s": 14830,
"text": "\nAn input preprocessor receives one command line argument, the original filename,\nas entered by the user.\nIt should create the replacement file, and when finished,\nprint the name of the replacement file to its standard output.\nIf the input preprocessor does not output a replacement filename,\nless uses the original file, as normal.\nThe input preprocessor is not called when viewing standard input.\nTo set up an input preprocessor, set the LESSOPEN environment variable\nto a command line which will invoke your input preprocessor.\nThis command line should include one occurrence of the string \"%s\",\nwhich will be replaced by the filename\nwhen the input preprocessor command is invoked.\n"
},
{
"code": null,
"e": 16195,
"s": 15517,
"text": "\nWhen\nless closes a file opened in such a way, it will call another program,\ncalled the input postprocessor,\nwhich may perform any desired clean-up action (such as deleting the\nreplacement file created by LESSOPEN).\nThis program receives two command line arguments, the original filename\nas entered by the user, and the name of the replacement file.\nTo set up an input postprocessor, set the LESSCLOSE environment variable\nto a command line which will invoke your input postprocessor.\nIt may include two occurrences of the string \"%s\";\nthe first is replaced with the original name of the file and\nthe second with the name of the replacement file,\nwhich was output by LESSOPEN.\n"
},
{
"code": null,
"e": 16340,
"s": 16195,
"text": "\nFor example, on many Unix systems, these two scripts will allow you\nto keep files in compressed format, but still let\nless view them directly:\n"
},
{
"code": null,
"e": 16667,
"s": 16340,
"text": "\nlessopen.sh:\n\n #! /bin/sh\n\n case \"$1\" in\n\n *.Z) uncompress - $1 >/tmp/less.$$ 2>/dev/null\n\n if [ -s /tmp/less.$$ ]; then\n\n echo /tmp/less.$$\n\n else\n\n rm -f /tmp/less.$$\n\n fi\n\n ;;\n\n esac\n"
},
{
"code": null,
"e": 16718,
"s": 16667,
"text": "\nlessclose.sh:\n\n #! /bin/sh\n\n rm $2\n"
},
{
"code": null,
"e": 16967,
"s": 16718,
"text": "\nTo use these scripts, put them both where they can be executed and\nset LESSOPEN=\"lessopen.sh %s\", and\nLESSCLOSE=\"lessclose.sh %s %s\".\nMore complex LESSOPEN and LESSCLOSE scripts may be written\nto accept other types of compressed files, and so on.\n"
},
{
"code": null,
"e": 17735,
"s": 16967,
"text": "\nIt is also possible to set up an input preprocessor to\npipe the file data directly to\nless, rather than putting the data into a replacement file.\nThis avoids the need to decompress the entire file before\nstarting to view it.\nAn input preprocessor that works this way is called an input pipe.\nAn input pipe, instead of writing the name of a replacement file on\nits standard output,\nwrites the entire contents of the replacement file on its standard output.\nIf the input pipe does not write any characters on its standard output,\nthen there is no replacement file and\nless uses the original file, as normal.\nTo use an input pipe,\nmake the first character in the LESSOPEN environment variable a\nvertical bar (|) to signify that the input preprocessor is an input pipe.\n"
},
{
"code": null,
"e": 17829,
"s": 17735,
"text": "\nFor example, on many Unix systems, this script will work like the\nprevious example scripts:\n"
},
{
"code": null,
"e": 17967,
"s": 17829,
"text": "\nlesspipe.sh:\n\n #! /bin/sh\n\n case \"$1\" in\n\n *.Z) uncompress -c $1 2>/dev/null\n\n ;;\n\n esac\n"
},
{
"code": null,
"e": 18290,
"s": 17967,
"text": "\nTo use this script, put it where it can be executed and set\nLESSOPEN=\"|lesspipe.sh %s\".\nWhen an input pipe is used, a LESSCLOSE postprocessor can be used,\nbut it is usually not necessary since there is no replacement file\nto clean up.\nIn this case, the replacement file name passed to the LESSCLOSE\npostprocessor is \"-\".\n"
},
{
"code": null,
"e": 18976,
"s": 18290,
"text": "\nFor compatibility with previous versions of\nless, the input preprocessor or pipe is not used if\nless is viewing standard input. \nHowever, if the first character of LESSOPEN is a dash (-),\nthe input preprocessor is used on standard input as well as other files.\nIn this case, the dash is not considered to be part of\nthe preprocessor command.\nIf standard input is being viewed, the input preprocessor is passed\na file name consisting of a single dash.\nSimilarly, if the first two characters of LESSOPEN are vertical bar and dash\n(|-), the input pipe is used on standard input as well as other files.\nAgain, in this case the dash is not considered to be part of\nthe input pipe command.\n"
},
{
"code": null,
"e": 19090,
"s": 18978,
"text": "\nThis table shows the value of LESSCHARDEF which is equivalent\nto each of the possible values for LESSCHARSET:\n"
},
{
"code": null,
"e": 19509,
"s": 19090,
"text": "\n ascii 8bcccbcc18b95.b\n\n dos 8bcccbcc12bc5b95.b.\n\n ebcdic 5bc6bcc7bcc41b.9b7.9b5.b..8b6.10b6.b9.7b\n\n 9.8b8.17b3.3b9.7b9.8b8.6b10.b.b.b.\n\n IBM-1047 4cbcbc3b9cbccbccbb4c6bcc5b3cbbc4bc4bccbc\n\n 191.b\n\n iso8859 8bcccbcc18b95.33b.\n\n koi8-r 8bcccbcc18b95.b128.\n\n latin1 8bcccbcc18b95.33b.\n\n next 8bcccbcc18b95.bb125.bb\n"
},
{
"code": null,
"e": 19720,
"s": 19509,
"text": "\nIf neither LESSCHARSET nor LESSCHARDEF is set,\nbut any of the strings \"UTF-8\", \"UTF8\", \"utf-8\" or \"utf8\"\nis found in the LC_ALL, LC_TYPE or LANG\nenvironment variables, then the default character set is utf-8.\n"
},
{
"code": null,
"e": 19936,
"s": 19720,
"text": "\nIf that string is not found, but your system supports the\nsetlocale interface,\nless will use setlocale to determine the character set.\nsetlocale is controlled by setting the LANG or LC_CTYPE environment\nvariables.\n"
},
{
"code": null,
"e": 20034,
"s": 19936,
"text": "\nFinally, if the\nsetlocale interface is also not available, the default character set is latin1.\n"
},
{
"code": null,
"e": 21140,
"s": 20034,
"text": "\nControl and binary characters are displayed in standout (reverse video).\nEach such character is displayed in caret notation if possible\n(e.g. ^A for control-A). Caret notation is used only if\ninverting the 0100 bit results in a normal printable character.\nOtherwise, the character is displayed as a hex number in angle brackets.\nThis format can be changed by\nsetting the LESSBINFMT environment variable.\nLESSBINFMT may begin with a \"*\" and one character to select\nthe display attribute:\n\"*k\" is blinking, \"*d\" is bold, \"*u\" is underlined, \"*s\" is standout,\nand \"*n\" is normal.\nIf LESSBINFMT does not begin with a \"*\", normal attribute is assumed.\nThe remainder of LESSBINFMT is a string which may include one\nprintf-style escape sequence (a % followed by x, X, o, d, etc.).\nFor example, if LESSBINFMT is \"*u[%x]\", binary characters\nare displayed in underlined hexadecimal surrounded by brackets.\nThe default if no LESSBINFMT is specified is \"*s<%X>\".\nThe default if no LESSBINFMT is specified is \"*s<%02X>\".\nWarning: the result of expanding the character via LESSBINFMT must\nbe less than 31 characters.\n"
},
{
"code": null,
"e": 21888,
"s": 21140,
"text": "\nWhen the character set is utf-8, the LESSUTFBINFMT environment variable\nacts similarly to LESSBINFMT but it applies to Unicode code points\nthat were successfully decoded but are unsuitable for display (e.g.,\nunassigned code points).\nIts default value is \"<U+%04lX>\".\nNote that LESSUTFBINFMT and LESSBINFMT share their display attribute\nsetting (\"*x\") so specifying one will affect both;\nLESSUTFBINFMT is read after LESSBINFMT so its setting, if any,\nwill have priority.\nProblematic octets in a UTF-8 file (octets of a truncated sequence,\noctets of a complete but non-shortest form sequence, illegal octets,\nand stray trailing octets)\nare displayed individually using LESSBINFMT so as to facilitate diagnostic\nof how the UTF-8 file is ill-formed.\n"
},
{
"code": null,
"e": 21996,
"s": 21890,
"text": "\nA percent sign followed by a single character is expanded\naccording to what the following character is:\n"
},
{
"code": null,
"e": 22013,
"s": 21996,
"text": "\nSome examples:\n"
},
{
"code": null,
"e": 22036,
"s": 22013,
"text": "\n?f%f:Standard input.\n"
},
{
"code": null,
"e": 22120,
"s": 22036,
"text": "\nThis prompt prints the filename, if known;\notherwise the string \"Standard input\".\n"
},
{
"code": null,
"e": 22166,
"s": 22120,
"text": "\n?f%f .?ltLine %lt:?pt%pt\\%:?btByte %bt:-...\n"
},
{
"code": null,
"e": 22505,
"s": 22166,
"text": "\nThis prompt would print the filename, if known.\nThe filename is followed by the line number, if known,\notherwise the percent if known, otherwise the byte offset if known.\nOtherwise, a dash is printed.\nNotice how each question mark has a matching period,\nand how the % after the %pt\nis included literally by escaping it with a backslash.\n"
},
{
"code": null,
"e": 22561,
"s": 22505,
"text": "\n?n?f%f .?m(file %i of %m) ..?e(END) ?x- Next\\: %x..%t\n"
},
{
"code": null,
"e": 23039,
"s": 22561,
"text": "\nThis prints the filename if this is the first prompt in a file,\nfollowed by the \"file N of N\" message if there is more\nthan one input file.\nThen, if we are at end-of-file, the string \"(END)\" is printed\nfollowed by the name of the next file, if there is one.\nFinally, any trailing spaces are truncated.\nThis is the default prompt.\nFor reference, here are the defaults for\nthe other two prompts (-m and -M respectively).\nEach is broken into two lines here for readability only.\n"
},
{
"code": null,
"e": 23244,
"s": 23039,
"text": "\n?n?f%f .?m(file %i of %m) ..?e(END) ?x- Next\\: %x.:\n ?pB%pB\\%:byte %bB?s/%s...%t\n\n?f%f .?n?m(file %i of %m) ..?ltlines %lt-%lb?L/%L. :\n byte %bB?s/%s. .?e(END) ?x- Next\\: %x.:?pB%pB\\%..%t\n\n"
},
{
"code": null,
"e": 23334,
"s": 23244,
"text": "\n?n?f%f .?m(file %i of %m) ..?e(END) ?x- Next\\: %x.:\n ?pB%pB\\%:byte %bB?s/%s...%t\n"
},
{
"code": null,
"e": 23449,
"s": 23334,
"text": "\n?f%f .?n?m(file %i of %m) ..?ltlines %lt-%lb?L/%L. :\n byte %bB?s/%s. .?e(END) ?x- Next\\: %x.:?pB%pB\\%..%t\n"
},
{
"code": null,
"e": 23548,
"s": 23451,
"text": "\n?f%f .?m(file %i of %m) .?ltlines %lt-%lb?L/%L. .\n byte %bB?s/%s. ?e(END) :?pB%pB\\%..%t\n"
},
{
"code": null,
"e": 23645,
"s": 23548,
"text": "\n?f%f .?m(file %i of %m) .?ltlines %lt-%lb?L/%L. .\n byte %bB?s/%s. ?e(END) :?pB%pB\\%..%t\n"
},
{
"code": null,
"e": 23938,
"s": 23645,
"text": "\nThe prompt expansion features are also used for another purpose:\nif an environment variable LESSEDIT is defined, it is used\nas the command to be executed when the v command is invoked.\nThe LESSEDIT string is expanded in the same way as the prompt strings.\nThe default value for LESSEDIT is:\n"
},
{
"code": null,
"e": 23964,
"s": 23938,
"text": "\n %E ?lm+%lm. %f\n\n"
},
{
"code": null,
"e": 23989,
"s": 23964,
"text": "\n %E ?lm+%lm. %f\n"
},
{
"code": null,
"e": 24057,
"s": 23993,
"text": "\nLess can also be compiled to be permanently in \"secure\" mode.\n"
},
{
"code": null,
"e": 24242,
"s": 24059,
"text": "\nThe -e option works differently.\nIf the -e option is not set,\nless behaves as if the -E option were set.\nIf the -e option is set,\nless behaves as if the -e and -F options were set.\n"
},
{
"code": null,
"e": 24431,
"s": 24242,
"text": "\nThe -m option works differently.\nIf the -m option is not set, the medium prompt is used,\nand it is prefixed with the string \"--More--\".\nIf the -m option is set, the short prompt is used.\n"
},
{
"code": null,
"e": 24538,
"s": 24431,
"text": "\nThe -n option acts like the -z option.\nThe normal behavior of the -n option is unavailable in this mode.\n"
},
{
"code": null,
"e": 24631,
"s": 24538,
"text": "\nThe parameter to the -p option is taken to be a\nless command rather than a search pattern.\n"
},
{
"code": null,
"e": 24731,
"s": 24631,
"text": "\nThe LESS environment variable is ignored,\nand the MORE environment variable is used in its place.\n"
},
{
"code": null,
"e": 25358,
"s": 24737,
"text": "\nless is part of the GNU project and is free software.\nYou can redistribute it and/or modify it\nunder the terms of either\n(1) the GNU General Public License as published by\nthe Free Software Foundation; or (2) the Less License.\nSee the file README in the less distribution for more details\nregarding redistribution.\nYou should have received a copy of the GNU General Public License\nalong with the source for less; see the file COPYING.\nIf not, write to the Free Software Foundation, 59 Temple Place,\nSuite 330, Boston, MA 02111-1307, USA.\nYou should also have received a copy of the Less License;\nsee the file LICENSE.\n"
},
{
"code": null,
"e": 25585,
"s": 25358,
"text": "\nless is distributed in the hope that it will be useful, but\nWITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY\nor FITNESS FOR A PARTICULAR PURPOSE.\nSee the GNU General Public License for more details.\n"
},
{
"code": null,
"e": 25902,
"s": 25587,
"text": "\nMark Nudelman <[email protected]>\n\nSee http://www.greenwoodsoftware.com/less/bugs.html for the latest list of known bugs in less.\n\nSend bug reports or comments to the above address or to\n\[email protected].\n\nFor more information, see the less homepage at\n\nhttp://www.greenwoodsoftware.com/less.\n\n\n\n\n\n\n\n\n\n"
}
] |
Classifying Hate Speech: an overview | by Jacob Crabb | Towards Data Science
|
By Jacob Crabb, Sherry Yang, and Anna Zubova.
The challenge of wrangling hate speech is an ancient one, but the scale, personalization, and velocity of today’s hate speech make it a uniquely modern dilemma. While there is no exact definition of hate speech, in general, it is speech that is intended not just to insult or mock, but to harass and cause lasting pain by attacking something uniquely dear to the target. Hate speech has been especially prevalent in online forums, chatrooms, and social media.
Hatebase.org, a Canadian company that created a multilingual dictionary of words used in hate speech, has the following criteria for identifying hate speech (source):
It is addressed to a specific group of people (ethnicity, nationality, religion, sexuality, disability or class);
There is a malicious intent;
There is one main problem with hate speech that makes it hard to classify: subjectivity. Apart from the recognized exceptions to First Amendment protection, hate speech has no legal definition and is not punished by law. For this reason, what is and isn’t hate speech is open to interpretation. A lot depends on the domain and the context, according to Aylin Caliskan, a computer science researcher at George Washington University (source). A seemingly neutral sentence can be offensive to one person and not bother another.
But since humans can’t always agree on what can be classified as hate speech, it is especially complicated to create a universal machine learning algorithm that would identify it. Besides, the datasets used to train models tend to “reflect the majority view of the people who collected or labeled the data”, according to Tommi Gröndahl from the Aalto University, Finland (source).
One more complication is that it is hard to distinguish hate speech from merely offensive language, even for a human. This becomes a problem especially when labeling is done by random users based on their own subjective judgment, as in this dataset, where users were asked to label tweets as “hate speech”, “offensive language” or “neither”. So when designing a model, it is important to follow criteria that help distinguish between hate speech and offensive language.
It is worth pointing out that Hatebase’s database can be very useful in creating hate speech detection algorithms. It is a multilingual vocabulary in which words are labeled from “mildly” to “extremely offensive” depending on the probability of their being used in hate speech.
While there are different opinions on whether hate speech should be restricted, some companies like Facebook, Twitter, Riot Games, and others decided to control and restrict it, using machine learning for its detection.
One more problem with detecting hate speech is the sensitivity of the machine learning algorithms to text anomalies.
A model is “the output of the algorithm that trained with data” (source). Machine learning algorithms, or models, can classify text for us but are sensitive to small changes, like removing the spaces between words that are hateful. This change can drastically reduce the negative score a sentence receives (Source). Learning models can be fooled into labeling their inputs incorrectly. A crucial challenge for machine learning algorithms is understanding the context.
The insidious nature of hate speech is that it can morph into many different shapes depending on the context. Discriminatory ideas can be hidden in many benign words if the community comes to a consensus on word choice. For example names of children’s toys and hashtags can be used as monikers for hateful ideas. Static definitions that attribute meaning to one word in boolean logic don’t have the flexibility to adapt to changing nicknames for hate. “What A.I. doesn’t pick up at this point is the context, and that’s what makes language hateful,” says Brittan Heller, an affiliate with the Berkman Klein Center for Internet and Society at Harvard University (Source).
Every company that allows users to publish on its platform faces the challenge that the speech becomes associated with its brand. Wired’s April issue describes how relentless growth at Facebook has created a major question: “whether the company’s business model is even compatible with its stated mission [to bring the world closer together].” (Source) We may not be able to bring people together and make the world a better place by simply giving people more tools to share.
In the next section, we will look at a case study of how another company, Riot Games faced the challenge of moderating hate speech.
For those who don’t know, League of Legends is a competitive game where two teams of five players each attempt to destroy the opposing team’s base.
On release in 2009, Riot Games was, of course, looking to create a fun environment where friendly competition could thrive. This was the original goal.
League of Legends uses built-in text chat, and as the game’s popularity and user base grew, the competition intensified. The players began to use the chat to gloat about their in-game performance or tear down the enemy team’s futile attempts to stop inevitable defeat. This was still within the company’s goals. Soon, however, it devolved until it was commonplace to see things such as: “your whole life is trash”, “kill yourself IRL”, or numerous other declarations of obscene things the victors would do to the losers. This became so commonplace that the players gave it a name: Toxicity. Players became desensitized to the point where even positive, upstanding players would act toxic without thought. Riot Games saw that of their in-house player classifications (negative, neutral, and positive), 87% of the Toxicity came from neutral or positive players. The disease was spreading. (Source)
In 2011 Riot Games released an attempt at a solution called “The Tribunal” (Source). The Tribunal was designed to work with another in-game feature called “Reporting” (Source). At the end of a game, if you felt another player had been toxic, reporting was a way of sending those concerns to Riot Games for review. Riot Games would then turn the report over to The Tribunal. The Tribunal was a jury based judgment system comprised of volunteers. Concerned players could sign up for The Tribunal, then view game reports and vote on a case by case basis whether someone had indeed acted toxic or not. If your vote aligned with the majority, you would be granted a small amount of in-game currency and the offending toxic player would be given a small punishment, which would increase for repeat offenders. Riot Games also enacted small bonuses to players who were non-toxic. These combined efforts saw improvements in the player base, and Riot found that just one punishment from The Tribunal was enough to curb toxic behavior in most players.
This system had two main problems:
1. It was slow and inefficient. Manual reviews required chat logs to be pulled out to The Tribunal website, waiting for responses from enough players, and then deciding on a penalty from there.
2. It was at times wildly inaccurate (especially before they removed the reward per “successful” penalty, which led to a strong built-in bias in the system). (Source)
Riot closed down the Tribunal in 2014. It had worked for a while, but toxicity was still a problem.
A Machine Learning Solution:
After that though, Riot Games took their approximately 100 million Tribunal reports (Source) and used them as training data to create a machine-learning system that detected questionable behaviors and offered a customized response to them (based on how players voted in the training data’s Tribunal cases).
While The Tribunal had been slow and inefficient, sometimes taking days or a week or two to dish out judgment (long after a player had forgotten about their toxic behavior), this new system could analyze and dispense judgment in 15 minutes. Players were seeing nearly immediate consequences to their actions (Source).
“As a result of these governance systems changing online cultural norms, incidences of homophobia, sexism and racism in League of Legends have fallen to a combined 2 percent of all games,” ... “Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty. (Source)
Machine learning approaches have made a breakthrough in detecting hate speech on web platforms. In this section, we will talk about some techniques that are traditionally used for this task as well as some new approaches.
Preprocessing Data
Natural language processing (NLP) is the process of converting human words into numbers and vectors the machine can understand: a way of working between the world of the human and the world of the machine. Naturally, this requires quite a lot of data cleaning. Typically, cleaning means removing stop words, stemming, tokenization, and applying Term Frequency-Inverse Document Frequency (TFIDF), which weights more informative words heavily while penalizing common words like “the” that add little meaning. (Source) Lemmatization is a more computationally expensive method of reducing words to their root form. (Source)
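To make those preprocessing steps concrete, here is a minimal sketch using NLTK and scikit-learn. It is not from the original article: the toy corpus and the choice of libraries are illustrative assumptions, and it assumes scikit-learn 1.0 or later for get_feature_names_out.

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('omw-1.4')  # needed by some NLTK versions for the lemmatizer

# Toy corpus (illustrative only)
corpus = ["I hate that kind of food, let's not have it for lunch.",
          "I love getting lunch with you."]

stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def clean(text):
    # lowercase, keep alphabetic tokens, drop stop words, then lemmatize
    tokens = re.findall(r'[a-z]+', text.lower())
    tokens = [t for t in tokens if t not in stop_words]
    return ' '.join(lemmatizer.lemmatize(t) for t in tokens)

cleaned = [clean(doc) for doc in corpus]

# TF-IDF weights informative words more heavily than very common ones
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cleaned)
print(vectorizer.get_feature_names_out())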
Model Implementation
Once the data is clean, we use several methods for classification. Common methods of classifying text include “sentiment analysis, topic labeling, language detection, and intent detection.” (Source) More advanced tools include Naive Bayes, bagging, boosting, and random forests. Each method can have a recall, precision, accuracy, and F1 score attached to how well it classifies. Then we want to test these methods over and over. As accurate as our artificial intelligence can be on the training set, it can go equally rogue on unseen data. We need to make sure our model is not overfit to our training data in a way that makes it excellent at classifying that data but poor at accurately classifying future data. Below are three further ways to deal with challenges in classifying text.
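As a rough sketch of what that comparison loop can look like, each model below is scored with cross-validated F1. The toy phrases, labels, and model choices are my own illustrative assumptions, not the article’s data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Tiny illustrative dataset: 1 = toxic, 0 = not toxic
texts = ["you are trash", "good game everyone", "kill yourself",
         "well played team", "worthless player", "nice shot",
         "your whole life is trash", "good luck have fun"] * 5
labels = [1, 0, 1, 0, 1, 0, 1, 0] * 5

# In practice, put the vectorizer inside a Pipeline so each CV fold
# fits its own vocabulary; fitting it once here keeps the sketch short.
X = TfidfVectorizer().fit_transform(texts)

models = {
    "naive_bayes": MultinomialNB(),
    "bagging": BaggingClassifier(),
    "boosting": GradientBoostingClassifier(),
    "random_forest": RandomForestClassifier(),
}

for name, model in models.items():
    # F1 balances precision and recall, which matters when toxic posts are rare
    scores = cross_val_score(model, X, labels, cv=5, scoring="f1")
    print(name, round(scores.mean(), 3))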
Multilabel Classification
The baseline multilabel classification approach, called the binary relevance method, amounts to independently training one binary classifier for each label (Source). This approach treats each label independently from the rest. For example, if you were trying to classify ‘I hate that kind of food, let’s not have it for lunch.’ against the labels lunch talk, love talk, and hate talk, your classifier would go through the data three times, once for each label.
For the data and labels below, (after preprocessing) the binary relevance method will make individual predictions for each label. (We will be using a Naive Bayes classifier, which is explained quite well here)
import pandas as pd

data = pd.read_csv('lunchHateLove.csv')
data.head()
# using binary relevance
from skmultilearn.problem_transform import BinaryRelevance
from sklearn.naive_bayes import GaussianNB

# initialize binary relevance multi-label classifier
# with a gaussian naive bayes base classifier
classifier = BinaryRelevance(GaussianNB())

# train
classifier.fit(x_train, y_train)

# predict
predictions = classifier.predict(x_test)

# print the keywords derived from our text
# along with the labels we assigned, and then the final predictions
print(data.comment_text[1], '\n', data.comment_text[3], '\n',
      y_test, '\n', predictions, '\n')

hate love kind food okay lunch
food hate love get lunch
   lunch_talk  love_talk  hate_talk
1           1          0          1
3           1          1          1
  (0, 0)    1
  (1, 0)    1
  (1, 1)    1
  (0, 2)    1
  (1, 2)    1
So in this simple example, binary relevance predicted that the spot in the first row in the first column (0,0) was true for the label “lunch_talk”, which is the correct label based on our original inputs. In fact, in this very simple example, binary relevance predicts our labels perfectly.
Here’s a link to this example on Github if you’d like to see the steps in more detail. Or better yet, check out this blog on the subject, which has more detail, and a link to the Github page I used as a starting point.
Transfer Learning and Weak Supervision
One bottleneck in machine learning models is a lack of labeled data to train our algorithms for identifying hate speech. Two solutions are transfer learning and weak supervision.
Transfer learning implies reusing already existing models for new tasks, which is extremely helpful not only in situations where lack of labeled data is an issue, but also when there is a potential need for future relabeling. The idea that a model can perform better if it does not learn from scratch but rather from another model designed to solve similar tasks is not new, but it wasn’t used much in NLP until Fastai’s ULMFiT came along (source).
ULMFiT is a method that uses a model pre-trained on millions of Wikipedia pages, which can then be fine-tuned for a specific task. This fine-tuned model is later used to create a classifier. This approach is impressively efficient: “with only 100 labeled examples (and giving it access to about 50,000 unlabeled examples), [it was possible] to achieve the same performance as training a model from scratch with 10,000 labeled examples” (source). Another advantage is that this method can be used for languages other than English since the data used for the initial training was from Wikipedia pages available in many languages. Some other transfer learning language models for NLP are: Transformer, Google’s BERT, Transformer-XL, OpenAI’s GPT-2, ELMo, Flair, StanfordNLP (source).
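For a sense of what the ULMFiT workflow looks like in code, here is a minimal sketch using fastai’s high-level text API (v2 style). It is not the code behind the cited results: the small IMDB sample bundled with fastai stands in for the tweets, and the column names, epoch counts, and encoder filename are illustrative assumptions.

import pandas as pd
from fastai.text.all import *

# Stand-in dataset: the small IMDB sample bundled with fastai
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')

# 1. Fine-tune the Wikipedia-pretrained language model on our own text (no labels needed)
dls_lm = TextDataLoaders.from_df(df, text_col='text', is_lm=True, valid_pct=0.1)
lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
lm_learn.fine_tune(1)
lm_learn.save_encoder('finetuned_encoder')  # keep the fine-tuned encoder

# 2. Reuse that encoder in a classifier trained on the (small) labeled set
dls_clas = TextDataLoaders.from_df(df, text_col='text', label_col='label',
                                   text_vocab=dls_lm.vocab)
clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
clas_learn.load_encoder('finetuned_encoder')
clas_learn.fine_tune(1)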
Another paradigm that can be applied when there is a lack of labeled data is weak supervision, where we use hand-written heuristic rules (“label functions”) to create “weak labels” that can be used instead of labeling data by hand. Within this paradigm, a generative model based on these weak labels is established first, and then it is used to train a discriminative model (source).
An example of using these two approaches is presented in this article. Abraham Starosta, Master’s Student in AI from Stanford University, shows how he used a combination of weak supervising and transfer learning to identify anti-semitic tweets.
He started with an unlabeled set of data of about 25000 tweets and used Snorkel (a tool for weak supervision labeling) to create a training set through writing simple label functions. Those functions were used to train a “weak” label model in order to classify this large dataset.
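To illustrate what those label functions can look like, here is a minimal sketch of the Snorkel workflow (v0.9-style API). The keyword heuristics, placeholder terms, and tiny DataFrame are my own illustrative assumptions, not the author’s actual rules.

import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_HATE, HATE = -1, 0, 1

@labeling_function()
def lf_contains_slur(x):
    # A real project would draw terms from a resource such as Hatebase
    return HATE if any(w in x.text.lower() for w in ["slur_a", "slur_b"]) else ABSTAIN

@labeling_function()
def lf_friendly_words(x):
    return NOT_HATE if any(w in x.text.lower() for w in ["thanks", "congrats"]) else ABSTAIN

# Tiny illustrative set of unlabeled tweets
df_train = pd.DataFrame({"text": ["thanks for the game", "slur_a go away", "nice weather today"]})

# Apply the label functions to get a (noisy) label matrix
applier = PandasLFApplier(lfs=[lf_contains_slur, lf_friendly_words])
L_train = applier.apply(df=df_train)

# The generative label model combines the noisy votes into "weak" labels,
# which can then train a discriminative classifier such as the ULMFiT model above
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=100, seed=42)
print(label_model.predict(L=L_train, tie_break_policy="abstain"))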
To apply transfer learning to this problem, he fine-tuned the ULMFiT language model by training it on general tweets. He then trained this new model on the training set created with weak supervision labeling. The results were quite impressive: the author was able to reach 95% precision and 39% recall (at a probability threshold of 0.63), whereas without the weak supervision technique recall would have been only 10% at 90% precision. The model also performed better than logistic regression from sklearn, XGBoost, and feed-forward neural networks (source).
Voting Based Classifications
Voting-based methods are ensemble learning techniques for classification that help balance out individual classifier weaknesses. Ensemble methods combine individual classifier algorithms such as bagging (or bootstrap aggregating), decision trees, and boosting. If we think about a linear regression, or one line predicting our y values given x, we can see that the linear model would not be good at identifying non-linear clusters. That’s where ensemble methods come in. An “appropriate combination of an ensemble of such linear classifiers can learn any non-linear boundary.” (Source) Classifiers which each have unique decision boundaries can be used together. The accuracy of the voting classifier is “generally higher than the individual classifiers.” (Source)
We can combine classifiers through majority voting also known as naive voting, weighted voting, and maximum voting. In majority voting, “the classification system follows a divide-and-conquer approach by dividing the data space into smaller and easier-to-learn partitions, where each classifier learns only one of the simpler partitions.” (Source) In weighted voting, we can count models that are more useful multiple times. (Source) We can use these methods for text in foreign languages to get an effective classification. (Source) On a usability note, voting based methods are good for optimizing classification but are not easily interpretable.
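A minimal sketch of majority and weighted voting with scikit-learn’s VotingClassifier follows; the toy phrases, labels, chosen base models, and weights are illustrative assumptions rather than a tuned ensemble.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

texts = ["you are trash", "good game everyone", "worthless player", "well played"] * 10
labels = [1, 0, 1, 0] * 10

voter = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier())],
        voting="soft",       # "hard" gives a plain majority vote
        weights=[1, 2, 1],   # weighted voting: trust one model more
    ),
)

voter.fit(texts, labels)
print(voter.predict(["you are all trash", "great match"]))

Soft voting averages the models’ predicted probabilities (so every estimator needs predict_proba), while hard voting simply counts class votes.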
Awareness of how powerful machine learning can be should come with an understanding of how to address its limitations. The artificial intelligence used to remove hate speech from our social spaces often lies in black boxes. However, with some exploration of natural language processing and text classification, we can begin to unpack what we can and cannot expect of our A.I. We don’t need to be part of a tech giant to implement classifiers for good.
|
[
{
"code": null,
"e": 218,
"s": 172,
"text": "By Jacob Crabb, Sherry Yang, and Anna Zubova."
},
{
"code": null,
"e": 670,
"s": 218,
"text": "The challenge of wrangling hate speech is an ancient one, but the scale, personalization, and velocity of today’s hate speech a uniquely modern dilemma. While there is no exact definition of hate speech, in general, it is speech that is intended not just to insult or mock, but to harass and cause lasting pain by attacking something uniquely dear to the target. Hate speech has been especially prevalent in online forums, chatrooms, and social media."
},
{
"code": null,
"e": 837,
"s": 670,
"text": "Hatebase.org, a Canadian company that created a multilingual dictionary of words used in hate speech, has the following criteria for identifying hate speech (source):"
},
{
"code": null,
"e": 951,
"s": 837,
"text": "It is addressed to a specific group of people (ethnicity, nationality, religion, sexuality, disability or class);"
},
{
"code": null,
"e": 980,
"s": 951,
"text": "There is a malicious intent;"
},
{
"code": null,
"e": 1483,
"s": 980,
"text": "There is one main problem with hate speech that makes it hard to classify: subjectivity. With the exceptions from the First Amendment, hate speech has no legal definition and is not punished by law. For this reason, what is and isn’t hate speech is open to interpretation. A lot depends on the domain and the context, according to Aylin Caliskan, a computer science researcher at George Washington University (source) A seemingly neutral sentence can be offensive for one person and not bother another."
},
{
"code": null,
"e": 1865,
"s": 1483,
"text": "But since humans can’t always agree on what can be classified as hate speech, it is especially complicated to create a universal machine learning algorithm that would identify it. Besides, the datasets used to train models tend to “reflect the majority view of the people who collected or labeled the data”, according to Tommi Gröndahl from the Aalto University, Finland (source)."
},
{
"code": null,
"e": 2350,
"s": 1865,
"text": "One more complication is that it is hard to distinguish hate speech from just an offensive language, even for a human. This becomes a problem especially when labeling is done by random users based on their own subjective judgment, like in this dataset, where users were suggested to label tweets as “hate speech”, “offensive language” or “neither”. So when designing a model, it is important to follow criteria that will help to distinguish between hate speech and offensive language."
},
{
"code": null,
"e": 2628,
"s": 2350,
"text": "It is worth pointing out that Hatebase’s database can be very useful in creating hate speech detection algorithms. It is a multilingual vocabulary where the words attain labels from “mildly” to “extremely offensive” depending on the probability of it to be used in hate speech."
},
{
"code": null,
"e": 2848,
"s": 2628,
"text": "While there are different opinions on whether hate speech should be restricted, some companies like Facebook, Twitter, Riot Games, and others decided to control and restrict it, using machine learning for its detection."
},
{
"code": null,
"e": 2965,
"s": 2848,
"text": "One more problem with detecting hate speech is the sensitivity of the machine learning algorithms to text anomalies."
},
{
"code": null,
"e": 3433,
"s": 2965,
"text": "A model is “the output of the algorithm that trained with data” (source). Machine learning algorithms, or models, can classify text for us but are sensitive to small changes, like removing the spaces between words that are hateful. This change can drastically reduce the negative score a sentence receives (Source). Learning models can be fooled into labeling their inputs incorrectly. A crucial challenge for machine learning algorithms is understanding the context."
},
{
"code": null,
"e": 4104,
"s": 3433,
"text": "The insidious nature of hate speech is that it can morph into many different shapes depending on the context. Discriminatory ideas can be hidden in many benign words if the community comes to a consensus on word choice. For example names of children’s toys and hashtags can be used as monikers for hateful ideas. Static definitions that attribute meaning to one word in boolean logic don’t have the flexibility to adapt to changing nicknames for hate. “What A.I. doesn’t pick up at this point is the context, and that’s what makes language hateful,” says Brittan Heller, an affiliate with the Berkman Klein Center for Internet and Society at Harvard University (Source)."
},
{
"code": null,
"e": 4577,
"s": 4104,
"text": "Every company that allows users to publish on their own face the challenge that the speech becomes associated with their brand. Wired’s April issue describes how relentless growth at Facebook has created a major question “whether the company’s business model is even compatible with its stated mission [to bring the world closer together].” (Source) We may not be able to bring people together and make the world a better place by simply giving people more tools to share."
},
{
"code": null,
"e": 4709,
"s": 4577,
"text": "In the next section, we will look at a case study of how another company, Riot Games faced the challenge of moderating hate speech."
},
{
"code": null,
"e": 4857,
"s": 4709,
"text": "For those who don’t know, League of Legends is a competitive game where two teams of five players each attempt to destroy the opposing team’s base."
},
{
"code": null,
"e": 5010,
"s": 4857,
"text": "On release in 2009, Riot games were, of course, looking to create a fun environment where friendly competition could thrive. This was the original goal."
},
{
"code": null,
"e": 5900,
"s": 5010,
"text": "League of Legends uses built-in text, and as the game’s popularity and user base grew, the competition intensified. The players begin to use the chat to gloat about their in-game performance or tear down the enemy team’s futile attempts to stop inevitable defeat. This was still within the company’s goals. Soon, however, it devolved until it was commonplace to see things such as: “your whole life is trash”, “kill yourself IRL”, or numerous other declarations of obscene things the victors would do to the losers. This became so commonplace that the players gave it a name: Toxicity. Players became desensitized to the point were even positive, upstanding players would act toxic without thought. Riot Games saw that of their in-house player classifications (negative, neutral, and positive), 87% of the Toxicity came from neutral or positive players. The disease was spreading. (Source)"
},
{
"code": null,
"e": 6941,
"s": 5900,
"text": "In 2011 Riot Games released an attempt at a solution called “The Tribunal” (Source). The Tribunal was designed to work with another in-game feature called “Reporting” (Source). At the end of a game, if you felt another player had been toxic, reporting was a way of sending those concerns to Riot Games for review. Riot Games would then turn the report over to The Tribunal. The Tribunal was a jury based judgment system comprised of volunteers. Concerned players could sign up for The Tribunal, then view game reports and vote on a case by case basis whether someone had indeed acted toxic or not. If your vote aligned with the majority, you would be granted a small amount of in-game currency and the offending toxic player would be given a small punishment, which would increase for repeat offenders. Riot Games also enacted small bonuses to players who were non-toxic. These combined efforts saw improvements in the player base, and Riot found that just one punishment from The Tribunal was enough to curb toxic behavior in most players."
},
{
"code": null,
"e": 6976,
"s": 6941,
"text": "This system had two main problems:"
},
{
"code": null,
"e": 7182,
"s": 6976,
"text": "It was slow and inefficient. Manual reviews require those chat logs to be pulled out to The Tribunal website, then having to wait for responses from enough players, and then decide on a penalty from there."
},
{
"code": null,
"e": 7388,
"s": 7182,
"text": "It was slow and inefficient. Manual reviews require those chat logs to be pulled out to The Tribunal website, then having to wait for responses from enough players, and then decide on a penalty from there."
},
{
"code": null,
"e": 7553,
"s": 7388,
"text": "2. It was at times wildly inaccurate (especially before they removed the reward per “successful” penalty, which lead to a super innate bias in the system). (Source)"
},
{
"code": null,
"e": 7653,
"s": 7553,
"text": "Riot closed down the Tribunal in 2014. It had worked for a while, but toxicity was still a problem."
},
{
"code": null,
"e": 7682,
"s": 7653,
"text": "A Machine Learning Solution:"
},
{
"code": null,
"e": 7987,
"s": 7682,
"text": "After that though, Riot Games took their approximately 100 million Tribunal reports (Source) and used it as training data to create a machine-learning system that detected questionable behaviors and offered a customized response to them (based on how players voted in the training data’s Tribunal cases.)"
},
{
"code": null,
"e": 8304,
"s": 7987,
"text": "While The Tribunal had been slow or inefficient, sometimes taking days or a week or two to dish out judgment, (long after a player had forgotten about their toxic behavior) this new system could analyze and dispense judgment in 15 minutes. Players were seeing nearly immediate consequences to their actions (Source)."
},
{
"code": null,
"e": 8677,
"s": 8304,
"text": "“As a result of these governance systems changing online cultural norms, incidences of homophobia, sexism and racism in League of Legends have fallen to a combined 2 percent of all games,” ... “Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty. (Source)"
},
{
"code": null,
"e": 8899,
"s": 8677,
"text": "Machine learning approaches have made a breakthrough in detecting hate speech on web platforms. In this section, we will talk about some techniques that are traditionally used for this task as well as some new approaches."
},
{
"code": null,
"e": 8918,
"s": 8899,
"text": "Preprocessing Data"
},
{
"code": null,
"e": 9554,
"s": 8918,
"text": "Natural language processing (NLP) is the process of converting human words into numbers and vectors the machine can understand. A way of working between the world of the human and the world of the machine. Naturally, this requires quite a lot of data cleaning. Typically, cleaning means removing stop words, stemming, tokenization, and the implementation of Term Frequency-Inverse Document Frequency (TFIDF) which weights words of more importance heavier than words like “the” which get penalized for adding less meaning. (Source) Lemmatization is a more computationally expensive method used to stem, or take the root, words. (Source)"
},
{
"code": null,
"e": 9575,
"s": 9554,
"text": "Model Implementation"
},
{
"code": null,
"e": 10374,
"s": 9575,
"text": "Once the data is clean we use several methods for classification. Common methods of classifying text include ”sentiment analysis, topic labeling, language detection, and intent detection.” (Source) More advanced tools include Naive Bayes, bagging, boosting, and random forests. Each method can have a recall, precision, accuracy, and F1 score attached to how well it classifies. Then we want to test these methods over and over. As awesomely accurate as our artificial intelligence can be with trained data sets, they can be equally rogue with test data. We need to make sure our model is not overfit to our training data in a way that makes it excellent at classifying test data but poor at accurately classifying future data. Below are three further ways to deal with challenges in classify text."
},
{
"code": null,
"e": 10400,
"s": 10374,
"text": "Multilabel Classification"
},
{
"code": null,
"e": 10854,
"s": 10400,
"text": "The baseline multilabel classification approach, called the binary relevance method, amounts to independently training one binary classifier for each label (Source). This approach treats each label independently from the rest. For example, if you were trying to classify ‘I hate that kind of food, let’s not have it for lunch.’ For the labels: lunch talk, love talk, hate talk. Your classifier would go through the data three times, once for each label."
},
{
"code": null,
"e": 11064,
"s": 10854,
"text": "For the data and labels below, (after preprocessing) the binary relevance method will make individual predictions for each label. (We will be using a Naive Bayes classifier, which is explained quite well here)"
},
{
"code": null,
"e": 11115,
"s": 11064,
"text": "data = pd.read_csv('lunchHateLove.csv')data.head()"
},
{
"code": null,
"e": 11890,
"s": 11115,
"text": "# using binary relevancefrom skmultilearn.problem_transform import BinaryRelevancefrom sklearn.naive_bayes import GaussianNB# initialize binary relevance multi-label classifier# with a gaussian naive bayes base classifierclassifier = BinaryRelevance(GaussianNB())# trainclassifier.fit(x_train, y_train)# predictpredictions = classifier.predict(x_test)#print the keywords derived from our text#along with the labels we assigned, and then the final predictionsprint(data.comment_text[1], '\\n', data.comment_text[3], '\\n', y_test, '\\n', predictions, '\\n')hate love kind food okay lunch food hate love get lunch lunch_talk love_talk hate_talk1 1 0 13 1 1 1 (0, 0)\t1 (1, 0)\t1 (1, 1)\t1 (0, 2)\t1 (1, 2)\t1"
},
{
"code": null,
"e": 12181,
"s": 11890,
"text": "So in this simple example, binary relevance predicted that the spot in the first row in the first column (0,0) was true for the label “lunch_talk”, which is the correct label based on our original inputs. In fact, in this very simple example, binary relevance predicts our labels perfectly."
},
{
"code": null,
"e": 12400,
"s": 12181,
"text": "Here’s a link to this example on Github if you’d like to see the steps in more detail. Or better yet, check out this blog on the subject, which has more detail, and a link to the Github page I used as a starting point."
},
{
"code": null,
"e": 12439,
"s": 12400,
"text": "Transfer Learning and Weak Supervision"
},
{
"code": null,
"e": 12618,
"s": 12439,
"text": "One bottleneck in machine learning models is a lack of labeled data to train our algorithms for identifying hate speech. Two solutions are transfer learning and weak supervision."
},
{
"code": null,
"e": 13067,
"s": 12618,
"text": "Transfer learning implies reusing already existing models for new tasks, which is extremely helpful not only in situations where lack of labeled data is an issue, but also when there is a potential need for future relabeling. The idea that a model can perform better if it does not learn from scratch but rather from another model designed to solve similar tasks is not new, but it wasn’t used much in NLP until Fastai’s ULMFiT came along (source)."
},
{
"code": null,
"e": 13837,
"s": 13067,
"text": "ULMFiT is a method that uses a pre-trained model on millions of Wikipedia pages that can be tuned in for a specific task. This tuned-in model is later used to create a classifier. This approach is impressively efficient: “with only 100 labeled examples (and giving it access to about 50,000 unlabeled examples), [it was possible] to achieve the same performance as training a model from scratch with 10,000 labeled examples” (source). Another advantage is that this method can be used for languages other than English since the data used for the initial training was from Wikipedia pages available in many languages. Some other transfer learning language models for NLP are: Transformer, Google’s BERT, Transformer-XL, OpenAI’s GPT-2, ELMo, Flair, StanfordNLP (source)."
},
{
"code": null,
"e": 14227,
"s": 13837,
"text": "Another paradigm that can be applied in case there is lack of labeled data, is weak supervision, where we use hand-written heuristic rules (“label functions”) to create “weak labels” that can be applied instead of labeling data by hand. Within this paradigm, a generative model, based on these weak labels is established first, and then it is used to train a discriminatory model (source)."
},
{
"code": null,
"e": 14472,
"s": 14227,
"text": "An example of using these two approaches is presented in this article. Abraham Starosta, Master’s Student in AI from Stanford University, shows how he used a combination of weak supervising and transfer learning to identify anti-semitic tweets."
},
{
"code": null,
"e": 14753,
"s": 14472,
"text": "He started with an unlabeled set of data of about 25000 tweets and used Snorkel (a tool for weak supervision labeling) to create a training set through writing simple label functions. Those functions were used to train a “weak” label model in order to classify this large dataset."
},
{
"code": null,
"e": 15309,
"s": 14753,
"text": "To apply transfer learning to this problem, he fine-tuned the ULMFiT’s language model by training it on generalized tweets. He then trained this new model on the training set created with weak supervision labeling. The results were quite impressive: the author was able to reach 95% precision and 39% recall (probability threshold of 0.63), while without using weak supervision technique, it would be 10% recall for a 90% precision. The model also performed better than logistic regression from sklearn, XGBoost, and Feed Forward Neural Networks (source)."
},
{
"code": null,
"e": 15338,
"s": 15309,
"text": "Voting Based Classifications"
},
{
"code": null,
"e": 16098,
"s": 15338,
"text": "Voting based methods are ensemble learning used for classification that helps balance out individual classifier weaknesses. Ensemble methods combine individual classifier algorithms such as: bagging (or bootstrap aggregating), decision trees, and boosting. If we think about a linear regression or one line predicting our y values given x, we can see that the linear model would not be good at identifying non-linear clusters. That’s where ensemble methods come in. An “appropriate combination of an ensemble of such linear classifiers can learn any non-linear boundary.” (Source) Classifiers which each have unique decision boundaries can be used together. The accuracy of the voting classifier is “generally higher than the individual classifiers.” (Source)"
},
{
"code": null,
"e": 16747,
"s": 16098,
"text": "We can combine classifiers through majority voting also known as naive voting, weighted voting, and maximum voting. In majority voting, “the classification system follows a divide-and-conquer approach by dividing the data space into smaller and easier-to-learn partitions, where each classifier learns only one of the simpler partitions.” (Source) In weighted voting, we can count models that are more useful multiple times. (Source) We can use these methods for text in foreign languages to get an effective classification. (Source) On a usability note, voting based methods are good for optimizing classification but are not easily interpretable."
}
] |
Dart Programming - Logical Operators
|
The following example shows how you can use Logical Operators in Dart −
void main() {
var a = 10;
var b = 12;
var res = (a<b)&&(b>10);
print(res);
}
It will produce the following output −
true
Let’s take another example −
void main() {
var a = 10;
var b = 12;
var res = (a>b)||(b<10);
print(res);
var res1 =!(a==b);
print(res1);
}
It will produce the following output −
false
true
The && and || operators are used to combine expressions. The && operator returns true only when both the conditions return true.
Let us consider the following expression −
var a = 10;
var result = (a<10 && a>5);
In the above example, a<10 and a>5 are two expressions combined by an && operator. Since a is 10, the first expression (a<10) returns false. The && operator requires both expressions to return true, so once the first one is false it skips the second expression, and result is false.
The || operator returns true if one of the expressions returns true. For example −
var a = 10;
var result = (a>5 || a<10);
In the above snippet, the two expressions a>5 and a<10 are combined by a || operator. Since the first expression (a>5) returns true, the || operator skips the second expression and returns true.
Due to this behavior of the && and || operators, they are called short-circuit operators.
|
[
{
"code": null,
"e": 2597,
"s": 2525,
"text": "The following example shows how you can use Logical Operators in Dart −"
},
{
"code": null,
"e": 2697,
"s": 2597,
"text": "void main() { \n var a = 10; \n var b = 12; \n var res = (a<b)&&(b>10); \n print(res); \n} "
},
{
"code": null,
"e": 2736,
"s": 2697,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 2743,
"s": 2736,
"text": "true \n"
},
{
"code": null,
"e": 2772,
"s": 2743,
"text": "Let’s take another example −"
},
{
"code": null,
"e": 2915,
"s": 2772,
"text": "void main() { \n var a = 10; \n var b = 12; \n var res = (a>b)||(b<10); \n \n print(res); \n var res1 =!(a==b); \n print(res1); \n} "
},
{
"code": null,
"e": 2954,
"s": 2915,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 2968,
"s": 2954,
"text": "false \ntrue \n"
},
{
"code": null,
"e": 3097,
"s": 2968,
"text": "The && and || operators are used to combine expressions. The && operator returns true only when both the conditions return true."
},
{
"code": null,
"e": 3140,
"s": 3097,
"text": "Let us consider the following expression −"
},
{
"code": null,
"e": 3179,
"s": 3140,
"text": "var a = 10 \nvar result = (a<10 && a>5)"
},
{
"code": null,
"e": 3421,
"s": 3179,
"text": "In the above example, a<10 and a>5 are two expressions combined by an && operator. Here, the first expression returns false. However, the && operator requires both the expressions to return true. So, the operator skips the second expression."
},
{
"code": null,
"e": 3504,
"s": 3421,
"text": "The || operator returns true if one of the expressions returns true. For example −"
},
{
"code": null,
"e": 3544,
"s": 3504,
"text": "var a = 10 \nvar result = ( a>5 || a<10)"
},
{
"code": null,
"e": 3775,
"s": 3544,
"text": "In the above snippet, two expressions a>5 and a<10 are combined by a || operator. Here, the first expression returns true. Since, the first expression returns true, the || operator skips the subsequent expression and returns true."
},
{
"code": null,
"e": 3867,
"s": 3775,
"text": "Due to this behavior of the && and || operator, they are called as short-circuit operators."
},
{
"code": null,
"e": 3902,
"s": 3867,
"text": "\n 44 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 3922,
"s": 3902,
"text": " Sriyank Siddhartha"
},
{
"code": null,
"e": 3955,
"s": 3922,
"text": "\n 34 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 3975,
"s": 3955,
"text": " Sriyank Siddhartha"
},
{
"code": null,
"e": 4008,
"s": 3975,
"text": "\n 69 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 4025,
"s": 4008,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 4060,
"s": 4025,
"text": "\n 117 Lectures \n 10 hours \n"
},
{
"code": null,
"e": 4077,
"s": 4060,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 4112,
"s": 4077,
"text": "\n 22 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 4132,
"s": 4112,
"text": " Pranjal Srivastava"
},
{
"code": null,
"e": 4165,
"s": 4132,
"text": "\n 34 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 4185,
"s": 4165,
"text": " Pranjal Srivastava"
},
{
"code": null,
"e": 4192,
"s": 4185,
"text": " Print"
},
{
"code": null,
"e": 4203,
"s": 4192,
"text": " Add Notes"
}
] |
The Complete Guide to Building a Chatbot with Deep Learning From Scratch | by Matthew Evan Taruno | Towards Data Science
|
Over the past month, I wanted to look for a project that encompasses the entire data science end-to-end workflow — from the data pipeline, to deep learning, to deployment. It had to be challenging, but not pointlessly so — it still had to be something useful. It took a little ideation and divergent thinking, but when the idea of making a personal assistant came up, it didn’t take long for me to settle on it. Conversational assistants are everywhere. Even my university is currently using Dr. Chatbot to track the health status of its members as an effective way to monitor this current pandemic. And it just makes sense: chatbots are faster, easier to interact with, and super useful, especially for things that we just want a fast response on. In this day and age, being able to talk to a bot for help is starting to become the new standard. I personally believe bots are the future because they just make our lives so much easier. Chatbots are also a key component in Robotic Process Automation.
Now I want to introduce EVE bot, my robot designed to Enhance Virtual Engagement (see what I did there) for the Apple Support team on Twitter. Although this methodology is used to support Apple products, it honestly could be applied to any domain you can think of where a chatbot would be useful.
Here’s my demo video for EVE. (And here’s my Github repo for this project)
The demo goes through a high level and less technical overview of my work to keep it short and sweet. But if you want to geek out and learn about how I did it, that’s what the rest of this article is exactly for!
I also hope this post would be a guide for those out there who need some structure on how to build your very own bot from scratch — in the sense that you are only using well-known, general purpose packages like Keras and spaCy, and not huge APIs specifically designed for chatbots like the Rasa API.
EVE is a context based bot powered by deep learning. Context-based bots are the step above the simple, keyword-based chatbot you might have seen a long time ago (see: Eliza bot). While I of course did have inspirations and it does have similarities to how it’s done in the industry, I offer some approaches that I reasoned myself on how to make a chatbot in 2020.
This method I show in this post utilizes the same logic that powers the chatbots of big companies like Amazon with their Lex conversational AI service.
Before I get into the technical workings, it’s important to know at what level of granularity you want to build your chatbot. A chatbot is like cooking spaghetti. You can really start from raw tomatoes, or you can start from canned ones that other people already made for you. It’s also similar in that there are many different components — you have:
The framework: where your bot decides how to respond to a customer based on their utterance. You can use higher level tools such as DialogFlow (by Google), Amazon Lex, and Rasa for your framework. These higher level APIs require less work from you compared to the Python based work I am going to show in this post, but you may not be as confident about what’s going on in the background. Mine opts for the more white box approach with Tensorflow, spaCy, and Python.
Dialogue management: This is the part of your bot that is responsible for the state and flow of the conversation — it’s where you can prompt users for information you need and more.
Deployment interface: You can build an interface with the Messenger API, you can deploy it on WhatsApp Business (with a fee), or really anywhere like your own website or app if you have. I deployed mine on Streamlit as a really quick demo tool.
Okay, you’ve decided to make your own framework. Now here’s how.
Your goal is two fold — and both are important:
Entity Extraction
Intent Classification
When starting off making a new bot, this is exactly what you would try to figure out first, because it guides what kind of data you want to collect or generate. I recommend you start off with a base idea of what your intents and entities would be, then iteratively improve upon it as you test it out more and more.
Entities are predefined categories of names, organizations, time expressions, quantities, and other general groups of objects that make sense.
Every chatbot would have different sets of entities that should be captured. For a pizza delivery chatbot, you might want to capture the different types of pizza as an entity and delivery location. For this case, cheese or pepperoni might be the pizza entity and Cook Street might be the delivery location entity. In my case, I created an Apple Support bot, so I wanted to capture the hardware and application a user was using.
To get these visualizations, displaCy was used, which is spaCy’s visualization tool for Named Entity Recognition (they have more visualizers for other things like dependency parsing as well).
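For reference, rendering that kind of entity visualization with displaCy takes only a couple of lines. The example text below is mine, and a stock spaCy model only tags its built-in entity types, so the custom NER model trained later in the entity extraction step is what actually produces the hardware and app labels.

import spacy
from spacy import displacy

nlp = spacy.load('en_core_web_sm')   # swap in your custom NER model for HARDWARE/APP entities
doc = nlp('My MacBook Pro keeps crashing whenever I open GarageBand')
displacy.render(doc, style='ent')    # inside a notebook; use displacy.serve(doc, style='ent') from a script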
Intents are simply what the customer intends to do.
Are they trying to greet you? Are they trying to talk to a representative? Are they checking whether you’re a robot? Are they trying to do an update?
Intent classification just means figuring out what the user intent is given a user utterance. Here is a list of all the intents I want to capture in the case of my Eve bot, and a respective user utterance example for each to help you understand what each intent is.
Greeting: Hi!
Info: What’s the thinnest MacBook you have available?
Forgot password: I forgot my login details, can you help me recover it?
Speak Representative: Can I talk to a human please
Challenge Robot: Are you even a human
Update: I would like to update my MacBook Pro to the latest OS
Payment: I was charged double for the iPhone X I bought yesterday at Best Buy
Location: Where is the nearest Apple Store located from me?
Battery: My battery keeps on draining and dies in an hour
Goodbye: Thanks Eve, see you later
Intents and entities are basically the way we are going to decipher what the customer wants and how to give a good answer back to a customer. I initially thought I only needed intents to give an answer without entities, but that leads to a lot of difficulty because you aren’t able to be granular in your responses to your customer. And without multi-label classification, where you are assigning multiple class labels to one user input (at the cost of accuracy), it’s hard to get personalized responses. Entities go a long way toward making your intents just be intents, and personalizing the user experience to the details of the user.
With these two goals established, I boiled down my process into five steps that I’ll break down one by one in this post:
My notebook for data preprocessing is here.
I mention the first step as data preprocessing, but really these 5 steps are not done linearly, because you will be preprocessing your data throughout the entire chatbot creation.
But even before data preprocessing, where on earth do you get your data?
It really depends on the domain of your chatbot.
You have to find data that covers as many scenarios as possible that the customer might ask you about and that you want to reply to. The data should contain all the intents you want to be able to answer. This might be a very hard task, but just know your data doesn’t have to be perfect, and it could come from multiple sources as long as they are within the same general domain.
For each intent, you should have a sizable amount of examples so that your bot will be able to learn the nature of that intent.
If you really have no data at all, like in my next project (which isn’t an English chatbot), you can work around it the way I am trying to: by making a Google Sheets form for people to ask questions to my bot. The entire point is to get data that most closely resembles questions that real people are going to ask your bot.
But back to Eve bot, since I am making a Twitter Apple Support robot, I got my data from customer support Tweets on Kaggle. Once you finished getting the right dataset, then you can start to preprocess it. The goal of this initial preprocessing step is to get it ready for our further steps of data generation and modeling.
First, I got my data in a format of inbound and outbound text by some Pandas merge statements. With any sort of customer data, you have to make sure that the data is formatted in a way that separates utterances from the customer to the company (inbound) and from the company to the customer (outbound). Just be sensitive enough to wrangle the data in such a way where you’re left with questions your customer will likely ask you.
Shortly after, I applied an NLP preprocessing pipeline. Which steps to include depends on your use case (like languages you want to support, how colloquial you want your bot to be, etc.). Mine included:
Converting to lower case
Tokenizing using NLTK’s Tweet tokenizer
Removing punctuation and URL links
Correcting misspellings (Levenshtein distance)
Removing stop words
Expanding contractions
Removing non-english Tweets with spaCy
Lemmatization (you can choose stemming as an alternative)
Removing emojis and numbers (you can leave emojis in if your model can read them in, and if you plan to do emoji analysis)
Limiting each Tweet length to 50 (for compactness)
I compiled all these steps into one function called tokenize. At every preprocessing step, I visualize the token lengths in the data. I also provide a peek at the head of the data at each step so that it clearly shows what processing is being done at each step.
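Here is a condensed sketch of what such a tokenize function might look like; the exact order of steps and the helper choices are my assumptions based on the list above, and it leaves out spell correction, contraction expansion, language filtering, and emoji handling for brevity.

import re
import string
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords        # requires nltk.download('stopwords')
from nltk.stem import WordNetLemmatizer  # requires nltk.download('wordnet')

tweet_tokenizer = TweetTokenizer(preserve_case=False)
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def tokenize(text, max_len=50):
    text = text.lower()
    text = re.sub(r'https?://\S+|www\.\S+', '', text)                  # remove URL links
    text = text.translate(str.maketrans('', '', string.punctuation))   # remove punctuation
    tokens = tweet_tokenizer.tokenize(text)
    tokens = [t for t in tokens if t not in stop_words]                # remove stop words
    tokens = [lemmatizer.lemmatize(t) for t in tokens]                 # lemmatization
    tokens = [t for t in tokens if not t.isnumeric()]                  # remove numbers
    return tokens[:max_len]                                            # cap each Tweet at 50 tokens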
I started with 106k Apple Support inbound Tweets. This is a histogram of my token lengths before preprocessing this data.
After step 10, I am left with ~76k Tweets. In general, things like removing stop-words will shift the distribution to the left because we have fewer and fewer tokens at every preprocessing step.
Here you can see the results of this step. I got my data to go from the Cyan Blue on the left to the Processed Inbound Column in the middle. I also keep the Outbound data on the right in case I need to see how Apple Support responds to their inquiries that will be used for the step where I actually respond to my customers (it’s called Natural Language Generation).
Finally, as a brief EDA, here are the emojis I have in my dataset — it’s interesting to visualize, but I didn’t end up using this information for anything that’s really useful.
My complete script for generating my training data is here, but if you want a more step-by-step explanation I have a notebook here as well.
Yes, you read that right — data generation. You may be thinking:
Why do we need to generate data? Why can’t we just use the data that we preprocessed in the previous step?
That’s a great question. The answer is that the data isn’t labeled yet, and intent classification is a supervised learning problem. This means that we need intent labels for every single data point.
If you already have a labelled dataset with all the intents you want to classify, we don’t need this step. But more often than not, you won’t have that data. That’s why we need to do some extra work to add intent labels to our dataset. This is quite an involved process, but I’m sure we can do it.
In this step, we want to group the Tweets together to represent an intent so we can label them. Moreover, for the intents that are not expressed in our data, we are forced to either manually add them in or find them in another dataset.
For example, my Tweets did not have any Tweet that asked “are you a robot.” This actually makes perfect sense because Twitter Apple Support is answered by a real customer support team, not a chatbot. So in these cases, since there are no documents in our dataset that express an intent for challenging a robot, I manually added examples of this intent in its own group. I explain more on how I did this in Step 4.
Since I plan to use quite an involved neural network architecture (Bidirectional LSTM) for classifying my intents, I need to generate sufficient examples for each intent. The number I chose is 1000 — I generate 1000 examples for each intent (i.e. 1000 examples for a greeting, 1000 examples of customers who are having trouble with an update, etc.). I pegged every intent to have exactly 1000 examples so that I will not have to worry about class imbalance in the modeling stage later. In general, for your own bot, the more complex the bot, the more training examples you would need per intent.
This is where the how comes in: how do we find 1000 examples per intent? Well first, we need to know if there are 1000 examples in our dataset of the intent that we want. In order to do this, we need some concept of distance between Tweets, where two Tweets that are deemed “close” to each other should possess the same intent. Likewise, two Tweets that are “further” from each other should be very different in their meaning.
To do this, we use an encoding method known as Doc2Vec. Embedding methods are ways to convert words (or sequences of them) into a numeric representation that could be compared to each other. I created a training data generator tool with Streamlit to convert my Tweets into a 20D Doc2Vec representation of my data where each Tweet can be compared to each other using cosine similarity.
The following is a diagram to illustrate Doc2Vec can be used to group together similar documents. A document is a sequence of tokens, and a token is a sequence of characters that are grouped together as a useful semantic unit for processing. For this data generation step, I also experimented with gloVe, but those only vectorize at a per word level, and if you want to try another word embedding (maybe it’s more better suited to your domain), make sure that it vectorizes at the document level.
In this toy example, we convert every one of the utterances into 3D vectors (as can be seen in the pink array of 3 numbers below each phrase). Each of these 3D vectors is the numeric representation of that document. For example, “Hi there” is represented numerically as [3.333, 0.1125, -0.4869].
When we compare the top two similar meaning Tweets in this toy example (both are asking to talk to a representative), we get a dummy cosine similarity of 0.8. When we compare the bottom two different meaning Tweets (one is a greeting, one is an exit), we get -0.3.
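As a quick illustration of the distance idea, cosine similarity between two document vectors can be computed directly with NumPy. The vectors below are made up purely for illustration, not taken from a real Doc2Vec model.

import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rep_request_1 = [0.91, 0.20, -0.15]     # hypothetical "talk to a representative" Tweet
rep_request_2 = [0.85, 0.28, -0.10]     # another hypothetical Tweet with the same intent
greeting      = [-0.30, 2.10, 0.80]     # hypothetical greeting Tweet
print(cosine_similarity(rep_request_1, rep_request_2))   # close to 1: similar meaning
print(cosine_similarity(rep_request_1, greeting))        # close to 0: different meaning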
As for implementation, I used gensim’s Doc2Vec. You have to train it, and it’s similar to how you would train a neural network (using epochs).
Here’s the catch. It’s important to already know the intent buckets you want before you train your Doc2Vec vectorizer. Once you know what intent buckets you want, you can apply this procedure that aims to get you your top N group of similar-in-meaning Tweets:
Simply come up with the top keywords you can think of in that intent, then you append that to the end of your training data as a row (so for greeting, that row might be “hi hello hey”).
The extra rows you add which represent the respective intents are going to be vectorized, which is great news because now you can then compare it to every single other row with cosine similarity.
Vectorize by training your Doc2Vec vectorizer then fitting it on your data with the extra rows.
You’ve successfully used keywords to represent an intent, and from this representation, you will find the top 1000 Tweets similar to it to generate your training data for that intent with Gensim’s model.docvecs.most_similar() method.
Note that we have to append our new intent keyword representations to the training data before you train your vectorizer because Gensim’s implementation could only compare documents that have been put into this Doc2Vec vectorizer as training data. Moreover, it can only access the tags of each Tweet, so I had to do extra work in Python to find the tag of a Tweet given its content.
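Under the assumption of a recent gensim release, the procedure above might look roughly like the sketch below; the seed keywords, tag names, and the tokenized_tweets variable are placeholders, not my exact setup.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# tokenized_tweets: list of token lists produced by the preprocessing step
documents = [TaggedDocument(words=tokens, tags=['tweet_%d' % i])
             for i, tokens in enumerate(tokenized_tweets)]

# Append one keyword "seed" row per intent BEFORE training (see the note above)
intent_seeds = {'greeting': ['hi', 'hello', 'hey'],
                'speak_representative': ['talk', 'human', 'agent', 'representative']}
for intent, keywords in intent_seeds.items():
    documents.append(TaggedDocument(words=keywords, tags=['seed_' + intent]))

model = Doc2Vec(vector_size=20, min_count=2, epochs=40)
model.build_vocab(documents)
model.train(documents, total_examples=model.corpus_count, epochs=model.epochs)

# The 1000 Tweets most similar to the greeting seed become training examples for that intent
# (model.dv in gensim 4.x; older releases expose the same call as model.docvecs.most_similar)
closest = model.dv.most_similar('seed_greeting', topn=1000)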
Once you’ve generated your data, make sure you store it as two columns “Utterance” and “Intent”. Notice that the utterances are stored as a tokenized list. This is something you’ll run into a lot and this is okay because you can just convert it to String form with Series.apply(" ".join) at any time.
The first thing I thought of doing was clustering. However, after I tried K-Means, it was obvious that clustering and unsupervised learning generally yield bad results here. The reality is, as good as clustering is as a technique, it is still an algorithm at the end of the day. You can’t come in expecting the algorithm to cluster your data the way you exactly want it to.
I also tried word-level embedding techniques like gloVe, but for this data generation step we want something at the document level because we are trying to compare between utterances, not between words in an utterance.
My intent classification notebook is here.
With our data labelled, we can finally get to the fun part — actually classifying the intents! I recommend that you don’t spend too long trying to get the perfect data beforehand. Try to get to this step at a reasonably fast pace so you can first get a minimum viable product. The idea is to get a result out first to use as a benchmark so we can then iteratively improve upon the data.
There are many ways to do intent classification; Rasa NLU, for instance, allows you to use many different models such as support vector machines (SVMs). Here I will demonstrate how to do it with a neural network with a bidirectional LSTM architecture.
We are trying to map a user utterance (which is just a sequence of tokens) to one of the N intents that we specify. The data we start with should just have an utterance and an intent ground truth label as the columns. The order of this process is as follows (for the implementation, check out the Intent Classification notebook available at my Github):
Train Test Split (should always go first, my mentor drilled this point in my head)
Keras Tokenizer
Label Encode the target variable (intents)
Initialize the embedding matrix (I used gloVe embeddings because they had a special variant trained on Twitter data)
Initialize the model architecture
Initialize the model callbacks (techniques to address overfitting)
Fit the model and save it
Load the model and save the output (I recommend a dictionary)
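A compressed sketch of the first few steps above (train/test split, Keras tokenizer, label encoding, plus the padding the network expects) might look like this; the variable names and sizes are assumptions.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# utterances: list of strings, intents: list of intent labels from the generated training data
X_train, X_val, y_train_raw, y_val_raw = train_test_split(
    utterances, intents, test_size=0.2, stratify=intents, random_state=42)

tokenizer = Tokenizer(num_words=10000, oov_token='<unk>')
tokenizer.fit_on_texts(X_train)                  # fit on training data only to avoid leakage
padded_X_train = pad_sequences(tokenizer.texts_to_sequences(X_train), maxlen=50)
padded_X_val   = pad_sequences(tokenizer.texts_to_sequences(X_val),   maxlen=50)

label_encoder = LabelEncoder()
y_train = label_encoder.fit_transform(y_train_raw)   # intent strings -> integer ids
y_val   = label_encoder.transform(y_val_raw)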
Some things to keep in mind for the embedding layer (step 4):
These pre-trained embeddings are essentially how you convert the text that goes into the model into a numeric representation. When you compare the cosine similarities of your numeric representations of different documents, they should have meaningful distances between each other (the cosine similarity between ‘king’ and ‘man’ should be closer than ‘king’ and ‘woman’, for example).
It is really important you choose the right pre-trained embeddings that is appropriate for the domain of your chatbot. If you have a conversational Twitter based chatbot, you probably don’t want embeddings trained on Wikipedia.
I also recommend you check whether all the vocabulary you want to cover is in your pre-trained embedding file. I checked whether my gloVe Twitter embeddings cover Apple-specific words like “macbook pro”, for example, and fortunately they do.
Some things to keep in mind for the model architecture (step 5):
Make sure the output layer is softmax (if you want to do multi-label classification, then use sigmoid).
Make sure the output layer has a dimensionality that is same as the number of intents you want to classify, otherwise you will run into shape issues.
If you don’t label encode, the output of model.predict() might be misinterpreted, because the final dictionary you build (where the keys are the intents and the values are the probabilities of the utterance being that intent) wouldn’t be mapped properly.
When you are deploying your bot, you shouldn’t rerun the model. Rather, I wrote a script that starts by reading in the saved model file and does the predictions from there.
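Putting those notes together, a minimal Keras definition of a bidirectional LSTM intent classifier might look like the sketch below. The layer sizes, dropout, and the zero-initialized placeholder for the gloVe embedding matrix are my assumptions, not the exact architecture used in the notebook.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense, Dropout

vocab_size, embed_dim, max_len, num_intents = 10000, 100, 50, 10      # hypothetical sizes
embedding_matrix = np.zeros((vocab_size, embed_dim))                  # placeholder; fill with gloVe Twitter vectors in step 4

model = Sequential([
    Embedding(vocab_size, embed_dim,
              weights=[embedding_matrix],    # pre-trained embeddings, kept frozen
              input_length=max_len,
              trainable=False),
    Bidirectional(LSTM(64)),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dense(num_intents, activation='softmax')  # one probability per intent
])
model.compile(loss='sparse_categorical_crossentropy',  # integer labels from the LabelEncoder
              optimizer='adam',
              metrics=['accuracy'])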
The results are promising. The loss converges to a low level and the accuracy of my model on unseen data is 87%!
If you visualize the output of the intent classification, this is what it looks like for the utterance “my battery on my iphone stopped working!”:
In order to prevent my model from overfitting, there are also some other settings I set in the form of Keras callbacks:
Learning rate scheduling — Slowing down the learning rate after it gets past a certain epoch number
Early stopping — Stopping the training early once the validation loss (or any other parameter you choose) reaches a certain threshold
And finally, after I ran the model, I saved it into an h5 file using Model Checkpoint so I can initialize it later without retraining my model. The code is below:
# Initializing checkpoint settings to view progress and save model
filename = 'models/intent_classification_b.h5'

# Learning rate scheduling
# This function keeps the initial learning rate for the first ten epochs
# and decreases it exponentially after that.
def scheduler(epoch, lr):
    if epoch < 10:
        return lr
    else:
        return lr * tf.math.exp(-0.1)

lr_sched_checkpoint = tf.keras.callbacks.LearningRateScheduler(scheduler)

# Early stopping
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto',
    baseline=None, restore_best_weights=True)

# This saves the best model
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='min')

# The model you get at the end of it is after 100 epochs, but that might not have been
# the weights most associated with validation accuracy.
# Only save the weights when your model has the lowest val loss (early stopping).

# Fitting model with all the callbacks above
hist = model.fit(padded_X_train, y_train, epochs=20, batch_size=32,
                 validation_data=(padded_X_val, y_val),
                 callbacks=[checkpoint, lr_sched_checkpoint, early_stopping])
Here is my complete notebook on entity extraction.
For EVE bot, the goal is to extract Apple-specific keywords that fit under the hardware or application category. Like intent classification, there are many ways to do this — each has its benefits depending on the context. Rasa NLU uses a conditional random field (CRF) model, but for this I will use spaCy’s implementation of stochastic gradient descent (SGD).
The first step is to create a dictionary that stores the entity categories you think are relevant to your chatbot. Then you see if spaCy has them by default. More likely than not, they won’t. So in that case, you would have to train your own custom spaCy Named Entity Recognition (NER) model. For Apple products, it makes sense for the entities to be what hardware and what application the customer is using. You want to respond to customers who are asking about an iPhone differently than customers who are asking about their Macbook Pro.
{'hardware': ['macbook pro', 'iphone', 'iphones', 'mac', 'ipad', 'watch', 'TV', 'airpods'], 'apps': ['app store', 'garageband', 'books', 'calendar', 'podcasts', 'notes', 'icloud', 'music', 'messages', 'facetime', 'catalina', 'maverick']}
Once you stored the entity keywords in the dictionary, you should also have a dataset that essentially just uses these keywords in a sentence. Lucky for me, I already have a large Twitter dataset from Kaggle that I have been using. If you feed in these examples and specify which of the words are the entity keywords, you essentially have a labeled dataset, and spaCy can learn the context from which these words are used in a sentence.
In order to label your dataset, you need to convert your data to spaCy format. This is a sample of how my training data should look like to be able to be fed into spaCy for training your custom NER model using Stochastic Gradient Descent (SGD). We make an offsetter and use spaCy’s PhraseMatcher, all in the name of making it easier to make it into this format.
TRAIN_DATA = [('what is the price of polo?', {'entities': [(21, 25, 'PrdName')]}),
              ('what is the price of ball?', {'entities': [(21, 25, 'PrdName')]}),
              ('what is the price of jegging?', {'entities': [(21, 28, 'PrdName')]}),
              ('what is the price of t-shirt?', {'entities': [(21, 28, 'PrdName')]})]
# Utility function - converts the output of the PhraseMatcher to something usable in training
def offsetter(lbl, doc, matchitem):
    ''' Converts word position to string position '''
    one = len(str(doc[0:matchitem[1]]))
    subdoc = doc[matchitem[1]:matchitem[2]]
    two = one + len(str(subdoc))
    # This function was misaligned by a factor of one character, not sure why, but this is my solution
    if one != 0:
        one += 1
        two += 1
    return (one, two, lbl)

# Example
# offsetter('HARDWARE', nlp('hmm macbooks are great'), (2271554079456360229, 1, 2)) -> (4, 12, 'HARDWARE')
I used this function in my more general function to ‘spaCify’ a row, a function that takes as input the raw row data and converts it to a tagged version of it spaCy can read in. I had to modify the index positioning to shift by one index on the start; I am not sure why, but it worked out well.
Then I also made a function train_spacy to feed it into spaCy, which uses the nlp.update method to train my NER model. It trains it for the arbitrary number of 20 epochs, where at each epoch the training examples are shuffled beforehand. Try not to choose a number of epochs that are too high, otherwise the model might start to ‘forget’ the patterns it has already learned at earlier stages. Since you are minimizing loss with stochastic gradient descent, you can visualize your loss over the epochs.
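For reference, a train_spacy function in the spaCy 2.x style (the nlp.update call on text/annotation pairs described above) might look like the sketch below. Note that spaCy 3.x replaced this API with Example objects and config-driven training, so treat this as illustrative only.

import random
import spacy

def train_spacy(train_data, iterations=20):
    nlp = spacy.blank('en')                      # start from a blank English pipeline
    ner = nlp.create_pipe('ner')                 # spaCy 2.x API
    nlp.add_pipe(ner, last=True)
    for _, annotations in train_data:
        for start, end, label in annotations.get('entities', []):
            ner.add_label(label)                 # register labels such as HARDWARE or APP
    optimizer = nlp.begin_training()
    for itn in range(iterations):
        random.shuffle(train_data)               # shuffle the examples at every epoch
        losses = {}
        for text, annotations in train_data:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.2, losses=losses)
        print(itn, losses)                       # the NER loss should trend downward
    return nlp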
I did not figure out a way to combine all the different models I trained into a single spaCy pipe object, so I had two separate models serialized into two pickle files. Again, here are the displaCy visualizations I demoed above — it successfully tagged macbook pro and garageband into their correct entity buckets.
From the pickle files you save, you can store all the extracted entities as a list by looping over the ents attribute from the doc object, demonstrated here:
def extract_app(user_input, visualize=False):
    # Loading it in
    app_nlp = pickle.load(open("models/app_big_nlp.pkl", "rb"))
    doc = app_nlp(user_input)
    extracted_entities = []
    # These are the objects you can take out
    for ent in doc.ents:
        extracted_entities.append((ent.text, ent.start_char, ent.end_char, ent.label_))
    return extracted_entities
By iteration, I really just mean improving upon my model. I made it its own stage because this actually takes more time than you may expect. I improve my model by:
Choosing better intents and entities
Improving the quality of my data
Improving your model architecture
You don’t just have to generate the data the way I did it in step 2. Think of that as one of the tools in your toolkit for creating your perfect dataset.
The goal for the data you feed into your intent classifier is to have each intent be broad-ranging (meaning that the intent examples sufficiently exhaust the space of what the user might say) and distinct from the other intents.
That way the neural network is able to make better predictions on user utterances it has never seen before. Here is how I tried to achieve this goal.
In addition to using Doc2Vec similarity to generate training examples, I also manually added examples in. I started with several examples I could think of, then I looped over these same examples until they met the 1000 threshold. This makes all the difference in how good your model will be. If you know a customer is very likely to write something, you should just add it to the training examples.
How do we choose what intents and examples to include in? To help make a more data informed decision for this, I made a keyword exploration tool that tells you how many Tweets contain that keyword, and gives you a preview of what those Tweets actually are. This is useful to exploring what your customers often ask you and also how to respond to them because we also have outbound data we can take a look at.
I’ve also made a way to estimate the true distribution of intents or topics in my Twitter data and plot it out. It’s quite simple. You start with your intents, then you think of the keywords that represent that intent.
{"update": ['update'], "battery": ['battery', 'power'], "forgot_password": ['password', 'account', 'login'], "repair": ['repair', 'fix', 'broken'], "payment": ['payment']}
I would also encourage you to look at 2, 3, or even 4 combinations of the keywords to see if your data naturally contain Tweets with multiple intents at once. In this following example, you can see that nearly 500 Tweets contain the update, battery, and repair keywords all at once. It’s clear that in these Tweets, the customers are looking to fix their battery issue that’s potentially caused by their recent update.
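The estimate itself can be a simple pandas keyword count; the column name and the keyword map below are assumptions mirroring the dictionary above.

import pandas as pd

keyword_map = {'update': ['update'],
               'battery': ['battery', 'power'],
               'forgot_password': ['password', 'account', 'login'],
               'repair': ['repair', 'fix', 'broken'],
               'payment': ['payment']}

def estimate_intent_distribution(df, text_col='processed_inbound'):
    text = df[text_col].astype(str).str.lower()
    counts = {intent: int(text.str.contains('|'.join(words)).sum())
              for intent, words in keyword_map.items()}
    return pd.Series(counts).sort_values(ascending=False)

def count_combination(df, groups, text_col='processed_inbound'):
    # Tweets whose text hits every keyword group at once, e.g. update + battery + repair
    text = df[text_col].astype(str).str.lower()
    mask = pd.Series(True, index=df.index)
    for words in groups:
        mask &= text.str.contains('|'.join(words))
    return int(mask.sum())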
Remember, this is all in the name of making your intent buckets distinct and wide-ranging.
As for this development side, this is where you implement business logic that you think suits your context the best. I like to use affirmations like “Did that solve your problem” to reaffirm an intent.
For demo purposes, I used Streamlit. It isn’t the ideal place for deploying because it is hard to display conversation history dynamically, but it gets the job done. In reality, you would deploy on one messaging platform. For example, you can use Flask to deploy your chatbot on Facebook Messenger and other platforms. You can also use api.slack.com for integration and can quickly build up your Slack app there.
Conversational interfaces are a whole other topic that has tremendous potential as we go further into the future. And there are many guides out there to knock out your design UX design for these conversational interfaces.
In this article, I essentially show you how to do data generation, intent classification, and entity extraction. These 3 steps are absolutely essential for making a chatbot. However, there is still more to making a chatbot fully functional and feel natural. This mostly lies in how you map the current dialogue state to what actions the chatbot is supposed to take — or in short, dialogue management.
The bot needs to learn exactly when to execute actions like listening, and when to ask for the essential bits of information it needs to answer a particular intent.
Taking a weather bot as an example, when the user asks about the weather, the bot needs the location to be able to answer that question so that it knows how to make the right API call to retrieve the weather information. So for this specific intent of weather retrieval, it is important to save the location into a slot stored in memory. If the user doesn’t mention the location, the bot should ask the user where the user is located. It is unrealistic and inefficient to ask the bot to make API calls for the weather in every city in the world.
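A toy sketch of that slot-filling logic, purely hypothetical and independent of Rasa or any real weather API, could look like this:

def get_weather(city):
    return '18°C and cloudy'                     # stub standing in for a real weather API call

def handle_weather_intent(entities, slots):
    # Use the location from this utterance if given, otherwise fall back to the remembered slot
    location = entities.get('location') or slots.get('location')
    if location is None:
        return 'Which city would you like the weather for?', slots
    slots['location'] = location                 # remember the location for later turns
    return 'Here is the weather in %s: %s' % (location, get_weather(location)), slots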
I recommend checking out this video and the Rasa documentation to see how the Rasa NLU (for Natural Language Understanding) and Rasa Core (for Dialogue Management) modules are used to create an intelligent chatbot. I talk a lot about Rasa because, apart from the data generation techniques, I learned my chatbot logic from their masterclass videos and understood it well enough to implement it myself using Python packages. Their framework gives lots of customizability to use different policies and techniques at different stages of the chatbot (such as whether or not to use LSTMs or SVMs for intent classification, down to the more granular details of how to fall back when the bot is not confident in its intent classification).
Also, I would like to use a meta model that controls the dialogue management of my chatbot better. One interesting way is to use a transformer neural network for this (refer to the paper made by Rasa on this, they called it the Transformer Embedding Dialogue Policy). This basically helps you have more natural feeling conversations.
Finally, scaling my chatbot would also be important. This just means expanding the domain of intents and entities that my chatbot would be able to respond to so that it covers the most important areas and edge cases. It’s helpful to remember that this framework I have made is transferable to any other chatbot, so I would like to support other languages as well in the future!
Of course, this is what I learned over approximately the past month from watching NLP lectures, git cloning many Github repos to personally do a hands on dive on how they work, YouTube video scouring, and documentation hunting. So if you have any feedback as for how to improve my chatbot or if there is a better practice compared to my current method, please do comment or reach out to let me know! I am always striving to make the best product I can deliver and always striving to learn more.
|
[
{
"code": null,
"e": 1174,
"s": 171,
"text": "Over the past month, I wanted to look for a project that encompasses the entire data science end-to-end workflow — from the data pipeline, to deep learning, to deployment. It had to be challenging, but not pointlessly so — it still had to be something useful. It took a little ideation and divergent thinking, but when the idea of making a personal assistant came up, it didn’t take long for me to settle on it. Conversational assitants are everywhere. Even my university is currently using Dr. Chatbot to track the health status of its members as an effective way to monitor this current pandemic. And it just makes sense: chatbots are faster, easier to interact with, and is super useful especially for things that we just want a fast response on. In this day and age, being able to talk to a bot for help is starting to become the new standard. I personally believe bots are the future because they just make our lives so much easier. Chatbots are also a key component in Robotic Process Automation."
},
{
"code": null,
"e": 1471,
"s": 1174,
"text": "Now I want to introduce EVE bot, my robot designed to Enhance Virtual Engagement (see what I did there) for the Apple Support team on Twitter. Although this methodology is used to support Apple products, it honestly could be applied to any domain you can think of where a chatbot would be useful."
},
{
"code": null,
"e": 1546,
"s": 1471,
"text": "Here’s my demo video for EVE. (And here’s my Github repo for this project)"
},
{
"code": null,
"e": 1759,
"s": 1546,
"text": "The demo goes through a high level and less technical overview of my work to keep it short and sweet. But if you want to geek out and learn about how I did it, that’s what the rest of this article is exactly for!"
},
{
"code": null,
"e": 2059,
"s": 1759,
"text": "I also hope this post would be a guide for those out there who need some structure on how to build your very own bot from scratch — in the sense that you are only using well-known, general purpose packages like Keras and spaCy, and not huge APIs specifically designed for chatbots like the Rasa API."
},
{
"code": null,
"e": 2423,
"s": 2059,
"text": "EVE is a context based bot powered by deep learning. Context-based bots are the step above the simple, keyword-based chatbot you might have seen a long time ago (see: Eliza bot). While I of course did have inspirations and it does have similarities to how it’s done in the industry, I offer some approaches that I reasoned myself on how to make a chatbot in 2020."
},
{
"code": null,
"e": 2575,
"s": 2423,
"text": "This method I show in this post utilizes the same logic that powers the chatbots of big companies like Amazon with their Lex conversational AI service."
},
{
"code": null,
"e": 2928,
"s": 2575,
"text": "Before I get in the technical workings, it’s important to know at what level of granularity you want to be making a chatbot at. A chatbot is like cooking spaghetti. You can really start from raw tomatoes, or you can start from canned ones that other people already made for you. It’s also similar in that there are many different components — you have:"
},
{
"code": null,
"e": 3394,
"s": 2928,
"text": "The framework: where your bot decides how to respond to a customer based on their utterance. You can use higher level tools such as DialogFlow (by Google), Amazon Lex, and Rasa for your framework. These higher level APIs require less work from you compared to the Python based work I am going to show in this post, but you may not be as confident about what’s going on in the background. Mine opts for the more white box approach with Tensorflow, spaCy, and Python."
},
{
"code": null,
"e": 3576,
"s": 3394,
"text": "Dialogue management: This is the part of your bot that is responsible for the state and flow of the conversation — it’s where you can prompt users for information you need and more."
},
{
"code": null,
"e": 3821,
"s": 3576,
"text": "Deployment interface: You can build an interface with the Messenger API, you can deploy it on WhatsApp Business (with a fee), or really anywhere like your own website or app if you have. I deployed mine on Streamlit as a really quick demo tool."
},
{
"code": null,
"e": 3886,
"s": 3821,
"text": "Okay, you’ve decided to make your own framework. Now here’s how."
},
{
"code": null,
"e": 3934,
"s": 3886,
"text": "Your goal is two fold — and both are important:"
},
{
"code": null,
"e": 3973,
"s": 3934,
"text": "Entity ExtractionIntent Classification"
},
{
"code": null,
"e": 3991,
"s": 3973,
"text": "Entity Extraction"
},
{
"code": null,
"e": 4013,
"s": 3991,
"text": "Intent Classification"
},
{
"code": null,
"e": 4328,
"s": 4013,
"text": "When starting off making a new bot, this is exactly what you would try to figure out first, because it guides what kind of data you want to collect or generate. I recommend you start off with a base idea of what your intents and entities would be, then iteratively improve upon it as you test it out more and more."
},
{
"code": null,
"e": 4471,
"s": 4328,
"text": "Entities are predefined categories of names, organizations, time expressions, quantities, and other general groups of objects that make sense."
},
{
"code": null,
"e": 4899,
"s": 4471,
"text": "Every chatbot would have different sets of entities that should be captured. For a pizza delivery chatbot, you might want to capture the different types of pizza as an entity and delivery location. For this case, cheese or pepperoni might be the pizza entity and Cook Street might be the delivery location entity. In my case, I created an Apple Support bot, so I wanted to capture the hardware and application a user was using."
},
{
"code": null,
"e": 5091,
"s": 4899,
"text": "To get these visualizations, displaCy was used, which is spaCy’s visualization tool for Named Entity Recognition (they have more visualizers for other things like dependency parsing as well)."
},
{
"code": null,
"e": 5142,
"s": 5091,
"text": "Intents are simply what the customer intend to do."
},
{
"code": null,
"e": 5290,
"s": 5142,
"text": "Are they trying to greet you? Are they trying to talk to a representative? Are they challenging if you’re a robot? Are they trying to do an update?"
},
{
"code": null,
"e": 5556,
"s": 5290,
"text": "Intent classification just means figuring out what the user intent is given a user utterance. Here is a list of all the intents I want to capture in the case of my Eve bot, and a respective user utterance example for each to help you understand what each intent is."
},
{
"code": null,
"e": 5570,
"s": 5556,
"text": "Greeting: Hi!"
},
{
"code": null,
"e": 5624,
"s": 5570,
"text": "Info: What’s the thinnest MacBook you have available?"
},
{
"code": null,
"e": 5696,
"s": 5624,
"text": "Forgot password: I forgot my login details, can you help me recover it?"
},
{
"code": null,
"e": 5747,
"s": 5696,
"text": "Speak Representative: Can I talk to a human please"
},
{
"code": null,
"e": 5785,
"s": 5747,
"text": "Challenge Robot: Are you even a human"
},
{
"code": null,
"e": 5848,
"s": 5785,
"text": "Update: I would like to update my MacBook Pro to the latest OS"
},
{
"code": null,
"e": 5926,
"s": 5848,
"text": "Payment: I was charged double for the iPhone X I bought yesterday at Best Buy"
},
{
"code": null,
"e": 5986,
"s": 5926,
"text": "Location: Where is the nearest Apple Store located from me?"
},
{
"code": null,
"e": 6044,
"s": 5986,
"text": "Battery: My battery keeps on draining and dies in an hour"
},
{
"code": null,
"e": 6079,
"s": 6044,
"text": "Goodbye: Thanks Eve, see you later"
},
{
"code": null,
"e": 6707,
"s": 6079,
"text": "Intents and entities are basically the way we are going to decipher what the customer wants and how to give a good answer back to a customer. I initially thought I only need intents to give an answer without entities, but that leads to a lot of difficulty because you aren’t able to be granular in your responses to your customer. And without multi-label classification, where you are assigning multiple class labels to one user input (at the cost of accuracy), it’s hard to get personalized responses. Entities go a long way to make your intents just be intents, and personalize the user experience to the details of the user."
},
{
"code": null,
"e": 6828,
"s": 6707,
"text": "With these two goals established, I boiled down my process into five steps that I’ll break down one by one in this post:"
},
{
"code": null,
"e": 6872,
"s": 6828,
"text": "My notebook for data preprocessing is here."
},
{
"code": null,
"e": 7052,
"s": 6872,
"text": "I mention the first step as data preprocessing, but really these 5 steps are not done linearly, because you will be preprocessing your data throughout the entire chatbot creation."
},
{
"code": null,
"e": 7125,
"s": 7052,
"text": "But even before data preprocessing, where on earth do you get your data?"
},
{
"code": null,
"e": 7174,
"s": 7125,
"text": "It really depends on the domain of your chatbot."
},
{
"code": null,
"e": 7553,
"s": 7174,
"text": "You have to find data that best covers as many scenarios that the customer might ask you and that you want to reply to as possible. The data should contain all the intents you want to be able to answer. This might be a very hard task, but just know your data doesn’t have to be perfect, and it could come from multiple sources as long as they are within the same general domain."
},
{
"code": null,
"e": 7681,
"s": 7553,
"text": "For each intent, you should have a sizable amount of examples so that your bot will be able to learn the nature of that intent."
},
{
"code": null,
"e": 8018,
"s": 7681,
"text": "If you have really have no data at all, like the one I am thinking of in my next project (which isn’t an English chatbot), I am trying to work around it by making a Google sheets form for people to ask questions to my bot. The entire point is to get data that most closely resembles questions that real people are going to ask your bot."
},
{
"code": null,
"e": 8342,
"s": 8018,
"text": "But back to Eve bot, since I am making a Twitter Apple Support robot, I got my data from customer support Tweets on Kaggle. Once you finished getting the right dataset, then you can start to preprocess it. The goal of this initial preprocessing step is to get it ready for our further steps of data generation and modeling."
},
{
"code": null,
"e": 8772,
"s": 8342,
"text": "First, I got my data in a format of inbound and outbound text by some Pandas merge statements. With any sort of customer data, you have to make sure that the data is formatted in a way that separates utterances from the customer to the company (inbound) and from the company to the customer (outbound). Just be sensitive enough to wrangle the data in such a way where you’re left with questions your customer will likely ask you."
},
{
"code": null,
"e": 8975,
"s": 8772,
"text": "Shortly after, I applied an NLP preprocessing pipeline. Which steps to include depends on your use case (like languages you want to support, how colloquial you want your bot to be, etc.). Mine included:"
},
{
"code": null,
"e": 9000,
"s": 8975,
"text": "Converting to lower case"
},
{
"code": null,
"e": 9040,
"s": 9000,
"text": "Tokenizing using NLTK’s Tweet tokenizer"
},
{
"code": null,
"e": 9075,
"s": 9040,
"text": "Removing punctuation and URL links"
},
{
"code": null,
"e": 9121,
"s": 9075,
"text": "Correcting misspellings (leviathans distance)"
},
{
"code": null,
"e": 9141,
"s": 9121,
"text": "Removing stop words"
},
{
"code": null,
"e": 9164,
"s": 9141,
"text": "Expanding contractions"
},
{
"code": null,
"e": 9203,
"s": 9164,
"text": "Removing non-english Tweets with spaCy"
},
{
"code": null,
"e": 9261,
"s": 9203,
"text": "Lemmatization (you can choose stemming as an alternative)"
},
{
"code": null,
"e": 9384,
"s": 9261,
"text": "Removing emojis and numbers (you can leave emojis in if your model can read them in, and if you plan to do emoji analysis)"
},
{
"code": null,
"e": 9435,
"s": 9384,
"text": "Limiting each Tweet length to 50 (for compactness)"
},
{
"code": null,
"e": 9705,
"s": 9435,
"text": "I compiled all this steps into one function called tokenize. At every preprocessing step, I visualize the lengths of each tokens at the data. I also provide a peek to the head of the data at each step so that it clearly shows what processing is being done at each step."
},
{
"code": null,
"e": 9828,
"s": 9705,
"text": "I started with 106k Apple Suppourt inbound Tweets. This is a histogram of my token lengths before preprocessing this data."
},
{
"code": null,
"e": 10023,
"s": 9828,
"text": "After step 10, I am left with ~76k Tweets. In general, things like removing stop-words will shift the distribution to the left because we have fewer and fewer tokens at every preprocessing step."
},
{
"code": null,
"e": 10390,
"s": 10023,
"text": "Here you can see the results of this step. I got my data to go from the Cyan Blue on the left to the Processed Inbound Column in the middle. I also keep the Outbound data on the right in case I need to see how Apple Support responds to their inquiries that will be used for the step where I actually respond to my customers (it’s called Natural Language Generation)."
},
{
"code": null,
"e": 10567,
"s": 10390,
"text": "Finally, as a brief EDA, here are the emojis I have in my dataset — it’s interesting to visualize, but I didn’t end up using this information for anything that’s really useful."
},
{
"code": null,
"e": 10707,
"s": 10567,
"text": "My complete script for generating my training data is here, but if you want a more step-by-step explanation I have a notebook here as well."
},
{
"code": null,
"e": 10772,
"s": 10707,
"text": "Yes, you read that right — data generation. You may be thinking:"
},
{
"code": null,
"e": 10879,
"s": 10772,
"text": "Why do we need to generate data? Why can’t we just use the data that we preprocessed in the previous step?"
},
{
"code": null,
"e": 11081,
"s": 10879,
"text": "That’s a great question. The answer is because the data isn’t labeled yet. and intent classification is a supervised learning problem. This means that we need intent labels for every single data point."
},
{
"code": null,
"e": 11379,
"s": 11081,
"text": "If you already have a labelled dataset with all the intents you want to classify, we don’t need this step. But more times than not, you won’t have that data. That’s why we need to do some extra work to add intent labels to our dataset. This is quite an involved process, but I’m sure we can do it."
},
{
"code": null,
"e": 11616,
"s": 11379,
"text": "In this step, we want to group the Tweets together to represent an intent so we can label them. Moreover, for the intents that are not expressed in our data, we either are forced to manually add them in, or find them in another dataset."
},
{
"code": null,
"e": 12058,
"s": 11616,
"text": "For example, my Tweets did not have any Tweet that asked “are you a robot.” This actually makes perfect sense because Twitter Apple Support is answered by a real customer support team, not a chatbot. So in these cases, since there are no documents in out dataset that express an intent for challenging a robot, I manually added examples of this intent in its own group that represents this intent. I explain more on how I did this in Step 4."
},
{
"code": null,
"e": 12654,
"s": 12058,
"text": "Since I plan to use quite an involved neural network architecture (Bidirectional LSTM) for classifying my intents, I need to generate sufficient examples for each intent. The number I chose is 1000 — I generate 1000 examples for each intent (i.e. 1000 examples for a greeting, 1000 examples of customers who are having trouble with an update, etc.). I pegged every intent to have exactly 1000 examples so that I will not have to worry about class imbalance in the modeling stage later. In general, for your own bot, the more complex the bot, the more training examples you would need per intent."
},
{
"code": null,
"e": 13086,
"s": 12654,
"text": "This is where the how comes in, how do we find 1000 examples per intent? Well first, we need to know if there are 1000 examples in our dataset of the intent that we want. In order to do this, we need some concept of distance between each Tweet where if two Tweets are deemed “close” to each other, they should possess the same intent. Likewise, two Tweets that are “further” from each other should be very different in its meaning."
},
{
"code": null,
"e": 13471,
"s": 13086,
"text": "To do this, we use an encoding method known as Doc2Vec. Embedding methods are ways to convert words (or sequences of them) into a numeric representation that could be compared to each other. I created a training data generator tool with Streamlit to convert my Tweets into a 20D Doc2Vec representation of my data where each Tweet can be compared to each other using cosine similarity."
},
{
"code": null,
"e": 13968,
"s": 13471,
"text": "The following is a diagram to illustrate Doc2Vec can be used to group together similar documents. A document is a sequence of tokens, and a token is a sequence of characters that are grouped together as a useful semantic unit for processing. For this data generation step, I also experimented with gloVe, but those only vectorize at a per word level, and if you want to try another word embedding (maybe it’s more better suited to your domain), make sure that it vectorizes at the document level."
},
{
"code": null,
"e": 14264,
"s": 13968,
"text": "In this toy example, we convert every one of the utterances into 3D vectors (as can be seen in the pink array of 3 numbers below each phrase). Each of these 3D vectors is the numeric representation of that document. For example, “Hi there” is represented numerically as [3.333, 0.1125, -0.4869]."
},
{
"code": null,
"e": 14529,
"s": 14264,
"text": "When we compare the top two similar meaning Tweets in this toy example (both are asking to talk to a representative), we get a dummy cosine similarity of 0.8. When we compare the bottom two different meaning Tweets (one is a greeting, one is an exit), we get -0.3."
},
{
"code": null,
"e": 14672,
"s": 14529,
"text": "As for implementation, I used gensim’s Doc2Vec. You have to train it, and it’s similar to how you would train a neural network (using epochs)."
},
{
"code": null,
"e": 14945,
"s": 14672,
"text": "Here’s the catch. Before you train your Doc2Vec vectorizer, it’s important to already know the intent buckets you want before hand. Once you know what intent buckets you want, you can apply this procedure that aims to get you your top N group of similar-in-meaning Tweets:"
},
{
"code": null,
"e": 15131,
"s": 14945,
"text": "Simply come up with the top keywords you can think of in that intent, then you append that to the end of your training data as a row (so for greeting, that row might be “hi hello hey”)."
},
{
"code": null,
"e": 15327,
"s": 15131,
"text": "The extra rows you add which represent the respective intents are going to be vectorized, which is great news because now you can then compare it to every single other row with cosine similarity."
},
{
"code": null,
"e": 15423,
"s": 15327,
"text": "Vectorize by training your Doc2Vec vectorizer then fitting it on your data with the extra rows."
},
{
"code": null,
"e": 15657,
"s": 15423,
"text": "You’ve successfully used keywords to represent an intent, and from this representation, you will find the top 1000 Tweets similar to it to generate your training data for that intent with Gensim’s model.docvecs.most_similar() method."
},
{
"code": null,
"e": 16040,
"s": 15657,
"text": "Note that we have to append our new intent keyword representations to the training data before you train your vectorizer because Gensim’s implementation could only compare documents that have been put into this Doc2Vec vectorizer as training data. Moreover, it can only access the tags of each Tweet, so I had to do extra work in Python to find the tag of a Tweet given its content."
},
{
"code": null,
"e": 16341,
"s": 16040,
"text": "Once you’ve generated your data, make sure you store it as two columns “Utterance” and “Intent”. Notice that the utterances are stored as a tokenized list. This is something you’ll run into a lot and this is okay because you can just convert it to String form with Series.apply(\" \".join) at any time."
},
{
"code": null,
"e": 16701,
"s": 16341,
"text": "The first thing I thought of to do was clustering. However, after I tried K-Means, it’s obvious that clustering and unsupervised learning generally yields bad results. The reality is, as good as it is as a technique, it is still an algorithm at the end of the day. You can’t come in expecting the algorithm to cluster your data the way you exactly want it to."
},
{
"code": null,
"e": 16920,
"s": 16701,
"text": "I also tried word-level embedding techniques like gloVe, but for this data generation step we want something at the document level because we are trying to compare between utterances, not between words in an utterance."
},
{
"code": null,
"e": 16963,
"s": 16920,
"text": "My intent classification notebook is here."
},
{
"code": null,
"e": 17349,
"s": 16963,
"text": "With our data labelled, we can finally get to the fun part — actually classifying the intents! I recommend that you don’t spend too long trying to get the perfect data beforehand. Try to get to this step at a reasonably fast pace so you can first get a minimum viable product. The idea is to get a result out first to use as a benchmark so we can then iteratively improve upon on data."
},
{
"code": null,
"e": 17603,
"s": 17349,
"text": "There are many ways to do intent classification, Rasa NLU for instance allows you to use many different models such as support vector machines (SVMs), but here I will demonstrate how to do it with a neural network with a bidirectional LSTM architecture."
},
{
"code": null,
"e": 17956,
"s": 17603,
"text": "We are trying to map a user utterance (which is just a sequence of tokens) to one of the N intents that we specify. The data we start with should just have an utterance and an intent ground truth label as the columns. The order of this process is as follows (for the implementation, check out the Intent Classification notebook available at my Github):"
},
{
"code": null,
"e": 18397,
"s": 17956,
"text": "Train Test Split (should always go first, my mentor drilled this point in my head)Keras TokenizerLabel Encode the target variable (intents)Initialize the embedding matrix (I used gloVe embeddings because they had a special variant trained on Twitter data)Initialize the model architectureInitialize the model callbacks (techniques to address overfitting)Fit the model and save itLoad the model and save the output (I recommend a dictionary)"
},
{
"code": null,
"e": 18480,
"s": 18397,
"text": "Train Test Split (should always go first, my mentor drilled this point in my head)"
},
{
"code": null,
"e": 18496,
"s": 18480,
"text": "Keras Tokenizer"
},
{
"code": null,
"e": 18539,
"s": 18496,
"text": "Label Encode the target variable (intents)"
},
{
"code": null,
"e": 18656,
"s": 18539,
"text": "Initialize the embedding matrix (I used gloVe embeddings because they had a special variant trained on Twitter data)"
},
{
"code": null,
"e": 18690,
"s": 18656,
"text": "Initialize the model architecture"
},
{
"code": null,
"e": 18757,
"s": 18690,
"text": "Initialize the model callbacks (techniques to address overfitting)"
},
{
"code": null,
"e": 18783,
"s": 18757,
"text": "Fit the model and save it"
},
{
"code": null,
"e": 18845,
"s": 18783,
"text": "Load the model and save the output (I recommend a dictionary)"
},
{
"code": null,
"e": 18907,
"s": 18845,
"text": "Some things to keep in mind for the embedding layer (step 4):"
},
{
"code": null,
"e": 19289,
"s": 18907,
"text": "These pre-trained embeddings are essentially how you convert the text that goes into the model into a numeric representation. When you compare the cosine similarities of your numeric representation of different documents, they should have meaningful distances between each other (the cosine similarity between ‘king’ and ‘man’ should be closer than ‘king’ and ‘women’ for example)."
},
{
"code": null,
"e": 19517,
"s": 19289,
"text": "It is really important you choose the right pre-trained embeddings that is appropriate for the domain of your chatbot. If you have a conversational Twitter based chatbot, you probably don’t want embeddings trained on Wikipedia."
},
{
"code": null,
"e": 19750,
"s": 19517,
"text": "I also recommend you to see if all the vocabulary you want to cover is in your pre-trained embedding file. I checked if my gloVe Twitter embeddings covers Apple specific words like “macbook pro” for example, and fortunately it does."
},
{
"code": null,
"e": 19815,
"s": 19750,
"text": "Some things to keep in mind for the model architecture (step 5):"
},
{
"code": null,
"e": 19919,
"s": 19815,
"text": "Make sure the output layer is softmax (if you want to do multi-label classification, then use sigmoid)."
},
{
"code": null,
"e": 20069,
"s": 19919,
"text": "Make sure the output layer has a dimensionality that is same as the number of intents you want to classify, otherwise you will run into shape issues."
},
{
"code": null,
"e": 20323,
"s": 20069,
"text": "If you don’t label encode, your use of model.predict() might be inaccurate because that final dictionary where you output where the keys are the intents and the values are the probabilities of the utterance being that intent wouldn’t be mapped properly."
},
{
"code": null,
"e": 20496,
"s": 20323,
"text": "When you are deploying your bot, you shouldn’t rerun the model. Rather, I wrote a script that starts by reading in the saved model file and does the predictions from there."
},
{
"code": null,
"e": 20609,
"s": 20496,
"text": "The results are promising. The loss converges to a low level and the accuracy of my model on unseen data is 87%!"
},
{
"code": null,
"e": 20756,
"s": 20609,
"text": "If you visualize the output of the intent classification, this is what it looks like for the utterance “my battery on my iphone stopped working!”:"
},
{
"code": null,
"e": 20876,
"s": 20756,
"text": "In order to prevent my model from overfitting, there are also some other settings I set in the form of Keras callbacks:"
},
{
"code": null,
"e": 20976,
"s": 20876,
"text": "Learning rate scheduling — Slowing down the learning rate after it gets past a certain epoch number"
},
{
"code": null,
"e": 21110,
"s": 20976,
"text": "Early stopping — Stopping the training early once the validation loss (or any other parameter you choose) reaches a certain threshold"
},
{
"code": null,
"e": 21273,
"s": 21110,
"text": "And finally, after I ran the model, I saved it into an h5 file so I can initialize it later without retraining my model using Model Checkpoint. The code is below:"
},
{
"code": null,
"e": 22523,
"s": 21273,
"text": "# Initializing checkpoint settings to view progress and save modelfilename = 'models/intent_classification_b.h5'# Learning rate scheduling# This function keeps the initial learning rate for the first ten epochs # and decreases it exponentially after that. def scheduler(epoch, lr): if epoch < 10: return lr else: return lr * tf.math.exp(-0.1)lr_sched_checkpoint = tf.keras.callbacks.LearningRateScheduler(scheduler)# Early stoppingearly_stopping = tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto', baseline=None, restore_best_weights=True)# This saves the best modelcheckpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min')# The model you get at the end of it is after 100 epochs, but that might not have been# the weights most associated with validation accuracy# Only save the weights when you model has the lowest val loss. Early stopping# Fitting model with all the callbacks abovehist = model.fit(padded_X_train, y_train, epochs = 20, batch_size = 32, validation_data = (padded_X_val, y_val), callbacks = [checkpoint, lr_sched_checkpoint, early_stopping])"
},
{
"code": null,
"e": 22574,
"s": 22523,
"text": "Here is my complete notebook on entity extraction."
},
{
"code": null,
"e": 22936,
"s": 22574,
"text": "For EVE bot, the goal is to extract Apple-specific keywords that fit under the hardware or application category. Like intent classification, there are many ways to do this — each has its benefits depending for the context. Rasa NLU uses a conditional random field (CRF) model, but for this I will use spaCy’s implementation of stochastic gradient descent (SGD)."
},
{
"code": null,
"e": 23476,
"s": 22936,
"text": "The first step is to create a dictionary that stores the entity categories you think are relevant to your chatbot. Then you see if spaCy has them by default. More likely than not, they won’t. So in that case, you would have to train your own custom spaCy Named Entity Recognition (NER) model. For Apple products, it makes sense for the entities to be what hardware and what application the customer is using. You want to respond to customers who are asking about an iPhone differently than customers who are asking about their Macbook Pro."
},
{
"code": null,
"e": 23732,
"s": 23476,
"text": "{'hardware': ['macbook pro', 'iphone', 'iphones', 'mac', 'ipad', 'watch', 'TV', 'airpods'], 'apps': ['app store', 'garageband', 'books', 'calendar', 'podcasts', 'notes', 'icloud', 'music', 'messages', 'facetime', 'catalina', 'maverick']}"
},
{
"code": null,
"e": 24169,
"s": 23732,
"text": "Once you stored the entity keywords in the dictionary, you should also have a dataset that essentially just uses these keywords in a sentence. Lucky for me, I already have a large Twitter dataset from Kaggle that I have been using. If you feed in these examples and specify which of the words are the entity keywords, you essentially have a labeled dataset, and spaCy can learn the context from which these words are used in a sentence."
},
{
"code": null,
"e": 24531,
"s": 24169,
"text": "In order to label your dataset, you need to convert your data to spaCy format. This is a sample of how my training data should look like to be able to be fed into spaCy for training your custom NER model using Stochastic Gradient Descent (SGD). We make an offsetter and use spaCy’s PhraseMatcher, all in the name of making it easier to make it into this format."
},
{
"code": null,
"e": 24857,
"s": 24531,
"text": "TRAIN_DATA = [('what is the price of polo?', {'entities': [(21, 25, 'PrdName')]}), ('what is the price of ball?', {'entities': [(21, 25, 'PrdName')]}), ('what is the price of jegging?', {'entities': [(21, 28, 'PrdName')]}), ('what is the price of t-shirt?', {'entities': [(21, 28, 'PrdName')]})]"
},
{
"code": null,
"e": 25447,
"s": 24857,
"text": "# Utility function - converts the output of the PhraseMatcher to something usable in trainingdef offsetter(lbl, doc, matchitem): ''' Converts word position to string position ''' one = len(str(doc[0:matchitem[1]])) subdoc = doc[matchitem[1]:matchitem[2]] two = one + len(str(subdoc)) # This function was misaligned by a factor of one character, not sure why, but this is my solution if one != 0: one += 1 two += 1 return (one, two, lbl)# Example# offsetter(‘HARDWARE’, nlp(‘hmm macbooks are great’),(2271554079456360229, 1, 2)) -> (4, 12, ‘HARDWARE’)"
},
{
"code": null,
"e": 25741,
"s": 25447,
"text": "I used this function in my more general function to ‘spaCify’ a row, a function that takes as input the raw row data and converts it to a tagged version of it spaCy can read in. I had to modify the index positioning to shift by one index on the start, I am not sure why but it worked out well."
},
{
"code": null,
"e": 26243,
"s": 25741,
"text": "Then I also made a function train_spacy to feed it into spaCy, which uses the nlp.update method to train my NER model. It trains it for the arbitrary number of 20 epochs, where at each epoch the training examples are shuffled beforehand. Try not to choose a number of epochs that are too high, otherwise the model might start to ‘forget’ the patterns it has already learned at earlier stages. Since you are minimizing loss with stochastic gradient descent, you can visualize your loss over the epochs."
},
{
"code": null,
"e": 26557,
"s": 26243,
"text": "I did not figure out a way to combine all the different models I trained into a single spaCy pipe object, so I had two separate models serialized into two pickle files. Again, here are the displaCy visualizations I demoed above — it successfully tagged macbook pro and garageband into it’s correct entity buckets."
},
{
"code": null,
"e": 26715,
"s": 26557,
"text": "From the pickle files you save, you can store all the extracted entities as a list by looping over the ents attribute from the doc object, demonstrated here:"
},
{
"code": null,
"e": 27085,
"s": 26715,
"text": "def extract_app(user_input, visualize = False): # Loading it in app_nlp = pickle.load(open(\"models/app_big_nlp.pkl\", \"rb\")) doc = app_nlp(user_input) extracted_entities = [] # These are the objects you can take out for ent in doc.ents: extracted_entities.append((ent.text, ent.start_char, ent.end_char, ent.label_)) return extracted_entities"
},
{
"code": null,
"e": 27249,
"s": 27085,
"text": "By iteration, I really just mean improving upon my model. I made it its own stage because this actually takes more time than you may expect. I improve my model by:"
},
{
"code": null,
"e": 27286,
"s": 27249,
"text": "Choosing better intents and entities"
},
{
"code": null,
"e": 27319,
"s": 27286,
"text": "Improving the quality of my data"
},
{
"code": null,
"e": 27353,
"s": 27319,
"text": "Improving your model architecture"
},
{
"code": null,
"e": 27506,
"s": 27353,
"text": "You don’t just have to do generate the data the way I did it in step 2. Think of that as one of your toolkits to be able to create your perfect dataset."
},
{
"code": null,
"e": 27744,
"s": 27506,
"text": "The goal for the data you feed into your intent classifier is to have each intent be broad ranging (meaning that the intent examples sufficiently exhausts the state space and worlds of what the user might say) and unique from each other."
},
{
"code": null,
"e": 27894,
"s": 27744,
"text": "That way the neural network is able to make better predictions on user utterances it has never seen before. Here is how I tried to achieve this goal."
},
{
"code": null,
"e": 28291,
"s": 27894,
"text": "In addition to using Doc2Vec similarity to generate training examples, I also manually added examples in. I started with several examples I can think of, then I looped over these same examples until it meets the 1000 threshold. This makes all the difference in how good your model will be. If you know a customer is very likely to write something, you should just add it to the training examples."
},
{
"code": null,
"e": 28700,
"s": 28291,
"text": "How do we choose what intents and examples to include in? To help make a more data informed decision for this, I made a keyword exploration tool that tells you how many Tweets contain that keyword, and gives you a preview of what those Tweets actually are. This is useful to exploring what your customers often ask you and also how to respond to them because we also have outbound data we can take a look at."
},
{
"code": null,
"e": 28919,
"s": 28700,
"text": "I’ve also made a way to estimate the true distribution of intents or topics in my Twitter data and plot it out. It’s quite simple. You start with your intents, then you think of the keywords that represent that intent."
},
{
"code": null,
"e": 29086,
"s": 28919,
"text": "{\"update\":['update'], \"battery\": ['battery','power'], \"forgot_password\": ['password', 'account', 'login'],\"repair\": ['repair', 'fix', 'broken'],\"payment\": ['payment'}"
},
{
"code": null,
"e": 29505,
"s": 29086,
"text": "I would also encourage you to look at 2, 3, or even 4 combinations of the keywords to see if your data naturally contain Tweets with multiple intents at once. In this following example, you can see that nearly 500 Tweets contain the update, battery, and repair keywords all at once. It’s clear that in these Tweets, the customers are looking to fix their battery issue that’s potentially caused by their recent update."
},
{
"code": null,
"e": 29596,
"s": 29505,
"text": "Remember, this is all in the name of making your intent buckets distinct and wide-ranging."
},
{
"code": null,
"e": 29798,
"s": 29596,
"text": "As for this development side, this is where you implement business logic that you think suits your context the best. I like to use affirmations like “Did that solve your problem” to reaffirm an intent."
},
{
"code": null,
"e": 30211,
"s": 29798,
"text": "For demo purposes, I used Streamlit. It isn’t the ideal place for deploying because it is hard to display conversation history dynamically, but it gets the job done. In reality, you would deploy on one messaging platform. For example, you can use Flask to deploy your chatbot on Facebook Messenger and other platforms. You can also use api.slack.com for integration and can quickly build up your Slack app there."
},
{
"code": null,
"e": 30433,
"s": 30211,
"text": "Conversational interfaces are a whole other topic that has tremendous potential as we go further into the future. And there are many guides out there to knock out your design UX design for these conversational interfaces."
},
{
"code": null,
"e": 30834,
"s": 30433,
"text": "In this article, I essentially show you how to do data generation, intent classification, and entity extraction. These 3 steps are absolutely essential for making a chatbot. However, there is still more to making a chatbot fully functional and feel natural. This mostly lies in how you map the current dialogue state to what actions the chatbot is supposed to take — or in short, dialogue management."
},
{
"code": null,
"e": 31001,
"s": 30834,
"text": "The bot needs to learn exactly when to execute actions like to listen and when to ask for essential bits of information if it is needed to answer a particular intent."
},
{
"code": null,
"e": 31547,
"s": 31001,
"text": "Taking a weather bot as an example, when the user asks about the weather, the bot needs the location to be able to answer that question so that it knows how to make the right API call to retrieve the weather information. So for this specific intent of weather retrieval, it is important to save the location into a slot stored in memory. If the user doesn’t mention the location, the bot should ask the user where the user is located. It is unrealistic and inefficient to ask the bot to make API calls for the weather in every city in the world."
},
{
"code": null,
"e": 32263,
"s": 31547,
"text": "I recommend checking out this video and the Rasa documentation to see how Rasa NLU (for Natural Language Understanding) and Rasa Core (for Dialogue Management) modules are used to create an intelligent chatbot. I talk a lot about Rasa because apart from the data generation techniques, I learned my chatbot logic from their masterclass videos and understood it to implement it myself using Python packages. Their framework gives lots of customizability to use different policies and techniques at different stages of the chatbot (such as whether or not to use LSTMs or SVMs for intent classification, to even the more granular details of how to fall back when the bot is not confident in its intent classification)."
},
{
"code": null,
"e": 32597,
"s": 32263,
"text": "Also, I would like to use a meta model that controls the dialogue management of my chatbot better. One interesting way is to use a transformer neural network for this (refer to the paper made by Rasa on this, they called it the Transformer Embedding Dialogue Policy). This basically helps you have more natural feeling conversations."
},
{
"code": null,
"e": 32975,
"s": 32597,
"text": "Finally, scaling my chatbot would also be important. This just means expanding the domain of intents and entities that my chatbot would be able to respond to so that it covers the most important areas and edge cases. It’s helpful to remember that this framework I have made is transferable to any other chatbot, so I would like to support other languages as well in the future!"
}
] |
How to use loc and iloc for selecting data in Pandas | by B. Chen | Towards Data Science
|
When it comes to selecting data in a DataFrame, Pandas loc and iloc are two top favorites. They are fast, easy to read, and sometimes interchangeable.
In this article, we’ll explore the differences between loc and iloc, take a look at their similarities, and check how to perform data selection with them. We will go over the following topics:
Differences between loc and iloc
Selecting via a single value
Selecting via a list of values
Selecting a range of data via slice
Selecting via conditions and callable
loc and iloc are interchangeable when labels are 0-based integers
Please check out the notebook for the source code.
The main distinction between loc and iloc is:
loc is label-based, which means that you have to specify rows and columns based on their row and column labels.
iloc is integer position-based, so you have to specify rows and columns by their integer position values (0-based integer position).
Here are some differences and similarities between loc and iloc:
For demonstration, we create a DataFrame and load it with the Day column as the index.
df = pd.read_csv('data/data.csv', index_col=['Day'])
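Since data/data.csv itself is not included here, below is a minimal sketch that builds an equivalent DataFrame inline so the examples can be run without the file. The temperatures and Friday's row match the outputs shown later in this article; the remaining Weather, Wind, and Humidity values are illustrative assumptions.

import pandas as pd

# Inline stand-in for data/data.csv with Day as the index.
# Temperatures and Friday's row match the outputs shown below;
# the other Weather, Wind, and Humidity values are assumed for illustration.
df = pd.DataFrame(
    {
        "Weather": ["Sunny", "Sunny", "Cloudy", "Rain", "Shower", "Cloudy", "Sunny"],
        "Temperature": [12.79, 19.67, 17.51, 14.44, 10.51, 11.07, 17.50],
        "Wind": [22, 18, 20, 31, 26, 30, 15],
        "Humidity": [45, 60, 56, 72, 79, 68, 40],
    },
    index=pd.Index(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], name="Day"),
)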
Both loc and iloc allow input to be a single value. We can use the following syntax for data selection:
loc[row_label, column_label]
iloc[row_position, column_position]
For example, let’s say we would like to retrieve Friday’s temperature value.
With loc, we can pass the row label 'Fri' and the column label 'Temperature'.
# To get Friday's temperature>>> df.loc['Fri', 'Temperature']10.51
The equivalent iloc statement should take the row number 4 and the column number 1 .
# The equivalent `iloc` statement>>> df.iloc[4, 1]10.51
We can also use : to return all data. For example, to get all rows:
# To get all rows>>> df.loc[:, 'Temperature']DayMon 12.79Tue 19.67Wed 17.51Thu 14.44Fri 10.51Sat 11.07Sun 17.50Name: Temperature, dtype: float64# The equivalent `iloc` statement>>> df.iloc[:, 1]
And to get all columns:
# To get all columns>>> df.loc['Fri', :]Weather ShowerTemperature 10.51Wind 26Humidity 79Name: Fri, dtype: object# The equivalent `iloc` statement>>> df.iloc[4, :]
Note that the above 2 outputs are Series. loc and iloc will return a Series when the result is 1-dimensional data.
We can pass a list of labels to loc to select multiple rows or columns:
# Multiple rows>>> df.loc[['Thu', 'Fri'], 'Temperature']DayThu 14.44Fri 10.51Name: Temperature, dtype: float64# Multiple columns>>> df.loc['Fri', ['Temperature', 'Wind']]Temperature 10.51Wind 26Name: Fri, dtype: object
Similarly, a list of integer values can be passed to iloc to select multiple rows or columns. Here are the equivalent statements using iloc:
>>> df.iloc[[3, 4], 1]DayThu 14.44Fri 10.51Name: Temperature, dtype: float64>>> df.iloc[4, [1, 2]]Temperature 10.51Wind 26Name: Fri, dtype: object
All the above outputs are Series because their results are 1-dimensional data.
The output will be a DataFrame when the result is 2-dimensional data, for example, when we access multiple rows and columns:
# Multiple rows and columnsrows = ['Thu', 'Fri']cols=['Temperature','Wind']df.loc[rows, cols]
The equivalent iloc statement is:
rows = [3, 4]cols = [1, 2]df.iloc[rows, cols]
Slice (written as start:stop:step) is a powerful technique that allows selecting a range of data. It is very useful when we want to select everything in between two items.
With loc, we can use the syntax A:B to select data from label A to label B (Both A and B are included):
# Slicing column labelsrows=['Thu', 'Fri']df.loc[rows, 'Temperature':'Humidity' ]
# Slicing row labelscols = ['Temperature', 'Wind']df.loc['Mon':'Thu', cols]
We can use the syntax A:B:S to select data from label A to label B with step size S (Both A and B are included):
# Slicing with stepdf.loc['Mon':'Fri':2 , :]
With iloc, we can also use the syntax n:m to select data from position n (included) to position m (excluded). However, the main difference here is that the endpoint (m) is excluded from the iloc result.
For example, selecting columns from position 0 up to 3 (excluded):
df.iloc[[1, 2], 0 : 3]
Similarly, we can use the syntax n:m:s to select data from position n (included) to position m (excluded) with step size s. Note that the endpoint m is excluded.
df.iloc[0:4:2, :]
loc with conditions
Often we would like to filter the data based on conditions. For example, we may need to find the rows where humidity is greater than 50.
With loc, we just need to pass the condition to the loc statement.
# One conditiondf.loc[df.Humidity > 50, :]
Sometimes, we may need to use multiple conditions to filter our data. For example, find all the rows where humidity is more than 50 and the weather is Shower:
## multiple conditionsdf.loc[ (df.Humidity > 50) & (df.Weather == 'Shower'), ['Temperature','Wind'],]
iloc with conditions
For iloc, we will get a ValueError if we pass the condition straight into the statement:
# Getting ValueErrordf.iloc[df.Humidity > 50, :]
We get the error because iloc cannot accept a boolean Series. It only accepts a boolean list. We can use the list() function to convert a Series into a boolean list.
# Single conditiondf.iloc[list(df.Humidity > 50)]
Similarly, we can use list() to convert the output of multiple conditions into a boolean list:
## multiple conditionsdf.iloc[ list((df.Humidity > 50) & (df.Weather == 'Shower')), :,]
loc with callable
loc accepts a callable as an indexer. The callable must be a function with one argument that returns valid output for indexing.
For example to select columns
# Selecting columnsdf.loc[:, lambda df: ['Humidity', 'Wind']]
And to filter data with a callable:
# With conditiondf.loc[lambda df: df.Humidity > 50, :]
iloc with callable
iloc can also take a callable as an indexer.
df.iloc[lambda df: [0,1], :]
To filter data with callable, iloc will require list() to convert the output of conditions into a boolean list:
df.iloc[lambda df: list(df.Humidity > 50), :]
For demonstration, let’s create a DataFrame with 0-based integers as headers and index labels.
df = pd.read_csv( 'data/data.csv', header=None, skiprows=[0],)
With header=None, Pandas will generate 0-based integer values as headers. With skiprows=[0], the original header row (Weather, Temperature, etc.) that we have been using is skipped.
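As a quick, hedged check (assuming the CSV has the five columns Day, Weather, Temperature, Wind, and Humidity and seven rows of data), you can confirm that both axes now carry 0-based integer labels:

# Both the row index and the column headers are now plain 0-based integers,
# which is why integers passed to loc are treated as labels below.
print(df.columns)  # e.g. RangeIndex(start=0, stop=5, step=1) -> column labels 0..4
print(df.index)    # e.g. RangeIndex(start=0, stop=7, step=1) -> row labels 0..6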
Now, loc, a label-based data selector, can accept a single integer and a list of integer values. For example:
>>> df.loc[1, 2]19.67>>> df.loc[1, [1, 2]]1 Sunny2 19.67Name: 1, dtype: object
The reason these work is that the integer values (1 and 2) are interpreted as labels of the index. This usage is not an integer position along the index, which can be a bit confusing.
In this case, loc and iloc are interchangeable when selecting via a single value or a list of values.
>>> df.loc[1, 2] == df.iloc[1, 2]True>>> df.loc[1, [1, 2]] == df.iloc[1, [1, 2]]1 True2 TrueName: 1, dtype: bool
Note that loc and iloc will return different results when selecting via slice and conditions. They are essentially different because (a short sketch after the two points below illustrates both):
slice: endpoint is excluded from iloc result, but included in loc
conditions: loc accepts boolean Series, but iloc can only accept a boolean list.
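To make the two differences above concrete, here is a minimal sketch. It reloads the labeled version of the weather data (Day as the index, as in the earlier sections), since df currently holds the integer-labeled variant:

# Reload the label-indexed version used in the earlier sections
df_label = pd.read_csv('data/data.csv', index_col=['Day'])

# 1) Slicing: loc includes the endpoint, iloc excludes it
print(len(df_label.loc['Mon':'Wed']))  # 3 -> 'Wed' is included
print(len(df_label.iloc[0:2]))         # 2 -> position 2 is excluded

# 2) Conditions: loc accepts a boolean Series, iloc needs a plain boolean list
mask = df_label.Humidity > 50
print(df_label.loc[mask].equals(df_label.iloc[list(mask)]))  # True -> same rows either way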
Finally, here is a summary
loc is label based and allowed inputs are:
A single label 'A' or 2 (Note that 2 is interpreted as a label of the index.)
A list of labels ['A', 'B', 'C'] or [1, 2, 3] (Note that 1, 2, 3 are interpreted as labels of the index.)
A slice with labels 'A':'C' (Both are included)
Conditions, a boolean Series or a boolean array
A callable function with one argument
iloc is integer position based and allowed inputs are:
An integer e.g. 2.
A list or array of integers [1, 2, 3].
A slice with integers 1:7 (the endpoint 7 is excluded)
Conditions, but only accept a boolean array
A callable function with one argument
loc and iloc are interchangeable when the labels of Pandas DataFrame are 0-based integers
I hope this article will help you save time in learning Pandas data selection. I recommend you check out the documentation to learn about the other things you can do.
Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning.
Pandas cut() function for transforming numerical data into categorical data
Using Pandas method chaining to improve code readability
How to do a Custom Sort on Pandas DataFrame
All the Pandas shift() you should know for data analysis
When to use Pandas transform() function
Pandas concat() tricks you should know
Difference between apply() and transform() in Pandas
All the Pandas merge() you should know
Working with datetime in Pandas DataFrame
Pandas read_csv() tricks you should know
4 tricks you should know to parse date columns with Pandas read_csv()
More tutorials can be found on my GitHub.
|
[
{
"code": null,
"e": 327,
"s": 172,
"text": "When it comes to select data on a DataFrame, Pandas loc and iloc are two top favorites. They are quick, fast, easy to read, and sometimes interchangeable."
},
{
"code": null,
"e": 521,
"s": 327,
"text": "In this article, we’ll explore the differences between loc and iloc, take a looks at their similarities, and check how to perform data selection with them. We will go over the following topics:"
},
{
"code": null,
"e": 749,
"s": 521,
"text": "Differences between loc and ilocSelecting via a single valueSelecting via a list of valuesSelecting a range of data via sliceSelecting via conditions and callableloc and iloc are interchangeable when labels are 0-based integers"
},
{
"code": null,
"e": 782,
"s": 749,
"text": "Differences between loc and iloc"
},
{
"code": null,
"e": 811,
"s": 782,
"text": "Selecting via a single value"
},
{
"code": null,
"e": 842,
"s": 811,
"text": "Selecting via a list of values"
},
{
"code": null,
"e": 878,
"s": 842,
"text": "Selecting a range of data via slice"
},
{
"code": null,
"e": 916,
"s": 878,
"text": "Selecting via conditions and callable"
},
{
"code": null,
"e": 982,
"s": 916,
"text": "loc and iloc are interchangeable when labels are 0-based integers"
},
{
"code": null,
"e": 1029,
"s": 982,
"text": "Please check out Notebook for the source code."
},
{
"code": null,
"e": 1075,
"s": 1029,
"text": "The main distinction between loc and iloc is:"
},
{
"code": null,
"e": 1187,
"s": 1075,
"text": "loc is label-based, which means that you have to specify rows and columns based on their row and column labels."
},
{
"code": null,
"e": 1320,
"s": 1187,
"text": "iloc is integer position-based, so you have to specify rows and columns by their integer position values (0-based integer position)."
},
{
"code": null,
"e": 1386,
"s": 1320,
"text": "Here are some differences and similarities between loc and iloc :"
},
{
"code": null,
"e": 1473,
"s": 1386,
"text": "For demonstration, we create a DataFrame and load it with the Day column as the index."
},
{
"code": null,
"e": 1526,
"s": 1473,
"text": "df = pd.read_csv('data/data.csv', index_col=['Day'])"
},
{
"code": null,
"e": 1630,
"s": 1526,
"text": "Both loc and iloc allow input to be a single value. We can use the following syntax for data selection:"
},
{
"code": null,
"e": 1659,
"s": 1630,
"text": "loc[row_label, column_label]"
},
{
"code": null,
"e": 1695,
"s": 1659,
"text": "iloc[row_position, column_position]"
},
{
"code": null,
"e": 1772,
"s": 1695,
"text": "For example, let’s say we would like to retrieve Friday’s temperature value."
},
{
"code": null,
"e": 1850,
"s": 1772,
"text": "With loc, we can pass the row label 'Fri' and the column label 'Temperature'."
},
{
"code": null,
"e": 1917,
"s": 1850,
"text": "# To get Friday's temperature>>> df.loc['Fri', 'Temperature']10.51"
},
{
"code": null,
"e": 2002,
"s": 1917,
"text": "The equivalent iloc statement should take the row number 4 and the column number 1 ."
},
{
"code": null,
"e": 2058,
"s": 2002,
"text": "# The equivalent `iloc` statement>>> df.iloc[4, 1]10.51"
},
{
"code": null,
"e": 2126,
"s": 2058,
"text": "We can also use : to return all data. For example, to get all rows:"
},
{
"code": null,
"e": 2342,
"s": 2126,
"text": "# To get all rows>>> df.loc[:, 'Temperature']DayMon 12.79Tue 19.67Wed 17.51Thu 14.44Fri 10.51Sat 11.07Sun 17.50Name: Temperature, dtype: float64# The equivalent `iloc` statement>>> df.iloc[:, 1]"
},
{
"code": null,
"e": 2366,
"s": 2342,
"text": "And to get all columns:"
},
{
"code": null,
"e": 2565,
"s": 2366,
"text": "# To get all columns>>> df.loc['Fri', :]Weather ShowerTemperature 10.51Wind 26Humidity 79Name: Fri, dtype: object# The equivalent `iloc` statement>>> df.iloc[4, :]"
},
{
"code": null,
"e": 2680,
"s": 2565,
"text": "Note that the above 2 outputs are Series. loc and iloc will return a Series when the result is 1-dimensional data."
},
{
"code": null,
"e": 2752,
"s": 2680,
"text": "We can pass a list of labels to loc to select multiple rows or columns:"
},
{
"code": null,
"e": 2993,
"s": 2752,
"text": "# Multiple rows>>> df.loc[['Thu', 'Fri'], 'Temperature']DayThu 14.44Fri 10.51Name: Temperature, dtype: float64# Multiple columns>>> df.loc['Fri', ['Temperature', 'Wind']]Temperature 10.51Wind 26Name: Fri, dtype: object"
},
{
"code": null,
"e": 3134,
"s": 2993,
"text": "Similarly, a list of integer values can be passed to iloc to select multiple rows or columns. Here are the equivalent statements using iloc:"
},
{
"code": null,
"e": 3303,
"s": 3134,
"text": ">>> df.iloc[[3, 4], 1]DayThu 14.44Fri 10.51Name: Temperature, dtype: float64>>> df.iloc[4, [1, 2]]Temperature 10.51Wind 26Name: Fri, dtype: object"
},
{
"code": null,
"e": 3382,
"s": 3303,
"text": "All the above outputs are Series because their results are 1-dimensional data."
},
{
"code": null,
"e": 3501,
"s": 3382,
"text": "The output will be a DataFrame when the result is 2-dimensional data, for example, to access multiple rows and columns"
},
{
"code": null,
"e": 3595,
"s": 3501,
"text": "# Multiple rows and columnsrows = ['Thu', 'Fri']cols=['Temperature','Wind']df.loc[rows, cols]"
},
{
"code": null,
"e": 3629,
"s": 3595,
"text": "The equivalent iloc statement is:"
},
{
"code": null,
"e": 3675,
"s": 3629,
"text": "rows = [3, 4]cols = [1, 2]df.iloc[rows, cols]"
},
{
"code": null,
"e": 3847,
"s": 3675,
"text": "Slice (written as start:stop:step) is a powerful technique that allows selecting a range of data. It is very useful when we want to select everything in between two items."
},
{
"code": null,
"e": 3951,
"s": 3847,
"text": "With loc, we can use the syntax A:B to select data from label A to label B (Both A and B are included):"
},
{
"code": null,
"e": 4033,
"s": 3951,
"text": "# Slicing column labelsrows=['Thu', 'Fri']df.loc[rows, 'Temperature':'Humidity' ]"
},
{
"code": null,
"e": 4109,
"s": 4033,
"text": "# Slicing row labelscols = ['Temperature', 'Wind']df.loc['Mon':'Thu', cols]"
},
{
"code": null,
"e": 4222,
"s": 4109,
"text": "We can use the syntax A:B:S to select data from label A to label B with step size S (Both A and B are included):"
},
{
"code": null,
"e": 4267,
"s": 4222,
"text": "# Slicing with stepdf.loc['Mon':'Fri':2 , :]"
},
{
"code": null,
"e": 4470,
"s": 4267,
"text": "With iloc, we can also use the syntax n:m to select data from position n (included) to position m (excluded). However, the main difference here is that the endpoint (m) is excluded from the iloc result."
},
{
"code": null,
"e": 4537,
"s": 4470,
"text": "For example, selecting columns from position 0 up to 3 (excluded):"
},
{
"code": null,
"e": 4560,
"s": 4537,
"text": "df.iloc[[1, 2], 0 : 3]"
},
{
"code": null,
"e": 4723,
"s": 4560,
"text": "Similarly, we can use the syntax n:m:s to select data from position n (included) to position m (excluded) with step size s. Notes that the endpoint m is excluded."
},
{
"code": null,
"e": 4741,
"s": 4723,
"text": "df.iloc[0:4:2, :]"
},
{
"code": null,
"e": 4761,
"s": 4741,
"text": "loc with conditions"
},
{
"code": null,
"e": 4898,
"s": 4761,
"text": "Often we would like to filter the data based on conditions. For example, we may need to find the rows where humidity is greater than 50."
},
{
"code": null,
"e": 4965,
"s": 4898,
"text": "With loc, we just need to pass the condition to the loc statement."
},
{
"code": null,
"e": 5008,
"s": 4965,
"text": "# One conditiondf.loc[df.Humidity > 50, :]"
},
{
"code": null,
"e": 5167,
"s": 5008,
"text": "Sometimes, we may need to use multiple conditions to filter our data. For example, find all the rows where humidity is more than 50 and the weather is Shower:"
},
{
"code": null,
"e": 5276,
"s": 5167,
"text": "## multiple conditionsdf.loc[ (df.Humidity > 50) & (df.Weather == 'Shower'), ['Temperature','Wind'],]"
},
{
"code": null,
"e": 5297,
"s": 5276,
"text": "iloc with conditions"
},
{
"code": null,
"e": 5383,
"s": 5297,
"text": "For iloc, we will get a ValueError if pass the condition straight into the statement:"
},
{
"code": null,
"e": 5432,
"s": 5383,
"text": "# Getting ValueErrordf.iloc[df.Humidity > 50, :]"
},
{
"code": null,
"e": 5598,
"s": 5432,
"text": "We get the error because iloc cannot accept a boolean Series. It only accepts a boolean list. We can use the list() function to convert a Series into a boolean list."
},
{
"code": null,
"e": 5648,
"s": 5598,
"text": "# Single conditiondf.iloc[list(df.Humidity > 50)]"
},
{
"code": null,
"e": 5743,
"s": 5648,
"text": "Similarly, we can use list() to convert the output of multiple conditions into a boolean list:"
},
{
"code": null,
"e": 5838,
"s": 5743,
"text": "## multiple conditionsdf.iloc[ list((df.Humidity > 50) & (df.Weather == 'Shower')), :,]"
},
{
"code": null,
"e": 5856,
"s": 5838,
"text": "loc with callable"
},
{
"code": null,
"e": 5984,
"s": 5856,
"text": "loc accepts a callable as an indexer. The callable must be a function with one argument that returns valid output for indexing."
},
{
"code": null,
"e": 6014,
"s": 5984,
"text": "For example to select columns"
},
{
"code": null,
"e": 6076,
"s": 6014,
"text": "# Selecting columnsdf.loc[:, lambda df: ['Humidity', 'Wind']]"
},
{
"code": null,
"e": 6112,
"s": 6076,
"text": "And to filter data with a callable:"
},
{
"code": null,
"e": 6167,
"s": 6112,
"text": "# With conditiondf.loc[lambda df: df.Humidity > 50, :]"
},
{
"code": null,
"e": 6186,
"s": 6167,
"text": "iloc with callable"
},
{
"code": null,
"e": 6231,
"s": 6186,
"text": "iloc can also take a callable as an indexer."
},
{
"code": null,
"e": 6260,
"s": 6231,
"text": "df.iloc[lambda df: [0,1], :]"
},
{
"code": null,
"e": 6372,
"s": 6260,
"text": "To filter data with callable, iloc will require list() to convert the output of conditions into a boolean list:"
},
{
"code": null,
"e": 6418,
"s": 6372,
"text": "df.iloc[lambda df: list(df.Humidity > 50), :]"
},
{
"code": null,
"e": 6513,
"s": 6418,
"text": "For demonstration, let’s create a DataFrame with 0-based integers as headers and index labels."
},
{
"code": null,
"e": 6587,
"s": 6513,
"text": "df = pd.read_csv( 'data/data.csv', header=None, skiprows=[0],)"
},
{
"code": null,
"e": 6760,
"s": 6587,
"text": "With header=None, the Pandas will generate 0-based integer values as headers. With skiprows=[0], those headers Weather, Temperature, etc we have been using will be skipped."
},
{
"code": null,
"e": 6870,
"s": 6760,
"text": "Now, loc, a label-based data selector, can accept a single integer and a list of integer values. For example:"
},
{
"code": null,
"e": 6955,
"s": 6870,
"text": ">>> df.loc[1, 2]19.67>>> df.loc[1, [1, 2]]1 Sunny2 19.67Name: 1, dtype: object"
},
{
"code": null,
"e": 7143,
"s": 6955,
"text": "The reason they are working is that those integer values (1 and 2) are interpreted as labels of the index. This use is not an integer position along with the index and is a bit confusing."
},
{
"code": null,
"e": 7245,
"s": 7143,
"text": "In this case, loc and iloc are interchangeable when selecting via a single value or a list of values."
},
{
"code": null,
"e": 7364,
"s": 7245,
"text": ">>> df.loc[1, 2] == df.iloc[1, 2]True>>> df.loc[1, [1, 2]] == df.iloc[1, [1, 2]]1 True2 TrueName: 1, dtype: bool"
},
{
"code": null,
"e": 7498,
"s": 7364,
"text": "Note that loc and iloc will return different results when selecting via slice and conditions. They are essentially different because:"
},
{
"code": null,
"e": 7564,
"s": 7498,
"text": "slice: endpoint is excluded from iloc result, but included in loc"
},
{
"code": null,
"e": 7645,
"s": 7564,
"text": "conditions: loc accepts boolean Series, but iloc can only accept a boolean list."
},
{
"code": null,
"e": 7672,
"s": 7645,
"text": "Finally, here is a summary"
},
{
"code": null,
"e": 7715,
"s": 7672,
"text": "loc is label based and allowed inputs are:"
},
{
"code": null,
"e": 7793,
"s": 7715,
"text": "A single label 'A' or 2 (Note that 2 is interpreted as a label of the index.)"
},
{
"code": null,
"e": 7899,
"s": 7793,
"text": "A list of labels ['A', 'B', 'C'] or [1, 2, 3] (Note that 1, 2, 3 are interpreted as labels of the index.)"
},
{
"code": null,
"e": 7947,
"s": 7899,
"text": "A slice with labels 'A':'C' (Both are included)"
},
{
"code": null,
"e": 7995,
"s": 7947,
"text": "Conditions, a boolean Series or a boolean array"
},
{
"code": null,
"e": 8033,
"s": 7995,
"text": "A callable function with one argument"
},
{
"code": null,
"e": 8088,
"s": 8033,
"text": "iloc is integer position based and allowed inputs are:"
},
{
"code": null,
"e": 8107,
"s": 8088,
"text": "An integer e.g. 2."
},
{
"code": null,
"e": 8146,
"s": 8107,
"text": "A list or array of integers [1, 2, 3]."
},
{
"code": null,
"e": 8200,
"s": 8146,
"text": "A slice with integers 1:7(the endpoint 7 is excluded)"
},
{
"code": null,
"e": 8244,
"s": 8200,
"text": "Conditions, but only accept a boolean array"
},
{
"code": null,
"e": 8282,
"s": 8244,
"text": "A callable function with one argument"
},
{
"code": null,
"e": 8372,
"s": 8282,
"text": "loc and iloc are interchangeable when the labels of Pandas DataFrame are 0-based integers"
},
{
"code": null,
"e": 8540,
"s": 8372,
"text": "I hope this article will help you to save time in learning Pandas data selection. I recommend you to check out the documentation to know about other things you can do."
},
{
"code": null,
"e": 8692,
"s": 8540,
"text": "Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning."
},
{
"code": null,
"e": 8768,
"s": 8692,
"text": "Pandas cut() function for transforming numerical data into categorical data"
},
{
"code": null,
"e": 8825,
"s": 8768,
"text": "Using Pandas method chaining to improve code readability"
},
{
"code": null,
"e": 8869,
"s": 8825,
"text": "How to do a Custom Sort on Pandas DataFrame"
},
{
"code": null,
"e": 8926,
"s": 8869,
"text": "All the Pandas shift() you should know for data analysis"
},
{
"code": null,
"e": 8966,
"s": 8926,
"text": "When to use Pandas transform() function"
},
{
"code": null,
"e": 9005,
"s": 8966,
"text": "Pandas concat() tricks you should know"
},
{
"code": null,
"e": 9058,
"s": 9005,
"text": "Difference between apply() and transform() in Pandas"
},
{
"code": null,
"e": 9097,
"s": 9058,
"text": "All the Pandas merge() you should know"
},
{
"code": null,
"e": 9139,
"s": 9097,
"text": "Working with datetime in Pandas DataFrame"
},
{
"code": null,
"e": 9180,
"s": 9139,
"text": "Pandas read_csv() tricks you should know"
},
{
"code": null,
"e": 9250,
"s": 9180,
"text": "4 tricks you should know to parse date columns with Pandas read_csv()"
}
] |
Bootstrap 4 .flex-row class
|
Use the flex-row class in Bootstrap to display flex items horizontally.
Achieve the following using the flex-row class −
<div class="d-flex flex-row bg-primary mb-3">
Now add the flex items inside the container to align them horizontally −
<div class="d-flex flex-row bg-primary mb-3">
<div class="p-2 bg-primary">Audi</div>
<div class="p-2 bg-danger">BMW</div>
<div class="p-2 bg-info">Benz</div>
</div>
You can try to run the following code to implement the flex-row class −
Live Demo
<!DOCTYPE html>
<html lang="en">
<head>
<title>Bootstrap Example</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js"></script>
</head>
<body>
<div class="container mt-3">
<h2>Flex Row</h2>
<div class="d-flex flex-row bg-primary mb-3">
<div class="p-2 bg-primary">Audi</div>
<div class="p-2 bg-danger">BMW</div>
<div class="p-2 bg-info">Benz</div>
</div>
</div>
</body>
</html>
|
[
{
"code": null,
"e": 1133,
"s": 1062,
"text": "Use the flex-row class in Bootstrap to dispay flex items horizontally."
},
{
"code": null,
"e": 1182,
"s": 1133,
"text": "Achieve the following using the flex-row class −"
},
{
"code": null,
"e": 1228,
"s": 1182,
"text": "<div class=\"d-flex flex-row bg-primary mb-3\">"
},
{
"code": null,
"e": 1296,
"s": 1228,
"text": "Now add the flex-items in the class to allow horizontal alignment −"
},
{
"code": null,
"e": 1467,
"s": 1296,
"text": "<div class=\"d-flex flex-row bg-primary mb-3\">\n <div class=\"p-2 bg-primary\">Audi</div>\n <div class=\"p-2 bg-danger\">BMW</div>\n <div class=\"p-2 bg-info\">Benz</div>\n</div>"
},
{
"code": null,
"e": 1539,
"s": 1467,
"text": "You can try to run the following code to implement the flex-row class −"
},
{
"code": null,
"e": 1549,
"s": 1539,
"text": "Live Demo"
},
{
"code": null,
"e": 2306,
"s": 1549,
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <title>Bootstrap Example</title>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css\">\n <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n <script src=\"https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js\"></script>\n </head>\n\n<body>\n <div class=\"container mt-3\">\n <h2>Flex Row</h2>\n <div class=\"d-flex flex-row bg-primary mb-3\">\n <div class=\"p-2 bg-primary\">Audi</div>\n <div class=\"p-2 bg-danger\">BMW</div>\n <div class=\"p-2 bg-info\">Benz</div>\n </div>\n </div>\n\n</body>\n</html>"
}
] |
Getting screens height and width using Tkinter Python
|
Tkinter is the library that gives GUI programming capability to Python programs. As part of GUI creation we need to create screen layouts of different sizes and depths. In this program we will see how to get the size of the screen in terms of pixels as well as in mm. We can also get the color depth of the screen, in bits per pixel. There are various winfo methods available as part of Tkinter which we use for this.
from tkinter import *
# creating tkinter window
base = Tk()
#screen's length and width in pixels and mm
length_1= base.winfo_screenheight()
width_1= base.winfo_screenwidth()
length_2 = base.winfo_screenmmheight()
width_2 = base.winfo_screenmmwidth()
#screen Depth
screendepth = base.winfo_screendepth()
print("\n width x length (in pixels) = ",(width_1,length_1))
print("\n width x length (in mm) = ", (width_2, length_2))
print("\n Screen depth = ",screendepth)
mainloop()
Running the above code gives us the following result
width x length (in pixels) = (1600, 900)
width x length (in mm) = (423, 238)
Screen depth = 32
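As a small extension, the pixel and mm values returned by these winfo methods can be combined to estimate the screen DPI (dots per inch). The sketch below is only an approximation based on the values Tkinter reports (1 inch = 25.4 mm), not an exact calibration:
from tkinter import *
base = Tk()
# width in pixels and in millimetres, as reported by Tkinter
width_px = base.winfo_screenwidth()
width_mm = base.winfo_screenmmwidth()
# 25.4 mm per inch, so pixels divided by inches gives an approximate DPI
dpi = width_px / (width_mm / 25.4)
print("Approximate screen DPI =", round(dpi, 1))
base.destroy()
With the sample output above (1600 px and 423 mm wide), this works out to roughly 96 DPI.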
|
[
{
"code": null,
"e": 1463,
"s": 1062,
"text": "Tkinter is the library which gives GUI programming capability to python programs. As part of GUI creation we need to create screen layouts of different sizes and depth. In this program we will see how to calculate the size of a screen in terms of pixels as well as in mm. We can also get the depth of the screen in pixels. There are various methods available as part of Tkinter which we use for this."
},
{
"code": null,
"e": 1937,
"s": 1463,
"text": "from tkinter import *\n# creating tkinter window\nbase = Tk()\n#screen's length and width in pixels and mm\nlength_1= base.winfo_screenheight()\nwidth_1= base.winfo_screenwidth()\nlength_2 = base.winfo_screenmmheight()\nwidth_2 = base.winfo_screenmmwidth()\n#screen Depth\nscreendepth = base.winfo_screendepth()\nprint(\"\\n width x length (in pixels) = \",(width_1,length_1))\nprint(\"\\n width x length (in mm) = \", (width_2, length_2))\nprint(\"\\n Screen depth = \",screendepth)\nmainloop()"
},
{
"code": null,
"e": 1990,
"s": 1937,
"text": "Running the above code gives us the following result"
},
{
"code": null,
"e": 2085,
"s": 1990,
"text": "width x length (in pixels) = (1600, 900)\nwidth x length (in mm) = (423, 238)\nScreen depth = 32"
}
] |
Bubble plot with ggplot2 in R - GeeksforGeeks
|
18 Jul, 2021
A bubble plot is a data visualization that displays multiple circles (bubbles) in a two-dimensional plot, much like a scatter plot. A bubble plot is primarily used to depict relationships between numeric variables.
With ggplot2, bubble plots can be built using the geom_point() function. At least three variables must be provided to aes(): x, y, and size. The legend will automatically be displayed by ggplot2.
Syntax:
ggplot(data, aes(x, y, size)) + geom_point()
size, in the above syntax, takes the name of the attribute whose values determine how the sizes of the bubbles differ.
Example: Creating bubble plot
R
# importing the ggplot2 librarylibrary(ggplot2) # creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11,9,60)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67,36,54)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7,41,23) # creating the dataframe from the above columnsdata <- data.frame(x, y, r) ggplot(data, aes(x = x, y = y,size = r))+ geom_point(alpha = 0.7)
Output:
In the above example, all the bubbles appear in the same color, making the graph hard to interpret. Colors can be added to the plot to distinguish the bubbles from one another.
Example: Adding colors to the bubble plot
R
# importing the ggplot2 librarylibrary(ggplot2) # creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) color <- c(rep("color1", 1), rep("color2", 2), rep("Color3", 3), rep("color4", 4), rep("color5", 5))# creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) ggplot(data, aes(x = x, y = y,size = r, color=color))+geom_point(alpha = 0.7)
Output:
Colors can be customized using a palette. To change colors with an RColorBrewer palette, add scale_color_brewer() to the ggplot2 plot code.
Syntax:
scale_color_brewer(palette=<Palette-Name>)
Example: Customizing colors of bubble plot in R
R
# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) colors <- c(1,2,3,1,2,3,1,1,2,3,1,2,2,3,3) # creating the dataframe from the # above columnsdata <- data.frame(x, y, r, colors) # importing the ggplot2 library and# RColorBrewerlibrary(RColorBrewer)library(ggplot2) # Draw plotggplot(data, aes(x = x, y = y,size = r, color=as.factor(colors))) + geom_point(alpha = 0.7)+ scale_color_brewer(palette="Spectral")
Output:
To change the size of bubbles in the plot i.e. giving the range of sizes from smallest bubble to biggest bubble, we use scale_size(). This allows setting the size of the smallest and the biggest circles using the range argument.
Syntax:
scale_size(range =< range-vector>)
Example: Changing size of the bubble plot
R
# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) color <- c(rep("color1", 1), rep("color2", 2), rep("Color3", 3), rep("color4", 4), rep("color5", 5))sizeRange <- c(2,18) # creating the dataframe from the above # columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2) ggplot(data, aes(x = x, y = y,size = r, color=color)) + geom_point(alpha = 0.7) + scale_size(range = sizeRange, name="index")
Output:
To alter the labels on the axis, add the code +labs(y= “y axis name”, x = “x axis name”) to your line of basic ggplot code.
Example: Altering labels of bubble plot in R
R
# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) library("RColorBrewer")color <- brewer.pal(n = 5, name = "RdBu") # creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2)ggplot(data, aes(x = x, y = y,size = r, color=color)) +geom_point(alpha = 0.7) + labs(title= "Title of Graph", y="Y-Axis label", x = "X-Axis Label")
Output:
To change legend position in ggplot2 add theme() to basic ggplot2 code.
Syntax:
theme(legend.position=<desired-position>)
Example: Changing legend position of bubble plot
R
# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) library("RColorBrewer")color <- brewer.pal(n = 5, name = "RdBu") # creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2) ggplot(data, aes(x = x, y = y,size = r, color=color)) + geom_point(alpha = 0.7) + labs(title= "Title of Graph",y="Y-Axis label", x = "X-Axis Label") +theme(legend.position="left")
Output:
Picked
R-ggplot
R Language
|
[
{
"code": null,
"e": 25268,
"s": 25240,
"text": "\n18 Jul, 2021"
},
{
"code": null,
"e": 25502,
"s": 25268,
"text": " A bubble plot is a data visualization that helps to displays multiple circles (bubbles) in a two-dimensional plot as same in a scatter plot. A bubble plot is primarily used to depict and show relationships between numeric variables."
},
{
"code": null,
"e": 25702,
"s": 25502,
"text": "With ggplot2, bubble plots can be built using geom_point() function. At least three variables must be provided to aes() that are x, y, and size. The legend will automatically be displayed by ggplot2."
},
{
"code": null,
"e": 25710,
"s": 25702,
"text": "Syntax:"
},
{
"code": null,
"e": 25755,
"s": 25710,
"text": "ggplot(data, aes(x, y, size)) + geom_point()"
},
{
"code": null,
"e": 25887,
"s": 25755,
"text": "Size in the above syntax, will take in the name of one of the attributes based on whose values the size of the bubbles will differ."
},
{
"code": null,
"e": 25917,
"s": 25887,
"text": "Example: Creating bubble plot"
},
{
"code": null,
"e": 25919,
"s": 25917,
"text": "R"
},
{
"code": "# importing the ggplot2 librarylibrary(ggplot2) # creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11,9,60)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67,36,54)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7,41,23) # creating the dataframe from the above columnsdata <- data.frame(x, y, r) ggplot(data, aes(x = x, y = y,size = r))+ geom_point(alpha = 0.7)",
"e": 26305,
"s": 25919,
"text": null
},
{
"code": null,
"e": 26313,
"s": 26305,
"text": "Output:"
},
{
"code": null,
"e": 26496,
"s": 26313,
"text": "In the above example, all the bubbles appear in the same color, hence making the graph hard to interpret. The colors can be added to plot to make the bubbles differ from one another."
},
{
"code": null,
"e": 26538,
"s": 26496,
"text": "Example: Adding colors to the bubble plot"
},
{
"code": null,
"e": 26540,
"s": 26538,
"text": "R"
},
{
"code": "# importing the ggplot2 librarylibrary(ggplot2) # creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) color <- c(rep(\"color1\", 1), rep(\"color2\", 2), rep(\"Color3\", 3), rep(\"color4\", 4), rep(\"color5\", 5))# creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) ggplot(data, aes(x = x, y = y,size = r, color=color))+geom_point(alpha = 0.7)",
"e": 27048,
"s": 26540,
"text": null
},
{
"code": null,
"e": 27056,
"s": 27048,
"text": "Output:"
},
{
"code": null,
"e": 27195,
"s": 27056,
"text": "Color can be customized using the palette. To change colors with RColorBrewer palette add scale_color_brewer() to the plot code of ggplot2"
},
{
"code": null,
"e": 27203,
"s": 27195,
"text": "Syntax:"
},
{
"code": null,
"e": 27246,
"s": 27203,
"text": "scale_color_brewer(palette=<Palette-Name>)"
},
{
"code": null,
"e": 27294,
"s": 27246,
"text": "Example: Customizing colors of bubble plot in R"
},
{
"code": null,
"e": 27296,
"s": 27294,
"text": "R"
},
{
"code": "# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) colors <- c(1,2,3,1,2,3,1,1,2,3,1,2,2,3,3) # creating the dataframe from the # above columnsdata <- data.frame(x, y, r, colors) # importing the ggplot2 library and# RColorBrewerlibrary(RColorBrewer)library(ggplot2) # Draw plotggplot(data, aes(x = x, y = y,size = r, color=as.factor(colors))) + geom_point(alpha = 0.7)+ scale_color_brewer(palette=\"Spectral\")",
"e": 27835,
"s": 27296,
"text": null
},
{
"code": null,
"e": 27843,
"s": 27835,
"text": "Output:"
},
{
"code": null,
"e": 28072,
"s": 27843,
"text": "To change the size of bubbles in the plot i.e. giving the range of sizes from smallest bubble to biggest bubble, we use scale_size(). This allows setting the size of the smallest and the biggest circles using the range argument."
},
{
"code": null,
"e": 28080,
"s": 28072,
"text": "Syntax:"
},
{
"code": null,
"e": 28115,
"s": 28080,
"text": "scale_size(range =< range-vector>)"
},
{
"code": null,
"e": 28157,
"s": 28115,
"text": "Example: Changing size of the bubble plot"
},
{
"code": null,
"e": 28159,
"s": 28157,
"text": "R"
},
{
"code": "# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) color <- c(rep(\"color1\", 1), rep(\"color2\", 2), rep(\"Color3\", 3), rep(\"color4\", 4), rep(\"color5\", 5))sizeRange <- c(2,18) # creating the dataframe from the above # columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2) ggplot(data, aes(x = x, y = y,size = r, color=color)) + geom_point(alpha = 0.7) + scale_size(range = sizeRange, name=\"index\")",
"e": 28770,
"s": 28159,
"text": null
},
{
"code": null,
"e": 28778,
"s": 28770,
"text": "Output:"
},
{
"code": null,
"e": 28902,
"s": 28778,
"text": "To alter the labels on the axis, add the code +labs(y= “y axis name”, x = “x axis name”) to your line of basic ggplot code."
},
{
"code": null,
"e": 28948,
"s": 28902,
"text": "Example: Altering labels of bubble plot in R "
},
{
"code": null,
"e": 28950,
"s": 28948,
"text": "R"
},
{
"code": "# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) library(\"RColorBrewer\")color <- brewer.pal(n = 5, name = \"RdBu\") # creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2)ggplot(data, aes(x = x, y = y,size = r, color=color)) +geom_point(alpha = 0.7) + labs(title= \"Title of Graph\", y=\"Y-Axis label\", x = \"X-Axis Label\")",
"e": 29473,
"s": 28950,
"text": null
},
{
"code": null,
"e": 29481,
"s": 29473,
"text": "Output:"
},
{
"code": null,
"e": 29553,
"s": 29481,
"text": "To change legend position in ggplot2 add theme() to basic ggplot2 code."
},
{
"code": null,
"e": 29561,
"s": 29553,
"text": "Syntax:"
},
{
"code": null,
"e": 29603,
"s": 29561,
"text": "theme(legend.position=<desired-position>)"
},
{
"code": null,
"e": 29652,
"s": 29603,
"text": "Example: Changing legend position of bubble plot"
},
{
"code": null,
"e": 29654,
"s": 29652,
"text": "R"
},
{
"code": "# creating data set columnsx <- c(12,23,43,61,78,54,34,76,58,103,39,46,52,33,11)y <- c(12,54,34,76,54,23,43,61,78,23,12,34,56,98,67)r <- c(1,5,13,8,12,3,2,16,7,40,23,45,76,8,7) library(\"RColorBrewer\")color <- brewer.pal(n = 5, name = \"RdBu\") # creating the dataframe from the above columnsdata <- data.frame(x, y, r, color) # importing the ggplot2 librarylibrary(ggplot2) ggplot(data, aes(x = x, y = y,size = r, color=color)) + geom_point(alpha = 0.7) + labs(title= \"Title of Graph\",y=\"Y-Axis label\", x = \"X-Axis Label\") +theme(legend.position=\"left\")",
"e": 30210,
"s": 29654,
"text": null
},
{
"code": null,
"e": 30218,
"s": 30210,
"text": "Output:"
},
{
"code": null,
"e": 30225,
"s": 30218,
"text": "Picked"
},
{
"code": null,
"e": 30234,
"s": 30225,
"text": "R-ggplot"
},
{
"code": null,
"e": 30245,
"s": 30234,
"text": "R Language"
},
{
"code": null,
"e": 30343,
"s": 30245,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30395,
"s": 30343,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 30433,
"s": 30395,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 30468,
"s": 30433,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 30526,
"s": 30468,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 30575,
"s": 30526,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 30612,
"s": 30575,
"text": "How to import an Excel File into R ?"
},
{
"code": null,
"e": 30662,
"s": 30612,
"text": "How to filter R dataframe by multiple conditions?"
},
{
"code": null,
"e": 30705,
"s": 30662,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 30731,
"s": 30705,
"text": "Time Series Analysis in R"
}
] |
How to setup Python VirtualEnv - onlinetutorialspoint
|
In this tutorial we are going to see what a Python Virtual Environment is and how to set up Python VirtualEnv on different operating systems.
A Virtual Environment (virtualenv or venv) is a tool that keeps the libraries and dependencies required for a project separated from those of other projects. Using it, one can install the requirements specific to a Python project and prevent them from interfering with other projects.
Suppose you are working on a project that uses Django 3.1 and, at the same time, you want to run a Django application that uses the older 3.0 version. Over time, new features are added to any package and some features become deprecated, so each of the two applications needs its particular version of Django installed to work properly; otherwise there may be dependency issues.
Using virtual environments, one can install the packages separately for each application and keep them along with that project confined to a directory. Also, multiple environments can be activated simultaneously in different terminals.
It is a general practice to begin any Python project in a venv to prevent such issues in the future.
Ensure pip is installed and added to the PATH variable and then install virtualenv using the command:
pip install virtualenv
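To confirm that the installation succeeded, you can ask the tool to report its version (the exact output format varies between releases):
virtualenv --version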
Open the terminal or Command Prompt and navigate to the directory where the project needs to be created. Run this command to create a virtualenv with the name env.
virtualenv env
You can name the virtualenv whatever you wish, e.g. something generic like env or venv, or something project-specific such as testappenv or blogappenv.
This will create a folder with that name containing all the required scripts.
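For reference, the created folder typically looks roughly like the sketch below. The exact contents vary with the operating system and the virtualenv version (for example, Windows uses Scripts instead of bin), so treat this only as an orientation:
env/
  bin/          # activation scripts plus the environment's own python and pip (Scripts/ on Windows)
  lib/          # site-packages directory where the project's dependencies are installed
  pyvenv.cfg    # metadata about the base interpreter (written by recent virtualenv releases)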
Windows
env\Scripts\activate
Linux
source ./env/bin/activate
Once the virtualenv is activated, (env) will be visible at the start of the command-line prompt.
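If you prefer to confirm this programmatically, the minimal sketch below can be run inside the interpreter. It assumes Python 3.3+ and a recent virtualenv release, which (like the built-in venv module) sets sys.base_prefix; very old virtualenv versions exposed sys.real_prefix instead.
import sys
# Inside an activated environment, sys.prefix points into the env folder,
# while sys.base_prefix still points to the base Python installation.
print(sys.prefix)
print(sys.base_prefix)
print("In a virtual environment:", sys.prefix != sys.base_prefix)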
After the work in the Python environment is done, you can deactivate it using the command:
deactivate
When working on a shared project, or when you want to run your project on some other machine, it is essential to install the required versions of all the dependencies first. For this, pip has a feature called freeze that makes a list of all the installed packages, which can then be used to make a clone of the venv on another machine.
Once in an activated virtualenv run the command:
pip freeze > requirements.txt
This will list all the dependencies for that venv and store them in a file named requirements.txt.
The requirements.txt file is used for specifying which Python packages are required to run the project and is supposed to be present in its root directory.
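As an illustration, a requirements.txt produced by pip freeze might look like the lines below; the package names and version numbers here are only hypothetical placeholders, not part of the original article:
Django==3.1
requests==2.24.0
pytz==2020.1
Each line pins one package to an exact version, which is what later allows pip install -r requirements.txt to reproduce the same environment.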
The requirements.txt file is used to replicate any venv. Once this file (or list of dependencies) is available:
Create and activate a virtualenv
Run this command
pip install -r requirements.txt
This will install all the packages as per the version number given in requirements.txt in the newly created venv.
Project PyEnv
Happy Learning 🙂
|
[
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 339,
"s": 199,
"text": "In this tutorials we are going to see what is Python Virtual Environment and how to setup Python VirtualEnv in different operating systems."
},
{
"code": null,
"e": 616,
"s": 339,
"text": "Virtual Environment (virtual env or venv) is a tool that helps to keep the libraries and dependencies required for a project separated from others. Using this, one can install the requirements specific to a python project and prevent them from interfering with other projects."
},
{
"code": null,
"e": 1025,
"s": 616,
"text": "Consider you are working on a project that uses the Django version 3.1 and at the same time, you want to run a Django application that uses the older 3.0 version. With time, new features are added to any package and some features become deprecated, thus, both the different versions of the applications need that particular version of Django installed to work properly, else there might be dependency issues."
},
{
"code": null,
"e": 1261,
"s": 1025,
"text": "Using virtual environments, one can install the packages separately for each application and keep them along with that project confined to a directory. Also, multiple environments can be activated simultaneously in different terminals."
},
{
"code": null,
"e": 1357,
"s": 1261,
"text": "It is a general practice to begin any python project in a venv to prevent any issues in future."
},
{
"code": null,
"e": 1459,
"s": 1357,
"text": "Ensure pip is installed and added to the PATH variable and then install virtualenv using the command:"
},
{
"code": null,
"e": 1482,
"s": 1459,
"text": "pip install virtualenv"
},
{
"code": null,
"e": 1646,
"s": 1482,
"text": "Open the terminal or Command Prompt and navigate to the directory where the project needs to be created. Run this command to create a virtualenv with the name env."
},
{
"code": null,
"e": 1661,
"s": 1646,
"text": "virtualenv env"
},
{
"code": null,
"e": 1780,
"s": 1661,
"text": "You can name the virtualenv as you wish for eg. something like env, venv or project-specific: testappenv, blogappenv. "
},
{
"code": null,
"e": 1864,
"s": 1780,
"text": "This will create a folder with that name with all the required scripts within that."
},
{
"code": null,
"e": 1872,
"s": 1864,
"text": "Windows"
},
{
"code": null,
"e": 1893,
"s": 1872,
"text": "env\\Scripts\\activate"
},
{
"code": null,
"e": 1899,
"s": 1893,
"text": "Linux"
},
{
"code": null,
"e": 1925,
"s": 1899,
"text": "source ./env/bin/activate"
},
{
"code": null,
"e": 2018,
"s": 1925,
"text": "Once the virtualenv is activated, (env) will be visible on the starting of the command line."
},
{
"code": null,
"e": 2110,
"s": 2018,
"text": "After the work for the python environment is done, you can deactivate it using the command:"
},
{
"code": null,
"e": 2121,
"s": 2110,
"text": "deactivate"
},
{
"code": null,
"e": 2454,
"s": 2121,
"text": "When working on a shared project or when you want to run your project on some other machine, it is essential to install all the required version of the dependencies first. For this, pip has a feature called freeze to make a list of all the installed packages which can then be used to make a clone of the venv in some other machine."
},
{
"code": null,
"e": 2503,
"s": 2454,
"text": "Once in an activated virtualenv run the command:"
},
{
"code": null,
"e": 2533,
"s": 2503,
"text": "pip freeze > requirements.txt"
},
{
"code": null,
"e": 2632,
"s": 2533,
"text": "This will list all the dependencies for that venv and store them in a file named requirements.txt."
},
{
"code": null,
"e": 2783,
"s": 2632,
"text": "requirements.txt file is used for specifying what python packages are required to run the project and is supposed to be present in its root directory."
},
{
"code": null,
"e": 2895,
"s": 2783,
"text": "The requirements.txt file is used to replicate any venv. Once this file (or list of dependencies) is available:"
},
{
"code": null,
"e": 2928,
"s": 2895,
"text": "Create and activate a virtualenv"
},
{
"code": null,
"e": 3091,
"s": 2928,
"text": "Run this command\npip install -r requirements.txt\nThis will install all the packages as per the version number given in requirements.txt in the newly created venv."
},
{
"code": null,
"e": 3123,
"s": 3091,
"text": "pip install -r requirements.txt"
},
{
"code": null,
"e": 3237,
"s": 3123,
"text": "This will install all the packages as per the version number given in requirements.txt in the newly created venv."
},
{
"code": null,
"e": 3251,
"s": 3237,
"text": "Project PyEnv"
},
{
"code": null,
"e": 3268,
"s": 3251,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 3857,
"s": 3268,
"text": "\nPython Django Helloworld Example\nHow to upgrade Python PIP version on Windows\nWhere can I find Python PIP in windows ?\nPython Selenium HelloWorld Example\nHow to connect MySQL DB with Python\nSTS Gradle Setup Tutorials\nHow to pass Command line Arguments in Python\nHow to check whether a file exists python ?\nModes of Python Program\nPython raw_input read input from keyboard\nHow install Python on Windows 10\nPython Selenium Automate the Login Form\nPython – Selenium Download a File in Headless Mode\nHow to get Words Count in Python from a File\nWhat are Python default function parameters ?\n"
},
{
"code": null,
"e": 3890,
"s": 3857,
"text": "Python Django Helloworld Example"
},
{
"code": null,
"e": 3935,
"s": 3890,
"text": "How to upgrade Python PIP version on Windows"
},
{
"code": null,
"e": 3976,
"s": 3935,
"text": "Where can I find Python PIP in windows ?"
},
{
"code": null,
"e": 4011,
"s": 3976,
"text": "Python Selenium HelloWorld Example"
},
{
"code": null,
"e": 4047,
"s": 4011,
"text": "How to connect MySQL DB with Python"
},
{
"code": null,
"e": 4074,
"s": 4047,
"text": "STS Gradle Setup Tutorials"
},
{
"code": null,
"e": 4119,
"s": 4074,
"text": "How to pass Command line Arguments in Python"
},
{
"code": null,
"e": 4163,
"s": 4119,
"text": "How to check whether a file exists python ?"
},
{
"code": null,
"e": 4187,
"s": 4163,
"text": "Modes of Python Program"
},
{
"code": null,
"e": 4229,
"s": 4187,
"text": "Python raw_input read input from keyboard"
},
{
"code": null,
"e": 4262,
"s": 4229,
"text": "How install Python on Windows 10"
},
{
"code": null,
"e": 4302,
"s": 4262,
"text": "Python Selenium Automate the Login Form"
},
{
"code": null,
"e": 4353,
"s": 4302,
"text": "Python – Selenium Download a File in Headless Mode"
},
{
"code": null,
"e": 4398,
"s": 4353,
"text": "How to get Words Count in Python from a File"
}
] |
Kotlin Classes and Objects
|
Everything in Kotlin is associated with classes and objects, along with their properties and
functions. For example: in real life, a car is an object. The car has properties, such as
brand, weight and color, and
functions, such as drive and brake.
A Class is like an object constructor, or a "blueprint" for creating objects.
To create a class, use the class keyword, and specify the name of the class:
Create a Car class along with some properties (brand, model and year)
class Car {
var brand = ""
var model = ""
var year = 0
}
A property is basically a variable that belongs to the class.
Good to Know: It is considered good practice to start the
name of a class with an upper case letter, for better organization.
Now we can use the class named Car to create objects.
In the example below, we create an object of Car called
c1, and then we access the properties of c1 by
using the dot syntax (.), just like we did to
access array and string properties:
// Create a c1 object of the Car class
val c1 = Car()
// Access the properties and add some values to it
c1.brand = "Ford"
c1.model = "Mustang"
c1.year = 1969
println(c1.brand) // Outputs Ford
println(c1.model) // Outputs Mustang
println(c1.year) // Outputs 1969
You can create multiple objects of one class:
val c1 = Car()
c1.brand = "Ford"
c1.model = "Mustang"
c1.year = 1969
val c2 = Car()
c2.brand = "BMW"
c2.model = "X5"
c2.year = 1999
println(c1.brand) // Ford
println(c2.brand) // BMW
|
[
{
"code": null,
"e": 250,
"s": 0,
"text": "Everything in Kotlin is associated with classes and objects, along with its properties and \nfunctions. For example: in real life, a car is an object. The car has properties, such as \nbrand, weight and color, and \nfunctions, such as drive and brake. "
},
{
"code": null,
"e": 328,
"s": 250,
"text": "A Class is like an object constructor, or a \"blueprint\" for creating objects."
},
{
"code": null,
"e": 405,
"s": 328,
"text": "To create a class, use the class keyword, and specify the name of the class:"
},
{
"code": null,
"e": 475,
"s": 405,
"text": "Create a Car class along with some properties (brand, model and year)"
},
{
"code": null,
"e": 539,
"s": 475,
"text": "class Car {\n var brand = \"\"\n var model = \"\"\n var year = 0\n} "
},
{
"code": null,
"e": 601,
"s": 539,
"text": "A property is basically a variable that belongs to the class."
},
{
"code": null,
"e": 730,
"s": 601,
"text": "Good to Know: It is considered good practice to start the \n name of a class with an upper case letter, for better organization."
},
{
"code": null,
"e": 784,
"s": 730,
"text": "Now we can use the class named Car to create objects."
},
{
"code": null,
"e": 971,
"s": 784,
"text": "In the example below, we create an object of Car called\nc1, and then we access the properties of c1 by \nusing the dot syntax (.), just like we did to \naccess array and string properties:"
},
{
"code": null,
"e": 1244,
"s": 971,
"text": "// Create a c1 object of the Car class\nval c1 = Car()\n\n// Access the properties and add some values to it\nc1.brand = \"Ford\"\nc1.model = \"Mustang\"\nc1.year = 1969\n\nprintln(c1.brand) // Outputs Ford\nprintln(c1.model) // Outputs Mustang\nprintln(c1.year) // Outputs 1969 "
},
{
"code": null,
"e": 1290,
"s": 1244,
"text": "You can create multiple objects of one class:"
},
{
"code": null,
"e": 1478,
"s": 1290,
"text": "val c1 = Car()\nc1.brand = \"Ford\"\nc1.model = \"Mustang\"\nc1.year = 1969\n\nval c2 = Car()\nc2.brand = \"BMW\"\nc2.model = \"X5\"\nc2.year = 1999\n\nprintln(c1.brand) // Ford\nprintln(c2.brand) // BMW\n"
},
{
"code": null,
"e": 1511,
"s": 1478,
"text": "We just launchedW3Schools videos"
},
{
"code": null,
"e": 1553,
"s": 1511,
"text": "Get certifiedby completinga course today!"
},
{
"code": null,
"e": 1660,
"s": 1553,
"text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"
},
{
"code": null,
"e": 1679,
"s": 1660,
"text": "[email protected]"
}
] |
How to Fix java.lang.classcastexception in Java? - GeeksforGeeks
|
29 Mar, 2021
ClassCastException in Java occurs when we try to convert an object of one type into another, incompatible type. It is related to the type conversion feature, and the conversion is successful only where a class extends a parent class and the child class is cast to its parent class.
Here we can consider a parent class such as Vehicle with child classes such as Car, Bike or Cycle, or a parent class Shape with child classes such as 2D or 3D shapes.
Two different kinds of constructors available for ClassCastException.
ClassCastException(): It is used to create an instance of the ClassCastException class.ClassCastException(String s): It is used to create an instance of the ClassCastException class, by accepting the specified string as a message.
ClassCastException(): It is used to create an instance of the ClassCastException class.
ClassCastException(String s): It is used to create an instance of the ClassCastException class, by accepting the specified string as a message.
Let us see this in detail.
Java
import java.math.BigDecimal;public class ClassCastExceptionExample { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); System.out.println(sampleObject); }}
10000000.4499999992549419403076171875
If we try to print this value by casting it to a different data type such as String or Integer, we will get a ClassCastException.
Java
import java.math.BigDecimal;public class Main { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); // Trying to display the object by casting to String // As the object is created as BigDecimal but tried // to display by casting to String, // ClassCastException is thrown System.out.println((String)sampleObject); }}
Output :
Exception in thread “main” java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class java.lang.String (java.math.BigDecimal and java.lang.String are in module java.base of loader ‘bootstrap’)
at Main.main(Main.java:11)
We can fix the exception when printing by converting the code to the format below:
Java
import java.math.BigDecimal; public class ClassCastExceptionExample { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); // We can avoid ClassCastException by this way System.out.println(String.valueOf(sampleObject)); }}
10000000.4499999992549419403076171875
So, whenever we try to convert the data type of an object in this way, we cannot directly downcast or upcast it to an arbitrary data type. Direct casting will not work and throws a ClassCastException. Instead, we can use the
String.valueOf() method. It converts different types of values such as int, long, boolean, char, float, etc. into a String.
public static String valueOf(boolean boolValue)public static String valueOf(char charValue)public static String valueOf(char[] charArrayValue)public static String valueOf(int intValue)public static String valueOf(long longValue)public static String valueOf(float floatValue)public static String valueOf(double doubleValue)public static String valueOf(Object objectValue)
public static String valueOf(boolean boolValue)
public static String valueOf(char charValue)
public static String valueOf(char[] charArrayValue)
public static String valueOf(int intValue)
public static String valueOf(long longValue)
public static String valueOf(float floatValue)
public static String valueOf(double doubleValue)
public static String valueOf(Object objectValue)
are the different methods available and in our above example, the last method is used.
Between parent and child classes: the example below shows that an instance of the parent class cannot be cast to an instance of the child class.
Java
class Vehicle { public Vehicle() { System.out.println( "An example instance of the Vehicle class to proceed for showing ClassCastException"); }} final class Bike extends Vehicle { public Bike() { super(); System.out.println( "An example instance of the Bike class that extends Vehicle as parent class to proceed for showing ClassCastException"); }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Vehicle(); Bike bike = new Bike(); Bike bike1 = new Vehicle(); // Check out for this statement. Tried to convert // parent(vehicle) object to child object(bike). // Here compiler error is thrown bike = vehicle; }}
Compiler error :
In order to overcome the compile-time error, we need to downcast explicitly; downcasting means typecasting a parent reference to a child type. Since implicit downcasting is not possible, we have to do it explicitly, as shown below.
Here is the snippet where the change is required, using explicit downcasting:
Java
// An easier way to understand Downcastingclass Vehicle { String vehicleName; // Method in parent class void displayData() { System.out.println("From Vehicle class"); }} class Bike extends Vehicle { double cost; // Overriding the parent class method and we can // additionaly mention about the child class @Override void displayData() { System.out.println("From bike class" + cost); }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Bike(); vehicle.vehicleName = "BMW"; // Downcasting Explicitly Bike bike = (Bike)vehicle; bike.cost = 1000000; // Though vehiclename is not assigned, it takes BMW // as it is System.out.println(bike.vehicleName); System.out.println(bike.cost); bike.displayData(); }}
BMW
1000000.0
From bike class1000000.0
Upcasting implicit way
An example for upcasting of the child object to the parent object. It can be done implicitly. This facility gives us the flexibility to access the parent class members.
Java
// An easier way to understand Upcastingclass Vehicle { String vehicleName; // Method in parent class void displayData() { System.out.println("From Vehicle class"); }}class Bike extends Vehicle { double cost; // Overriding the parent class method and we can // additionaly mention about the child class @Override void displayData() { System.out.println("From bike class..." + cost); }} public class ClassCastExceptionExample { public static void main(String[] args) { // Upcasting Vehicle vehicle = new Bike(); vehicle.vehicleName = "Harley-Davidson"; // vehicle.cost //not available as upcasting done // but originally Vehicle class dont have cost // attribute System.out.println(vehicle.vehicleName); vehicle.displayData(); // Hence here we will get // output as 0.0 }}
Harley-Davidson
From bike class...0.0
Fixing ClassCastException by upcasting; note that, at the same time, some information is lost
Java
class Vehicle { public Vehicle() { System.out.println( "An example instance of the Vehicle class to proceed for showing ClassCast Exception"); } public String display() { return "Vehicle class display method"; }} class Bike extends Vehicle { public Bike() { super(); // Vehicle class constructor msg display as // super() is nothing but calling parent // method System.out.println( "An example instance of the Bike class that extends \nVehicle as parent class to proceed for showing ClassCast Exception"); } public String display() { return "Bike class display method"; }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Vehicle(); Bike bike = new Bike(); // But we can make bike point to vehicle, i.e. // pointing child object to parent object This is an // example for upcasting of child object to parent // object. It can be done implicitly This facility // gives us the flexibility to access the parent // class members vehicle = bike; // As upcasted here, vehicle.display() will provide // "Bike class display method" as output It has lost // its parent properties as now vehicle is nothing // but bike only System.out.println( "After upcasting bike(child) to vehicle(parent).." + vehicle.display()); }}
An example instance of the Vehicle class to proceed for showing ClassCast Exception
An example instance of the Vehicle class to proceed for showing ClassCast Exception
An example instance of the Bike class that extends
Vehicle as parent class to proceed for showing ClassCast Exception
After upcasting bike(child) to vehicle(parent)..Bike class display method
Conclusion: By means of either upcasting or downcasting, we can avoid ClassCastException as long as we follow parent-child relationships. The String.valueOf() methods help to convert different data types to String, which is another way to avoid it.
Java-Exceptions
Picked
Java
Java
|
[
{
"code": null,
"e": 23948,
"s": 23920,
"text": "\n29 Mar, 2021"
},
{
"code": null,
"e": 24220,
"s": 23948,
"text": "ClassCastException in Java occurs when we try to convert the data type of entry into another. This is related to the type conversion feature and data type conversion is successful only where a class extends a parent class and the child class is cast to its parent class. "
},
{
"code": null,
"e": 24385,
"s": 24220,
"text": "Here we can consider parent class as vehicle and child class may be car, bike, cycle etc., Parent class as shape and child class may be 2d shapes or 3d shapes, etc."
},
{
"code": null,
"e": 24455,
"s": 24385,
"text": "Two different kinds of constructors available for ClassCastException."
},
{
"code": null,
"e": 24686,
"s": 24455,
"text": "ClassCastException(): It is used to create an instance of the ClassCastException class.ClassCastException(String s): It is used to create an instance of the ClassCastException class, by accepting the specified string as a message."
},
{
"code": null,
"e": 24774,
"s": 24686,
"text": "ClassCastException(): It is used to create an instance of the ClassCastException class."
},
{
"code": null,
"e": 24918,
"s": 24774,
"text": "ClassCastException(String s): It is used to create an instance of the ClassCastException class, by accepting the specified string as a message."
},
{
"code": null,
"e": 24940,
"s": 24918,
"text": "Let us see in details"
},
{
"code": null,
"e": 24945,
"s": 24940,
"text": "Java"
},
{
"code": "import java.math.BigDecimal;public class ClassCastExceptionExample { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); System.out.println(sampleObject); }}",
"e": 25205,
"s": 24945,
"text": null
},
{
"code": null,
"e": 25243,
"s": 25205,
"text": "10000000.4499999992549419403076171875"
},
{
"code": null,
"e": 25376,
"s": 25243,
"text": "If we try to print this value by casting to different data type like String or Integer etc., we will be getting ClassCastException. "
},
{
"code": null,
"e": 25381,
"s": 25376,
"text": "Java"
},
{
"code": "import java.math.BigDecimal;public class Main { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); // Trying to display the object by casting to String // As the object is created as BigDecimal but tried // to display by casting to String, // ClassCastException is thrown System.out.println((String)sampleObject); }}",
"e": 25837,
"s": 25381,
"text": null
},
{
"code": null,
"e": 25846,
"s": 25837,
"text": "Output :"
},
{
"code": null,
"e": 26061,
"s": 25846,
"text": "Exception in thread “main” java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class java.lang.String (java.math.BigDecimal and java.lang.String are in module java.base of loader ‘bootstrap’)"
},
{
"code": null,
"e": 26088,
"s": 26061,
"text": "at Main.main(Main.java:11)"
},
{
"code": null,
"e": 26175,
"s": 26088,
"text": "We can fix the exception printing by means of converting the code in the below format:"
},
{
"code": null,
"e": 26180,
"s": 26175,
"text": "Java"
},
{
"code": "import java.math.BigDecimal; public class ClassCastExceptionExample { public static void main(String[] args) { // Creating a BigDecimal object Object sampleObject = new BigDecimal(10000000.45); // We can avoid ClassCastException by this way System.out.println(String.valueOf(sampleObject)); }}",
"e": 26520,
"s": 26180,
"text": null
},
{
"code": null,
"e": 26558,
"s": 26520,
"text": "10000000.4499999992549419403076171875"
},
{
"code": null,
"e": 26770,
"s": 26558,
"text": "So, in any instances when we try to convert data type of object, we cannot directly downcast or upcast to a specified data type. Direct casting will not work and it throws ClassCastException. Instead, we can use"
},
{
"code": null,
"e": 26899,
"s": 26770,
"text": "String.valueOf() method. It converts different types of values like int, long, boolean, character, float etc., into the string. "
},
{
"code": null,
"e": 27270,
"s": 26899,
"text": "public static String valueOf(boolean boolValue)public static String valueOf(char charValue)public static String valueOf(char[] charArrayValue)public static String valueOf(int intValue)public static String valueOf(long longValue)public static String valueOf(float floatValue)public static String valueOf(double doubleValue)public static String valueOf(Object objectValue)"
},
{
"code": null,
"e": 27318,
"s": 27270,
"text": "public static String valueOf(boolean boolValue)"
},
{
"code": null,
"e": 27363,
"s": 27318,
"text": "public static String valueOf(char charValue)"
},
{
"code": null,
"e": 27415,
"s": 27363,
"text": "public static String valueOf(char[] charArrayValue)"
},
{
"code": null,
"e": 27458,
"s": 27415,
"text": "public static String valueOf(int intValue)"
},
{
"code": null,
"e": 27503,
"s": 27458,
"text": "public static String valueOf(long longValue)"
},
{
"code": null,
"e": 27550,
"s": 27503,
"text": "public static String valueOf(float floatValue)"
},
{
"code": null,
"e": 27599,
"s": 27550,
"text": "public static String valueOf(double doubleValue)"
},
{
"code": null,
"e": 27648,
"s": 27599,
"text": "public static String valueOf(Object objectValue)"
},
{
"code": null,
"e": 27735,
"s": 27648,
"text": "are the different methods available and in our above example, the last method is used."
},
{
"code": null,
"e": 27869,
"s": 27735,
"text": "Between Parent and Child class. Example shows that the instance of the parent class cannot be cast to an instance of the child class."
},
{
"code": null,
"e": 27874,
"s": 27869,
"text": "Java"
},
{
"code": "class Vehicle { public Vehicle() { System.out.println( \"An example instance of the Vehicle class to proceed for showing ClassCastException\"); }} final class Bike extends Vehicle { public Bike() { super(); System.out.println( \"An example instance of the Bike class that extends Vehicle as parent class to proceed for showing ClassCastException\"); }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Vehicle(); Bike bike = new Bike(); Bike bike1 = new Vehicle(); // Check out for this statement. Tried to convert // parent(vehicle) object to child object(bike). // Here compiler error is thrown bike = vehicle; }}",
"e": 28665,
"s": 27874,
"text": null
},
{
"code": null,
"e": 28682,
"s": 28665,
"text": "Compiler error :"
},
{
"code": null,
"e": 28975,
"s": 28682,
"text": "In order to overcome Compile-time errors, we need to downcast explicitly. i.e. downcasting means the typecasting of a parent object to a child object. That means features of parent object lost and hence there is no implicit downcasting possible and hence we need to do explicitly as below way"
},
{
"code": null,
"e": 29053,
"s": 28975,
"text": "Giving the snippet where the change requires. In the downcasting Explicit way"
},
{
"code": null,
"e": 29058,
"s": 29053,
"text": "Java"
},
{
"code": "// An easier way to understand Downcastingclass Vehicle { String vehicleName; // Method in parent class void displayData() { System.out.println(\"From Vehicle class\"); }} class Bike extends Vehicle { double cost; // Overriding the parent class method and we can // additionaly mention about the child class @Override void displayData() { System.out.println(\"From bike class\" + cost); }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Bike(); vehicle.vehicleName = \"BMW\"; // Downcasting Explicitly Bike bike = (Bike)vehicle; bike.cost = 1000000; // Though vehiclename is not assigned, it takes BMW // as it is System.out.println(bike.vehicleName); System.out.println(bike.cost); bike.displayData(); }}",
"e": 29962,
"s": 29058,
"text": null
},
{
"code": null,
"e": 30002,
"s": 29962,
"text": "BMW\n1000000.0\nFrom bike class1000000.0"
},
{
"code": null,
"e": 30025,
"s": 30002,
"text": "Upcasting implicit way"
},
{
"code": null,
"e": 30194,
"s": 30025,
"text": "An example for upcasting of the child object to the parent object. It can be done implicitly. This facility gives us the flexibility to access the parent class members."
},
{
"code": null,
"e": 30199,
"s": 30194,
"text": "Java"
},
{
"code": "// An easier way to understand Upcastingclass Vehicle { String vehicleName; // Method in parent class void displayData() { System.out.println(\"From Vehicle class\"); }}class Bike extends Vehicle { double cost; // Overriding the parent class method and we can // additionaly mention about the child class @Override void displayData() { System.out.println(\"From bike class...\" + cost); }} public class ClassCastExceptionExample { public static void main(String[] args) { // Upcasting Vehicle vehicle = new Bike(); vehicle.vehicleName = \"Harley-Davidson\"; // vehicle.cost //not available as upcasting done // but originally Vehicle class dont have cost // attribute System.out.println(vehicle.vehicleName); vehicle.displayData(); // Hence here we will get // output as 0.0 }}",
"e": 31151,
"s": 30199,
"text": null
},
{
"code": null,
"e": 31190,
"s": 31151,
"text": "Harley-Davidson\nFrom bike class...0.0"
},
{
"code": null,
"e": 31283,
"s": 31190,
"text": "Fixing ClassCastException in an upcasting way and at the same time, loss of data also occurs"
},
{
"code": null,
"e": 31288,
"s": 31283,
"text": "Java"
},
{
"code": "class Vehicle { public Vehicle() { System.out.println( \"An example instance of the Vehicle class to proceed for showing ClassCast Exception\"); } public String display() { return \"Vehicle class display method\"; }} class Bike extends Vehicle { public Bike() { super(); // Vehicle class constructor msg display as // super() is nothing but calling parent // method System.out.println( \"An example instance of the Bike class that extends \\nVehicle as parent class to proceed for showing ClassCast Exception\"); } public String display() { return \"Bike class display method\"; }} public class ClassCastExceptionExample { public static void main(String[] args) { Vehicle vehicle = new Vehicle(); Bike bike = new Bike(); // But we can make bike point to vehicle, i.e. // pointing child object to parent object This is an // example for upcasting of child object to parent // object. It can be done implicitly This facility // gives us the flexibility to access the parent // class members vehicle = bike; // As upcasted here, vehicle.display() will provide // \"Bike class display method\" as output It has lost // its parent properties as now vehicle is nothing // but bike only System.out.println( \"After upcasting bike(child) to vehicle(parent)..\" + vehicle.display()); }}",
"e": 32833,
"s": 31288,
"text": null
},
{
"code": null,
"e": 33194,
"s": 32833,
"text": "An example instance of the Vehicle class to proceed for showing ClassCast Exception\nAn example instance of the Vehicle class to proceed for showing ClassCast Exception\nAn example instance of the Bike class that extends \nVehicle as parent class to proceed for showing ClassCast Exception\nAfter upcasting bike(child) to vehicle(parent)..Bike class display method"
},
{
"code": null,
"e": 33447,
"s": 33194,
"text": "Conclusion: By means of either upcasting or downcasting, we can overcome ClassCastException if we are following Parent-child relationships. String.valueOf() methods help to convert the different data type to String and in that way also we can overcome."
},
{
"code": null,
"e": 33463,
"s": 33447,
"text": "Java-Exceptions"
},
{
"code": null,
"e": 33470,
"s": 33463,
"text": "Picked"
},
{
"code": null,
"e": 33475,
"s": 33470,
"text": "Java"
},
{
"code": null,
"e": 33480,
"s": 33475,
"text": "Java"
},
{
"code": null,
"e": 33578,
"s": 33480,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33593,
"s": 33578,
"text": "Stream In Java"
},
{
"code": null,
"e": 33639,
"s": 33593,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 33660,
"s": 33639,
"text": "Constructors in Java"
},
{
"code": null,
"e": 33679,
"s": 33660,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 33709,
"s": 33679,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 33726,
"s": 33709,
"text": "Generics in Java"
},
{
"code": null,
"e": 33769,
"s": 33726,
"text": "Comparator Interface in Java with Examples"
},
{
"code": null,
"e": 33798,
"s": 33769,
"text": "HashMap get() Method in Java"
},
{
"code": null,
"e": 33819,
"s": 33798,
"text": "Introduction to Java"
}
] |
parallel streams in Java 8 | Streams in Java 8 | Onlinetutorialspoint
|
Parallel streams are one of the greatest additions to Java 8 after lambdas. The real power of the Stream API only shows itself when streams are used in parallel.
Suppose you have a list of employee objects and you have to count the employees whose salary is above 15000. Typically, you would iterate over the list, checking each employee's salary against 15000. Done sequentially, this takes O(N) time.
Streams give us the flexibility to iterate over the list in parallel and produce the aggregate quickly.
Stream implementations in Java are sequential by default unless parallelism is explicitly requested. When a stream executes in parallel, the Java runtime partitions it into multiple substreams. Aggregate operations iterate over and process these substreams in parallel and then combine the results.
The only thing to keep in mind when creating a parallel stream is to call parallelStream() on the collection; otherwise stream() returns a sequential stream by default.
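As a minimal sketch of this point (the small list of integers below is assumed for illustration and is not part of the original example), a stream can be made parallel either by asking the collection for one directly or by switching an existing sequential stream with parallel():
import java.util.Arrays;
import java.util.List;
public class ParallelCreationSketch {
    public static void main(String[] args) {
        // A small illustrative list (not from the original example)
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);
        // stream() returns a sequential stream by default
        long sequentialCount = numbers.stream()
                                      .filter(n -> n % 2 == 0)
                                      .count();
        // parallelStream() asks the collection for a parallel stream
        long parallelCount = numbers.parallelStream()
                                    .filter(n -> n % 2 == 0)
                                    .count();
        // an existing sequential stream can also be switched with parallel()
        long switchedCount = numbers.stream()
                                    .parallel()
                                    .filter(n -> n % 2 == 0)
                                    .count();
        System.out.println(sequentialCount + " " + parallelCount + " " + switchedCount);
    }
}
All three pipelines produce the same count; only the execution strategy differs.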
We have created a list of 600 employees out of which there are 300 employees whose salary is above 15000.
In the sample run below, filtering with the sequential stream took about 59 milliseconds, whereas the parallel stream took only 4 milliseconds.
import java.util.ArrayList;
import java.util.List;
public class ParallelStream {
public static void main(String[] args) {
List < Employee > empList = new ArrayList < Employee > ();
for (int i = 0; i < 100; i++) {
empList.add(new Employee("A", 20000));
empList.add(new Employee("B", 3000));
empList.add(new Employee("C", 15002));
empList.add(new Employee("D", 7856));
empList.add(new Employee("E", 200));
empList.add(new Employee("F", 50000));
}
long t1 = System.currentTimeMillis();
System.out.println("Sequential Stream count: " + empList.stream().filter(e -> e.getSalary() > 15000).count());
long t2 = System.currentTimeMillis();
System.out.println("Sequential Stream Time taken:" + (t2 - t1));
t1 = System.currentTimeMillis();
System.out.println("Parallel Stream count: " + empList.parallelStream().filter(e -> e.getSalary() > 15000).count());
t2 = System.currentTimeMillis();
System.out.println("Parallel Stream Time taken:" + (t2 - t1));
}
}
Employee.java
class Employee {
private int salary;
private String name;
Employee(String name, int salary) {
this.name = name;
this.salary = salary;
}
public int getSalary() {
return salary;
}
public void setSalary(int salary) {
this.salary = salary;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Sequential Stream count: 300
Sequential Stream Time taken:59
Parallel Stream count: 300
Parallel Stream Time taken:4
Performance Implications:
Parallel streams have performance costs to match their advantages. Each substream is processed by its own thread, which adds overhead compared to a sequential stream: splitting the source, coordinating between threads, and combining the results all take time.
When to use Parallel Streams:
They should be used when the output of the operation does not need to depend on the order of elements in the source collection (the one the stream is created on) — see the sketch after this list.
Parallel streams can be used for aggregate functions.
Iterating over large collections.
If you have performance problems with sequential streams.
If your environment is not already multi-threaded, because a parallel stream creates its own threads and can affect new requests coming in.
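The sketch below (again with an assumed small list of integers) illustrates the ordering point from the first item above: forEach on a parallel stream may print elements in any order, while forEachOrdered preserves the encounter order at some cost to parallelism.
import java.util.Arrays;
import java.util.List;
public class ParallelOrderingSketch {
    public static void main(String[] args) {
        // A small illustrative list (not from the original example)
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
        // forEach on a parallel stream gives no ordering guarantee,
        // so the numbers may be printed in any order
        numbers.parallelStream().forEach(n -> System.out.print(n + " "));
        System.out.println();
        // forEachOrdered preserves the encounter order of the source,
        // which can reduce the benefit of running in parallel
        numbers.parallelStream().forEachOrdered(n -> System.out.print(n + " "));
        System.out.println();
    }
}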
Happy Learning 🙂
priya
July 11, 2018 at 12:33 pm - Reply
A stream is a sequence of data. The InputStream is used to read data from a source and the OutputStream is used for writing data to a destination. InputStream and OutputStream are the basic stream classes in Java.
|
[
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 537,
"s": 398,
"text": "Parallel Streams are greatest addition to Java 8 after Lambdas. The actual essence of Stream API can only be observed if used as parallel."
},
{
"code": null,
"e": 864,
"s": 537,
"text": "Suppose let’s take a scenario of you having a list of employee objects and you have to count employees whose salary is above 15000. Generally, to solve this problem you will iterate over list going through each and every employee and checking if employees salary is above 15000. This takes O(N) time since you go sequentially."
},
{
"code": null,
"e": 996,
"s": 864,
"text": "Streams provide us with the flexibility to iterate over the list in a parallel pattern and can give the aggregate in quick fashion."
},
{
"code": null,
"e": 1312,
"s": 996,
"text": "Stream implementation in Java is by default sequential unless until it is explicitly mentioned for parallel. When a stream executes in parallel, the Java runtime partitions the stream into multiple substreams. Aggregate operations iterate over and process these sub-streams in parallel and then combine the results."
},
{
"code": null,
"e": 1476,
"s": 1312,
"text": "The only thing to keep in mind to create parallel stream is to call parallelStream() on the collection else by default sequential stream gets returned by stream()."
},
{
"code": null,
"e": 1582,
"s": 1476,
"text": "We have created a list of 600 employees out of which there are 300 employees whose salary is above 15000."
},
{
"code": null,
"e": 1719,
"s": 1582,
"text": "Creating a sequential stream and filtering elements it took above 40 milliseconds, whereas the parallel stream only took 4 milliseconds."
},
{
"code": null,
"e": 2708,
"s": 1719,
"text": "import java.util.ArrayList;\nimport java.util.List;\npublic class ParallelStream {\n public static void main(String[] args) {\n List < Employee > empList = new ArrayList < Employee > ();\n for (int i = 0; i < 100; i++) {\n empList.add(new Employee(\"A\", 20000));\n empList.add(new Employee(\"B\", 3000));\n empList.add(new Employee(\"C\", 15002));\n empList.add(new Employee(\"D\", 7856));\n empList.add(new Employee(\"E\", 200));\n empList.add(new Employee(\"F\", 50000));\n }\n long t1 = System.currentTimeMillis();\n System.out.println(\"Sequential Stream count: \" + empList.stream().filter(e -> e.getSalary() > 15000).count());\n long t2 = System.currentTimeMillis();\n System.out.println(\"Sequential Stream Time taken:\" + (t2 - t1));\n t1 = System.currentTimeMillis();\n System.out.println(\"Parallel Stream count: \" + empList.parallelStream().filter(e -> e.getSalary() > 15000).count());\n t2 = System.currentTimeMillis();\n System.out.println(\"Parallel Stream Time taken:\" + (t2 - t1));\n }\n}"
},
{
"code": null,
"e": 2722,
"s": 2708,
"text": "Employee.java"
},
{
"code": null,
"e": 3163,
"s": 2722,
"text": "\nclass Employee {\n\n private int salary;\n private String name;\n\n Employee(String name, int salary) {\n this.name = name;\n this.salary = salary;\n }\n\n public int getSalary() {\n return salary;\n }\n\n public void setSalary(int salary) {\n this.salary = salary;\n }\n\n public String getName() {\n return name;\n }\n\n public void setName(String name) {\n this.name = name;\n }\n\n}\n"
},
{
"code": null,
"e": 3280,
"s": 3163,
"text": "Sequential Stream count: 300\nSequential Stream Time taken:59\nParallel Stream count: 300\nParallel Stream Time taken:4"
},
{
"code": null,
"e": 3306,
"s": 3280,
"text": "Performance Implications:"
},
{
"code": null,
"e": 3567,
"s": 3306,
"text": "Parallel Stream has equal performance impacts as like its advantages. Since each substream is a single thread running and acting on the data, it has overhead compared to sequential stream. Inter-thread communication is dangerous and takes time for coordination"
},
{
"code": null,
"e": 3597,
"s": 3567,
"text": "When to use Parallel Streams:"
},
{
"code": null,
"e": 3764,
"s": 3597,
"text": "They should be used when the output of the operation is not needed to be dependent on the order of elements present in source collection (on which stream is created)."
},
{
"code": null,
"e": 3825,
"s": 3764,
"text": "Parallel Streams can be used in case of aggregate functions."
},
{
"code": null,
"e": 3863,
"s": 3825,
"text": "Iterate over large sized collections."
},
{
"code": null,
"e": 3925,
"s": 3863,
"text": "If you have performance implications with sequential streams."
},
{
"code": null,
"e": 4046,
"s": 3925,
"text": "If your environment is not multi threaded, because parallel stream creates thread and can effect new requests coming in."
},
{
"code": null,
"e": 4063,
"s": 4046,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 4663,
"s": 4063,
"text": "\nHow to Merge Streams in Java 8\nHow to Sort an Array in Parallel in Java 8\nHow to calculate Employees Salaries Java 8 summingInt\nHow to Convert Iterable to Stream Java 8\nWhat is Association in Java\nJava 8 Stream API and Parallelism\nJava 8 Stream Filter Example with Objects\nJava 8 groupingBy Example\nHow to get Stream count in Java 8\nWhat are Lambda Expressions in Java 8\nHow Java 8 Stream generate random String and Numbers\nJava – How to Convert InputStream to String\nJava 8 foreach Example Tutorials\nHow to convert List to Map in Java 8\nJava 8 How to Convert List to String comma separated values\n"
},
{
"code": null,
"e": 4694,
"s": 4663,
"text": "How to Merge Streams in Java 8"
},
{
"code": null,
"e": 4737,
"s": 4694,
"text": "How to Sort an Array in Parallel in Java 8"
},
{
"code": null,
"e": 4791,
"s": 4737,
"text": "How to calculate Employees Salaries Java 8 summingInt"
},
{
"code": null,
"e": 4832,
"s": 4791,
"text": "How to Convert Iterable to Stream Java 8"
},
{
"code": null,
"e": 4860,
"s": 4832,
"text": "What is Association in Java"
},
{
"code": null,
"e": 4894,
"s": 4860,
"text": "Java 8 Stream API and Parallelism"
},
{
"code": null,
"e": 4936,
"s": 4894,
"text": "Java 8 Stream Filter Example with Objects"
},
{
"code": null,
"e": 4962,
"s": 4936,
"text": "Java 8 groupingBy Example"
},
{
"code": null,
"e": 4996,
"s": 4962,
"text": "How to get Stream count in Java 8"
},
{
"code": null,
"e": 5034,
"s": 4996,
"text": "What are Lambda Expressions in Java 8"
},
{
"code": null,
"e": 5087,
"s": 5034,
"text": "How Java 8 Stream generate random String and Numbers"
},
{
"code": null,
"e": 5131,
"s": 5087,
"text": "Java – How to Convert InputStream to String"
},
{
"code": null,
"e": 5164,
"s": 5131,
"text": "Java 8 foreach Example Tutorials"
},
{
"code": null,
"e": 5201,
"s": 5164,
"text": "How to convert List to Map in Java 8"
},
{
"code": null,
"e": 5261,
"s": 5201,
"text": "Java 8 How to Convert List to String comma separated values"
},
{
"code": null,
"e": 5528,
"s": 5261,
"text": "\n\n\n\n\n\npriya\nJuly 11, 2018 at 12:33 pm - Reply \n\nA stream is a sequence of data. The InputStream is used to read data from a source and the OutputStream is used for writing data to a destination. InputStream and OutputStream are the basic stream classes in Java.\n\n\n\n\n"
},
{
"code": null,
"e": 5793,
"s": 5528,
"text": "\n\n\n\n\npriya\nJuly 11, 2018 at 12:33 pm - Reply \n\nA stream is a sequence of data. The InputStream is used to read data from a source and the OutputStream is used for writing data to a destination. InputStream and OutputStream are the basic stream classes in Java.\n\n\n\n"
},
{
"code": null,
"e": 6007,
"s": 5793,
"text": "A stream is a sequence of data. The InputStream is used to read data from a source and the OutputStream is used for writing data to a destination. InputStream and OutputStream are the basic stream classes in Java."
},
{
"code": null,
"e": 6013,
"s": 6011,
"text": "Δ"
},
{
"code": null,
"e": 6037,
"s": 6013,
"text": " Install Java on Mac OS"
},
{
"code": null,
"e": 6065,
"s": 6037,
"text": " Install AWS CLI on Windows"
},
{
"code": null,
"e": 6094,
"s": 6065,
"text": " Install Minikube on Windows"
},
{
"code": null,
"e": 6129,
"s": 6094,
"text": " Install Docker Toolbox on Windows"
},
{
"code": null,
"e": 6156,
"s": 6129,
"text": " Install SOAPUI on Windows"
},
{
"code": null,
"e": 6183,
"s": 6156,
"text": " Install Gradle on Windows"
},
{
"code": null,
"e": 6212,
"s": 6183,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 6238,
"s": 6212,
"text": " Install PuTTY on windows"
},
{
"code": null,
"e": 6264,
"s": 6238,
"text": " Install Mysql on Windows"
},
{
"code": null,
"e": 6300,
"s": 6264,
"text": " Install Hibernate Tools in Eclipse"
},
{
"code": null,
"e": 6334,
"s": 6300,
"text": " Install Elasticsearch on Windows"
},
{
"code": null,
"e": 6360,
"s": 6334,
"text": " Install Maven on Windows"
},
{
"code": null,
"e": 6385,
"s": 6360,
"text": " Install Maven on Ubuntu"
},
{
"code": null,
"e": 6419,
"s": 6385,
"text": " Install Maven on Windows Command"
},
{
"code": null,
"e": 6454,
"s": 6419,
"text": " Add OJDBC jar to Maven Repository"
},
{
"code": null,
"e": 6478,
"s": 6454,
"text": " Install Ant on Windows"
},
{
"code": null,
"e": 6507,
"s": 6478,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 6539,
"s": 6507,
"text": " Install Apache Kafka on Ubuntu"
},
{
"code": null,
"e": 6572,
"s": 6539,
"text": " Install Apache Kafka on Windows"
},
{
"code": null,
"e": 6597,
"s": 6572,
"text": " Java8 – Install Windows"
},
{
"code": null,
"e": 6614,
"s": 6597,
"text": " Java8 – foreach"
},
{
"code": null,
"e": 6642,
"s": 6614,
"text": " Java8 – forEach with index"
},
{
"code": null,
"e": 6673,
"s": 6642,
"text": " Java8 – Stream Filter Objects"
},
{
"code": null,
"e": 6705,
"s": 6673,
"text": " Java8 – Comparator Userdefined"
},
{
"code": null,
"e": 6725,
"s": 6705,
"text": " Java8 – GroupingBy"
},
{
"code": null,
"e": 6745,
"s": 6725,
"text": " Java8 – SummingInt"
},
{
"code": null,
"e": 6769,
"s": 6745,
"text": " Java8 – walk ReadFiles"
},
{
"code": null,
"e": 6799,
"s": 6769,
"text": " Java8 – JAVA_HOME on Windows"
},
{
"code": null,
"e": 6831,
"s": 6799,
"text": " Howto – Install Java on Mac OS"
},
{
"code": null,
"e": 6867,
"s": 6831,
"text": " Howto – Convert Iterable to Stream"
},
{
"code": null,
"e": 6911,
"s": 6867,
"text": " Howto – Get common elements from two Lists"
},
{
"code": null,
"e": 6943,
"s": 6911,
"text": " Howto – Convert List to String"
},
{
"code": null,
"e": 6984,
"s": 6943,
"text": " Howto – Concatenate Arrays using Stream"
},
{
"code": null,
"e": 7021,
"s": 6984,
"text": " Howto – Remove duplicates from List"
},
{
"code": null,
"e": 7061,
"s": 7021,
"text": " Howto – Filter null values from Stream"
},
{
"code": null,
"e": 7090,
"s": 7061,
"text": " Howto – Convert List to Map"
},
{
"code": null,
"e": 7122,
"s": 7090,
"text": " Howto – Convert Stream to List"
},
{
"code": null,
"e": 7142,
"s": 7122,
"text": " Howto – Sort a Map"
},
{
"code": null,
"e": 7164,
"s": 7142,
"text": " Howto – Filter a Map"
},
{
"code": null,
"e": 7194,
"s": 7164,
"text": " Howto – Get Current UTC Time"
},
{
"code": null,
"e": 7245,
"s": 7194,
"text": " Howto – Verify an Array contains a specific value"
},
{
"code": null,
"e": 7281,
"s": 7245,
"text": " Howto – Convert ArrayList to Array"
},
{
"code": null,
"e": 7313,
"s": 7281,
"text": " Howto – Read File Line By Line"
},
{
"code": null,
"e": 7348,
"s": 7313,
"text": " Howto – Convert Date to LocalDate"
},
{
"code": null,
"e": 7371,
"s": 7348,
"text": " Howto – Merge Streams"
},
{
"code": null,
"e": 7418,
"s": 7371,
"text": " Howto – Resolve NullPointerException in toMap"
},
{
"code": null,
"e": 7443,
"s": 7418,
"text": " Howto -Get Stream count"
},
{
"code": null,
"e": 7487,
"s": 7443,
"text": " Howto – Get Min and Max values in a Stream"
}
] |
Batch Script - Remove Both Ends
|
This is used to remove the first and the last character of a string
@echo off
set str=Batch scripts is easy. It is really easy
echo %str%
set str=%str:~1,-1%
echo %str%
The key thing to note about the above program is that ~1,-1 is used to remove the first and the last character of a string.
The above command produces the following output.
Batch scripts is easy. It is really easy
atch scripts is easy. It is really eas
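As a further illustration of the same %variable:~start,length% substring syntax (the string and the numbers below are chosen for this sketch and are not from the original tutorial), the snippet keeps only the first five characters and then drops only the last character.
@echo off
set str=Batch scripts is easy
rem keep only the first five characters: prints "Batch"
echo %str:~0,5%
rem drop only the last character: prints "Batch scripts is eas"
echo %str:~0,-1%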
|
[
{
"code": null,
"e": 2237,
"s": 2169,
"text": "This is used to remove the first and the last character of a string"
},
{
"code": null,
"e": 2347,
"s": 2237,
"text": "@echo off \nset str = Batch scripts is easy. It is really easy \necho %str% \n\nset str = %str:~1,-1% \necho %str%"
},
{
"code": null,
"e": 2467,
"s": 2347,
"text": "The key thing to note about the above program is, the ~1,-1 is used to remove the first and last character of a string."
},
{
"code": null,
"e": 2516,
"s": 2467,
"text": "The above command produces the following output."
},
{
"code": null,
"e": 2598,
"s": 2516,
"text": "Batch scripts is easy. It is really easy \natch scripts is easy. It is really eas\n"
},
{
"code": null,
"e": 2605,
"s": 2598,
"text": " Print"
},
{
"code": null,
"e": 2616,
"s": 2605,
"text": " Add Notes"
}
] |
What is the difference between “word-break: break-all” versus “word-wrap: break-word” in CSS ?
|
12 Feb, 2019
The word-break property in CSS is used to specify how a word should be broken or split when reaching the end of a line. The word-wrap property is used to split/break long words and wrap them into the next line.
Difference between the “word-break: break-all;” and “word-wrap: break-word;”
word-break: break-all; It is used to break the words at any character to prevent overflow.
word-wrap: break-word; It is used to break words at arbitrary points to prevent overflow.
“word-break: break-all;” breaks a word at any character, which can make the text harder to read, whereas “word-wrap: break-word;” keeps whole words together where possible, only breaking a word and wrapping it onto the next line when the word itself is too long to fit.
Example 1: This example display the break-all property values.
<!DOCTYPE html> <html> <head> <style> p { width: 142px; border: 1px solid #000000; } p.gfg { word-break: break-all; } </style> </head> <body> <center> <h1 style="color:green;">GeeksforGeeks</h1> <h2>word-break: break-all;</h2> <p class="gfg">GeeksforGeeksGeeksGeeks. A computer science portal for geeks .</p> </center> </body> </html>
Output:
Example 2: This example display the break-word property values.
<!DOCTYPE html> <html> <head> <style> p { width: 140px; border: 1px solid #000000; color:black; } p.gfg { word-break: break-word; } </style> </head> <body> <center> <h1>GeeksforGeeks</h1> <h2>word-break: break-word</h2> <p class="gfg">GeeksforGeeksGeeksGeeks.A computer science portal for geeks .</p> </center> </body> </html>
Output:
Example 3: This example display the comparison of break-all and break-word property values.
<!DOCTYPE html><html> <head> <!-- style to set word-break property --> <style> .wb { word-break: break-all; width: 140px; border: 1px solid green; } .wr { word-wrap: break-word; width: 140px; border: 1px solid green; } .main1 { width:50%; float:left; } .main2 { width:50%; float:left; } </style></head> <body> <center> <h1>GeeksforGeeks</h1> <div style = "width:100%;"> <div class = "main1"> <h2>word-break: break-all:</h2> <div class="wb"> Prepare for the Recruitment drive of product based companies like Microsoft, Amazon, Adobe etc with a free online placement preparation course. The course focuses on various MCQ's & Coding question likely to be asked in the interviews & make your upcoming placement season efficient and successful. </div> </div> <div class = "main2"> <h2>word-wrap: break-word:</h2> <div class="wr"> Prepare for the Recruitment drive of product based companies like Microsoft, Amazon, Adobe etc with a free online placement preparation course. The course focuses on various MCQ's & Coding question likely to be asked in the interviews & make your upcoming placement season efficient and successful. </div> </div> </div> </center></body> </html>
Output:
CSS-Misc
Picked
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n12 Feb, 2019"
},
{
"code": null,
"e": 239,
"s": 28,
"text": "The word-break property in CSS is used to specify how a word should be broken or split when reaching the end of a line. The word-wrap property is used to split/break long words and wrap them into the next line."
},
{
"code": null,
"e": 316,
"s": 239,
"text": "Difference between the “word-break: break-all;” and “word-wrap: break-word;”"
},
{
"code": null,
"e": 407,
"s": 316,
"text": "word-break: break-all; It is used to break the words at any character to prevent overflow."
},
{
"code": null,
"e": 502,
"s": 407,
"text": "word-wrap: break-word; It is used to broken the words at arbitrary points to prevent overflow."
},
{
"code": null,
"e": 735,
"s": 502,
"text": "The “word-break: break-all;” will break the word at any character so the result is to difficulty in reading whereas “word-wrap: break-word;” will split word without making the word not break in the middle and wrap it into next line."
},
{
"code": null,
"e": 798,
"s": 735,
"text": "Example 1: This example display the break-all property values."
},
{
"code": "<!DOCTYPE html> <html> <head> <style> p { width: 142px; border: 1px solid #000000; } p.gfg { word-break: break-all; } </style> </head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h2>word-break: break-all;</h2> <p class=\"gfg\">GeeksforGeeksGeeksGeeks. A computer science portal for geeks .</p> </center> </body> </html> ",
"e": 1340,
"s": 798,
"text": null
},
{
"code": null,
"e": 1348,
"s": 1340,
"text": "Output:"
},
{
"code": null,
"e": 1412,
"s": 1348,
"text": "Example 2: This example display the break-word property values."
},
{
"code": "<!DOCTYPE html> <html> <head> <style> p { width: 140px; border: 1px solid #000000; color:black; } p.gfg { word-break: break-word; } </style> </head> <body> <center> <h1>GeeksforGeeks</h1> <h2>word-break: break-word</h2> <p class=\"gfg\">GeeksforGeeksGeeksGeeks.A computer science portal for geeks .</p> </center> </body> </html> ",
"e": 1948,
"s": 1412,
"text": null
},
{
"code": null,
"e": 1956,
"s": 1948,
"text": "Output:"
},
{
"code": null,
"e": 2048,
"s": 1956,
"text": "Example 3: This example display the comparison of break-all and break-word property values."
},
{
"code": "<!DOCTYPE html><html> <head> <!-- style to set word-break property --> <style> .wb { word-break: break-all; width: 140px; border: 1px solid green; } .wr { word-wrap: break-word; width: 140px; border: 1px solid green; } .main1 { width:50%; float:left; } .main2 { width:50%; float:left; } </style></head> <body> <center> <h1>GeeksforGeeks</h1> <div style = \"width:100%;\"> <div class = \"main1\"> <h2>word-break: break-all:</h2> <div class=\"wb\"> Prepare for the Recruitment drive of product based companies like Microsoft, Amazon, Adobe etc with a free online placement preparation course. The course focuses on various MCQ's & Coding question likely to be asked in the interviews & make your upcoming placement season efficient and successful. </div> </div> <div class = \"main2\"> <h2>word-wrap: break-word:</h2> <div class=\"wr\"> Prepare for the Recruitment drive of product based companies like Microsoft, Amazon, Adobe etc with a free online placement preparation course. The course focuses on various MCQ's & Coding question likely to be asked in the interviews & make your upcoming placement season efficient and successful. </div> </div> </div> </center></body> </html> ",
"e": 3929,
"s": 2048,
"text": null
},
{
"code": null,
"e": 3937,
"s": 3929,
"text": "Output:"
},
{
"code": null,
"e": 3946,
"s": 3937,
"text": "CSS-Misc"
},
{
"code": null,
"e": 3953,
"s": 3946,
"text": "Picked"
},
{
"code": null,
"e": 3970,
"s": 3953,
"text": "Web Technologies"
}
] |
How to Set Axis Limits bokeh?
|
03 Mar, 2021
In this article, we will be learning about how to set axis limits to a plot in bokeh. When we are plotting a graph with a set of values, then X-limits and Y-limits for the points are automatically created. But bokeh allows us to set those axis limits according to our choice.
So, first, we need to know that in order to set limits on both the X and Y axes, we need to import Range1d from the bokeh.models module. Range1d enables us to set the axis limits of the plot to values of our choice.
We can either use Google Colab or any text editor on our local device for the implementation below. To use a local editor, we first need to open a command prompt and run the following command.
pip install bokeh
After installation, now we are ready to proceed to the main implementation.
In the code below, we are creating an empty plot with the limits of X-axis and Y-Axis according to our choice.
Code:
Python3
# importing figure and show from# bokeh.plottingfrom bokeh.plotting import figure, show # importing range1d from# bokeh.models in order to change# the X-Axis and Y-Axis rangesfrom bokeh.models import Range1d # Determining the plot height# and plot widthfig = figure(plot_width=500, plot_height=500) # With the help of x_range and# y_range functions we are setting# up the range of both the axisfig.x_range = Range1d(0, 10)fig.y_range = Range1d(5, 15) # Thus an empty plot is created# where the ranges of both the# axes are custom set by us.show(fig)
Output:
Explanation:
As we can clearly see from the code, we have set the X-axis limits to 0-10 and the Y-axis limits to 5-15, and this is reflected in the graph shown above.
Example 2:
In the second example, we set our own X and Y axis limits and then plot a set of points within that range on the graph. The code below shows this implementation.
Python3
# importing figure and show from# bokeh.plottingfrom bokeh.plotting import figure, show # importing range1d from# bokeh.models in order to change# the X-Axis and Y-Axis rangesfrom bokeh.models import Range1d # Determining the plot height# and plot widthfig = figure(plot_width=620, plot_height=600) # With the help of x_range and# y_range functions we are setting# up the range of both the axisfig.x_range = Range1d(20, 25)fig.y_range = Range1d(35, 100) # Creating two graphs in a# single figure where we are# plotting the points in the range# set by usfig.line([21, 20, 23, 24, 22], [36, 72, 51, 90, 56], line_width=2, color="green") fig.circle([21, 20, 23, 24, 22], [36, 72, 51, 90, 56], size=20, color="green", alpha=0.5) # Showing the plotshow(fig)
Output:
Explanation:
Since the axes are customized by us, how large the plotted data appears depends entirely on the axis limits. For example, if the limits on both axes were larger, the same points would appear smaller (more compressed) than in the plot shown here.
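As an alternative sketch (this variant is not from the original article and assumes a Bokeh version that accepts range tuples, as recent releases do), the same limits can also be passed directly to figure() as (start, end) tuples via its x_range and y_range arguments, and Bokeh builds the corresponding ranges for us:
# importing figure and show from bokeh.plotting
from bokeh.plotting import figure, show

# Passing the axis limits directly to figure() as (start, end)
# tuples; Bokeh converts them into ranges internally
fig = figure(plot_width=500, plot_height=500,
             x_range=(0, 10), y_range=(5, 15))

# An empty plot with custom axis limits, as in the first example
show(fig)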
Picked
Python-Bokeh
Technical Scripter 2020
Python
Technical Scripter
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n03 Mar, 2021"
},
{
"code": null,
"e": 304,
"s": 28,
"text": "In this article, we will be learning about how to set axis limits to a plot in bokeh. When we are plotting a graph with a set of values, then X-limits and Y-limits for the points are automatically created. But bokeh allows us to set those axis limits according to our choice."
},
{
"code": null,
"e": 538,
"s": 304,
"text": "So, at first, we need to know that in order to set Axis limits to both X and Y axis, we need to import a package from our bokeh.models module which is known as range1d. Range1d enables us to set axis limits to the plot of our choice."
},
{
"code": null,
"e": 776,
"s": 538,
"text": "We can either use google colab or we can use any text editor in our local device to do the above implementation. In order to use a text-editor on our local device, we need to open the command prompt at first and write the following code."
},
{
"code": null,
"e": 794,
"s": 776,
"text": "pip install bokeh"
},
{
"code": null,
"e": 870,
"s": 794,
"text": "After installation, now we are ready to proceed to the main implementation."
},
{
"code": null,
"e": 981,
"s": 870,
"text": "In the code below, we are creating an empty plot with the limits of X-axis and Y-Axis according to our choice."
},
{
"code": null,
"e": 987,
"s": 981,
"text": "Code:"
},
{
"code": null,
"e": 995,
"s": 987,
"text": "Python3"
},
{
"code": "# importing figure and show from# bokeh.plottingfrom bokeh.plotting import figure, show # importing range1d from# bokeh.models in order to change# the X-Axis and Y-Axis rangesfrom bokeh.models import Range1d # Determining the plot height# and plot widthfig = figure(plot_width=500, plot_height=500) # With the help of x_range and# y_range functions we are setting# up the range of both the axisfig.x_range = Range1d(0, 10)fig.y_range = Range1d(5, 15) # Thus an empty plot is created# where the ranges of both the# axes are custom set by us.show(fig)",
"e": 1549,
"s": 995,
"text": null
},
{
"code": null,
"e": 1557,
"s": 1549,
"text": "Output:"
},
{
"code": null,
"e": 1570,
"s": 1557,
"text": "Explanation:"
},
{
"code": null,
"e": 1734,
"s": 1570,
"text": "As we can clearly see from the code, that we have set X-Axis limits from 0-10 and Y-Axis Limits from 5-15. The thing has been implemented in the graph shown above."
},
{
"code": null,
"e": 1745,
"s": 1734,
"text": "Example 2:"
},
{
"code": null,
"e": 1935,
"s": 1745,
"text": "In the second example, we are setting the X and Y Axis limits of our own, and then we are pointing a set of points in that range in the graph. The below code shows the above implementation."
},
{
"code": null,
"e": 1943,
"s": 1935,
"text": "Python3"
},
{
"code": "# importing figure and show from# bokeh.plottingfrom bokeh.plotting import figure, show # importing range1d from# bokeh.models in order to change# the X-Axis and Y-Axis rangesfrom bokeh.models import Range1d # Determining the plot height# and plot widthfig = figure(plot_width=620, plot_height=600) # With the help of x_range and# y_range functions we are setting# up the range of both the axisfig.x_range = Range1d(20, 25)fig.y_range = Range1d(35, 100) # Creating two graphs in a# single figure where we are# plotting the points in the range# set by usfig.line([21, 20, 23, 24, 22], [36, 72, 51, 90, 56], line_width=2, color=\"green\") fig.circle([21, 20, 23, 24, 22], [36, 72, 51, 90, 56], size=20, color=\"green\", alpha=0.5) # Showing the plotshow(fig)",
"e": 2760,
"s": 1943,
"text": null
},
{
"code": null,
"e": 2768,
"s": 2760,
"text": "Output:"
},
{
"code": null,
"e": 2781,
"s": 2768,
"text": "Explanation:"
},
{
"code": null,
"e": 3024,
"s": 2781,
"text": "Since this is a customized axis made by us, the size of the graph will be totally dependent on the axis limits. For example: if the axis limits for both the axis is larger, then the graph plotted above will be smaller than the one shown now. "
},
{
"code": null,
"e": 3031,
"s": 3024,
"text": "Picked"
},
{
"code": null,
"e": 3044,
"s": 3031,
"text": "Python-Bokeh"
},
{
"code": null,
"e": 3068,
"s": 3044,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 3075,
"s": 3068,
"text": "Python"
},
{
"code": null,
"e": 3094,
"s": 3075,
"text": "Technical Scripter"
},
{
"code": null,
"e": 3192,
"s": 3094,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3210,
"s": 3192,
"text": "Python Dictionary"
},
{
"code": null,
"e": 3252,
"s": 3210,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 3274,
"s": 3252,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 3309,
"s": 3274,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 3341,
"s": 3309,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 3367,
"s": 3341,
"text": "Python String | replace()"
},
{
"code": null,
"e": 3396,
"s": 3367,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 3423,
"s": 3396,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 3444,
"s": 3423,
"text": "Python OOPs Concepts"
}
] |
Python Nested Dictionary
|
10 Feb, 2022
Prerequisite – Python dictionary. A dictionary in Python works much like a dictionary in the real world. The keys of a dictionary must be unique and of an immutable data type such as strings, integers, or tuples, but the values can repeat and be of any type. Nested dictionary: nesting dictionaries means putting a dictionary inside another dictionary. Nesting is of great use, as it greatly expands the kind of information we can model in programs.
Python3
nested_dict = { 'dict1': {'key_A': 'value_A'}, 'dict2': {'key_B': 'value_B'}}
Python3
# As shown in image # Creating a Nested DictionaryDict = {1: 'Geeks', 2: 'For', 3: {'A' : 'Welcome', 'B' : 'To', 'C' : 'Geeks'}}
In Python, a Nested dictionary can be created by placing the comma-separated dictionaries enclosed within braces.
Python3
# Empty nested dictionaryDict = { 'Dict1': { }, 'Dict2': { }}print("Nested dictionary 1-")print(Dict) # Nested dictionary having same keysDict = { 'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}}print("\nNested dictionary 2-")print(Dict) # Nested dictionary of mixed dictionary keysDict = { 'Dict1': {1: 'G', 2: 'F', 3: 'G'}, 'Dict2': {'Name': 'Geeks', 1: [1, 2]} }print("\nNested dictionary 3-")print(Dict)
Output:
Nested dictionary 1-
{'Dict1': {}, 'Dict2': {}}
Nested dictionary 2-
{'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}}
Nested dictionary 3-
{'Dict1': {1: 'G', 2: 'F', 3: 'G'}, 'Dict2': {1: [1, 2], 'Name': 'Geeks'}}
Elements can be added to a nested dictionary in multiple ways. One way to add a dictionary inside the nested dictionary is to add values one by one, Nested_dict[dict][key] = ‘value’. Another way is to add the whole dictionary in one go, Nested_dict[dict] = { ‘key’: ‘value’}.
Python3
Dict = { }print("Initial nested dictionary:-")print(Dict) Dict['Dict1'] = {} # Adding elements one at a timeDict['Dict1']['name'] = 'Bob'Dict['Dict1']['age'] = 21print("\nAfter adding dictionary Dict1")print(Dict) # Adding whole dictionaryDict['Dict2'] = {'name': 'Cara', 'age': 25}print("\nAfter adding dictionary Dict1")print(Dict)
Output:
Initial nested dictionary:-
{}
After adding dictionary Dict1
{'Dict1': {'age': 21, 'name': 'Bob'}}
After adding dictionary Dict1
{'Dict1': {'age': 21, 'name': 'Bob'}, 'Dict2': {'age': 25, 'name': 'Cara'}}
In order to access the value of any key in nested dictionary, use indexing [] syntax.
Python3
# Nested dictionary having same keysDict = { 'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}} # Prints value corresponding to key 'name' in Dict1print(Dict['Dict1']['name']) # Prints value corresponding to key 'age' in Dict2print(Dict['Dict2']['age'])
Output:
Ali
25
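As a supplementary sketch (not part of the original example), chained get() calls can be used instead of indexing when a key might be missing, so that a default value is returned rather than a KeyError being raised:
# Nested dictionary having same keys
Dict = {'Dict1': {'name': 'Ali', 'age': '19'},
        'Dict2': {'name': 'Bob', 'age': '25'}}

# get() returns None (or a chosen default) instead of raising KeyError
print(Dict.get('Dict1', {}).get('name'))        # Ali
print(Dict.get('Dict3', {}).get('age', 'N/A'))  # N/A, since 'Dict3' does not exist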
Deletion of dictionaries from a nested dictionary can be done either by using del keyword or by using pop() function.
Python3
Dict = {'Dict1': {'name': 'Ali', 'age': 19}, 'Dict2': {'name': 'Bob', 'age': 21}}print("Initial nested dictionary:-")print(Dict) # Deleting dictionary using del keywordprint("\nDeleting Dict2:-")del Dict['Dict2']print(Dict) # Deleting dictionary using pop functionprint("\nDeleting Dict1:-")Dict.pop('Dict1')print (Dict)
Output:
Initial nested dictionary:-
{'Dict2': {'name': 'Bob', 'age': 21}, 'Dict1': {'name': 'Ali', 'age': 19}}
Deleting Dict2:-
{'Dict1': {'name': 'Ali', 'age': 19}}
Deleting Dict1:-
{}
rkbhola5
python-dict
Python-nested-dictionary
Python
python-dict
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n10 Feb, 2022"
},
{
"code": null,
"e": 506,
"s": 54,
"text": "Prerequisite – Python dictionaryA Dictionary in Python works similar to the Dictionary in the real world. Keys of a Dictionary must be unique and of immutable data type such as Strings, Integers and tuples, but the key-values can be repeated and be of any type.Nested Dictionary: Nesting Dictionary means putting a dictionary inside another dictionary. Nesting is of great use as the kind of information we can model in programs is expanded greatly. "
},
{
"code": null,
"e": 514,
"s": 506,
"text": "Python3"
},
{
"code": "nested_dict = { 'dict1': {'key_A': 'value_A'}, 'dict2': {'key_B': 'value_B'}}",
"e": 607,
"s": 514,
"text": null
},
{
"code": null,
"e": 615,
"s": 607,
"text": "Python3"
},
{
"code": "# As shown in image # Creating a Nested DictionaryDict = {1: 'Geeks', 2: 'For', 3: {'A' : 'Welcome', 'B' : 'To', 'C' : 'Geeks'}}",
"e": 744,
"s": 615,
"text": null
},
{
"code": null,
"e": 860,
"s": 744,
"text": "In Python, a Nested dictionary can be created by placing the comma-separated dictionaries enclosed within braces. "
},
{
"code": null,
"e": 868,
"s": 860,
"text": "Python3"
},
{
"code": "# Empty nested dictionaryDict = { 'Dict1': { }, 'Dict2': { }}print(\"Nested dictionary 1-\")print(Dict) # Nested dictionary having same keysDict = { 'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}}print(\"\\nNested dictionary 2-\")print(Dict) # Nested dictionary of mixed dictionary keysDict = { 'Dict1': {1: 'G', 2: 'F', 3: 'G'}, 'Dict2': {'Name': 'Geeks', 1: [1, 2]} }print(\"\\nNested dictionary 3-\")print(Dict)",
"e": 1329,
"s": 868,
"text": null
},
{
"code": null,
"e": 1338,
"s": 1329,
"text": "Output: "
},
{
"code": null,
"e": 1584,
"s": 1338,
"text": "Nested dictionary 1-\n{'Dict1': {}, 'Dict2': {}}\n\nNested dictionary 2-\n{'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}}\n\nNested dictionary 3-\n{'Dict1': {1: 'G', 2: 'F', 3: 'G'}, 'Dict2': {1: [1, 2], 'Name': 'Geeks'}}"
},
{
"code": null,
"e": 1871,
"s": 1586,
"text": "Addition of elements to a nested Dictionary can be done in multiple ways. One way to add a dictionary in the Nested dictionary is to add values one be one, Nested_dict[dict][key] = ‘value’. Another way is to add the whole dictionary in one go, Nested_dict[dict] = { ‘key’: ‘value’}. "
},
{
"code": null,
"e": 1879,
"s": 1871,
"text": "Python3"
},
{
"code": "Dict = { }print(\"Initial nested dictionary:-\")print(Dict) Dict['Dict1'] = {} # Adding elements one at a timeDict['Dict1']['name'] = 'Bob'Dict['Dict1']['age'] = 21print(\"\\nAfter adding dictionary Dict1\")print(Dict) # Adding whole dictionaryDict['Dict2'] = {'name': 'Cara', 'age': 25}print(\"\\nAfter adding dictionary Dict1\")print(Dict)",
"e": 2213,
"s": 1879,
"text": null
},
{
"code": null,
"e": 2223,
"s": 2213,
"text": "Output: "
},
{
"code": null,
"e": 2430,
"s": 2223,
"text": "Initial nested dictionary:-\n{}\n\nAfter adding dictionary Dict1\n{'Dict1': {'age': 21, 'name': 'Bob'}}\n\nAfter adding dictionary Dict1\n{'Dict1': {'age': 21, 'name': 'Bob'}, 'Dict2': {'age': 25, 'name': 'Cara'}}"
},
{
"code": null,
"e": 2520,
"s": 2432,
"text": "In order to access the value of any key in nested dictionary, use indexing [] syntax. "
},
{
"code": null,
"e": 2528,
"s": 2520,
"text": "Python3"
},
{
"code": "# Nested dictionary having same keysDict = { 'Dict1': {'name': 'Ali', 'age': '19'}, 'Dict2': {'name': 'Bob', 'age': '25'}} # Prints value corresponding to key 'name' in Dict1print(Dict['Dict1']['name']) # Prints value corresponding to key 'age' in Dict2print(Dict['Dict2']['age'])",
"e": 2817,
"s": 2528,
"text": null
},
{
"code": null,
"e": 2827,
"s": 2817,
"text": "Output: "
},
{
"code": null,
"e": 2834,
"s": 2827,
"text": "Ali\n25"
},
{
"code": null,
"e": 2956,
"s": 2836,
"text": "Deletion of dictionaries from a nested dictionary can be done either by using del keyword or by using pop() function. "
},
{
"code": null,
"e": 2964,
"s": 2956,
"text": "Python3"
},
{
"code": "Dict = {'Dict1': {'name': 'Ali', 'age': 19}, 'Dict2': {'name': 'Bob', 'age': 21}}print(\"Initial nested dictionary:-\")print(Dict) # Deleting dictionary using del keywordprint(\"\\nDeleting Dict2:-\")del Dict['Dict2']print(Dict) # Deleting dictionary using pop functionprint(\"\\nDeleting Dict1:-\")Dict.pop('Dict1')print (Dict)",
"e": 3292,
"s": 2964,
"text": null
},
{
"code": null,
"e": 3302,
"s": 3292,
"text": "Output: "
},
{
"code": null,
"e": 3482,
"s": 3302,
"text": "Initial nested dictionary:-\n{'Dict2': {'name': 'Bob', 'age': 21}, 'Dict1': {'name': 'Ali', 'age': 19}}\n\nDeleting Dict2:-\n{'Dict1': {'name': 'Ali', 'age': 19}}\n\nDeleting Dict1:-\n{}"
},
{
"code": null,
"e": 3493,
"s": 3484,
"text": "rkbhola5"
},
{
"code": null,
"e": 3505,
"s": 3493,
"text": "python-dict"
},
{
"code": null,
"e": 3530,
"s": 3505,
"text": "Python-nested-dictionary"
},
{
"code": null,
"e": 3537,
"s": 3530,
"text": "Python"
},
{
"code": null,
"e": 3549,
"s": 3537,
"text": "python-dict"
}
] |
Metagoofil – Tool to Extract Information from Docs, Images in Kali Linux
|
31 Oct, 2021
Metagoofil is an information-gathering tool. It is free and open source, and is designed to extract metadata from public documents that are available on websites. The tool uses two libraries, Hachoir and PdfMiner, to extract the data. After extracting everything, it generates a report containing usernames, software versions, and server or machine names, which helps penetration testers in the information-gathering phase. It can also extract MAC addresses from Microsoft Office documents, and the extracted metadata can reveal information about the hardware and software of the systems that were used to create those documents.
Step 1: Open your kali Linux operating system and install the tool using the following command.
git clone https://github.com/laramies/metagoofil.git
cd metagoofil
Step 2: Now use the following command to run the tool.
python metagoofil.py
The tool is running successfully. Now we will see some examples of using the tool.
Example 1: Use the metagoofil tool to extract PDFs from a website.
python metagoofil.py -d flipkart.com -l 100 -n 5 -t pdf -o newflipkart
In this way, you can extract PDFs and information about files from a website.
Example 2: Use the metagoofil tool to extract pdf from a website.
python metagoofil.py -d microsoft.com -l 20 -f all -o micro.html -t micro-files
In this way, the tool gathers all the details it can. You will get whatever details are publicly available on the website.
surinderdawra388
Kali-Linux
Linux-Tools
Linux-Unix
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n31 Oct, 2021"
},
{
"code": null,
"e": 689,
"s": 28,
"text": "The Metagoofil is an information-gathering tool. This is a free and open-source tool designed to extract all the metadata information from public documents that are available on websites. This tool uses two libraries to extract data. These are Hachoir and PdfMiner. After extracting all the data, this tool will generate a report which contains usernames, software versions, and servers or machine names that will help Penetration testers in the information-gathering phase. This tool can also extract MAC addresses from Microsoft office documents. This tool can give information about the hardware of the system by which they generated the report of the tool."
},
{
"code": null,
"e": 785,
"s": 689,
"text": "Step 1: Open your kali Linux operating system and install the tool using the following command."
},
{
"code": null,
"e": 852,
"s": 785,
"text": "git clone https://github.com/laramies/metagoofil.git\ncd metagoofil"
},
{
"code": null,
"e": 907,
"s": 852,
"text": "Step 2: Now use the following command to run the tool."
},
{
"code": null,
"e": 928,
"s": 907,
"text": "python metagoofil.py"
},
{
"code": null,
"e": 1011,
"s": 928,
"text": "The tool is running successfully. Now we will see some examples of using the tool."
},
{
"code": null,
"e": 1078,
"s": 1011,
"text": "Example 1: Use the metagoofil tool to extract PDFs from a website."
},
{
"code": null,
"e": 1149,
"s": 1078,
"text": "python metagoofil.py -d flipkart.com -l 100 -n 5 -t pdf -o newflipkart"
},
{
"code": null,
"e": 1226,
"s": 1149,
"text": "In this way, you can extract PDFsf and information on files from a website. "
},
{
"code": null,
"e": 1292,
"s": 1226,
"text": "Example 2: Use the metagoofil tool to extract pdf from a website."
},
{
"code": null,
"e": 1372,
"s": 1292,
"text": "python metagoofil.py -d microsoft.com -l 20 -f all -o micro.html -t micro-files"
},
{
"code": null,
"e": 1494,
"s": 1372,
"text": "In this way, the tool will find all the details of the tool. You can find the details, if any, available on the website. "
},
{
"code": null,
"e": 1511,
"s": 1494,
"text": "surinderdawra388"
},
{
"code": null,
"e": 1522,
"s": 1511,
"text": "Kali-Linux"
},
{
"code": null,
"e": 1534,
"s": 1522,
"text": "Linux-Tools"
},
{
"code": null,
"e": 1545,
"s": 1534,
"text": "Linux-Unix"
}
] |
Multiply large integers under large modulo
|
13 Jun, 2022
Given three integers a, b, and m, find (a * b) mod m, where a and b may be large and their direct multiplication may cause overflow. However, they are smaller than half of the maximum allowed long long int value.
Examples:
Input: a = 426, b = 964, m = 235
Output: 119
Explanation: (426 * 964) % 235
= 410664 % 235
= 119
Input: a = 10123465234878998,
b = 65746311545646431
m = 10005412336548794
Output: 4652135769797794
A naive approach is to use an arbitrary-precision data type, such as Python's built-in int or Java's BigInteger class. That approach is not fruitful here, because the conversions involved and the arbitrary-precision addition and multiplication are slower than operations on fixed-width binary integers.
Efficient solution: Since a and b may be very large numbers, multiplying them directly will definitely overflow. Therefore we fall back on the basic definition of multiplication, i.e., a * b = a + a + ... + a (b times). Each addition can be taken modulo m, so the running value never overflows. However, simply adding a to the result b times would time out for large values of b, since the time complexity of that approach is O(b).
So we break the repeated addition down into simpler steps:
If b is even then
a * b = 2 * a * (b / 2),
otherwise
a * b = a + a * (b - 1)
Below is the implementation of the above approach:
C++
C
Java
Python3
C#
PHP
Javascript
// C++ program of finding modulo multiplication#include <bits/stdc++.h> using namespace std; // Returns (a * b) % modlong long moduloMultiplication(long long a, long long b, long long mod){ long long res = 0; // Initialize result // Update a if it is more than // or equal to mod a %= mod; while (b) { // If b is odd, add a with result if (b & 1) res = (res + a) % mod; // Here we assume that doing 2*a // doesn't cause overflow a = (2 * a) % mod; b >>= 1; // b = b / 2 } return res;} // Driver programint main(){ long long a = 426; long long b = 964; long long m = 235; cout << moduloMultiplication(a, b, m); return 0;} // This code is contributed// by Akanksha Rai
// C program of finding modulo multiplication#include<stdio.h> // Returns (a * b) % modlong long moduloMultiplication(long long a, long long b, long long mod){ long long res = 0; // Initialize result // Update a if it is more than // or equal to mod a %= mod; while (b) { // If b is odd, add a with result if (b & 1) res = (res + a) % mod; // Here we assume that doing 2*a // doesn't cause overflow a = (2 * a) % mod; b >>= 1; // b = b / 2 } return res;} // Driver programint main(){ long long a = 10123465234878998; long long b = 65746311545646431; long long m = 10005412336548794; printf("%lld", moduloMultiplication(a, b, m)); return 0;}
// Java program of finding modulo multiplicationclass GFG{ // Returns (a * b) % mod static long moduloMultiplication(long a, long b, long mod) { // Initialize result long res = 0; // Update a if it is more than // or equal to mod a %= mod; while (b > 0) { // If b is odd, add a with result if ((b & 1) > 0) { res = (res + a) % mod; } // Here we assume that doing 2*a // doesn't cause overflow a = (2 * a) % mod; b >>= 1; // b = b / 2 } return res; } // Driver code public static void main(String[] args) { long a = 10123465234878998L; long b = 65746311545646431L; long m = 10005412336548794L; System.out.print(moduloMultiplication(a, b, m)); }} // This code is contributed by Rajput-JI
# Python 3 program of finding# modulo multiplication # Returns (a * b) % moddef moduloMultiplication(a, b, mod): res = 0; # Initialize result # Update a if it is more than # or equal to mod a = a % mod; while (b): # If b is odd, add a with result if (b & 1): res = (res + a) % mod; # Here we assume that doing 2*a # doesn't cause overflow a = (2 * a) % mod; b >>= 1; # b = b / 2 return res; # Driver Codea = 10123465234878998;b = 65746311545646431;m = 10005412336548794;print(moduloMultiplication(a, b, m)); # This code is contributed# by Shivi_Aggarwal
// C# program of finding modulo multiplicationusing System; class GFG{ // Returns (a * b) % modstatic long moduloMultiplication(long a, long b, long mod){ long res = 0; // Initialize result // Update a if it is more than // or equal to mod a %= mod; while (b > 0) { // If b is odd, add a with result if ((b & 1) > 0) res = (res + a) % mod; // Here we assume that doing 2*a // doesn't cause overflow a = (2 * a) % mod; b >>= 1; // b = b / 2 } return res;} // Driver codestatic void Main(){ long a = 10123465234878998; long b = 65746311545646431; long m = 10005412336548794; Console.WriteLine(moduloMultiplication(a, b, m));}} // This code is contributed// by chandan_jnu
<?php//PHP program of finding// modulo multiplication // Returns (a * b) % modfunction moduloMultiplication($a, $b, $mod){ $res = 0; // Initialize result // Update a if it is more than // or equal to mod $a %= $mod; while ($b) { // If b is odd, // add a with result if ($b & 1) $res = ($res + $a) % $mod; // Here we assume that doing 2*a // doesn't cause overflow $a = (2 * $a) % $mod; $b >>= 1; // b = b / 2 } return $res;} // Driver Code $a = 10123465234878998; $b = 65746311545646431; $m = 10005412336548794; echo moduloMultiplication($a, $b, $m); // This oce is contributed by ajit?>
<script> // JavaScript program for the above approach // Returns (a * b) % modfunction moduloMultiplication(a, b, mod){ // Initialize result let res = 0; // Update a if it is more than // or equal to mod a = (a % mod); while (b > 0) { // If b is odd, add a with result if ((b & 1) > 0) { res = (res + a) % mod; } // Here we assume that doing 2*a // doesn't cause overflow a = (2 * a) % mod; b = (b >> 1); // b = b / 2 } return res;} // Driver Codelet a = 426;let b = 964;let m = 235; document.write(moduloMultiplication(a, b, m)); // This code is contributed by code_hunt </script>
119
Time complexity: O(log b) Auxiliary space: O(1)
Note: The above approach works only if 2 * m can be represented in the standard data type; otherwise the intermediate doubling step will overflow.
This article is contributed by Shubham Bansal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
jit_t
Shivi_Aggarwal
Akanksha_Rai
Chandan_Kumar
Rajput-Ji
punnapavankumar9
patiencewins
code_hunt
guptashruti232
large-numbers
Modular Arithmetic
Mathematical
Mathematical
Modular Arithmetic
Using Tags in Git
|
08 Jun, 2020
Tagging in Git refers to creating specific reference points in the history of your repository. It is usually done to mark release points.
Two main purposes of tags are:
Mark a release point in your code.
Create historic restore points.
You can create a tag when you want a release point for a stable version of your code, or when you want a historic point that you can return to later to restore your data.
To create a tag we need to go through the following steps:
Step 1: Checkout to the branch you want to create the tag.
git checkout {branch name}
Step 2: Create a tag with some name
git tag {tag name}
There are more ways in which we can create tags. For example, an annotated tag, which also stores a message with the tag:
git tag -a {tag name} -m {some message}
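For example, to mark an illustrative release named v1.0 with an annotated tag:

 git tag -a v1.0 -m "Release version 1.0"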
Step 3: See all the created tags.
git tag
To see the details of the tag we can use
git show {tag name}
To see tags starting with some letters
git tag -l "v2.*"
Step 4: Push tags to remote.
git push origin {tag name}
git push --tags
“git push --tags” will push all tags at once.
(The original article shows “before” and “after” screenshots of the push at this point.)
Step 5: Delete Tags. (locally)
git tag -d {tag name}
git tag --delete {tag name}
Step 6: Delete tags from remote
git push origin -d {tag name}
git push origin --delete {tag name}
git push origin :{tag name}
Note that “git push origin -d {tag name}” only removes the tag from the remote; to remove it locally as well, also run “git tag -d {tag name}”.
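For example, to remove an illustrative tag named v1.0 both locally and from the remote, you could run:

 git tag -d v1.0
 git push origin --delete v1.0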
GitHub
Picked
Git
Delete Nth node from the end of the given linked list
|
12 Jul, 2022
Given a linked list and an integer N, the task is to delete the Nth node from the end of the given linked list.
Examples:
Input: 2 -> 3 -> 1 -> 7 -> NULL, N = 1
Output: The created linked list is: 2 3 1 7
The linked list after deletion is: 2 3 1

Input: 1 -> 2 -> 3 -> 4 -> NULL, N = 4
Output: The created linked list is: 1 2 3 4
The linked list after deletion is: 2 3 4
Intuition:
Let K be the total number of nodes in the linked list.
Observation: The Nth node from the end is the (K - N + 1)th node from the beginning.
So the problem reduces to finding the (K - N + 1)th node from the beginning.
One way of doing this is to find the length K of the linked list in a first pass and then, in a second pass, walk from the beginning to the (K - N + 1)th node, which is the Nth node from the end.
To do it in one pass, take two pointers and move the first one N steps ahead from the beginning. The two pointers then stay exactly N nodes apart, so when the first pointer reaches the end of the list, the second pointer is at the Nth node from the end.
Approach:
Take two pointers; the first will point to the head of the linked list and the second will point to the Nth node from the beginning.
Now keep incrementing both the pointers by one at the same time until the second is pointing to the last node of the linked list.
After the operations from the previous step, the first pointer should point to the Nth node from the end now. So, delete the node the first pointer is pointing to.
Below is the implementation of the above approach:
C++
C
Java
Python3
C#
Javascript
// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; class LinkedList {public: // Linked list Node class Node { public: int data; Node* next; Node(int d) { data = d; next = NULL; } }; // Head of list Node* head; // Function to delete the nth node from the end of the // given linked list Node* deleteNode(int key) { // We will be using this pointer for holding address // temporarily while we delete the node Node* temp; // First pointer will point to the head of the // linked list Node* first = head; // Second pointer will point to the Nth node from // the beginning Node* second = head; for (int i = 0; i < key; i++) { // If count of nodes in the given linked list is <= N if (second->next == NULL) { // If count = N i.e. delete the head node if (i == key - 1) { temp = head; head = head->next; free(temp); } return head; } second = second->next; } // Increment both the pointers by one until second // pointer reaches the end while (second->next != NULL) { first = first->next; second = second->next; } // First must be pointing to the Nth node from the // end by now So, delete the node first is pointing to temp = first->next; first->next = first->next->next; free(temp); return head; } // Function to insert a new Node at front of the list Node* push(int new_data) { Node* new_node = new Node(new_data); new_node->next = head; head = new_node; return head; } // Function to print the linked list void printList() { Node* tnode = head; while (tnode != NULL) { cout << (tnode->data) << (" "); tnode = tnode->next; } }}; // Driver codeint main(){ LinkedList* llist = new LinkedList(); llist->head = llist->push(7); llist->head = llist->push(1); llist->head = llist->push(3); llist->head = llist->push(2); cout << ("Created Linked list is:\n"); llist->printList(); int N = 1; llist->head = llist->deleteNode(N); cout << ("\nLinked List after Deletion is:\n"); llist->printList();} // This code is contributed by Sania Kumari Gupta
/* C program to merge two sorted linked lists */#include <assert.h>#include <stdio.h>#include <stdlib.h> /* Link list node */typedef struct Node { int data; struct Node* next;} Node; Node* deleteNode(Node* head, int key){ // We will be using this pointer for holding address // temporarily while we delete the node Node* temp; // First pointer will point to the head of the linked // list Node* first = head; // Second pointer will point to the Nth node from the // beginning Node* second = head; for (int i = 0; i < key; i++) { // If count of nodes in the given linked list is <=N if (second->next == NULL) { // If count = N i.e. delete the head node if (i == key - 1) { temp = head; head = head->next; free(temp); } return head; } second = second->next; } // Increment both the pointers by one until // second pointer reaches the end while (second->next != NULL) { first = first->next; second = second->next; } // First must be pointing to the Nth node from the end // by now So, delete the node first is pointing to temp = first->next; first->next = first->next->next; free(temp); return head;} /* Function to insert a node at the beginning of thelinked list */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = (Node*)malloc(sizeof(Node)); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(struct Node* node){ while (node != NULL) { printf("%d ", node->data); node = node->next; }} // Driver programint main(){ struct Node* head = NULL; push(&head, 7); push(&head, 1); push(&head, 3); push(&head, 2); printf("Created Linked list is:\n"); printList(head); int n = 1; deleteNode(head, n); printf("\nLinked List after Deletion is:\n"); printList(head); return 0;} // This code is contributed by Sania Kumari Gupta
// Java implementation of the approachclass LinkedList { // Head of list Node head; // Linked list Node class Node { int data; Node next; Node(int d) { data = d; next = null; } } // Function to delete the nth node from // the end of the given linked list void deleteNode(int key) { // First pointer will point to // the head of the linked list Node first = head; // Second pointer will point to the // Nth node from the beginning Node second = head; for (int i = 0; i < key; i++) { // If count of nodes in the given // linked list is <= N if (second.next == null) { // If count = N i.e. delete the head node if (i == key - 1) head = head.next; return; } second = second.next; } // Increment both the pointers by one until // second pointer reaches the end while (second.next != null) { first = first.next; second = second.next; } // First must be pointing to the // Nth node from the end by now // So, delete the node first is pointing to first.next = first.next.next; } // Function to insert a new Node at front of the list public void push(int new_data) { Node new_node = new Node(new_data); new_node.next = head; head = new_node; } // Function to print the linked list public void printList() { Node tnode = head; while (tnode != null) { System.out.print(tnode.data + " "); tnode = tnode.next; } } // Driver code public static void main(String[] args) { LinkedList llist = new LinkedList(); llist.push(7); llist.push(1); llist.push(3); llist.push(2); System.out.println("\nCreated Linked list is:"); llist.printList(); int N = 1; llist.deleteNode(N); System.out.println("\nLinked List after Deletion is:"); llist.printList(); }}
# Python3 implementation of the approachclass Node: def __init__(self, new_data): self.data = new_data self.next = Noneclass LinkedList: def __init__(self): self.head = None # createNode and make linked list def push(self, new_data): new_node = Node(new_data) new_node.next = self.head self.head = new_node def deleteNode(self, n): first = self.head second = self.head for i in range(n): # If count of nodes in the # given list is less than 'n' if(second.next == None): # If index = n then # delete the head node if(i == n - 1): self.head = self.head.next return self.head second = second.next while(second.next != None): second = second.next first = first.next first.next = first.next.next def printList(self): tmp_head = self.head while(tmp_head != None): print(tmp_head.data, end = ' ') tmp_head = tmp_head.next # Driver Codellist = LinkedList()llist.push(7)llist.push(1)llist.push(3)llist.push(2)print("Created Linked list is:")llist.printList()llist.deleteNode(1)print("\nLinked List after Deletion is:")llist.printList() # This code is contributed by RaviParkash
// C# implementation of the approachusing System; public class LinkedList{ // Head of list public Node head; // Linked list Node public class Node { public int data; public Node next; public Node(int d) { data = d; next = null; } } // Function to delete the nth node from // the end of the given linked list void deleteNode(int key) { // First pointer will point to // the head of the linked list Node first = head; // Second pointer will point to the // Nth node from the beginning Node second = head; for (int i = 0; i < key; i++) { // If count of nodes in the given // linked list is <= N if (second.next == null) { // If count = N i.e. delete the head node if (i == key - 1) head = head.next; return; } second = second.next; } // Increment both the pointers by one until // second pointer reaches the end while (second.next != null) { first = first.next; second = second.next; } // First must be pointing to the // Nth node from the end by now // So, delete the node first is pointing to first.next = first.next.next; } // Function to insert a new Node at front of the list public void push(int new_data) { Node new_node = new Node(new_data); new_node.next = head; head = new_node; } // Function to print the linked list public void printList() { Node tnode = head; while (tnode != null) { Console.Write(tnode.data + " "); tnode = tnode.next; } } // Driver code public static void Main(String[] args) { LinkedList llist = new LinkedList(); llist.push(7); llist.push(1); llist.push(3); llist.push(2); Console.WriteLine("\nCreated Linked list is:"); llist.printList(); int N = 1; llist.deleteNode(N); Console.WriteLine("\nLinked List after Deletion is:"); llist.printList(); }} // This code is contributed by 29AjayKumar
<script>// javascript implementation of the approach // Head of list var head; // Linked list Node class Node { constructor(val) { this.data = val; this.next = null; } } // Function to delete the nth node from // the end of the given linked list function deleteNode(key) { // First pointer will point to // the head of the linked list var first = head; // Second pointer will point to the // Nth node from the beginning var second = head; for (i = 0; i < key; i++) { // If count of nodes in the given // linked list is <= N if (second.next == null) { // If count = N i.e. delete the head node if (i == key - 1) head = head.next; return; } second = second.next; } // Increment both the pointers by one until // second pointer reaches the end while (second.next != null) { first = first.next; second = second.next; } // First must be pointing to the // Nth node from the end by now // So, delete the node first is pointing to first.next = first.next.next; } // Function to insert a new Node at front of the list function push(new_data) { var new_node = new Node(new_data); new_node.next = head; head = new_node; } // Function to print the linked list function printList() { var tnode = head; while (tnode != null) { document.write(tnode.data + " "); tnode = tnode.next; } } // Driver code push(7); push(1); push(3); push(2); document.write("\nCreated Linked list is:<br/>"); printList(); var N = 1; deleteNode(N); document.write("<br/>Linked List after Deletion is:<br/>"); printList(); // This code is contributed by todaysgaurav</script>
Created Linked list is:
2 3 1 7
Linked List after Deletion is:
2 3 1
We have already covered the iterative version above. Now let us look at the recursive approach as well.
Recursive Approach:
1) Create a dummy node and link it to the head node, i.e., dummy->next = head.
2) Use the recursion stack to keep track of the nodes pushed during the recursive calls.
3) While popping the elements from the recursion stack, decrement N (the position of the target node from the end of the linked list), i.e., N = N - 1.
4) When N == 0, we have reached the target node.
5) But here is the catch: to delete the target node we require its previous node.
6) So we stop when N == -1, i.e., when we have reached the previous node.
7) Now the node is deleted simply with previousNode->next = previousNode->next->next.
C++
// C++ implementation of the approach// Code is contributed by Paras Saini#include <bits/stdc++.h>using namespace std;class LinkedList {public: int val; LinkedList* next; LinkedList() { this->next = NULL; this->val = 0; } LinkedList(int val) { this->next = NULL; this->val = val; } LinkedList* addNode(int val) { if (this == NULL) { return new LinkedList(val); } else { LinkedList* ptr = this; while (ptr->next) { ptr = ptr->next; } ptr->next = new LinkedList(val); return this; } } void removeNthNodeFromEndHelper(LinkedList* head, int& n) { if (!head) return; // Adding the elements in the recursion // stack removeNthNodeFromEndHelper(head->next, n); // Popping the elements from recursion stack n -= 1; // If we reach the previous of target node if (n == -1){ LinkedList* temp = head->next; head->next = head->next->next; free (temp); } } LinkedList* removeNthNodeFromEnd(int n) { // return NULL if we have NULL head or only // one node. if (!this or !this->next) return NULL; // Create a dummy node and point its next to // head. LinkedList* dummy = new LinkedList(); dummy->next = this; // Call function to remove Nth node from end removeNthNodeFromEndHelper(dummy, n); // Return new head i.e, dummy->next return dummy->next; } void printLinkedList() { if (!this) { cout << "Empty List\n"; return; } LinkedList* ptr = this; while (ptr) { cout << ptr->val << " "; ptr = ptr->next; } cout << endl; }}; class TestCase {private: void printOutput(LinkedList* head) { // Output: if (!head) cout << "Empty Linked List\n"; else head->printLinkedList(); } void testCase1() { LinkedList* head = new LinkedList(1); head = head->addNode(2); head = head->addNode(3); head = head->addNode(4); head = head->addNode(5); head->printLinkedList(); // Print: 1 2 3 4 5 head = head->removeNthNodeFromEnd(2); printOutput(head); // Output: 1 2 3 5 } void testCase2() { // Important Edge Case, where linkedList [1] // and n=1, LinkedList* head = new LinkedList(1); head->printLinkedList(); // Print: 1 head = head->removeNthNodeFromEnd(2); printOutput(head); // Output: Empty Linked List } void testCase3() { LinkedList* head = new LinkedList(1); head = head->addNode(2); head->printLinkedList(); // Print: 1 2 head = head->removeNthNodeFromEnd(1); printOutput(head); // Output: 1 } public: void executeTestCases() { testCase1(); testCase2(); testCase3(); }}; int main(){ TestCase testCase; testCase.executeTestCases(); return 0;}
1 2 3 4 5
1 2 3 5
1
Empty Linked List
1 2
1
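The recursive version above is shown only in C++. A minimal Python sketch of the same idea is given below; the Node class, function name and driver list are illustrative and not part of the original article. The dummy node mirrors step 1 of the description, and counting N down while the recursion unwinds mirrors steps 3 to 6.

# Minimal Python sketch of the recursive approach described above.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def remove_nth_from_end(head, n):
    # Dummy node so that deleting the head needs no special case.
    dummy = Node(0)
    dummy.next = head

    def helper(node):
        nonlocal n
        if node is None:
            return
        # Recurse first: nodes are "pushed" onto the recursion stack.
        helper(node.next)
        # Unwinding the recursion counts nodes from the end.
        n -= 1
        if n == -1:
            # node is the predecessor of the target, so unlink the target.
            node.next = node.next.next

    helper(dummy)
    return dummy.next

# Build 1 -> 2 -> 3 -> 4 -> 5 and delete the 2nd node from the end.
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)
head.next.next.next = Node(4)
head.next.next.next.next = Node(5)
head = remove_nth_from_end(head, 2)
node = head
while node:
    print(node.data, end=' ')   # expected output: 1 2 3 5
    node = node.next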
Two Pointer Approach – Slow and Fast Pointers
This problem can be solved by using two pointer approach as below:
Take two pointers, fast and slow, and initialize both to the head node.
Move the fast pointer forward n times.
Now move both the fast and the slow pointers forward, one step at a time, until the fast pointer reaches the last node of the linked list.
At that point the slow pointer is at the node just before the one you want to delete.
Replace the next node of the slow pointer with its next-to-next node, unlinking the target node.
C++
Java
Python
C#
Javascript
// C++ code for the deleting a node from end using two// pointer approach #include <iostream>using namespace std; class LinkedList {public: // structure of a node class Node { public: int data; Node* next; Node(int d) { data = d; next = NULL; } }; // Head node Node* head; // Function for inserting a node at the beginning void push(int data) { Node* new_node = new Node(data); new_node->next = head; head = new_node; } // Function to display the nodes in the list. void display() { Node* temp = head; while (temp != NULL) { cout << temp->data << endl; temp = temp->next; } } // Function to delete the nth node from the end. void deleteNthNodeFromEnd(Node* head, int n) { Node* fast = head; Node* slow = head; for (int i = 0; i < n; i++) { fast = fast->next; } if (fast == NULL) { return; } while (fast->next != NULL) { fast = fast->next; slow = slow->next; } slow->next = slow->next->next; return; }}; int main(){ LinkedList* l = new LinkedList(); // Create a list 1->2->3->4->5->NULL l->push(5); l->push(4); l->push(3); l->push(2); l->push(1); cout << "***** Linked List Before deletion *****" << endl; l->display(); cout << "************** Delete nth Node from the End " "*****" << endl; l->deleteNthNodeFromEnd(l->head, 2); cout << "*********** Linked List after Deletion *****" << endl; l->display(); return 0;} // This code is contributed by lokesh (lokeshmvs21).
// Java code for deleting a node from the end using two// Pointer Approach class GFG { class Node { int data; Node next; Node(int data) { this.data = data; this.next = null; } } Node head; // Function to insert node at the beginning of the list. public void push(int data) { Node new_node = new Node(data); new_node.next = head; head = new_node; } // Function to print the nodes in the linked list. public void display() { Node temp = head; while (temp != null) { System.out.println(temp.data); temp = temp.next; } } public void deleteNthNodeFromEnd(Node head, int n) { Node fast = head; Node slow = head; for (int i = 0; i < n; i++) { fast = fast.next; } if (fast == null) { return; } while (fast.next != null) { fast = fast.next; slow = slow.next; } slow.next = slow.next.next; return; } public static void main(String[] args) { GFG l = new GFG(); // Create a list 1->2->3->4->5->NULL l.push(5); l.push(4); l.push(3); l.push(2); l.push(1); System.out.println( "***** Linked List Before deletion *****"); l.display(); System.out.println( "************** Delete nth Node from the End *****"); l.deleteNthNodeFromEnd(l.head, 2); System.out.println( "*********** Linked List after Deletion *****"); l.display(); }} // This code is contributed by lokesh (lokeshmvs21).
class Node: def __init__(self, data): self.data = data self.next = None class LinkedList: def __init__(self): self.head = None def push(self, data): new_node = Node(data) new_node.next = self.head self.head = new_node def display(self): temp = self.head while temp != None: print(temp.data) temp = temp.next def deleteNthNodeFromEnd(self, head, n): fast = head slow = head for _ in range(n): fast = fast.next if not fast: return head.next while fast.next: fast = fast.next slow = slow.next slow.next = slow.next.next return head if __name__ == '__main__': l = LinkedList() l.push(5) l.push(4) l.push(3) l.push(2) l.push(1) print('***** Linked List Before deletion *****') l.display() print('************** Delete nth Node from the End *****') l.deleteNthNodeFromEnd(l.head, 2) print('*********** Linked List after Deletion *****') l.display()
// C# code for deleting a node from the end using two// Pointer Approach using System; public class GFG { public class Node { public int data; public Node next; public Node(int data) { this.data = data; this.next = null; } } Node head; void push(int data) { Node new_node = new Node(data); new_node.next = head; head = new_node; } void display() { Node temp = head; while (temp != null) { Console.WriteLine(temp.data); temp = temp.next; } } public void deleteNthNodeFromEnd(Node head, int n) { Node fast = head; Node slow = head; for (int i = 0; i < n; i++) { fast = fast.next; } if (fast == null) { return; } while (fast.next != null) { fast = fast.next; slow = slow.next; } slow.next = slow.next.next; return; } static public void Main() { // Code GFG l = new GFG(); // Create a list 1->2->3->4->5->NULL l.push(5); l.push(4); l.push(3); l.push(2); l.push(1); Console.WriteLine( "***** Linked List Before deletion *****"); l.display(); Console.WriteLine( "************** Delete nth Node from the End *****"); l.deleteNthNodeFromEnd(l.head, 2); Console.WriteLine( "*********** Linked List after Deletion *****"); l.display(); }} // This code is contributed by lokesh(lokeshmvs21).
<script> class Node{ constructor(data){ this.data = data this.next = null }} class LinkedList{ constructor(){ this.head = null } push(data){ let new_node = new Node(data) new_node.next =this.head this.head = new_node } display(){ let temp =this.head while(temp != null){ document.write(temp.data,"</br>") temp = temp.next } } deleteNthNodeFromEnd(head, n){ let fast = head let slow = head for(let i=0;i<n;i++){ fast = fast.next } if(!fast) return head.next while(fast.next){ fast = fast.next slow = slow.next } slow.next = slow.next.next return head }} // driver code let l = new LinkedList()l.push(5)l.push(4)l.push(3)l.push(2)l.push(1)document.write('***** Linked List Before deletion *****',"</br>")l.display() document.write('************** Delete nth Node from the End *****',"</br>")l.deleteNthNodeFromEnd(l.head, 2) document.write('*********** Linked List after Deletion *****',"</br>")l.display() // This code is contributed by shinjanpatra </script>
***** Linked List Before deletion *****
1
2
3
4
5
************** Delete nth Node from the End *****
*********** Linked List after Deletion *****
1
2
3
5
Time complexity: O(n)
Space complexity: O(1), as only constant extra space is used
29AjayKumar
RaviParkash
andrew1234
sparshgarg07
todaysgaurav
Paras Saini
vivek shivhare
utkarshbhatnagar30
ramakrishna1729
simranarora5sos
gul md ershad
surinderdawra388
krisania804
shinjanpatra
noviced3vq6
lokeshmvs21
Delete a Linked List
Traversal
Linked List
Linked List
Traversal
"e": 23467,
"s": 22391,
"text": null
},
{
"code": "// C# code for deleting a node from the end using two// Pointer Approach using System; public class GFG { public class Node { public int data; public Node next; public Node(int data) { this.data = data; this.next = null; } } Node head; void push(int data) { Node new_node = new Node(data); new_node.next = head; head = new_node; } void display() { Node temp = head; while (temp != null) { Console.WriteLine(temp.data); temp = temp.next; } } public void deleteNthNodeFromEnd(Node head, int n) { Node fast = head; Node slow = head; for (int i = 0; i < n; i++) { fast = fast.next; } if (fast == null) { return; } while (fast.next != null) { fast = fast.next; slow = slow.next; } slow.next = slow.next.next; return; } static public void Main() { // Code GFG l = new GFG(); // Create a list 1->2->3->4->5->NULL l.push(5); l.push(4); l.push(3); l.push(2); l.push(1); Console.WriteLine( \"***** Linked List Before deletion *****\"); l.display(); Console.WriteLine( \"************** Delete nth Node from the End *****\"); l.deleteNthNodeFromEnd(l.head, 2); Console.WriteLine( \"*********** Linked List after Deletion *****\"); l.display(); }} // This code is contributed by lokesh(lokeshmvs21).",
"e": 25073,
"s": 23467,
"text": null
},
{
"code": "<script> class Node{ constructor(data){ this.data = data this.next = null }} class LinkedList{ constructor(){ this.head = null } push(data){ let new_node = new Node(data) new_node.next =this.head this.head = new_node } display(){ let temp =this.head while(temp != null){ document.write(temp.data,\"</br>\") temp = temp.next } } deleteNthNodeFromEnd(head, n){ let fast = head let slow = head for(let i=0;i<n;i++){ fast = fast.next } if(!fast) return head.next while(fast.next){ fast = fast.next slow = slow.next } slow.next = slow.next.next return head }} // driver code let l = new LinkedList()l.push(5)l.push(4)l.push(3)l.push(2)l.push(1)document.write('***** Linked List Before deletion *****',\"</br>\")l.display() document.write('************** Delete nth Node from the End *****',\"</br>\")l.deleteNthNodeFromEnd(l.head, 2) document.write('*********** Linked List after Deletion *****',\"</br>\")l.display() // This code is contributed by shinjanpatra </script>",
"e": 26263,
"s": 25073,
"text": null
},
{
"code": null,
"e": 26416,
"s": 26263,
"text": "***** Linked List Before deletion *****\n1\n2\n3\n4\n5\n************** Delete nth Node from the End *****\n*********** Linked List after Deletion *****\n1\n2\n3\n5"
},
{
"code": null,
"e": 26438,
"s": 26416,
"text": "Time complexity: O(n)"
},
{
"code": null,
"e": 26483,
"s": 26438,
"text": "Space complexity: O(1) using constant space "
},
{
"code": null,
"e": 26495,
"s": 26483,
"text": "29AjayKumar"
},
{
"code": null,
"e": 26507,
"s": 26495,
"text": "RaviParkash"
},
{
"code": null,
"e": 26518,
"s": 26507,
"text": "andrew1234"
},
{
"code": null,
"e": 26531,
"s": 26518,
"text": "sparshgarg07"
},
{
"code": null,
"e": 26544,
"s": 26531,
"text": "todaysgaurav"
},
{
"code": null,
"e": 26556,
"s": 26544,
"text": "Paras Saini"
},
{
"code": null,
"e": 26571,
"s": 26556,
"text": "vivek shivhare"
},
{
"code": null,
"e": 26590,
"s": 26571,
"text": "utkarshbhatnagar30"
},
{
"code": null,
"e": 26606,
"s": 26590,
"text": "ramakrishna1729"
},
{
"code": null,
"e": 26622,
"s": 26606,
"text": "simranarora5sos"
},
{
"code": null,
"e": 26636,
"s": 26622,
"text": "gul md ershad"
},
{
"code": null,
"e": 26653,
"s": 26636,
"text": "surinderdawra388"
},
{
"code": null,
"e": 26665,
"s": 26653,
"text": "krisania804"
},
{
"code": null,
"e": 26678,
"s": 26665,
"text": "shinjanpatra"
},
{
"code": null,
"e": 26690,
"s": 26678,
"text": "noviced3vq6"
},
{
"code": null,
"e": 26702,
"s": 26690,
"text": "lokeshmvs21"
},
{
"code": null,
"e": 26723,
"s": 26702,
"text": "Delete a Linked List"
},
{
"code": null,
"e": 26733,
"s": 26723,
"text": "Traversal"
},
{
"code": null,
"e": 26745,
"s": 26733,
"text": "Linked List"
},
{
"code": null,
"e": 26757,
"s": 26745,
"text": "Linked List"
},
{
"code": null,
"e": 26767,
"s": 26757,
"text": "Traversal"
}
] |
Maximum modulo of all the pairs of array where arr[i] >= arr[j]
|
13 Jul, 2022
Given an array of n integers. Find the maximum value of arr[i] mod arr[j] where arr[i] >= arr[j] and 1 <= i, j <= n
Examples:
Input: arr[] = {3, 4, 7}
Output: 3
Explanation:
There are 3 pairs which satisfies arr[i] >= arr[j] are:-
4, 3 => 4 % 3 = 1
7, 3 => 7 % 3 = 1
7, 4 => 7 % 4 = 3
Hence Maximum value among all is 3.
Input: arr[] = {3, 7, 4, 11}
Output: 4
Input: arr[] = {4, 4, 4}
Output: 0
A Naive approach is to run two nested for loops and take the maximum over every possible pair after computing the modulo. The time complexity of this approach is O(n^2), which will not be efficient for large values of n.
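For reference, a minimal Python sketch of this brute-force idea (the function name is illustrative and not from the original article; it assumes positive integers, as in the examples above):

def max_mod_naive(arr):
    # Check every ordered pair and keep the largest arr[i] % arr[j]
    # with arr[i] >= arr[j]; runs in O(n^2) time.
    ans = 0
    for i in range(len(arr)):
        for j in range(len(arr)):
            if arr[i] >= arr[j]:
                ans = max(ans, arr[i] % arr[j])
    return ans

print(max_mod_naive([3, 7, 4, 11]))   # prints 4, matching the second example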
An efficient approach (when the elements are from a small range) is to use sorting and binary search. First we sort the array so that we are able to apply binary search on it. Since we need to maximize the value of arr[i] mod arr[j], for each arr[j] we iterate through every x divisible by arr[j] in the range from 2*arr[j] to M+arr[j], where M is the maximum value of the sequence. For each value of x we need to find the maximum value of arr[i] such that arr[i] < x.
By doing this we ensure that we have chosen only those values of arr[i] that give the maximum value of arr[i] mod arr[j]. After that we just need to repeat the above process for the other values of arr[j] and update the answer with the value arr[i] mod arr[j].
For example:-
If arr[] = {4, 6, 7, 8, 10, 12, 15} then for
first element, i.e., arr[j] = 4 we iterate
through x = {8, 12, 16}.
Therefore for each value of x, a[i] will be:-
x = 8, arr[i] = 7 (7 < 8)
ans = 7 mod 4 = 3
x = 12, arr[i] = 10 (10 < 12)
ans = 10 mod 4 = 2 (Since 2 < 3,
No update)
x = 16, arr[i] = 15 (15 < 16)
ans = 15 mod 4 = 3 (Since 3 == 3,
No need to update)
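As a compact illustration of this approach, here is a hedged Python sketch using the standard bisect module (illustrative only; the full implementations in several languages follow under Implementation):

import bisect

def max_mod_value(arr):
    # Sort once so that binary search can find, for each multiple x of
    # arr[j], the largest element strictly smaller than x.
    arr = sorted(arr)
    n = len(arr)
    ans = 0
    for j in range(n - 2, -1, -1):
        # No remainder by arr[j] can exceed arr[j] - 1.
        if ans >= arr[j]:
            break
        # Skip duplicate divisors.
        if arr[j] == arr[j + 1]:
            continue
        for x in range(2 * arr[j], arr[-1] + arr[j] + 1, arr[j]):
            ind = bisect.bisect_left(arr, x)
            ans = max(ans, arr[ind - 1] % arr[j])
    return ans

print(max_mod_value([3, 4, 5, 9, 11]))   # prints 4, as in the output below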
Implementation:
C++
Java
Python3
C#
Javascript
// C++ program to find Maximum modulo value#include <bits/stdc++.h>using namespace std; int maxModValue(int arr[], int n){ int ans = 0; // Sort the array[] by using inbuilt sort function sort(arr, arr + n); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search // inbuilt lower_bound() function of C++ int ind = lower_bound(arr, arr + n, i) - arr; // Update the answer ans = max(ans, arr[ind - 1] % arr[j]); } } return ans;} // Driver codeint main(){ int arr[] = { 3, 4, 5, 9, 11 }; int n = sizeof(arr) / sizeof(arr[0]); cout << maxModValue(arr, n);}
// Java program to find Maximum modulo value import java.util.Arrays; class Test { static int maxModValue(int arr[], int n) { int ans = 0; // Sort the array[] by using inbuilt sort function Arrays.sort(arr); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search int ind = Arrays.binarySearch(arr, i); if (ind < 0) ind = Math.abs(ind + 1); else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.max(ans, arr[ind - 1] % arr[j]); } } return ans; } // Driver method public static void main(String args[]) { int arr[] = { 3, 4, 5, 9, 11 }; System.out.println(maxModValue(arr, arr.length)); }}
# Python3 program to find Maximum modulo value def maxModValue(arr, n): ans = 0 # Sort the array[] by using inbuilt # sort function arr = sorted(arr) for j in range(n - 2, -1, -1): # Break loop if answer is greater or equals to # the arr[j] as any number modulo with arr[j] # can only give maximum value up-to arr[j]-1 if (ans >= arr[j]): break # If both elements are same then skip the next # loop as it would be worthless to repeat the # rest process for same value if (arr[j] == arr[j + 1]) : continue i = 2 * arr[j] while(i <= arr[n - 1] + arr[j]): # Fetch the index which is greater than or # equals to arr[i] by using binary search # inbuilt lower_bound() function of C++ ind = 0 for k in arr: if k >= i: ind = arr.index(k) # Update the answer ans = max(ans, arr[ind - 1] % arr[j]) i += arr[j] return ans # Driver Codearr = [3, 4, 5, 9, 11 ]n = 5print(maxModValue(arr, n)) # This code is contributed by# Shubham Singh(SHUBHAMSINGH10)
// C# program to find Maximum modulo valueusing System; public class GFG { static int maxModValue(int[] arr, int n) { int ans = 0; // Sort the array[] by using inbuilt // sort function Array.Sort(arr); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater // or equals to the arr[j] as any // number modulo with arr[j] can // only give maximum value up-to // arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then // skip the next loop as it would // be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is // greater than or equals to // arr[i] by using binary search int ind = Array.BinarySearch(arr, i); if (ind < 0) ind = Math.Abs(ind + 1); else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.Max(ans, arr[ind - 1] % arr[j]); } } return ans; } // Driver method public static void Main() { int[] arr = { 3, 4, 5, 9, 11 }; Console.WriteLine( maxModValue(arr, arr.Length)); }} // This code is contributed by Sam007.
<script> // JavaScript program to find Maximum modulo value function maxModValue(arr, n) { let ans = 0; // Sort the array[] by using inbuilt sort function arr.sort((a, b) => a - b); for (let j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (let i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search let ind = arr.indexOf(i); if (ind < 0) ind = Math.abs(ind) + 1; else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.max(ans, arr[ind - 1] % arr[j]); } } return ans;} // Driver method let arr = [3, 4, 5, 9, 11];document.write(maxModValue(arr, arr.length)); </script>
4
Time complexity: O(n log(n) + M log(M)), where n is the total number of elements and M is the maximum value among all the elements. Auxiliary space: O(1)
This blog is contributed by Shubham Bansal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Sam007
SHUBHAMSINGH10
shantamverma
gfgking
hardikkoriintern
Binary Search
Modular Arithmetic
Arrays
Arrays
Modular Arithmetic
Binary Search
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n13 Jul, 2022"
},
{
"code": null,
"e": 169,
"s": 52,
"text": "Given an array of n integers. Find the maximum value of arr[i] mod arr[j] where arr[i] >= arr[j] and 1 <= i, j <= n "
},
{
"code": null,
"e": 180,
"s": 169,
"text": "Examples: "
},
{
"code": null,
"e": 451,
"s": 180,
"text": "Input: arr[] = {3, 4, 7}\nOutput: 3\nExplanation:\nThere are 3 pairs which satisfies arr[i] >= arr[j] are:-\n4, 3 => 4 % 3 = 1\n7, 3 => 7 % 3 = 1\n7, 4 => 7 % 4 = 3\nHence Maximum value among all is 3.\n\nInput: arr[] = {3, 7, 4, 11}\nOutput: 4\n\nInput: arr[] = {4, 4, 4}\nOutput: 0"
},
{
"code": null,
"e": 673,
"s": 451,
"text": "A Naive approach is to run two nested for loops and select the maximum of every possible pairs after taking modulo of them. Time complexity of this approach will be O(n2) which will not be sufficient for large value of n."
},
{
"code": null,
"e": 1130,
"s": 673,
"text": "An efficient approach (when elements are from small range) is to use sorting and binary search method. Firstly we will sort the array so that we would able to apply binary search on it. Since we need to maximize the value of arr[i] mod arr[j] so we iterate through each x(such x divisible by arr[j]) in range from 2*arr[j] to M+arr[j], where M is Maximum value of sequence. For each value of x we need to find maximum value of arr[i] such that arr[i] < x. "
},
{
"code": null,
"e": 1390,
"s": 1130,
"text": "By doing this we would assure that we have chosen only those values of arr[i] that will give the maximum value of arr[i] mod arr[j]. After that we just need to repeat the above process for other values of arr[j] and update the answer by value a[i] mod arr[j]."
},
{
"code": null,
"e": 1405,
"s": 1390,
"text": "For example:- "
},
{
"code": null,
"e": 1848,
"s": 1405,
"text": "If arr[] = {4, 6, 7, 8, 10, 12, 15} then for\nfirst element, i.e., arr[j] = 4 we iterate\nthrough x = {8, 12, 16}. \nTherefore for each value of x, a[i] will be:-\nx = 8, arr[i] = 7 (7 < 8)\n ans = 7 mod 4 = 3 \nx = 12, arr[i] = 10 (10 < 12)\n ans = 10 mod 4 = 2 (Since 2 < 3, \n No update)\nx = 16, arr[i] = 15 (15 < 16)\n ans = 15 mod 4 = 3 (Since 3 == 3, \n No need to update)"
},
{
"code": null,
"e": 1864,
"s": 1848,
"text": "Implementation:"
},
{
"code": null,
"e": 1868,
"s": 1864,
"text": "C++"
},
{
"code": null,
"e": 1873,
"s": 1868,
"text": "Java"
},
{
"code": null,
"e": 1881,
"s": 1873,
"text": "Python3"
},
{
"code": null,
"e": 1884,
"s": 1881,
"text": "C#"
},
{
"code": null,
"e": 1895,
"s": 1884,
"text": "Javascript"
},
{
"code": "// C++ program to find Maximum modulo value#include <bits/stdc++.h>using namespace std; int maxModValue(int arr[], int n){ int ans = 0; // Sort the array[] by using inbuilt sort function sort(arr, arr + n); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search // inbuilt lower_bound() function of C++ int ind = lower_bound(arr, arr + n, i) - arr; // Update the answer ans = max(ans, arr[ind - 1] % arr[j]); } } return ans;} // Driver codeint main(){ int arr[] = { 3, 4, 5, 9, 11 }; int n = sizeof(arr) / sizeof(arr[0]); cout << maxModValue(arr, n);}",
"e": 3098,
"s": 1895,
"text": null
},
{
"code": "// Java program to find Maximum modulo value import java.util.Arrays; class Test { static int maxModValue(int arr[], int n) { int ans = 0; // Sort the array[] by using inbuilt sort function Arrays.sort(arr); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search int ind = Arrays.binarySearch(arr, i); if (ind < 0) ind = Math.abs(ind + 1); else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.max(ans, arr[ind - 1] % arr[j]); } } return ans; } // Driver method public static void main(String args[]) { int arr[] = { 3, 4, 5, 9, 11 }; System.out.println(maxModValue(arr, arr.length)); }}",
"e": 4756,
"s": 3098,
"text": null
},
{
"code": "# Python3 program to find Maximum modulo value def maxModValue(arr, n): ans = 0 # Sort the array[] by using inbuilt # sort function arr = sorted(arr) for j in range(n - 2, -1, -1): # Break loop if answer is greater or equals to # the arr[j] as any number modulo with arr[j] # can only give maximum value up-to arr[j]-1 if (ans >= arr[j]): break # If both elements are same then skip the next # loop as it would be worthless to repeat the # rest process for same value if (arr[j] == arr[j + 1]) : continue i = 2 * arr[j] while(i <= arr[n - 1] + arr[j]): # Fetch the index which is greater than or # equals to arr[i] by using binary search # inbuilt lower_bound() function of C++ ind = 0 for k in arr: if k >= i: ind = arr.index(k) # Update the answer ans = max(ans, arr[ind - 1] % arr[j]) i += arr[j] return ans # Driver Codearr = [3, 4, 5, 9, 11 ]n = 5print(maxModValue(arr, n)) # This code is contributed by# Shubham Singh(SHUBHAMSINGH10)",
"e": 5960,
"s": 4756,
"text": null
},
{
"code": "// C# program to find Maximum modulo valueusing System; public class GFG { static int maxModValue(int[] arr, int n) { int ans = 0; // Sort the array[] by using inbuilt // sort function Array.Sort(arr); for (int j = n - 2; j >= 0; --j) { // Break loop if answer is greater // or equals to the arr[j] as any // number modulo with arr[j] can // only give maximum value up-to // arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then // skip the next loop as it would // be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (int i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is // greater than or equals to // arr[i] by using binary search int ind = Array.BinarySearch(arr, i); if (ind < 0) ind = Math.Abs(ind + 1); else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.Max(ans, arr[ind - 1] % arr[j]); } } return ans; } // Driver method public static void Main() { int[] arr = { 3, 4, 5, 9, 11 }; Console.WriteLine( maxModValue(arr, arr.Length)); }} // This code is contributed by Sam007.",
"e": 7902,
"s": 5960,
"text": null
},
{
"code": "<script> // JavaScript program to find Maximum modulo value function maxModValue(arr, n) { let ans = 0; // Sort the array[] by using inbuilt sort function arr.sort((a, b) => a - b); for (let j = n - 2; j >= 0; --j) { // Break loop if answer is greater or equals to // the arr[j] as any number modulo with arr[j] // can only give maximum value up-to arr[j]-1 if (ans >= arr[j]) break; // If both elements are same then skip the next // loop as it would be worthless to repeat the // rest process for same value if (arr[j] == arr[j + 1]) continue; for (let i = 2 * arr[j]; i <= arr[n - 1] + arr[j]; i += arr[j]) { // Fetch the index which is greater than or // equals to arr[i] by using binary search let ind = arr.indexOf(i); if (ind < 0) ind = Math.abs(ind) + 1; else { while (arr[ind] == i) { ind--; if (ind == 0) { ind = -1; break; } } ind++; } // Update the answer ans = Math.max(ans, arr[ind - 1] % arr[j]); } } return ans;} // Driver method let arr = [3, 4, 5, 9, 11];document.write(maxModValue(arr, arr.length)); </script>",
"e": 9316,
"s": 7902,
"text": null
},
{
"code": null,
"e": 9318,
"s": 9316,
"text": "4"
},
{
"code": null,
"e": 9457,
"s": 9318,
"text": "ime complexity: O(nlog(n) + Mlog(M)) where n is total number of elements and M is maximum value of all the elements. Auxiliary space: O(1)"
},
{
"code": null,
"e": 9752,
"s": 9457,
"text": "This blog is contributed by Shubham Bansal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 9759,
"s": 9752,
"text": "Sam007"
},
{
"code": null,
"e": 9774,
"s": 9759,
"text": "SHUBHAMSINGH10"
},
{
"code": null,
"e": 9787,
"s": 9774,
"text": "shantamverma"
},
{
"code": null,
"e": 9795,
"s": 9787,
"text": "gfgking"
},
{
"code": null,
"e": 9812,
"s": 9795,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 9826,
"s": 9812,
"text": "Binary Search"
},
{
"code": null,
"e": 9845,
"s": 9826,
"text": "Modular Arithmetic"
},
{
"code": null,
"e": 9852,
"s": 9845,
"text": "Arrays"
},
{
"code": null,
"e": 9859,
"s": 9852,
"text": "Arrays"
},
{
"code": null,
"e": 9878,
"s": 9859,
"text": "Modular Arithmetic"
},
{
"code": null,
"e": 9892,
"s": 9878,
"text": "Binary Search"
}
] |
Check if possible to move from given coordinate to desired coordinate
|
06 Jul, 2022
Given two coordinates (x, y) and (a, b), find if it is possible to reach (a, b) from (x, y).
Only possible moves from any coordinate (i, j) are
(i-j, j)
(i, i-j)
(i+j, j)
(i, i+j)
Given x, y, a, b can be negative.
Examples:
Input : (x, y) = (1, 1) and (a, b) = (2, 3).
Output : Yes.
(1, 1) -> (2, 1) -> (2, 3).
Input : (x, y) = (2, 1) and (a, b) = (2, 3).
Output : Yes.
Input : (x, y) = (35, 15) and (a, b) = (20, 25).
Output : Yes.
(35, 15) -> (20, 15) -> (5, 15) -> (5, 10) -> (5, 5) ->
(10, 5) -> (15, 5) -> (20, 5) -> (20, 25)
If we take a closer look at the problem, we can notice that the moves are exactly the steps of the subtraction-based Euclidean algorithm for finding the GCD: since gcd(i - j, j) = gcd(i + j, j) = gcd(i, i - j) = gcd(i, i + j) = gcd(i, j), every move preserves the GCD of the pair. So it is only possible to reach coordinate (a, b) from (x, y) if the GCD of x and y is equal to the GCD of a and b; otherwise, it is not possible.
Let the GCD of (x, y) be gcd. From (x, y) we can reach (gcd, gcd), and from that point we can reach (a, b) if and only if the GCD of a and b is also gcd.
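As a quick sanity check of this criterion on the third example above, one can compare the GCDs directly (a small illustrative Python snippet, not part of the original article):

from math import gcd

# (35, 15) and (20, 25): the GCDs match, so the answer is Yes.
print(gcd(35, 15), gcd(20, 25))   # prints: 5 5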
Below is the implementation of this approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ program to check if it is possible to reach// (a, b) from (x, y).#include <bits/stdc++.h>using namespace std; // Returns GCD of i and jint gcd(int i, int j){ if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i);} // Returns true if it is possible to go to (a, b)// from (x, y)bool ispossible(int x, int y, int a, int b){ // Find absolute values of all as sign doesn't // matter. x = abs(x), y = abs(y), a = abs(a), b = abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b));} // Driven Programint main(){ // Converting coordinate into positive integer int x = 35, y = 15; int a = 20, b = 25; (ispossible(x, y, a, b)) ? (cout << "Yes") : (cout << "No"); return 0;}
// Java program to check if it is possible// to reach (a, b) from (x, y).class GFG { // Returns GCD of i and j static int gcd(int i, int j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible to go to (a, b) // from (x, y) static boolean ispossible(int x, int y, int a, int b) { // Find absolute values of all as // sign doesn't matter. x = Math.abs(x); y = Math.abs(y); a = Math.abs(a); b = Math.abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code public static void main(String[] args) { // Converting coordinate into positive integer int x = 35, y = 15; int a = 20, b = 25; if (ispossible(x, y, a, b)) System.out.print("Yes"); else System.out.print("No"); }}// This code is contributed by Anant Agarwal.
# Python program to check if it is possible to reach# (a, b) from (x, y).# Returns GCD of i and jdef gcd(i, j): if (i == j): return i if (i > j): return gcd(i - j, j) return gcd(i, j - i) # Returns true if it is possible to go to (a, b)# from (x, y)def ispossible(x, y, a, b): # Find absolute values of all as sign doesn't # matter. x, y, a, b = abs(x), abs(y), abs(a), abs(b) # If gcd is equal then it is possible to reach. # Else not possible. return (gcd(x, y) == gcd(a, b)) # Driven Program # Converting coordinate into positive integerx, y = 35, 15a, b = 20, 25if(ispossible(x, y, a, b)): print ("Yes")else: print ("No")# Contributed by Afzal Ansari
// C# program to check if it is possible// to reach (a, b) from (x, y).using System; class GFG { // Returns GCD of i and j static int gcd(int i, int j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible // to go to (a, b) from (x, y) static bool ispossible(int x, int y, int a, int b) { // Find absolute values of all as // sign doesn't matter. x = Math.Abs(x); y = Math.Abs(y); a = Math.Abs(a); b = Math.Abs(b); // If gcd is equal then it is possible // to reach. Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code public static void Main() { // Converting coordinate // into positive integer int x = 35, y = 15; int a = 20, b = 25; if (ispossible(x, y, a, b)) Console.Write("Yes"); else Console.Write("No"); }} // This code is contributed by nitin mittal.
<?php// PHP program to check if it// is possible to reach// (a, b) from (x, y). // Returns GCD of i and jfunction gcd($i, $j){ if ($i == $j) return $i; if ($i > $j) return gcd($i - $j, $j); return gcd($i, $j - $i);} // Returns true if it is// possible to go to (a, b)// from (x, y)function ispossible($x, $y, $a, $b){ // Find absolute values // of all as sign doesn't // matter. $x = abs($x); $y = abs($y); $a = abs($a); $b = abs($b); // If gcd is equal then // it is possible to reach. // Else not possible. return (gcd($x, $y) == gcd($a, $b));} // Driver Code{ // Converting coordinate // into positive integer $x = 35; $y = 15; $a = 20; $b = 25; if (ispossible($x, $y, $a, $b)) echo( "Yes"); else echo( "No"); return 0;} // This code is contributed by nitin mittal.?>
<script>// javascript program to check if it is possible// to reach (a, b) from (x, y). // Returns GCD of i and j function gcd(i , j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible to go to (a, b) // from (x, y) function ispossible(x , y , a , b) { // Find absolute values of all as // sign doesn't matter. x = Math.abs(x); y = Math.abs(y); a = Math.abs(a); b = Math.abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code // Converting coordinate into positive integer var x = 35, y = 15; var a = 20, b = 25; if (ispossible(x, y, a, b)) document.write("Yes"); else document.write("No"); // This code is contributed by todaysgaurav</script>
Yes
Time Complexity: O(max(x, y) + max(a, b)) in the worst case, where x, y, a and b are the given integers, since the subtraction-based GCD can take that many recursive steps (for example, gcd(1, N) takes roughly N steps). Auxiliary Space: O(max(x, y) + max(a, b)) in the worst case, due to the recursion stack.
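If this cost is a concern, the helper can be replaced with the modulo-based Euclidean algorithm (or Python's built-in math.gcd), which runs in O(log(min(x, y))) time and avoids deep recursion; a minimal sketch:

def gcd(i, j):
    # Iterative, modulo-based Euclidean algorithm.
    while j:
        i, j = j, i % j
    return i

print(gcd(35, 15))   # prints 5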
This article is contributed by Anuj Chauhan(anuj0503). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
nitin mittal
todaysgaurav
sumitgumber28
amartyaghoshgfg
tamanna17122007
hardikkoriintern
GCD-LCM
Mathematical
Matrix
Mathematical
Matrix
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n06 Jul, 2022"
},
{
"code": null,
"e": 148,
"s": 54,
"text": "Given two coordinates (x, y) and (a, b). Find if it is possible to reach (x, y) from (a, b). "
},
{
"code": null,
"e": 200,
"s": 148,
"text": "Only possible moves from any coordinate (i, j) are "
},
{
"code": null,
"e": 209,
"s": 200,
"text": "(i-j, j)"
},
{
"code": null,
"e": 218,
"s": 209,
"text": "(i, i-j)"
},
{
"code": null,
"e": 227,
"s": 218,
"text": "(i+j, j)"
},
{
"code": null,
"e": 236,
"s": 227,
"text": "(i, i+j)"
},
{
"code": null,
"e": 270,
"s": 236,
"text": "Given x, y, a, b can be negative."
},
{
"code": null,
"e": 281,
"s": 270,
"text": "Examples: "
},
{
"code": null,
"e": 593,
"s": 281,
"text": "Input : (x, y) = (1, 1) and (a, b) = (2, 3).\nOutput : Yes.\n(1, 1) -> (2, 1) -> (2, 3).\n\nInput : (x, y) = (2, 1) and (a, b) = (2, 3).\nOutput : Yes.\n\nInput : (x, y) = (35, 15) and (a, b) = (20, 25).\nOutput : Yes.\n(35, 15) -> (20, 15) -> (5, 15) -> (5, 10) -> (5, 5) ->\n(10, 5) -> (15, 5) -> (20, 5) -> (20, 25)"
},
{
"code": null,
"e": 855,
"s": 593,
"text": "If we take a closer look at the problem, we can notice that the moves are similar steps of Euclidean algorithm for finding GCD. So, it is only possible to reach coordinate (a, b) from (x, y) if GCD of x, y is equal to GCD of a, b. Otherwise, it is not possible."
},
{
"code": null,
"e": 1009,
"s": 855,
"text": "Let GCD of (x, y) be gcd. From (x, y), we can reach (gcd, gcd) and from this point, we can reach to (a, b) if and only if GCD of ‘a’ and ‘b’ is also gcd."
},
{
"code": null,
"e": 1056,
"s": 1009,
"text": "Below is the implementation of this approach: "
},
{
"code": null,
"e": 1060,
"s": 1056,
"text": "C++"
},
{
"code": null,
"e": 1065,
"s": 1060,
"text": "Java"
},
{
"code": null,
"e": 1073,
"s": 1065,
"text": "Python3"
},
{
"code": null,
"e": 1076,
"s": 1073,
"text": "C#"
},
{
"code": null,
"e": 1080,
"s": 1076,
"text": "PHP"
},
{
"code": null,
"e": 1091,
"s": 1080,
"text": "Javascript"
},
{
"code": "// C++ program to check if it is possible to reach// (a, b) from (x, y).#include <bits/stdc++.h>using namespace std; // Returns GCD of i and jint gcd(int i, int j){ if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i);} // Returns true if it is possible to go to (a, b)// from (x, y)bool ispossible(int x, int y, int a, int b){ // Find absolute values of all as sign doesn't // matter. x = abs(x), y = abs(y), a = abs(a), b = abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b));} // Driven Programint main(){ // Converting coordinate into positive integer int x = 35, y = 15; int a = 20, b = 25; (ispossible(x, y, a, b)) ? (cout << \"Yes\") : (cout << \"No\"); return 0;}",
"e": 1899,
"s": 1091,
"text": null
},
{
"code": "// Java program to check if it is possible// to reach (a, b) from (x, y).class GFG { // Returns GCD of i and j static int gcd(int i, int j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible to go to (a, b) // from (x, y) static boolean ispossible(int x, int y, int a, int b) { // Find absolute values of all as // sign doesn't matter. x = Math.abs(x); y = Math.abs(y); a = Math.abs(a); b = Math.abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code public static void main(String[] args) { // Converting coordinate into positive integer int x = 35, y = 15; int a = 20, b = 25; if (ispossible(x, y, a, b)) System.out.print(\"Yes\"); else System.out.print(\"No\"); }}// This code is contributed by Anant Agarwal.",
"e": 2955,
"s": 1899,
"text": null
},
{
"code": "# Python program to check if it is possible to reach# (a, b) from (x, y).# Returns GCD of i and jdef gcd(i, j): if (i == j): return i if (i > j): return gcd(i - j, j) return gcd(i, j - i) # Returns true if it is possible to go to (a, b)# from (x, y)def ispossible(x, y, a, b): # Find absolute values of all as sign doesn't # matter. x, y, a, b = abs(x), abs(y), abs(a), abs(b) # If gcd is equal then it is possible to reach. # Else not possible. return (gcd(x, y) == gcd(a, b)) # Driven Program # Converting coordinate into positive integerx, y = 35, 15a, b = 20, 25if(ispossible(x, y, a, b)): print (\"Yes\")else: print (\"No\")# Contributed by Afzal Ansari",
"e": 3666,
"s": 2955,
"text": null
},
{
"code": "// C# program to check if it is possible// to reach (a, b) from (x, y).using System; class GFG { // Returns GCD of i and j static int gcd(int i, int j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible // to go to (a, b) from (x, y) static bool ispossible(int x, int y, int a, int b) { // Find absolute values of all as // sign doesn't matter. x = Math.Abs(x); y = Math.Abs(y); a = Math.Abs(a); b = Math.Abs(b); // If gcd is equal then it is possible // to reach. Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code public static void Main() { // Converting coordinate // into positive integer int x = 35, y = 15; int a = 20, b = 25; if (ispossible(x, y, a, b)) Console.Write(\"Yes\"); else Console.Write(\"No\"); }} // This code is contributed by nitin mittal.",
"e": 4757,
"s": 3666,
"text": null
},
{
"code": "<?php// PHP program to check if it// is possible to reach// (a, b) from (x, y). // Returns GCD of i and jfunction gcd($i, $j){ if ($i == $j) return $i; if ($i > $j) return gcd($i - $j, $j); return gcd($i, $j - $i);} // Returns true if it is// possible to go to (a, b)// from (x, y)function ispossible($x, $y, $a, $b){ // Find absolute values // of all as sign doesn't // matter. $x = abs($x); $y = abs($y); $a = abs($a); $b = abs($b); // If gcd is equal then // it is possible to reach. // Else not possible. return (gcd($x, $y) == gcd($a, $b));} // Driver Code{ // Converting coordinate // into positive integer $x = 35; $y = 15; $a = 20; $b = 25; if (ispossible($x, $y, $a, $b)) echo( \"Yes\"); else echo( \"No\"); return 0;} // This code is contributed by nitin mittal.?>",
"e": 5631,
"s": 4757,
"text": null
},
{
"code": "<script>// javascript program to check if it is possible// to reach (a, b) from (x, y). // Returns GCD of i and j function gcd(i , j) { if (i == j) return i; if (i > j) return gcd(i - j, j); return gcd(i, j - i); } // Returns true if it is possible to go to (a, b) // from (x, y) function ispossible(x , y , a , b) { // Find absolute values of all as // sign doesn't matter. x = Math.abs(x); y = Math.abs(y); a = Math.abs(a); b = Math.abs(b); // If gcd is equal then it is possible to reach. // Else not possible. return (gcd(x, y) == gcd(a, b)); } // Driver code // Converting coordinate into positive integer var x = 35, y = 15; var a = 20, b = 25; if (ispossible(x, y, a, b)) document.write(\"Yes\"); else document.write(\"No\"); // This code is contributed by todaysgaurav</script>",
"e": 6612,
"s": 5631,
"text": null
},
{
"code": null,
"e": 6616,
"s": 6612,
"text": "Yes"
},
{
"code": null,
"e": 6788,
"s": 6616,
"text": "Time Complexity: O(min(x, y) + min(a, b)), where x, y, a and b are the given integers.Auxiliary Space: O(min(x, y) + min(a, b)), space required due to the recursion stack."
},
{
"code": null,
"e": 7095,
"s": 6788,
"text": "This article is contributed by Anuj Chauhan(anuj0503). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. "
},
{
"code": null,
"e": 7108,
"s": 7095,
"text": "nitin mittal"
},
{
"code": null,
"e": 7121,
"s": 7108,
"text": "todaysgaurav"
},
{
"code": null,
"e": 7135,
"s": 7121,
"text": "sumitgumber28"
},
{
"code": null,
"e": 7151,
"s": 7135,
"text": "amartyaghoshgfg"
},
{
"code": null,
"e": 7167,
"s": 7151,
"text": "tamanna17122007"
},
{
"code": null,
"e": 7184,
"s": 7167,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 7192,
"s": 7184,
"text": "GCD-LCM"
},
{
"code": null,
"e": 7205,
"s": 7192,
"text": "Mathematical"
},
{
"code": null,
"e": 7212,
"s": 7205,
"text": "Matrix"
},
{
"code": null,
"e": 7225,
"s": 7212,
"text": "Mathematical"
},
{
"code": null,
"e": 7232,
"s": 7225,
"text": "Matrix"
},
{
"code": null,
"e": 7330,
"s": 7232,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 7354,
"s": 7330,
"text": "Merge two sorted arrays"
},
{
"code": null,
"e": 7375,
"s": 7354,
"text": "Operators in C / C++"
},
{
"code": null,
"e": 7389,
"s": 7375,
"text": "Prime Numbers"
},
{
"code": null,
"e": 7442,
"s": 7389,
"text": "Find minimum number of coins that make a given value"
},
{
"code": null,
"e": 7479,
"s": 7442,
"text": "Minimum number of jumps to reach end"
},
{
"code": null,
"e": 7515,
"s": 7479,
"text": "Print a given matrix in spiral form"
},
{
"code": null,
"e": 7559,
"s": 7515,
"text": "Program to find largest element in an array"
},
{
"code": null,
"e": 7594,
"s": 7559,
"text": "Matrix Chain Multiplication | DP-8"
},
{
"code": null,
"e": 7625,
"s": 7594,
"text": "Rat in a Maze | Backtracking-2"
}
] |
Difference between Abstraction and Encapsulation in Java with Examples
|
13 Jun, 2022
Encapsulation is defined as the wrapping up of data under a single unit. It is the mechanism that binds together code and the data it manipulates. Another way to think about encapsulation is that it is a protective shield that prevents the data from being accessed by code outside this shield. Technically, in encapsulation the variables or data of a class are hidden from any other class and can be accessed only through the member functions of the class in which they are declared. Because the data in a class is hidden from other classes, encapsulation is also known as data hiding. Encapsulation can be achieved by declaring all the variables in the class as private and writing public methods in the class to set and get the values of those variables.
Java
// Java program to demonstrate encapsulation public class Encapsulate {    // private variables declared    // these can only be accessed by    // public methods of class    private String geekName;    private int geekRoll;    private int geekAge;     // get method for age to access    // private variable geekAge    public int getAge() { return geekAge; }     // get method for name to access    // private variable geekName    public String getName() { return geekName; }     // get method for roll to access    // private variable geekRoll    public int getRoll() { return geekRoll; }     // set method for age to access    // private variable geekage    public void setAge(int newAge) { geekAge = newAge; }     // set method for name to access    // private variable geekName    public void setName(String newName)    {        geekName = newName;    }     // set method for roll to access    // private variable geekRoll    public void setRoll(int newRoll) { geekRoll = newRoll; }} // Class to access variables// of the class Encapsulateclass TestEncapsulation {    public static void main(String[] args)    {        Encapsulate obj = new Encapsulate();         // setting values of the variables        obj.setName("Harsh");        obj.setAge(19);        obj.setRoll(51);         // Displaying values of the variables        System.out.println("Geek's name: " + obj.getName());        System.out.println("Geek's age: " + obj.getAge());        System.out.println("Geek's roll: " + obj.getRoll());         // Direct access of geekRoll is not possible        // due to encapsulation        // System.out.println("Geek's roll: " +        // obj.geekName);    }}
Geek's name: Harsh
Geek's age: 19
Geek's roll: 51
Abstraction in Java
Data Abstraction is the property by virtue of which only the essential details are displayed to the user; the trivial or non-essential units are not displayed. For example, a car is viewed as a car rather than as its individual components. Data Abstraction may also be defined as the process of identifying only the required characteristics of an object while ignoring the irrelevant details. The properties and behaviors of an object differentiate it from other objects of a similar type and also help in classifying/grouping the objects.
Java
// Java program to illustrate the concept of Abstraction abstract class Shape { String color; // these are abstract methods abstract double area(); public abstract String toString(); // abstract class can have a constructor public Shape(String color) { System.out.println("Shape constructor called"); this.color = color; } // this is a concrete method public String getColor() { return color; }}class Circle extends Shape { double radius; public Circle(String color, double radius) { // calling Shape constructor super(color); System.out.println("Circle constructor called"); this.radius = radius; } @Override double area() { return Math.PI * Math.pow(radius, 2); } @Override public String toString() { return "Circle color is " + super.color + "and area is : " + area(); }} class Rectangle extends Shape { double length; double width; public Rectangle(String color, double length, double width) { // calling Shape constructor super(color); System.out.println("Rectangle constructor called"); this.length = length; this.width = width; } @Override double area() { return length * width; } @Override public String toString() { return "Rectangle color is " + super.color + "and area is : " + area(); }} public class Test { public static void main(String[] args) { Shape s1 = new Circle("Red", 2.2); Shape s2 = new Rectangle("Yellow", 2, 4); System.out.println(s1.toString()); System.out.println(s2.toString()); }}
Shape constructor called
Circle constructor called
Shape constructor called
Rectangle constructor called
Circle color is Redand area is : 15.205308443374602
Rectangle color is Yellowand area is : 8.0
Difference between Abstraction and Encapsulation:
Abstraction works at the design level: it hides the implementation details and exposes only the essential behaviour, and in Java it is achieved with abstract classes and interfaces, as in the Shape example above. Encapsulation works at the implementation level: it binds the data and the methods that operate on it into a single unit and hides the data behind private fields with public getters and setters, as in the Encapsulate example above. In short, abstraction hides complexity by showing only what an object does, while encapsulation hides the internal data that determines how it does it.
priyansh70890
Java-Object Oriented
Difference Between
Java
Java
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n13 Jun, 2022"
},
{
"code": null,
"e": 816,
"s": 52,
"text": "Encapsulation is defined as the wrapping up of data under a single unit. It is the mechanism that binds together code and the data it manipulates. Another way to think about encapsulation is, that it is a protective shield that prevents the data from being accessed by the code outside this shield. Technically in encapsulation, the variables or data of a class is hidden from any other class and can be accessed only through any member function of its own class in which they are declared. As in encapsulation, the data in a class is hidden from other classes, so it is also known as data-hiding. Encapsulation can be achieved by Declaring all the variables in the class as private and writing public methods in the class to set and get the values of variables. "
},
{
"code": null,
"e": 821,
"s": 816,
"text": "Java"
},
{
"code": "// Java program to demonstrate encapsulation public class Encapsulate { // private variables declared // these can only be accessed by // public methods of class private String geekName; private int geekRoll; private int geekAge; // get method for age to access // private variable geekAge public int getAge() { return geekAge; } // get method for name to access // private variable geekName public String getName() { return geekName; } // get method for roll to access // private variable geekRoll public int getRoll() { return geekRoll; } // set method for age to access // private variable geekage public void setAge(int newAge) { geekAge = newAge; } // set method for name to access // private variable geekName public void setName(String newName) { geekName = newName; } // set method for roll to access // private variable geekRoll public void setRoll(int newRoll) { geekRoll = newRoll; }} // Class to access variables// of the class Encapsulateclass TestEncapsulation { public static void main(String[] args) { Encapsulate obj = new Encapsulate(); // setting values of the variables obj.setName(\" Harsh & quot;); obj.setAge(19); obj.setRoll(51); // Displaying values of the variables System.out.println(\"Geek's name: \" + obj.getName()); System.out.println(\"Geek's age: \" + obj.getAge()); System.out.println(\"Geek's roll: \" + obj.getRoll()); // Direct access of geekRoll is not possible // due to encapsulation // System.out.println(\"Geek's roll: \" + // obj.geekName); }}",
"e": 2494,
"s": 821,
"text": null
},
{
"code": null,
"e": 2544,
"s": 2494,
"text": "Geek's name: Harsh\nGeek's age: 19\nGeek's roll: 51"
},
{
"code": null,
"e": 2564,
"s": 2544,
"text": "Abstraction in Java"
},
{
"code": null,
"e": 3101,
"s": 2564,
"text": "Data Abstraction is the property by virtue of which only the essential details are displayed to the user. The trivial or the non-essential units are not displayed to the user. Ex: A car is viewed as a car rather than its individual components. Data Abstraction may also be defined as the process of identifying only the required characteristics of an object ignoring the irrelevant details. The properties and behaviors of an object differentiate it from other objects of similar type and also help in classifying/grouping the objects. "
},
{
"code": null,
"e": 3106,
"s": 3101,
"text": "Java"
},
{
"code": "// Java program to illustrate the concept of Abstraction abstract class Shape { String color; // these are abstract methods abstract double area(); public abstract String toString(); // abstract class can have a constructor public Shape(String color) { System.out.println(\"Shape constructor called\"); this.color = color; } // this is a concrete method public String getColor() { return color; }}class Circle extends Shape { double radius; public Circle(String color, double radius) { // calling Shape constructor super(color); System.out.println(\"Circle constructor called\"); this.radius = radius; } @Override double area() { return Math.PI * Math.pow(radius, 2); } @Override public String toString() { return \"Circle color is \" + super.color + \"and area is : \" + area(); }} class Rectangle extends Shape { double length; double width; public Rectangle(String color, double length, double width) { // calling Shape constructor super(color); System.out.println(\"Rectangle constructor called\"); this.length = length; this.width = width; } @Override double area() { return length * width; } @Override public String toString() { return \"Rectangle color is \" + super.color + \"and area is : \" + area(); }} public class Test { public static void main(String[] args) { Shape s1 = new Circle(\"Red\", 2.2); Shape s2 = new Rectangle(\"Yellow\", 2, 4); System.out.println(s1.toString()); System.out.println(s2.toString()); }}",
"e": 4896,
"s": 3106,
"text": null
},
{
"code": null,
"e": 5096,
"s": 4896,
"text": "Shape constructor called\nCircle constructor called\nShape constructor called\nRectangle constructor called\nCircle color is Redand area is : 15.205308443374602\nRectangle color is Yellowand area is : 8.0"
},
{
"code": null,
"e": 5146,
"s": 5096,
"text": "Difference between Abstraction and Encapsulation:"
},
{
"code": null,
"e": 5160,
"s": 5146,
"text": "priyansh70890"
},
{
"code": null,
"e": 5181,
"s": 5160,
"text": "Java-Object Oriented"
},
{
"code": null,
"e": 5200,
"s": 5181,
"text": "Difference Between"
},
{
"code": null,
"e": 5205,
"s": 5200,
"text": "Java"
},
{
"code": null,
"e": 5210,
"s": 5205,
"text": "Java"
}
] |
Python - Linked Lists
|
A linked list is a sequence of data elements, which are connected together via links. Each data element contains a connection to another data element in the form of a pointer. Python does not have linked lists in its standard library. We implement the concept of linked lists using the concept of nodes as discussed in the previous chapter.
We have already seen how we create a node class and how to traverse the elements of a node. In this chapter we are going to study the type of linked list known as the singly linked list. In this type of data structure there is only one link between any two data elements. We create such a list and create additional methods to insert, update and remove elements from the list.
A linked list is created by using the node class we studied in the last chapter. We create a Node object and create another class to use this node object. We pass the appropriate values through the node object to point to the next data elements. The below program creates the linked list with three data elements. In the next section we will see how to traverse the linked list.
class Node:
def __init__(self, dataval=None):
self.dataval = dataval
self.nextval = None
class SLinkedList:
def __init__(self):
self.headval = None
list1 = SLinkedList()
list1.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")
# Link first Node to second node
list1.headval.nextval = e2
# Link second Node to third node
e2.nextval = e3
Singly linked lists can be traversed in only forward direction starting form the first data element. We simply print the value of the next data element by assigning the pointer of the next node to the current data element.
class Node:
def __init__(self, dataval=None):
self.dataval = dataval
self.nextval = None
class SLinkedList:
def __init__(self):
self.headval = None
def listprint(self):
printval = self.headval
while printval is not None:
print (printval.dataval)
printval = printval.nextval
list = SLinkedList()
list.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")
# Link first Node to second node
list.headval.nextval = e2
# Link second Node to third node
e2.nextval = e3
list.listprint()
When the above code is executed, it produces the following result −
Mon
Tue
Wed
Inserting element in the linked list involves reassigning the pointers from the existing nodes to the newly inserted node. Depending on whether the new data element is getting inserted at the beginning or at the middle or at the end of the linked list, we have the below scenarios.
This involves pointing the next pointer of the new data node to the current head of the linked list. So the current head of the linked list becomes the second data element and the new node becomes the head of the linked list.
class Node:
def __init__(self, dataval=None):
self.dataval = dataval
self.nextval = None
class SLinkedList:
def __init__(self):
self.headval = None
# Print the linked list
def listprint(self):
printval = self.headval
while printval is not None:
print (printval.dataval)
printval = printval.nextval
def AtBegining(self,newdata):
NewNode = Node(newdata)
# Update the new nodes next val to existing node
NewNode.nextval = self.headval
self.headval = NewNode
list = SLinkedList()
list.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")
list.headval.nextval = e2
e2.nextval = e3
list.AtBegining("Sun")
list.listprint()
When the above code is executed, it produces the following result −
Sun
Mon
Tue
Wed
This involves pointing the next pointer of the current last node of the linked list to the new data node. So the current last node of the linked list becomes the second last data node and the new node becomes the last node of the linked list.
class Node:
def __init__(self, dataval=None):
self.dataval = dataval
self.nextval = None
class SLinkedList:
def __init__(self):
self.headval = None
# Function to add newnode
def AtEnd(self, newdata):
NewNode = Node(newdata)
if self.headval is None:
self.headval = NewNode
return
laste = self.headval
while(laste.nextval):
laste = laste.nextval
laste.nextval=NewNode
# Print the linked list
def listprint(self):
printval = self.headval
while printval is not None:
print (printval.dataval)
printval = printval.nextval
list = SLinkedList()
list.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")
list.headval.nextval = e2
e2.nextval = e3
list.AtEnd("Thu")
list.listprint()
When the above code is executed, it produces the following result −
Mon
Tue
Wed
Thu
This involves changing the pointer of a specific node to point to the new node. That is possible by passing in both the new node and the existing node after which the new node will be inserted. So we define an additional method which will change the next pointer of the new node to the next pointer of the middle node, and then assign the new node to the next pointer of the middle node.
class Node:
   def __init__(self, dataval=None):
      self.dataval = dataval
      self.nextval = None

class SLinkedList:
   def __init__(self):
      self.headval = None

   # Function to add node
   def Inbetween(self, middle_node, newdata):
      if middle_node is None:
         print("The mentioned node is absent")
         return
      NewNode = Node(newdata)
      NewNode.nextval = middle_node.nextval
      middle_node.nextval = NewNode

   # Print the linked list
   def listprint(self):
      printval = self.headval
      while printval is not None:
         print(printval.dataval)
         printval = printval.nextval

list = SLinkedList()
list.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Thu")

list.headval.nextval = e2
e2.nextval = e3

list.Inbetween(list.headval.nextval, "Fri")
list.listprint()
When the above code is executed, it produces the following result −
Mon
Tue
Fri
Thu
We can remove an existing node using the key for that node. In the program below we locate the node previous to the node which is to be deleted. Then we point the next pointer of this previous node to the node that comes after the node being deleted.
class Node:
   def __init__(self, data=None):
      self.data = data
      self.next = None

class SLinkedList:
   def __init__(self):
      self.head = None

   def Atbegining(self, data_in):
      NewNode = Node(data_in)
      NewNode.next = self.head
      self.head = NewNode

   # Function to remove node with the given key
   def RemoveNode(self, Removekey):
      HeadVal = self.head

      # Case 1: the node to remove is the current head
      if HeadVal is not None:
         if HeadVal.data == Removekey:
            self.head = HeadVal.next
            HeadVal = None
            return

      # Case 2: walk the list, remembering the previous node
      while HeadVal is not None:
         if HeadVal.data == Removekey:
            break
         prev = HeadVal
         HeadVal = HeadVal.next

      # Key was not found in the list
      if HeadVal is None:
         return

      # Unlink the node by skipping over it
      prev.next = HeadVal.next
      HeadVal = None

   def LListprint(self):
      printval = self.head
      while printval:
         print(printval.data)
         printval = printval.next

llist = SLinkedList()
llist.Atbegining("Mon")
llist.Atbegining("Tue")
llist.Atbegining("Wed")
llist.Atbegining("Thu")
llist.RemoveNode("Tue")
llist.LListprint()
When the above code is executed, it produces the following result −
Thu
Wed
Mon
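Note on the output order: Atbegining() always inserts at the head, so after the four insertions the list reads Thu -> Wed -> Tue -> Mon. Removing the node with key "Tue" therefore leaves Thu -> Wed -> Mon, which is exactly what LListprint() prints above.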
C / C++ Program for Median of two sorted arrays of same size - GeeksforGeeks
11 Dec, 2018
There are 2 sorted arrays A and B of size n each. Write an algorithm to find the median of the array obtained by merging the above 2 arrays (i.e. an array of length 2n). The complexity should be O(log(n)).
Note: Since the size of the set for which we are looking for the median is even (2n), we need to take the average of the middle two numbers and return the floor of that average.
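For example, with the arrays used in the programs below, ar1[] = {1, 12, 15, 26, 38} and ar2[] = {2, 13, 17, 30, 45} (n = 5), the merged array is {1, 2, 12, 13, 15, 17, 26, 30, 38, 45}. The middle two elements, at indexes n-1 = 4 and n = 5, are 15 and 17, so the median is floor((15 + 17) / 2) = 16.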
Method 1 (Simply count while merging): Use the merge procedure of merge sort. Keep track of a count while comparing elements of the two arrays. When the count reaches n (for 2n elements), we have reached the median: take the average of the elements at indexes n-1 and n in the merged order. See the implementation below.
C++
C
// A Simple Merge based O(n)
// solution to find median of
// two sorted arrays
#include <bits/stdc++.h>
using namespace std;

/* This function returns median of ar1[] and ar2[].
   Assumptions in this function:
   Both ar1[] and ar2[] are sorted arrays
   Both have n elements */
int getMedian(int ar1[], int ar2[], int n)
{
    int i = 0; /* Current index of i/p array ar1[] */
    int j = 0; /* Current index of i/p array ar2[] */
    int count;
    int m1 = -1, m2 = -1;

    /* Since there are 2n elements, median will be average
       of elements at index n-1 and n in the array obtained
       after merging ar1 and ar2 */
    for (count = 0; count <= n; count++) {

        /* Below is to handle case where all elements of ar1[]
           are smaller than smallest (or first) element of ar2[] */
        if (i == n) {
            m1 = m2;
            m2 = ar2[0];
            break;
        }

        /* Below is to handle case where all elements of ar2[]
           are smaller than smallest (or first) element of ar1[] */
        else if (j == n) {
            m1 = m2;
            m2 = ar1[0];
            break;
        }

        if (ar1[i] < ar2[j]) {
            /* Store the prev median */
            m1 = m2;
            m2 = ar1[i];
            i++;
        }
        else {
            /* Store the prev median */
            m1 = m2;
            m2 = ar2[j];
            j++;
        }
    }
    return (m1 + m2) / 2;
}

// Driver Code
int main()
{
    int ar1[] = { 1, 12, 15, 26, 38 };
    int ar2[] = { 2, 13, 17, 30, 45 };
    int n1 = sizeof(ar1) / sizeof(ar1[0]);
    int n2 = sizeof(ar2) / sizeof(ar2[0]);

    if (n1 == n2)
        cout << "Median is " << getMedian(ar1, ar2, n1);
    else
        cout << "Doesn't work for arrays"
             << " of unequal size";

    getchar();
    return 0;
}

// This code is contributed
// by Shivi_Aggarwal
// A Simple Merge based O(n) solution to find median of
// two sorted arrays
#include <stdio.h>

/* This function returns median of ar1[] and ar2[].
   Assumptions in this function:
   Both ar1[] and ar2[] are sorted arrays
   Both have n elements */
int getMedian(int ar1[], int ar2[], int n)
{
    int i = 0; /* Current index of i/p array ar1[] */
    int j = 0; /* Current index of i/p array ar2[] */
    int count;
    int m1 = -1, m2 = -1;

    /* Since there are 2n elements, median will be average
       of elements at index n-1 and n in the array obtained
       after merging ar1 and ar2 */
    for (count = 0; count <= n; count++) {

        /* Below is to handle case where all elements of ar1[]
           are smaller than smallest (or first) element of ar2[] */
        if (i == n) {
            m1 = m2;
            m2 = ar2[0];
            break;
        }

        /* Below is to handle case where all elements of ar2[]
           are smaller than smallest (or first) element of ar1[] */
        else if (j == n) {
            m1 = m2;
            m2 = ar1[0];
            break;
        }

        if (ar1[i] < ar2[j]) {
            m1 = m2; /* Store the prev median */
            m2 = ar1[i];
            i++;
        }
        else {
            m1 = m2; /* Store the prev median */
            m2 = ar2[j];
            j++;
        }
    }
    return (m1 + m2) / 2;
}

/* Driver program to test above function */
int main()
{
    int ar1[] = { 1, 12, 15, 26, 38 };
    int ar2[] = { 2, 13, 17, 30, 45 };
    int n1 = sizeof(ar1) / sizeof(ar1[0]);
    int n2 = sizeof(ar2) / sizeof(ar2[0]);

    if (n1 == n2)
        printf("Median is %d", getMedian(ar1, ar2, n1));
    else
        printf("Doesn't work for arrays of unequal size");

    getchar();
    return 0;
}
Median is 16
Method 2 (By comparing the medians of the two arrays): This method works by first getting the medians m1 and m2 of the two sorted arrays and then comparing them. If m1 == m2, that value is the answer. If m1 < m2, the overall median must lie in ar1[m1....] and ar2[....m2]; if m1 > m2, it must lie in ar1[....m1] and ar2[m2...]. Repeating this on the remaining halves cuts the problem size roughly in half each time, which gives the required O(log n) complexity.
C++
C
// A divide and conquer based
// efficient solution to find
// median of two sorted arrays
// of same size.
#include <bits/stdc++.h>
using namespace std;

/* to get median of a sorted array */
int median(int[], int);

/* This function returns median of ar1[] and ar2[].
   Assumptions in this function:
   Both ar1[] and ar2[] are sorted arrays
   Both have n elements */
int getMedian(int ar1[], int ar2[], int n)
{
    /* return -1 for invalid input */
    if (n <= 0)
        return -1;
    if (n == 1)
        return (ar1[0] + ar2[0]) / 2;
    if (n == 2)
        return (max(ar1[0], ar2[0]) + min(ar1[1], ar2[1])) / 2;

    /* get the median of the first array */
    int m1 = median(ar1, n);

    /* get the median of the second array */
    int m2 = median(ar2, n);

    /* If medians are equal then return either m1 or m2 */
    if (m1 == m2)
        return m1;

    /* if m1 < m2 then median must exist in
       ar1[m1....] and ar2[....m2] */
    if (m1 < m2) {
        if (n % 2 == 0)
            return getMedian(ar1 + n / 2 - 1, ar2, n - n / 2 + 1);
        return getMedian(ar1 + n / 2, ar2, n - n / 2);
    }

    /* if m1 > m2 then median must exist in
       ar1[....m1] and ar2[m2...] */
    if (n % 2 == 0)
        return getMedian(ar2 + n / 2 - 1, ar1, n - n / 2 + 1);
    return getMedian(ar2 + n / 2, ar1, n - n / 2);
}

/* Function to get median of a sorted array */
int median(int arr[], int n)
{
    if (n % 2 == 0)
        return (arr[n / 2] + arr[n / 2 - 1]) / 2;
    else
        return arr[n / 2];
}

// Driver code
int main()
{
    int ar1[] = { 1, 2, 3, 6 };
    int ar2[] = { 4, 6, 8, 10 };
    int n1 = sizeof(ar1) / sizeof(ar1[0]);
    int n2 = sizeof(ar2) / sizeof(ar2[0]);

    if (n1 == n2)
        cout << "Median is " << getMedian(ar1, ar2, n1);
    else
        cout << "Doesn't work for arrays "
             << "of unequal size";

    return 0;
}

// This code is contributed
// by Shivi_Aggarwal
// A divide and conquer based efficient solution to find median
// of two sorted arrays of same size.
#include <bits/stdc++.h>
using namespace std;

int median(int[], int); /* to get median of a sorted array */

/* This function returns median of ar1[] and ar2[].
   Assumptions in this function:
   Both ar1[] and ar2[] are sorted arrays
   Both have n elements */
int getMedian(int ar1[], int ar2[], int n)
{
    /* return -1 for invalid input */
    if (n <= 0)
        return -1;
    if (n == 1)
        return (ar1[0] + ar2[0]) / 2;
    if (n == 2)
        return (max(ar1[0], ar2[0]) + min(ar1[1], ar2[1])) / 2;

    int m1 = median(ar1, n); /* get the median of the first array */
    int m2 = median(ar2, n); /* get the median of the second array */

    /* If medians are equal then return either m1 or m2 */
    if (m1 == m2)
        return m1;

    /* if m1 < m2 then median must exist in
       ar1[m1....] and ar2[....m2] */
    if (m1 < m2) {
        if (n % 2 == 0)
            return getMedian(ar1 + n / 2 - 1, ar2, n - n / 2 + 1);
        return getMedian(ar1 + n / 2, ar2, n - n / 2);
    }

    /* if m1 > m2 then median must exist in
       ar1[....m1] and ar2[m2...] */
    if (n % 2 == 0)
        return getMedian(ar2 + n / 2 - 1, ar1, n - n / 2 + 1);
    return getMedian(ar2 + n / 2, ar1, n - n / 2);
}

/* Function to get median of a sorted array */
int median(int arr[], int n)
{
    if (n % 2 == 0)
        return (arr[n / 2] + arr[n / 2 - 1]) / 2;
    else
        return arr[n / 2];
}

/* Driver program to test above function */
int main()
{
    int ar1[] = { 1, 2, 3, 6 };
    int ar2[] = { 4, 6, 8, 10 };
    int n1 = sizeof(ar1) / sizeof(ar1[0]);
    int n2 = sizeof(ar2) / sizeof(ar2[0]);

    if (n1 == n2)
        printf("Median is %d", getMedian(ar1, ar2, n1));
    else
        printf("Doesn't work for arrays of unequal size");

    return 0;
}
Median is 5
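To see why this is O(log n), here is a trace of getMedian() on the arrays above, ar1[] = {1, 2, 3, 6} and ar2[] = {4, 6, 8, 10}, with integer division as in the code: for n = 4, m1 = (2 + 3) / 2 = 2 and m2 = (6 + 8) / 2 = 7; since m1 < m2 and n is even, the function recurses on {2, 3, 6} and {4, 6, 8} with n = 3. Now m1 = 3 and m2 = 6, so it recurses on {3, 6} and {4, 6} with n = 2, which returns (max(3, 4) + min(6, 6)) / 2 = (4 + 6) / 2 = 5 — the value printed above. Each step roughly halves n, giving the required logarithmic bound.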
Please refer complete article on Median of two sorted arrays of same size for more details!
Fine-Tuning GPT2 on Colab GPU... For Free! | by Joey S | Towards Data Science
Models these days are very big, and most of us don’t have the resources to train them from scratch. Luckily, HuggingFace has generously provided pretrained models in PyTorch, and Google Colab allows usage of their GPU (for a fixed time). Otherwise, even fine-tuning on a dataset on my local machine without an NVIDIA GPU would take a significant amount of time. While the tutorial here is for GPT2, this can be done for any of the pretrained models given by HuggingFace, and for any size too.
Go to Google Colab and create a new notebook. It should look something like this.
Set to use GPU by clicking Runtime > Change runtime type
Then click Save.
We would run pip3 install transformers normally in Bash, but because this is in Colab, we have to run it with !
!pip3 install transformers
You can read more about WikiText data here. Overall, there’s WikiText-2 and WikiText-103. We’re going to use WikiText-2 because it’s smaller, and we have limits in terms of how long we can run on GPU, and how much data we can load into memory in Colab. To download and run, in a cell, run
%%bash
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip
HuggingFace actually provides a script to help fine-tune models here. We can just download the script by running
!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/language-modeling/run_language_modeling.py
Now we are ready to fine-tune.
There are many parameters to the script, and you can understand them by reading the manual. I’m just going to go over the important ones for basic training.
output_dir is where the model will be output
model_type is what model you want to use. In our case, it’s gpt2. If you have more memory and time, you can select larger gpt2 sizes which are listed in HuggingFace pretrained models list.
model_name_or_path is the path to the model. If you want to train from scratch, you can leave this blank. In our case, it’s also gpt2
do_train tells it to train
train_data_file points to the training file
do_eval tells it to evaluate afterwards. Not always required, but good to have
eval_data_file points to the evaluation file
Some extra ones you MAY care about, but you can also skip this.
save_steps is when to save checkpoints. If you have limited memory, you can set this to -1 so it’ll skip saving until the end
per_gpu_train_batch_size is batch size for GPU. You can increase this if your GPU has enough memory. To be safe, you can start with 1 and ramp it up if you still have memory
num_train_epochs is the number of epochs to train. Since we’re fine-tuning, I’m going to set this to 2
overwrite_output_dir is used if the output_dir already has something in it, and you’re fine to overwrite the existing model
All in all, to train, run this in a cell
%%bash
export TRAIN_FILE=wikitext-2-raw/wiki.train.raw
export TEST_FILE=wikitext-2-raw/wiki.test.raw
export MODEL_NAME=gpt2
export OUTPUT_DIR=output

python run_language_modeling.py --output_dir=$OUTPUT_DIR \
--model_type=$MODEL_NAME \
--model_name_or_path=$MODEL_NAME \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--per_gpu_train_batch_size=1 \
--save_steps=-1 \
--num_train_epochs=2
Note that if you want to fine-tune the model you just trained, you can change MODEL_NAME=gpt2 to MODEL_NAME=output/ so it’ll load the model we just trained
When you run this, if it takes some time without any output, you can hover over the RAM/Disk on the top right corner to see what’s happening.
The downside to Colab GPU is that it’s shared between Colab users. That means execution may not happen right away, as it’s being used by another user. When that happens, it’ll say something like
Waiting for 'Python 3 Google Compute Engine backend (GPU)' to finish its current execution.
There’s really nothing to do but wait it out.
After you’ve finished running the model, you can check that it exists in your output directory.
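For example, one quick way to do that from a Colab cell (just a convenience check, not shown in the original) is a simple directory listing:

!ls output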
To use it, you can run something like
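(The exact generation command isn’t reproduced here, so the following is only a hedged sketch: it assumes HuggingFace’s companion run_generation.py example script — downloaded from the transformers examples the same way as run_language_modeling.py — with a prompt and length chosen to match the sample output shown below.)

!python run_generation.py \
--model_type=gpt2 \
--model_name_or_path=output \
--length=250 \
--prompt=" = Toronto Raptors = "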
where = Toronto Raptors = is the equivalent of describing Toronto Raptors as the title of the article.
The result I got (and yours will differ) is
= Toronto Raptors = Toronto's first @-@ round draft pick in 2006 was selected by the Toronto Raptors with the seventh overall pick. He played in all 82 games, averaging 18 points per game and 6 assists. The Raptors won their third straight NBA championship in 2007, and won the 2009 NBA All @-@ Star Game. He played in a record 16 games for Toronto, averaging 19 points on 5 @.@ 6 rebounds and 6 assists in a season that saw Toronto win the Eastern Conference finals. He also played in the 2008 All @-@ Star Game and was named to the All @-@ Star Game MVP for the first time. He also was named to the All @-@ Star Game's all @-@ time career scoring list, and was the first player to accomplish the feat. He finished the season with an assist and an assist in eight games, and was the first player in NBA history to score in double figures. He was named to the All @-@ Star Game's All @-@ time scoring list in 2011, and was the first player to do this in consecutive seasons. = = Draft = = Toronto selected Jordan Matthews with the seventh overall pick in
From my example, I only generated the first 250 words, which is why it was cut off so abruptly. You can expand that if you want. Notice that this description of the Toronto Raptors is completely fake, as Jordan Matthews never played for the Raptors. The text coherency can be better as well, which can be adjusted by tuning with more epochs, or simply using a larger model. However, that requires more memory, so be careful with it.
In order for us to preserve this model, we should compress it and save it somewhere. This can be done easily with
! tar -czf gpt2-tuned.tar.gz output/
which creates a file called gpt2-tuned.tar.gz
To save it to your Google Drive from Colab, first you must have a Google account/Gmail account. In your Colab cell, you can run
from google.colab import drive
drive.mount('/content/drive')
without needing to install anything additional, since the google.colab library comes bundled with Google Colab. When you run the code described above, you should see something like this
You have to click on the link, sign in, and allow Colab to access your Google Drive. Then at the end, you’ll see something like
Copy that and paste it back into your Colab notebook.
Now you can copy your output model to your Google Drive by running
!cp gpt2-tuned.tar.gz /content/drive/My\ Drive/
And Voila! You’ve successfully tuned a pretrained model on a GPU, and saved it in your Google Drive. And you did it completely for free.
If you have any questions or improvements based on what we’ve worked on, please let me know in the comments.
You can run this Colab Notebook to reproduce everything shown above
C++ Stack Library - push() Function
The C++ function std::stack::push() inserts a new element at the top of the stack and increases the size of the stack by one.
Following is the declaration for the std::stack::push() function from the <stack> header.
void push (const value_type& val);
val − Value to be assigned to the newly inserted element.
Return value − None.
Exceptions − Depends upon the underlying container.
Time complexity − Constant, i.e. O(1).
The following example shows the usage of std::stack::push() function.
#include <iostream>
#include <stack>

using namespace std;

int main(void) {
   stack<int> s;

   for (int i = 0; i < 5; ++i)
      s.push(i + 1);

   cout << "Stack contents are" << endl;

   while (!s.empty()) {
      cout << s.top() << endl;
      s.pop();
   }

   return 0;
}
Let us compile and run the above program, this will produce the following result −
Stack contents are
5
4
3
2
1
Check for perfect square in JavaScript
We are required to write a JavaScript function that takes in a number and returns a boolean indicating whether or not the number is a perfect square.
Some examples of perfect square numbers are −
144, 196, 121, 81, 484
The code for this will be −
const num = 484;
const isPerfectSquare = num => {
   let ind = 1;
   while (ind * ind <= num) {
      if (ind * ind !== num) {
         ind++;
         continue;
      }
      return true;
   }
   return false;
};
console.log(isPerfectSquare(num));
The output in the console −
true
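As a side note that is not part of the original solution: for inputs in the range where floating-point square roots are reliable, the loop can be replaced with a constant-time check built on Math.sqrt() and Number.isInteger(), for example −

// Hypothetical alternative: constant-time perfect-square check
const isPerfectSquareSqrt = num => num >= 0 && Number.isInteger(Math.sqrt(num));
console.log(isPerfectSquareSqrt(484)); // true
console.log(isPerfectSquareSqrt(485)); // false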