Deploy a Dockerized Flask App to Google Cloud Platform | by Edward Krueger | Towards Data Science
By: Edward Krueger and Douglas Franklin.
In this article, we’ll cover how to deploy an app with a Pipfile.lock to the cloud and connect the app to a cloud database. For more information on virtual environments or getting started with the environment and package manager Pipenv, check out this article!
Newer developers often install everything at the system level due to a lack of understanding of, or experience with, virtual environments. Python packages installed with pip are placed at the system level. Retrieving requirements this way for every project creates an unmanageable global Python environment on your machine. Virtual environments allow you to compartmentalize your software while keeping an inventory of dependencies.
Pipenv, a tool for virtual environment and package management, allows developers to create isolated software products that are much easier to deploy, build upon, and modify.
Pipenv combines package management and virtual environment control into one tool for installing, removing, tracking, and documenting your dependencies and for creating, using, and managing your virtual environments. Pipenv is essentially pip and virtualenv wrapped together into a single product.
For app deployment, GCP can build environments from a Pipfile. Pipenv will automatically update our Pipfile as we add and remove dependencies.
Docker is the best way to put apps into production. Docker uses a Dockerfile to build a container. The built container is stored in Google Container Registry, where it can be deployed. Docker containers can be built locally and will run on any system running Docker.
GCP Cloud Build allows you to build containers remotely using the instructions contained in Dockerfiles. Remote builds are easy to integrate into CI/CD pipelines. They also save local computational time and energy as Docker uses lots of RAM.
Here is the Dockerfile we used for this project:
The first line of every Dockerfile begins with FROM. This is where we import our OS or programming language. The next line, starting with ENV, sets the environment variable APP_HOME to /app.
These lines are part of the Python cloud platform structure and you can read more about them in the documentation.
The WORKDIR line sets our working directory to /app. Then, the COPY line makes local files available in the Docker container.
The next three lines involve setting up the environment and executing it on the server. The RUN command can be followed by any bash code you would like executed. We use RUN to install pipenv, then use pipenv to install our dependencies. Finally, the CMD line executes our HTTP server, gunicorn, binding our container to $PORT, assigning the port a worker, specifying the number of threads to use at that port, and stating the path to the app as app.main:app.
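Putting those pieces together, a minimal Dockerfile along these lines would do the job; the base image tag, worker count, and thread count below are illustrative choices rather than the exact values from our repository, and the --system --deploy flags install straight from the lock file into the image's system Python, which is one common approach:

FROM python:3.7-slim

# Set the APP_HOME environment variable and use it as the working directory
ENV APP_HOME /app
WORKDIR $APP_HOME

# Copy local files into the container
COPY . ./

# Install pipenv, then install the project dependencies from the Pipfile.lock
RUN pip install pipenv
RUN pipenv install --system --deploy

# Run gunicorn, binding to $PORT with one worker and several threads
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app.main:app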
You can add a .dockerignore file to exclude files from your container image. For example, you likely do not want to include your test suite in your container.
To exclude files from being uploaded to Cloud Build, add a .gcloudignore file. Since Cloud Build copies your files to the cloud, you may want to omit images or data to cut down on storage costs.
If you would like to use these, be sure to check out the documentation for .dockerignore and .gcloudignore files; note that the pattern syntax is the same as a .gitignore!
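As a small illustration, a .dockerignore (or .gcloudignore) might look something like this; the entries are only examples and depend on your project:

.git
.gitignore
README.md
tests/
__pycache__/
*.pyc
.env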
We need to make some final changes to our project files in preparation for deployment.
We need to add gunicorn and pymysql to our Pipfile with the following command.
pipenv install gunicorn pymysql
Git add the Pipfile, Pipfile.lock, and the Dockerfile you made earlier to your repository.
Now, once we have our Dockerfile ready, build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/PROJECT-ID/container-name
Note: Replace PROJECT-ID with your GCP project ID and container-name with your container name. You can view your project ID by running the command gcloud config get-value project.
This Docker image is now accessible in the Google Container Registry (GCR) and can be referenced by its URL when deploying with Cloud Run.
1. Deploy using the following command:
gcloud run deploy --image gcr.io/PROJECT-ID/container-name --platform managed
Note: Replace PROJECT-ID with your GCP project ID and container-name with your container’s name. You can view your project ID by running the command gcloud config get-value project.
2. You will be prompted for service name and region: select the service name and region of your choice.
3. You will be prompted to allow unauthenticated invocations: respond y if you want public access, and n to limit access to resources in the same Google Cloud project.
4. Wait a few moments until the deployment is complete. On success, the command line displays the service URL.
5. Visit your deployed container by opening the service URL in a web browser.
Now that we have a container image stored in GCR, we are ready to deploy our application. Visit GCP Cloud Run and click Create Service; be sure to set up billing as required.
Select the region you would like to serve and specify a unique service name. Then choose between public or private access to your application by choosing unauthenticated or authenticated, respectively.
Now we use our GCR container image URL from above. Paste the URL into the space or click select and find it using a dropdown list. Check out the advanced settings to specify server hardware, container port and additional commands, maximum requests and scaling behaviors.
Click create when you’re ready to build and deploy!
You’ll be brought to the GCP Cloud Run service details page where you can manage the service and view metrics and build logs.
Click the URL to view your deployed application!
Congratulations! You have just deployed an application packaged in a container image to Cloud Run. Cloud Run automatically and horizontally scales your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.
That being said, be sure to shut down your services when you do not want to pay for them!
Go to the cloud console and set up billing if you haven’t already. Now you can create an SQL instance.
Select the SQL dialect you would like to use, we are using MySQL.
Set an instance ID, password, and location.
Setting a new Cloud SQL connection, like any configuration change, leads to the creation of a new Cloud Run revision. To connect your cloud service to your cloud database instance:
1. Go to Cloud Run.
2. Configure the service:
If you are adding a Cloud SQL connection to a new service:
You need to have your service containerized and uploaded to the Container Registry.
Click CREATE SERVICE.
If you are adding Cloud SQL connections to an existing service:
Click on the service name.
Click DEPLOY NEW REVISION.
3. Enable connecting to a Cloud SQL instance:
Click SHOW OPTIONAL SETTINGS:
If you are adding a connection to a Cloud SQL instance in your project, select the desired Cloud SQL instance from the dropdown menu after clicking add connection.
If you are using a Cloud SQL instance from another project, select connection string in the dropdown and then enter the full instance connection name in the format PROJECT-ID:REGION:INSTANCE-ID.
4. Click Create or Deploy.
In either case, we’ll want our connection string to look like the one below for now.
mysql://ael7qci22z1qwer:nn9keetiyertrwdf@c584asdfgjnm02sk.cbetxkdfhwsb.us-east-1.rds.gcp.com:3306/fq14casdf1rb3y3n
We’ll need to make a change to the DB connection string so that it uses the PyMySQL driver.
In a text editor, replace the mysql prefix with mysql+pymysql, then save the updated string as your SQL connection.
mysql+pymysql://ael7qci22z1qwer:nn9keetiyertrwdf@c584asdfgjnm02sk.cbetxkdfhwsb.us-east-1.rds.gcp.com:3306/fq14casdf1rb3y3n
Note that you do not have to use GCP’s SQL. If you are using a third-party database, you can skip the Cloud SQL connection and instead supply your connection string as an environment variable.
Locally, create a new file called .env and add the connection string for your cloud database as DB_CONN, as shown below.
DB_CONN="mysql+pymysql://root:PASSWORD@HOSTNAME:3306/records_db"
Note: Running pipenv shell gives us access to these hidden environment variables, since Pipenv automatically loads the .env file. Similarly, we can access the hidden variables in Python with os.
MySQL_DB_CONN = os.getenv("DB_CONN")
Be sure to add the above line to your database.py file so that it is ready to connect to the cloud!
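If your database.py uses SQLAlchemy (an assumption here; adapt the idea to whatever your module actually uses), the connection code could look like this sketch:

import os
from sqlalchemy import create_engine

# Read the connection string from the environment (loaded from .env by Pipenv locally)
MySQL_DB_CONN = os.getenv("DB_CONN")

# The mysql+pymysql:// prefix makes SQLAlchemy use the PyMySQL driver
engine = create_engine(MySQL_DB_CONN)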
This .env file now contains sensitive information and should be added to your .gitignore so that it doesn't end up somewhere publicly visible.
Now that we have our app and database in the cloud, let’s ensure our system is working correctly.
Once you can see the database listed on GCP, you are ready to load the database with a load script. The following gist includes our load.py script.
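The gist itself isn't reproduced here, but the idea is simply to write some records into the cloud database. A minimal sketch, assuming a records.csv file and a records table (both names are placeholders, not the ones from our repository), might be:

import os

import pandas as pd
from sqlalchemy import create_engine

# Connect to the cloud database with the same DB_CONN variable the app uses
engine = create_engine(os.getenv("DB_CONN"))

# Load local data and append it to the records table
df = pd.read_csv("records.csv")
df.to_sql("records", con=engine, if_exists="append", index=False)

print("Loaded {} rows".format(len(df)))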
Let’s run this load script to see if we can post to our DB.
First, run the following line to enter your virtual environment.
pipenv shell
Then run your load.py script.
python load.py
Visit the remote app address and see if your data has been added to the cloud database. Be sure to check your build logs to find tracebacks if you run into any issues!
For more clarification on this loading process or setting up your app in a modular way, visit our Medium guide to building a data API! That article explains the code above in detail.
In this article, we learned a little about environment management with pipenv and how to Dockerize apps. Then we covered how to store a Docker container in Google Container Registry and deploy the container with the Cloud Build CLI and GUI. Next, we set up a cloud SQL database and connected it to our app. Lastly, we discussed one way to load the database, running load.py locally. Note that if your app collects data itself, you only need to deploy the app and database, then the deployed app will populate the database as it collects data.
Here is a link to the GitHub repository with our code for this project. Be sure to check out this code to see how we set up the whole codebase!
Introduction to the Telegram API. Analyse your conversation history on... | by Jiayu Yi | Towards Data Science
Telegram is an instant messaging service just like WhatsApp, Facebook Messenger and WeChat. It has gained popularity in recent years for various reasons: its non-profit nature, cross-platform support, promises of security¹, and its open APIs.
In this post, we’ll use Telethon, a Python client library for the Telegram API to count the number of messages in each of our Telegram chats.
The more well-known of Telegram’s APIs is its Bot API, an HTTP-based API for developers to interact with the bot platform. The Bot API allows developers to control Telegram bots, for example receiving messages and replying to other users.
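For instance, calling the Bot API's getMe method is a single HTTP request. The token below is a placeholder you would get from @BotFather, and the requests library is just one convenient way to make the call:

import requests

# Placeholder token; a real one is issued by @BotFather
TOKEN = "123456789:REPLACE_WITH_YOUR_BOT_TOKEN"

# getMe returns basic information about the bot as ordinary JSON
response = requests.get("https://api.telegram.org/bot{}/getMe".format(TOKEN))
print(response.json())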
Besides the Bot API, there’s also the Telegram API itself. This is the API used by Telegram apps for all your actions on Telegram. To name a few: viewing your chats, sending and receiving messages, changing your display picture or creating new groups. Through the Telegram API you can do anything you can do in a Telegram app programmatically.
The Telegram API is a lot more complicated than the Bot API. You can access the Bot API through HTTP requests with standard JSON, form or query string payloads, while the Telegram API uses its own custom payload format and encryption protocol.
MTProto is the custom encryption scheme which backs Telegram’s promises of security. It is an application layer protocol which writes directly to an underlying transport stream such as TCP or UDP, and also HTTP. Fortunately, we don’t need to concern ourselves with it directly when using a client library. On the other hand, we do need to understand the payload format in order to make API calls.
The Telegram API is RPC-based, so interacting with the API involves sending a payload representing a function invocation and receiving a result. For example, reading the contents of a conversation involves calling the messages.getMessage function with the necessary parameters and receiving a messages.Messages in return.
Type Language, or TL, is used to represent types and functions used by the API. A TL-Schema is a collection of available types and functions. In MTProto, TL constructs are serialised into binary form before being embedded as the payload of MTProto messages; however, we can leave this to the client library which we will be using.
An example of a TL-Schema (types are declared first, followed by functions with a separator in between):
auth.sentCode#efed51d9 phone_registered:Bool phone_code_hash:string send_call_timeout:int is_password:Bool = auth.SentCode;
auth.sentAppCode#e325edcf phone_registered:Bool phone_code_hash:string send_call_timeout:int is_password:Bool = auth.SentCode;
---functions---
auth.sendCode#768d5f4d phone_number:string sms_type:int api_id:int api_hash:string lang_code:string = auth.SentCode;
A TL function invocation and result using functions and types from the above TL-Schema, and equivalent binary representation (from the official documentation):
(auth.sendCode "79991234567" 1 32 "test-hash" "en")
=
(auth.sentCode phone_registered:(boolFalse) phone_code_hash:"2dc02d2cda9e615c84")

d16ff372 3939370b 33323139 37363534 00000001 00000020 73657409 61682d74 00006873 e77e812d
=
2215bcbd bc799737 63643212 32643230 39616463 35313665 00343863 e12b7901
The Telegram API is versioned using TL-Schema layers; each layer has a unique TL-Schema. The Telegram website contains the current TL-Schema and previous layers at https://core.telegram.org/schema.
Or so it seems: it turns out that although the latest TL-Schema layer on the Telegram website is Layer 23, at the time of writing the latest layer is actually already Layer 71. You can find the latest TL-Schema here instead.
You will need to obtain an api_id and api_hash to interact with the Telegram API. Follow the directions from the official documentation here: https://core.telegram.org/api/obtaining_api_id.
You will have to visit https://my.telegram.org/ and login with your phone number and confirmation code which will be sent on Telegram, and fill in the form under “API Development Tools” with an app title and short name. Afterwards, you can find your api_id and api_hash at the same place.
Alternatively, the same instructions mention that you can use the sample credentials which can be found in Telegram source codes for testing. For convenience, I’ll be using the credentials I found in the Telegram Desktop source code on GitHub in the sample code here.
We’ll be using Telethon to communicate with the Telegram API. Telethon is a Python 3 client library (which means you will have to use Python 3) for the Telegram API which will handle all the protocol-specific tasks for us, so we’ll only need to know what types to use and what functions to call.
You can install Telethon with pip:
pip install telethon
Use the pip corresponding to your Python 3 interpreter; this may be pip3 instead. (Random: Recently Ubuntu 17.10 was released, and it uses Python 3 as its default Python installation.)
Before you can start interacting with the Telegram API, you need to create a client object with your api_id and api_hash and authenticate it with your phone number. This is similar to logging in to Telegram on a new device; you can imagine this client as just another Telegram app.
Below is some code to create and authenticate a client object, modified from the Telethon documentation:
from telethon import TelegramClient
from telethon.errors.rpc_errors_401 import SessionPasswordNeededError

# (1) Use your own values here
api_id = 17349
api_hash = '344583e45741c457fe1862106095a5eb'
phone = 'YOUR_NUMBER_HERE'
username = 'username'

# (2) Create the client and connect
client = TelegramClient(username, api_id, api_hash)
client.connect()

# Ensure you're authorized
if not client.is_user_authorized():
    client.send_code_request(phone)
    try:
        client.sign_in(phone, input('Enter the code: '))
    except SessionPasswordNeededError:
        client.sign_in(password=input('Password: '))

me = client.get_me()
print(me)
As mentioned earlier, the api_id and api_hash above are from the Telegram Desktop source code. Put your own phone number into the phone variable.
Telethon will create a .session file in its working directory to persist the session details, just like how you don’t have to re-authenticate to your Telegram apps every time you close and reopen them. The file name will start with the username variable. It is up to you if you want to change it, in case you want to work with multiple sessions.
If there was no previous session, running this code will cause an authorisation code to be sent to you via Telegram. If you have enabled Two-Step Verification on your Telegram account, you will also need to enter your Telegram password. After you have authenticated once and the .session file is saved, you won’t have to re-authenticate again until your session expires, even if you run the script again.
If the client was created and authenticated successfully, an object representing yourself should be printed to the console. It will look similar to (the ellipses ... mean that some content was skipped):
User(is_self=True ... first_name='Jiayu', last_name=None, username='USERNAME', phone='PHONE_NUMBER' ...
Now you can use this client object to start making requests to the Telegram API.
As mentioned earlier, using the Telegram API involves calling the available functions in the TL-Schema. In this case, we’re interested in the messages.GetDialogs function. We’ll also need to take note of the relevant types in the function arguments. Here is a subset of the TL-Schema we’ll be using to make this request:
messages.dialogs#15ba6c40 dialogs:Vector<Dialog> messages:Vector<Message> chats:Vector<Chat> users:Vector<User> = messages.Dialogs;
messages.dialogsSlice#71e094f3 count:int dialogs:Vector<Dialog> messages:Vector<Message> chats:Vector<Chat> users:Vector<User> = messages.Dialogs;
---functions---
messages.getDialogs#191ba9c5 flags:# exclude_pinned:flags.0?true offset_date:int offset_id:int offset_peer:InputPeer limit:int = messages.Dialogs;
It’s not easy to read, but note that the messages.getDialogs function will return a messages.Dialogs, which is an abstract type for either a messages.dialogs or a messages.dialogsSlice object which both contain vectors of Dialog, Message, Chat and User.
Fortunately, the Telethon documentation gives more details on how we can invoke this function. From https://lonamiwebs.github.io/Telethon/index.html, if you type getdialogs into the search box, you will see a result for a method called GetDialogsRequest (TL-Schema functions are represented by *Request objects in Telethon).
The documentation for GetDialogsRequest states the return type for the method as well as slightly more details about the parameters. The “Copy import to the clipboard” button is particularly useful for when we want to use this object, like right now.
The messages.getDialogs function as well as the constructor for GetDialogsRequest takes an offset_peer argument of type InputPeer. From the documentation for GetDialogsRequest, click through the InputPeer link to see a page describing the constructors for and methods taking and returning this type.
Since we want to create an InputPeer object to use as an argument for our GetDialogsRequest, we’re interested in the constructors for InputPeer. In this case, we’ll use the InputPeerEmpty constructor. Click through once again to the page for InputPeerEmpty and copy its import path to use it. The InputPeerEmpty constructor takes no arguments.
Here is our finished GetDialogsRequest and how to get its result by passing it to our authorised client object:
from telethon.tl.functions.messages import GetDialogsRequest
from telethon.tl.types import InputPeerEmpty

get_dialogs = GetDialogsRequest(
    offset_date=None,
    offset_id=0,
    offset_peer=InputPeerEmpty(),
    limit=30,
)
dialogs = client(get_dialogs)
print(dialogs)
In my case, I got back a DialogsSlice object containing a list of dialogs, messages, chats and users, as we expected based on the TL-Schema:
DialogsSlice(count=204, dialogs=[...], messages=[...], chats=[...], users=[...])
Receiving a DialogsSlice instead of Dialogs means that not all my dialogs were returned, but the count attribute tells me how many dialogs I have in total. If you have fewer than a certain number of conversations, you may receive a Dialogs object instead, in which case all your dialogs were returned and the number of dialogs you have is just the length of the vectors.
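A small sketch of handling both cases looks like this; the import path below matches the Telethon version used here, so double-check it against your installed version:

from telethon.tl.types.messages import Dialogs, DialogsSlice

# DialogsSlice carries the total in .count; Dialogs means everything came back
if isinstance(dialogs, DialogsSlice):
    total_dialogs = dialogs.count
else:
    total_dialogs = len(dialogs.dialogs)
print(total_dialogs)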
The terminology used by the Telegram API may be a little confusing sometimes, especially with the lack of information other than the type definitions. What are “dialogs”, “messages”, “chats” and “users”?
dialogs represents the conversations from your conversation history
chats represents the groups and channels corresponding to the conversations in your conversation history
messages contains the last message sent to each conversation like you see in your list of conversations in your Telegram app
users contains the individual users with whom you have one-on-one chats with or who was the sender of the last message to one of your groups
For example, if my chat history was this screenshot I took from the Telegram app in the Play Store:
dialogs would contain the conversations in the screenshot: Old Pirates, Press Room, Monika, Jaina...
chats would contain entries for Old Pirates, Press Room and Meme Factory.
messages will contain the messages “All aboard!” from Old Pirates, “Wow, nice mention!” from Press Room, a message representing a sent photo to Monika, a message representing Jaina’s reply and so on.
users will contain an entry for Ashley since she sent the last message to Press Room, Monika, Jaina, Kate and Winston since he sent the last message to Meme Factory.
(I haven’t worked with secret chats through the Telegram API yet so I’m not sure how they are handled.)
Our objective is to count the number of messages in each conversation. To get the number of messages in a conversation, we can use the messages.getHistory function from the TL-Schema:
messages.getHistory#afa92846 peer:InputPeer offset_id:int offset_date:date add_offset:int limit:int max_id:int min_id:int = messages.Messages
Following a similar process as previously with messages.getDialogs, we can work out how to call this with Telethon using a GetHistoryRequest. This will return either a Messages or MessagesSlice object which either contains a count attribute telling us how many messages there are in a conversation, or all the messages in a conversation so we can just count the messages it contains.
However, we will first have to construct the right InputPeer for our GetHistoryRequest. This time, we can’t use InputPeerEmpty, since we want to retrieve the message history for a specific conversation. Instead, we have to use either the InputPeerUser, InputPeerChat or InputPeerChannel constructor depending on the nature of the conversation.
In order to count the number of messages in each of our conversations, we will have to make a GetHistoryRequest for that conversation with the appropriate InputPeer for that conversation.
All of the relevant InputPeer constructors take the same id and access_hash parameters, but depending on whether the conversation is a one-on-one chat, group or channel, these values are found in different places in the GetDialogsRequest response:
dialogs: a list of the conversations we want to count the messages in and contains a peer value with the type and id of the peer corresponding to that conversation, but not the access_hash.
chats: contains the id, access_hash and titles for our groups and channels.
users: contains the id, access_hash and first name for our individual chats.
In pseudocode, we have:
let counts be a mapping from conversations to message counts
for each dialog in dialogs:
    if dialog.peer is a channel:
        channel = corresponding object in chats
        name = channel.title
        id = channel.id
        access_hash = channel.access_hash
        peer = InputPeerChannel(id, access_hash)
    else if dialog.peer is a group:
        group = corresponding object in chats
        name = group.title
        id = group.id
        peer = InputPeerChat(id)
    else if dialog.peer is a user:
        user = corresponding object in users
        name = user.first_name
        id = user.id
        access_hash = user.access_hash
        peer = InputPeerUser(id, access_hash)
    history = message history for peer
    count = number of messages in history
    counts[name] = count
Converting to Python code (note that dialogs, chats and users above are members of the result of our GetDialogsRequest which is also called dialogs):
# imports for the request and result types used below (paths per this Telethon version)
from telethon.tl.functions.messages import GetHistoryRequest
from telethon.tl.types import (
    PeerChannel, PeerChat, PeerUser,
    InputPeerChannel, InputPeerChat, InputPeerUser,
)
from telethon.tl.types.messages import Messages

counts = {}

# create dictionary of ids to users and chats
users = {}
chats = {}
for u in dialogs.users:
    users[u.id] = u
for c in dialogs.chats:
    chats[c.id] = c

for d in dialogs.dialogs:
    peer = d.peer
    if isinstance(peer, PeerChannel):
        id = peer.channel_id
        channel = chats[id]
        access_hash = channel.access_hash
        name = channel.title
        input_peer = InputPeerChannel(id, access_hash)
    elif isinstance(peer, PeerChat):
        id = peer.chat_id
        group = chats[id]
        name = group.title
        input_peer = InputPeerChat(id)
    elif isinstance(peer, PeerUser):
        id = peer.user_id
        user = users[id]
        access_hash = user.access_hash
        name = user.first_name
        input_peer = InputPeerUser(id, access_hash)
    else:
        continue

    get_history = GetHistoryRequest(
        peer=input_peer,
        offset_id=0,
        offset_date=None,
        add_offset=0,
        limit=1,
        max_id=0,
        min_id=0,
    )
    history = client(get_history)
    if isinstance(history, Messages):
        count = len(history.messages)
    else:
        count = history.count
    counts[name] = count

print(counts)
Our counts object is a dictionary of chat names to message counts. We can sort and pretty print it to see our top conversations:
sorted_counts = sorted(counts.items(), key=lambda x: x[1], reverse=True)
for name, count in sorted_counts:
    print('{}: {}'.format(name, count))
Example output:
Group chat 1: 10000
Group chat 2: 3003
Channel 1: 2000
Chat 1: 1500
Chat 2: 300
Telethon has some helper functions to simplify common operations. We could actually have done the above with two of these helper methods, client.get_dialogs() and client.get_message_history(), instead:
from telethon.tl.types import User

_, entities = client.get_dialogs(limit=30)

counts = []
for e in entities:
    if isinstance(e, User):
        name = e.first_name
    else:
        name = e.title
    count, _, _ = client.get_message_history(e, limit=1)
    counts.append((name, count))

counts.sort(key=lambda x: x[1], reverse=True)
for name, count in counts:
    print('{}: {}'.format(name, count))
However, I felt that it was a better learning experience to call the Telegram API methods directly first, especially since there isn’t a helper method for everything. Nevertheless, there are some things which are much simpler with the helper methods, such as how we authenticated our client in the beginning, or actions such as uploading files which would otherwise be tedious.
The full code for this example can be found as a Gist here: https://gist.github.com/yi-jiayu/7b34260cfbfa6cbc2b4464edd41def42
There’s a lot more you can do with the Telegram API, especially from an analytics standpoint. I started looking into it after thinking about one of my older projects to try to create data visualisations out of exported WhatsApp chat histories: https://github.com/yi-jiayu/chat-analytics.
Using regex to parse the plain text emailed chat history, I could generate a chart similar to the GitHub punch card repository graph showing at what times of the week a chat was most active:
However, using the “Email chat” function to export was quite hackish, and you needed to manually export the conversation history for each chat, and it would be out of date once you received a new message. I didn’t pursue the project much further, but I always thought about other insights could be pulled from chat histories.
With programmatic access to chat histories, there’s lots more that can be done with Telegram chats. Methods such as messages.search could be exceptionally useful. Perhaps dynamically generating statistics on conversations which peak and die down, or which are consistently active, or finding your favourite emojis or most common n-grams? The sky’s the limit (or the API rate limit, whichever is lower).
(2017–10–25 09:45 SGT) Modified message counting to skip unexpected dialogs
^ Personally, I can’t comment about Telegram’s security other than point out that Telegram conversations are not end-to-end encrypted by default, as well as bring up the common refrain about Telegram’s encryption protocol being self-developed and less-scrutinised compared to more-established protocols such as the Signal Protocol.
"text": "let counts be a mapping from conversations to message countsfor each dialog in dialogs: if dialog.peer is a channel: channel = corresponding object in chats name = channel.title id = channel.id access_hash = channel.access_hash peer = InputPeerChannel(id, access_hash) else if dialog.peer is a group: group = corresponding object in chats name = group.title id = group.id peer = InputPeerChat(id) else if dialog.peer is a user: user = corresponding object in users name = user.first_name id = user.id access_hash = user.access_hash peer = InputPeerUser(id, access_hash) history = message history for peer count = number of messages in history counts[name] = count"
},
{
"code": null,
"e": 14742,
"s": 14592,
"text": "Converting to Python code (note that dialogs, chats and users above are members of the result of our GetDialogsRequest which is also called dialogs):"
},
{
"code": null,
"e": 15902,
"s": 14742,
"text": "counts = {}# create dictionary of ids to users and chatsusers = {}chats = {}for u in dialogs.users: users[u.id] = ufor c in dialogs.chats: chats[c.id] = cfor d in dialogs.dialogs: peer = d.peer if isinstance(peer, PeerChannel): id = peer.channel_id channel = chats[id] access_hash = channel.access_hash name = channel.title input_peer = InputPeerChannel(id, access_hash) elif isinstance(peer, PeerChat): id = peer.chat_id group = chats[id] name = group.title input_peer = InputPeerChat(id) elif isinstance(peer, PeerUser): id = peer.user_id user = users[id] access_hash = user.access_hash name = user.first_name input_peer = InputPeerUser(id, access_hash) else: continue get_history = GetHistoryRequest( peer=input_peer, offset_id=0, offset_date=None, add_offset=0, limit=1, max_id=0, min_id=0, ) history = client(get_history) if isinstance(history, Messages): count = len(history.messages) else: count = history.count counts[name] = countprint(counts)"
},
{
"code": null,
"e": 16031,
"s": 15902,
"text": "Our counts object is a dictionary of chat names to message counts. We can sort and pretty print it to see our top conversations:"
},
{
"code": null,
"e": 16176,
"s": 16031,
"text": "sorted_counts = sorted(counts.items(), key=lambda x: x[1], reverse=True)for name, count in sorted_counts: print('{}: {}'.format(name, count))"
},
{
"code": null,
"e": 16192,
"s": 16176,
"text": "Example output:"
},
{
"code": null,
"e": 16268,
"s": 16192,
"text": "Group chat 1: 10000Group chat 2: 3003Channel 1: 2000Chat 1: 1500Chat 2: 300"
},
{
"code": null,
"e": 16470,
"s": 16268,
"text": "Telethon has some helper functions to simplify common operations. We could actually have done the above with two of these helper methods, client.get_dialogs() and client.get_message_history(), instead:"
},
{
"code": null,
"e": 16867,
"s": 16470,
"text": "from telethon.tl.types import User_, entities = client.get_dialogs(limit=30)counts = []for e in entities: if isinstance(e, User): name = e.first_name else: name = e.title count, _, _ = client.get_message_history(e, limit=1) counts.append((name, count))message_counts.sort(key=lambda x: x[1], reverse=True)for name, count in counts: print('{}: {}'.format(name, count))"
},
{
"code": null,
"e": 17241,
"s": 16867,
"text": "However, I felt that it a better learning experience to call the Telegram API methods directly first, especially since there isn’t a helper method for everything. Nevertheless, there are some things which are much simpler with the helper methods, such as how we authenticated our client in the beginning, or actions such as uploading files which would be otherwise tedious."
},
{
"code": null,
"e": 17367,
"s": 17241,
"text": "The full code for this example can be found as a Gist here: https://gist.github.com/yi-jiayu/7b34260cfbfa6cbc2b4464edd41def42"
},
{
"code": null,
"e": 17655,
"s": 17367,
"text": "There’s a lot more you can do with the Telegram API, especially from an analytics standpoint. I started looking into it after thinking about one of my older projects to try to create data visualisations out of exported WhatsApp chat histories: https://github.com/yi-jiayu/chat-analytics."
},
{
"code": null,
"e": 17846,
"s": 17655,
"text": "Using regex to parse the plain text emailed chat history, I could generate a chart similar to the GitHub punch card repository graph showing at what times of the week a chat was most active:"
},
{
"code": null,
"e": 18172,
"s": 17846,
"text": "However, using the “Email chat” function to export was quite hackish, and you needed to manually export the conversation history for each chat, and it would be out of date once you received a new message. I didn’t pursue the project much further, but I always thought about other insights could be pulled from chat histories."
},
{
"code": null,
"e": 18575,
"s": 18172,
"text": "With programmatic access to chat histories, there’s lots more that can be done with Telegram chats. Methods such as messages.search could me exceptionally useful. Perhaps dynamically generating statistics on conversations which peak and die down, or which are consistently active, or finding your favourite emojis or most common n-grams? The sky’s the limit (or the API rate limit, whichever is lower)."
},
{
"code": null,
"e": 18651,
"s": 18575,
"text": "(2017–10–25 09:45 SGT) Modified message counting to skip unexpected dialogs"
},
{
"code": null,
"e": 18983,
"s": 18651,
"text": "^ Personally, I can’t comment about Telegram’s security other than point out that Telegram conversations are not end-to-end encrypted by default, as well as bring up the common refrain about Telegram’s encryption protocol being self-developed and less-scrutinised compared to more-established protocols such as the Signal Protocol."
}
] |
Get text from memes with python and OCR | Towards Data Science
|
Written with Lorenzo Baiocco.
Extracting text information from an image can serve different purposes. In our case, we needed to extract text to enhance the performance of our multi-modal sentiment classification model, which is based on tweets accompanied by images. Since we found that the most common reaction pics on social media are formatted as memes, we developed a pipeline to extract text from images formatted like that, and in this article we’ll present it.
Currently (Nov 2020), the state of the art in text extraction through OCR methods is represented by Google Tesseract OCR, which is the most used open-source software to deal with this task.
Tesseract is easy to install (following this link) and use in a python environment, through the pytesseract library.
The environment used for this article is the following:
Python 3.7.9
Tesseract 5.0.0
Pytesseract
Pillow
Matplotlib
OpenCV
Numpy
The first thing to do is to import all the packages:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pytesseract
#change this path if you install pytesseract in another folder:
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
We are ready to extract some text! Let’s load a dummy meme to avoid breaking copyright (anyway, this meme has been done with imageflip).
im = np.array(Image.open('..\\immagini_test_small\\meme3.jpg'))
plt.figure(figsize=(10,10))
plt.title('PLAIN IMAGE')
plt.imshow(im); plt.xticks([]); plt.yticks([])
plt.savefig('img1.png')
Searching for text is as easy as this:
text = pytesseract.image_to_string(im)
print(text.replace('\n', ' '))
And the result is...” a a TTY SHE aT aN Sa ithe Pet”. Woooah something is going on. Tesseract is not working. The problem is that Tesseract is optimized to recognize text in typical text documents, so it can be difficult to recognize text within the image without preprocessing it. The Tesseract documentation provides a lot of ways to enhance the quality of our images. After some fine-tuning, we found a very clean method that worked perfectly on the subset of data that we used. Before diving into it, you need to know that, although these operations worked very well for our data, there are a couple of parameters that were tuned empirically. If you have a dataset that is different from the one that we used, it could be helpful to tune them a little bit.
The first function that we applied to our image is bilateral filtering. If you want to understand deeply how it works, there is a nice tutorial on the OpenCV site, and you can find the description of the parameters here.
In a nutshell, this filter helps to remove noise but, in contrast with other filters, preserves edges instead of blurring them. This is done by excluding from the blurring of a point those neighbors that do not have similar intensities. With the chosen parameters, the difference from the original image is not strongly perceptible; however, it led to better final performance.
im = cv2.bilateralFilter(im, 5, 55, 60)
plt.figure(figsize=(10,10))
plt.title('BILATERAL FILTER')
plt.imshow(im); plt.xticks([]); plt.yticks([])
plt.savefig('img2.png', bbox_inches='tight')
The second operation is pretty clear: we convert our RGB image to grayscale.
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(10,10))
plt.title('GRAYSCALE IMAGE')
plt.imshow(im, cmap='gray'); plt.xticks([]); plt.yticks([])
plt.savefig('img3.png', bbox_inches='tight')
The last transformation is binarization. For every pixel, the same threshold value is applied: if the pixel value is smaller than the threshold, it is set to 0; otherwise, it is set to 255. Since we have white text, we want to black out everything that is not almost perfectly white (not exactly white, since the text is usually not “255-white”). We found that 240 was a threshold that could do the work. Since Tesseract is trained to recognize black text, we also need to invert the colors. The function threshold from OpenCV can do the two operations jointly, by selecting the inverted binarization.
_, im = cv2.threshold(im, 240, 255, 1)
plt.figure(figsize=(10,10))
plt.title('BINARY IMAGE')
plt.imshow(im, cmap='gray'); plt.xticks([]); plt.yticks([])
plt.savefig('img4.png', bbox_inches='tight')
This is the input that we want! Let’s put everything into a function:
def preprocess_final(im):
    im = cv2.bilateralFilter(im, 5, 55, 60)
    im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    _, im = cv2.threshold(im, 240, 255, 1)
    return im
Before running our tesseract on the final image, we can tune it a little bit to optimize the configuration. (These lists come directly from the documentation).
There are four OEM (OCR Engine Mode) options:
0 Legacy engine only.
1 Neural nets LSTM engine only.
2 Legacy + LSTM engines.
3 Default, based on what is available.
And fourteen PSM (Page Segmentation Mode) options:
0 Orientation and script detection (OSD) only.
1 Automatic page segmentation with OSD.
2 Automatic page segmentation, but no OSD, or OCR. (not implemented)
3 Fully automatic page segmentation, but no OSD. (Default)
4 Assume a single column of text of variable sizes.
5 Assume a single uniform block of vertically aligned text.
6 Assume a single uniform block of text.
7 Treat the image as a single text line.
8 Treat the image as a single word.
9 Treat the image as a single word in a circle.
10 Treat the image as a single character.
11 Sparse text. Find as much text as possible in no particular order.
12 Sparse text with OSD.
13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific.
The combination we found to work best is the one used in the config below (OEM 3 and PSM 11). Furthermore, we decided to give Tesseract a whitelist of acceptable characters, since we preferred to keep only capital letters in order to avoid small text and strange characters that Tesseract sometimes picks up.
custom_config = r"--oem 3 --psm 11 -c tessedit_char_whitelist= 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '"
Now we can check if everything is finally working:
img = np.array(Image.open('..\\immagini_test_small\\meme3.jpg'))
im = preprocess_final(img)
text = pytesseract.image_to_string(im, lang='eng', config=custom_config)
print(text.replace('\n', ''))
And the answer is... “WHEN TOWARDS DATA SCIENCE REFUSES YOUR ARTICLE”. Et voilà! Now everything works perfectly.
In this brief article, we defined a simple pipeline. The definition of the pipeline was driven by our specific goal. If you need something similar but different, you should obviously consider modifying it. For example, if you also need the black text that sometimes appears in the upper part of memes, you could consider a double stream and a final join of the results. Anyway, we hope that this simple procedure can help you get started!
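As a rough illustration of that double-stream idea, here is a minimal sketch. The function name extract_black_text and the threshold of 100 are our own guesses and not part of the original pipeline; the threshold would need tuning on your data.
def extract_black_text(im):
    # same steps as preprocess_final, but keep dark pixels:
    # a plain (non-inverted) binarization turns dark text black and light areas white
    im = cv2.bilateralFilter(im, 5, 55, 60)
    im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    _, im = cv2.threshold(im, 100, 255, cv2.THRESH_BINARY)
    return im

img = np.array(Image.open('..\\immagini_test_small\\meme3.jpg'))
white_text = pytesseract.image_to_string(preprocess_final(img), lang='eng', config=custom_config)
black_text = pytesseract.image_to_string(extract_black_text(img), lang='eng', config=custom_config)
text = (white_text + ' ' + black_text).replace('\n', ' ').strip()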
|
[
{
"code": null,
"e": 201,
"s": 171,
"text": "Written with Lorenzo Baiocco."
},
{
"code": null,
"e": 644,
"s": 201,
"text": "Extracting text information from an image can serve different scopes.In our case, we needed to extract text to enhance the performance of our multi-modal sentiment classification model based on tweets accompanied by images. Since we found that the most common reaction pic that can be found on social media are formatted as MEMEs, we developed a pipeline to extract text from images formatted like that, and in this article, we’ll present it."
},
{
"code": null,
"e": 834,
"s": 644,
"text": "Currently (Nov 2020), the state of the art in text extraction through OCR methods is represented by Google Tesseract OCR, which is the most used open-source software to deal with this task."
},
{
"code": null,
"e": 951,
"s": 834,
"text": "Tesseract is easy to install (following this link) and use in a python environment, through the pytesseract library."
},
{
"code": null,
"e": 1007,
"s": 951,
"text": "The environment used for this article is the following:"
},
{
"code": null,
"e": 1020,
"s": 1007,
"text": "Python 3.7.9"
},
{
"code": null,
"e": 1036,
"s": 1020,
"text": "Tesseract 5.0.0"
},
{
"code": null,
"e": 1048,
"s": 1036,
"text": "Pytesseract"
},
{
"code": null,
"e": 1055,
"s": 1048,
"text": "Pillow"
},
{
"code": null,
"e": 1066,
"s": 1055,
"text": "Matplotlib"
},
{
"code": null,
"e": 1073,
"s": 1066,
"text": "OpenCV"
},
{
"code": null,
"e": 1079,
"s": 1073,
"text": "Numpy"
},
{
"code": null,
"e": 1132,
"s": 1079,
"text": "The first thing to do is to import all the packages:"
},
{
"code": null,
"e": 1381,
"s": 1132,
"text": "from PIL import Imageimport numpy as npimport matplotlib.pyplot as pltimport cv2import pytesseract#change this path if you install pytesseract in another folder:pytesseract.pytesseract.tesseract_cmd = r'C:\\Program Files\\Tesseract-OCR\\tesseract.exe'"
},
{
"code": null,
"e": 1518,
"s": 1381,
"text": "We are ready to extract some text! Let’s load a dummy meme to avoid breaking copyright (anyway, this meme has been done with imageflip)."
},
{
"code": null,
"e": 1702,
"s": 1518,
"text": "im = np.array(Image.open('..\\\\immagini_test_small\\\\meme3.jpg'))plt.figure(figsize=(10,10))plt.title('PLAIN IMAGE')plt.imshow(im); plt.xticks([]); plt.yticks([])plt.savefig('img1.png')"
},
{
"code": null,
"e": 1744,
"s": 1702,
"text": "To search for text is as easy as it goes:"
},
{
"code": null,
"e": 1813,
"s": 1744,
"text": "text = pytesseract.image_to_string(im)print(text.replace(‘\\n’, ‘ ‘))"
},
{
"code": null,
"e": 2574,
"s": 1813,
"text": "And the result is...” a a TTY SHE aT aN Sa ithe Pet”. Woooah something is going on. Tesseract is not working. The problem is that Tesseract is optimized to recognize text in typical text documents, so it can be difficult to recognize text within the image without preprocessing it. The Tesseract documentation provides a lot of ways to enhance the quality of our images. After some fine-tuning, we found a very clean method that worked perfectly on the subset of data that we used. Before diving into it, you need to know that, although these operations worked very well for our data, there are a couple of parameters that were tuned empirically. If you have a dataset that is different from the one that we used, it could be helpful to tune them a little bit."
},
{
"code": null,
"e": 2791,
"s": 2574,
"text": "The first function that we applied to our image is bilateral filtering. If you want to understand deeply how it works, there is a nice tutorial on OpenCV site, and you can find the description of the parameters here."
},
{
"code": null,
"e": 3189,
"s": 2791,
"text": "In a nutshell, this filter helps to remove the noise, but, in contrast with other filters, preserves edges instead of blurring them. This operation is performed by excluding from the blurring of a point the neighbors that do not present similar intensities. With the chosen parameters, the difference from the other image is not strongly perceptible, however, it led to a better final performance."
},
{
"code": null,
"e": 3371,
"s": 3189,
"text": "im= cv2.bilateralFilter(im,5, 55,60)plt.figure(figsize=(10,10))plt.title('BILATERAL FILTER')plt.imshow(im); plt.xticks([]); plt.yticks([])plt.savefig('img2.png',bbox_inches='tight')"
},
{
"code": null,
"e": 3451,
"s": 3371,
"text": "The second operation it’s pretty clear: we project our RGB images in grayscale."
},
{
"code": null,
"e": 3650,
"s": 3451,
"text": "im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)plt.figure(figsize=(10,10))plt.title('GRAYSCALE IMAGE')plt.imshow(im, cmap='gray'); plt.xticks([]); plt.yticks([])plt.savefig('img3.png',bbox_inches='tight')"
},
{
"code": null,
"e": 4242,
"s": 3650,
"text": "The last transformation is binarization. For every pixel, the same threshold value is applied. If the pixel value is smaller than the threshold, it is set to 0, otherwise, it is set to 255. Since we have white text, we want to blackout everything is not almost perfectly white (not exactly perfect since usually text is not “255-white”. We found that 240 was a threshold that could do the work. Since tesseract is trained to recognize black text, we also need to invert the colors. The function threshold from OpenCV can do the two operations jointly, by selecting the inverted binarization."
},
{
"code": null,
"e": 4440,
"s": 4242,
"text": "_, im = cv2.threshold(im, 240, 255, 1) plt.figure(figsize=(10,10))plt.title('IMMAGINE BINARIA')plt.imshow(im, cmap='gray'); plt.xticks([]); plt.yticks([])plt.savefig('img4.png',bbox_inches='tight')"
},
{
"code": null,
"e": 4510,
"s": 4440,
"text": "This is the input that we want! Let’s put everything into a function:"
},
{
"code": null,
"e": 4677,
"s": 4510,
"text": "def preprocess_finale(im): im= cv2.bilateralFilter(im,5, 55,60) im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) _, im = cv2.threshold(im, 240, 255, 1) return im"
},
{
"code": null,
"e": 4837,
"s": 4677,
"text": "Before running our tesseract on the final image, we can tune it a little bit to optimize the configuration. (These lists come directly from the documentation)."
},
{
"code": null,
"e": 4876,
"s": 4837,
"text": "There are three OEM(OCR Engine modes):"
},
{
"code": null,
"e": 4898,
"s": 4876,
"text": "0 Legacy engine only."
},
{
"code": null,
"e": 4930,
"s": 4898,
"text": "1 Neural nets LSTM engine only."
},
{
"code": null,
"e": 4955,
"s": 4930,
"text": "2 Legacy + LSTM engines."
},
{
"code": null,
"e": 4994,
"s": 4955,
"text": "3 Default, based on what is available."
},
{
"code": null,
"e": 5037,
"s": 4994,
"text": "And thirteen PSM(Page segmentation modes):"
},
{
"code": null,
"e": 5084,
"s": 5037,
"text": "0 Orientation and script detection (OSD) only."
},
{
"code": null,
"e": 5124,
"s": 5084,
"text": "1 Automatic page segmentation with OSD."
},
{
"code": null,
"e": 5193,
"s": 5124,
"text": "2 Automatic page segmentation, but no OSD, or OCR. (not implemented)"
},
{
"code": null,
"e": 5252,
"s": 5193,
"text": "3 Fully automatic page segmentation, but no OSD. (Default)"
},
{
"code": null,
"e": 5304,
"s": 5252,
"text": "4 Assume a single column of text of variable sizes."
},
{
"code": null,
"e": 5364,
"s": 5304,
"text": "5 Assume a single uniform block of vertically aligned text."
},
{
"code": null,
"e": 5405,
"s": 5364,
"text": "6 Assume a single uniform block of text."
},
{
"code": null,
"e": 5446,
"s": 5405,
"text": "7 Treat the image as a single text line."
},
{
"code": null,
"e": 5482,
"s": 5446,
"text": "8 Treat the image as a single word."
},
{
"code": null,
"e": 5530,
"s": 5482,
"text": "9 Treat the image as a single word in a circle."
},
{
"code": null,
"e": 5572,
"s": 5530,
"text": "10 Treat the image as a single character."
},
{
"code": null,
"e": 5642,
"s": 5572,
"text": "11 Sparse text. Find as much text as possible in no particular order."
},
{
"code": null,
"e": 5667,
"s": 5642,
"text": "12 Sparse text with OSD."
},
{
"code": null,
"e": 5764,
"s": 5667,
"text": "13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific."
},
{
"code": null,
"e": 6024,
"s": 5764,
"text": "In bald what we found to work better. Furthermore, we decided to give tesseract a whitelist of acceptable character, since we preferred to have only the capital letters in other to avoid small text and strange characters that are sometimes found by tesseract."
},
{
"code": null,
"e": 6118,
"s": 6024,
"text": "custom_config = r\"--oem 3 --psm 11 -c tessedit_char_whitelist= 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '\""
},
{
"code": null,
"e": 6169,
"s": 6118,
"text": "Now we can check if everything is finally working:"
},
{
"code": null,
"e": 6357,
"s": 6169,
"text": "img=np.array(Image.open('..\\\\immagini_test_small\\\\meme3.jpg'))im=preprocess_final(img)text = pytesseract.image_to_string(im, lang='eng', config=custom_config)print(text.replace('\\n', ''))"
},
{
"code": null,
"e": 6470,
"s": 6357,
"text": "And the answer is...” WHEN TOWARDS DATA SCIENCE REFUSES YOUR ARTICLE”.Et voilà! Now everything works perfectly."
}
] |
DoubleStream average() method in Java
|
The average() method of the DoubleStream class in Java returns an OptionalDouble which is the arithmetic mean of elements of this stream. If the stream is empty, empty is returned.
The syntax is as follows:
OptionalDouble average()
Here, OptionalDouble is a container object which may or may not contain a double value.
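For instance, an illustrative snippet (assuming java.util.OptionalDouble is imported) shows how such a container behaves:
OptionalDouble present = OptionalDouble.of(55.9);
OptionalDouble empty = OptionalDouble.empty();
System.out.println(present.isPresent());    // true
System.out.println(present.getAsDouble());  // 55.9
System.out.println(empty.orElse(0.0));      // 0.0, the fallback, since no value is present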
To use the DoubleStream class in Java, import the following package:
import java.util.stream.DoubleStream;
First, create DoubleStream and add some elements:
DoubleStream doubleStream = DoubleStream.of(50.8, 35.7, 49.5,12.7, 89.7, 97.4);
Get the average of the elements of the stream:
OptionalDouble res = doubleStream.average();
The following is an example to implement DoubleStream average() method in Java:
import java.util.*;
import java.util.stream.DoubleStream;
public class Demo {
public static void main(String[] args) {
DoubleStream doubleStream = DoubleStream.of(50.8, 35.7, 49.5,12.7, 89.7, 97.4);
OptionalDouble res = doubleStream.average();
System.out.println("Average of the elements of the stream...");
if (res.isPresent()) {
System.out.println(res.getAsDouble());
} else {
System.out.println("Nothing!");
}
}
}
Average of the elements of the stream...
55.96666666666667
Let us see another example:
import java.util.*;
import java.util.stream.DoubleStream;
public class Demo {
public static void main(String[] args) {
DoubleStream doubleStream = DoubleStream.empty();
OptionalDouble res = doubleStream.average();
if (res.isPresent()) {
System.out.println(res.getAsDouble());
} else {
System.out.println("Nothing! Stream is empty!");
}
}
}
The following is the output displaying nothing since the stream is empty:
Nothing! Stream is empty!
|
[
{
"code": null,
"e": 1243,
"s": 1062,
"text": "The average() method of the DoubleStream class in Java returns an OptionalDouble which is the arithmetic mean of elements of this stream. If the stream is empty, empty is returned."
},
{
"code": null,
"e": 1269,
"s": 1243,
"text": "The syntax is as follows:"
},
{
"code": null,
"e": 1294,
"s": 1269,
"text": "OptionalDouble average()"
},
{
"code": null,
"e": 1382,
"s": 1294,
"text": "Here, OptionalDouble is a container object which may or may not contain a double value."
},
{
"code": null,
"e": 1451,
"s": 1382,
"text": "To use the DoubleStream class in Java, import the following package:"
},
{
"code": null,
"e": 1489,
"s": 1451,
"text": "import java.util.stream.DoubleStream;"
},
{
"code": null,
"e": 1539,
"s": 1489,
"text": "First, create DoubleStream and add some elements:"
},
{
"code": null,
"e": 1619,
"s": 1539,
"text": "DoubleStream doubleStream = DoubleStream.of(50.8, 35.7, 49.5,12.7, 89.7, 97.4);"
},
{
"code": null,
"e": 1666,
"s": 1619,
"text": "Get the average of the elements of the stream:"
},
{
"code": null,
"e": 1711,
"s": 1666,
"text": "OptionalDouble res = doubleStream.average();"
},
{
"code": null,
"e": 1791,
"s": 1711,
"text": "The following is an example to implement DoubleStream average() method in Java:"
},
{
"code": null,
"e": 1802,
"s": 1791,
"text": " Live Demo"
},
{
"code": null,
"e": 2279,
"s": 1802,
"text": "import java.util.*;\nimport java.util.stream.DoubleStream;\npublic class Demo {\n public static void main(String[] args) {\n DoubleStream doubleStream = DoubleStream.of(50.8, 35.7, 49.5,12.7, 89.7, 97.4);\n OptionalDouble res = doubleStream.average();\n System.out.println(\"Average of the elements of the stream...\");\n if (res.isPresent()) {\n System.out.println(res.getAsDouble());\n } else {\n System.out.println(\"Nothing!\");\n }\n }\n}"
},
{
"code": null,
"e": 2338,
"s": 2279,
"text": "Average of the elements of the stream...\n55.96666666666667"
},
{
"code": null,
"e": 2366,
"s": 2338,
"text": "Let us see another example:"
},
{
"code": null,
"e": 2377,
"s": 2366,
"text": " Live Demo"
},
{
"code": null,
"e": 2771,
"s": 2377,
"text": "import java.util.*;\nimport java.util.stream.DoubleStream;\npublic class Demo {\n public static void main(String[] args) {\n DoubleStream doubleStream = DoubleStream.empty();\n OptionalDouble res = doubleStream.average();\n if (res.isPresent()) {\n System.out.println(res.getAsDouble());\n } else {\n System.out.println(\"Nothing! Stream is empty!\");\n }\n }\n}"
},
{
"code": null,
"e": 2845,
"s": 2771,
"text": "The following is the output displaying nothing since the stream is empty:"
},
{
"code": null,
"e": 2871,
"s": 2845,
"text": "Nothing! Stream is empty!"
}
] |
QR Matrix Factorization. Least Squares and Computation (with R... | by Ben Denis Shaffer | Towards Data Science
|
There are a couple of matrix factorizations, also called decompositions, that every Data Scientist should be very familiar with. These are important because they help us find methods for actually computing and estimating results for the models and algorithms we use. In some cases, a particular form of factorization is the algorithm (e.g., PCA and SVD). In all cases, matrix factorizations help develop intuition and the ability to be analytical.
The QR factorization is one of these matrix factorizations that is very useful and has very important applications in Data Science, Statistics, and Data Analysis. One of these applications is the computation of the solution to the Least Squares (LS) Problem.
Recap the Least Squares Problem
Introduce the QR matrix factorization
Solve the LS using QR
Implement the QR computation with R and C++ and compare.
The QR matrix decomposition allows us to compute the solution to the Least Squares problem. I emphasize compute because OLS gives us the closed-form solution in the form of the normal equations. That is great, but when you want to find the actual numerical solution they aren’t really useful.
Here is a recap of the Least Squares problem. We want to solve the equation below
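(The displayed equation, an image in the original, presumably reads:)

y = X\beta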
The problem is that we can’t solve for β because usually if we have more observations than variables X doesn’t have an inverse and the following can’t be done:
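(Presumably:)

\hat{\beta} = X^{-1} y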
Instead, we try to find some β̂ that solves this equation, not perfectly, but with as little error as possible. One way to do that is to minimize the following objective function, which is a function of β̂.
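(And the objective function, presumably:)

f(\hat{\beta}) = \lVert y - X\hat{\beta} \rVert^2 = (y - X\hat{\beta})^T (y - X\hat{\beta})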
Minimizing this sum of squared deviations is why the problem is called the Least Squares problem. Taking derivatives with respect to β̂ and setting to zero will lead you to the normal equations and provide you with a closed-form solution.
That is one way to do it. But we could also just use Linear Algebra. This is where the QR matrix decomposition comes in and saves the day.
First, let’s just go ahead and describe what this decomposition is. The QR matrix decomposition allows one to express a matrix as a product of two separate matrices, Q, and R.
Q is an orthogonal matrix and R is a square upper/right triangular matrix.
This means that
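(The missing display is presumably the orthogonality identity:)

Q^T Q = I \quad \Rightarrow \quad Q^{-1} = Q^T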
And since R is square, as long as the diagonal entries don’t have a zero, it is also invertible. If columns of X are linearly independent then this will always be the case. Though if there is collinearity in the data then problems can still arise. That aside though, what this QR factorization implies is that a rectangular and non-invertible X can be expressed as two invertible matrices! This is bound to be useful.
Now that we know about the QR factorization, once we can actually find it, we will be able to solve the LS problem in the following way:
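(Reconstructing the displayed derivation:)

y = X\hat{\beta} = Q R \hat{\beta} \;\Rightarrow\; Q^T y = Q^T Q R \hat{\beta} = R \hat{\beta}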
so
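\hat{\beta} = R^{-1} Q^T y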
This means that all we need to do is find an inverse of R, transpose Q, and take the product. That will produce the OLS coefficients. We don’t even need to compute the Variance-Covariance matrix and its inverse which is how OLS solutions are usually presented.
The way to find the QR factors of a matrix is to use the Gram-Schmidt process to first find Q. Then to find R we just multiply the original matrix by the transpose of Q. Let’s go ahead and do the QR using functions implemented in R and C++. Later we can look inside these functions to get a better picture of what is going on.
I am loading in two functions, myQRR and myQRCpp, that use the Gram-Schmidt process to do the QR factorization. One function is written in R and the other in C++ and loaded into the R environment via Rcpp. Later I will compare their performance.
library(Rcpp)
library(tidyverse)
library(microbenchmark)
sourceCpp("../source-code/myQRC.cpp")
source("../source-code/myQRR.R")
Let’s begin with a small example where we simulate y and X and then solve it using the QR decomposition. We can also double check that the QR decomposition actually works and gives back the X we simulated.
Here is our simulated response variable.
y = rnorm(6)
y
## [1] 0.6914727 2.4810138 0.4049580 0.3117301 0.6084374 1.4778950
Here is the data that we will use to solve for the LS coefficients. We have 3 variables at our disposal.
X = matrix(c(3, 2, 3, 2, -1, 4, 5, 1, -5, 4, 2, 1, 9, -3, 2, -1, 8, 1), ncol = 3)
X
##      [,1] [,2] [,3]
## [1,]    3    5    9
## [2,]    2    1   -3
## [3,]    3   -5    2
## [4,]    2    4   -1
## [5,]   -1    2    8
## [6,]    4    1    1
Now I will use myQRCpp to find Q and R.
1. You can see that R is indeed upper triangular.
Q = myQRCpp(X)$Q
R = t(Q) %*% X %>% round(14)
R
##          [,1]     [,2]      [,3]
## [1,] 6.557439 1.829983  3.202470
## [2,] 0.000000 8.285600  4.723802
## [3,] 0.000000 0.000000 11.288484
Q
##            [,1]        [,2]        [,3]
## [1,]  0.4574957  0.50241272  0.45724344
## [2,]  0.3049971  0.05332872 -0.37459932
## [3,]  0.4574957 -0.70450052  0.34218986
## [4,]  0.3049971  0.41540270 -0.34894183
## [5,] -0.1524986  0.27506395  0.63684585
## [6,]  0.6099943 -0.01403387 -0.07859294
2. Here we can verify that Q is in fact Orthogonal.
t(Q) %*% Q %>% round(14)
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
3. And that QR really does give back the original X matrix.
Q %*% R
##      [,1] [,2] [,3]
## [1,]    3    5    9
## [2,]    2    1   -3
## [3,]    3   -5    2
## [4,]    2    4   -1
## [5,]   -1    2    8
## [6,]    4    1    1
Now, let’s compute the actual coefficients.
beta_qr = solve(R) %*% t(Q) %*% y
beta_qr
##             [,1]
## [1,]  0.32297414
## [2,]  0.07255123
## [3,] -0.02764668
To check that this is the correct solution we can compare the computed β̂ to what the lm function gives us.
coef(lm(y ~ -1 + ., data = data.frame(cbind(y,X))))
##          V2          V3          V4
##  0.32297414  0.07255123 -0.02764668
Clearly we get the exact same solution for the estimated coefficients.
The Gram–Schmidt process is a method for computing an orthogonal matrix Q that is made up of orthogonal/independent unit vectors and spans the same space as the original matrix X.
This algorithm involves picking a column vector of X, say x1 = u1 as the initial step.
Then we find a vector orthogonal to u1 by projecting the next column of X, say x2 onto it and, subtracting the projection from it u2 = x2 − proj u1x2. Now we have a set of two orthogonal vectors. In a previous post, I covered the details of why this works.
The next step is to proceed in the same way but subtract the sum of projections onto each vector in the set of orthogonal vectors uk.
We can express this as follows:
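(The displayed Gram-Schmidt formulas presumably read:)

u_1 = x_1, \qquad u_k = x_k - \sum_{j=1}^{k-1} \operatorname{proj}_{u_j} x_k, \qquad \text{where } \operatorname{proj}_{u} x = \frac{\langle u, x \rangle}{\langle u, u \rangle}\, u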
Once we have the full set of orthogonal vectors, we simply divide each by its norm and put them in a matrix:
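(Presumably:)

Q = \left[\, \frac{u_1}{\lVert u_1 \rVert},\; \frac{u_2}{\lVert u_2 \rVert},\; \ldots,\; \frac{u_n}{\lVert u_n \rVert} \,\right]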
Once we have Q we can solve for R easily
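(Given X = QR and Q^T Q = I, presumably:)

R = Q^T X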
Of course, there is a built-in function in R that will do the QR matrix decomposition for you. Because the GS algorithm above is iterative in nature I decided to implement it in C++ which is a good tool for something like this, and compare it to an equivalent R function. Here is what my R version looks like:
myQR = function(A){
  dimU = dim(A)
  U = matrix(nrow = dimU[1], ncol = dimU[2])
  U[,1] = A[,1]
  for(k in 2:dimU[2]){
    subt = 0
    j = 1
    while(j < k){
      subt = subt + proj(U[,j], A[,k])
      j = j + 1
    }
    U[,k] = A[,k] - subt
  }
  Q = apply(U, 2, function(x) x/sqrt(sum(x^2)))
  R = round(t(Q) %*% A, 10)
  return(list(Q = Q, R = R, U = U))
}
It is very literal. There is a while loop inside a for loop, and the projection function being called is also a function written in R.
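The proj helper itself is not listed in this post; judging from how it is called, it is presumably something along these lines:
# projection of vector a onto vector u: (<u, a> / <u, u>) * u
proj = function(u, a){
  as.numeric(crossprod(u, a) / crossprod(u, u)) * u
}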
This is what my C++ version looks like. The logic is essentially the same except there is another for loop to normalize the orthogonal columns.
// [[Rcpp::export]]
List myQRCpp(NumericMatrix A) {
  int a = A.rows();
  int b = A.cols();
  NumericMatrix U(a,b);
  NumericMatrix Q(a,b);
  NumericMatrix R(a,b);
  NumericMatrix::Column Ucol1 = U(_ , 0);
  NumericMatrix::Column Acol1 = A(_ , 0);
  Ucol1 = Acol1;
  for(int i = 1; i < b; i++){
    NumericMatrix::Column Ucol = U(_ , i);
    NumericMatrix::Column Acol = A(_ , i);
    NumericVector subt(a);
    int j = 0;
    while(j < i){
      NumericVector uj = U(_ , j);
      NumericVector ai = A(_ , i);
      subt = subt + projC(uj, ai);
      j++;
    }
    Ucol = Acol - subt;
  }
  for(int i = 0; i < b; i++){
    NumericMatrix::Column ui = U(_ , i);
    NumericMatrix::Column qi = Q(_ , i);
    double sum2_ui = 0;
    for(int j = 0; j < a; j++){
      sum2_ui = sum2_ui + ui[j]*ui[j];
    }
    qi = ui/sqrt(sum2_ui);
  }
  List L = List::create(Named("Q") = Q , _["U"] = U);
  return L;
}
In addition to the two functions above, I have a third function that is identical to the R one except that it calls projC instead of proj. I name this function myQRC. (projC is written in C++ while proj is written in R). Otherwise, we have a purely C++ function myQRCpp and a purely R function myQR.
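Likewise, projC is not shown here; a minimal Rcpp sketch consistent with how it is used above might be:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector projC(NumericVector u, NumericVector a) {
  // projection of a onto u: (<u, a> / <u, u>) * u
  double num = sum(u * a);
  double den = sum(u * u);
  return (num / den) * u;
}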
To compare how quickly these three functions perform the QR factorization I put them in a function QR_comp that calls and times each with the same matrix argument.
QR_comp = function(A){
  t0 = Sys.time()
  myQR(A)
  tQR = Sys.time() - t0
  t0 = Sys.time()
  myQRC(A)
  tQRC = Sys.time() - t0
  t0 = Sys.time()
  myQRCpp(A)
  tQRCpp = Sys.time() - t0
  return(data.frame(tQR = as.numeric(tQR),
                    tQRC = as.numeric(tQRC),
                    tQRCpp = as.numeric(tQRCpp)))
}
We can compare their performance over a grid of n by m random matrices. These matrices are generated when calling the QR_comp function.
grid = expand.grid(n = seq(10, 3010, 500), m = seq(50, 600, 50))
tvec = map2(grid$n, grid$m, ~QR_comp(matrix(runif(.x*.y), ncol = .y)))
Finally, we can visually assess how these vary.
plotly::ggplotly(
  bind_rows(tvec) %>%
    gather("func", "time") %>%
    mutate(n = rep(grid$n, 3), m = rep(grid$m, 3)) %>%
    ggplot(aes(m, n, fill = time)) +
    geom_tile() +
    facet_grid(.~func) +
    scale_fill_gradientn(colours = rainbow(9)) +
    theme(panel.background = element_blank(),
          axis.ticks.y = element_blank(),
          axis.text.y = element_text(angle = 35, size = 5),
          axis.text.x = element_text(angle = 30, size = 5)),
  width = 550, height = 400)
Clearly, the more C++ is involved, the faster the QR factorization can be computed. The all-C++ function solves in under a minute for matrices with up to 250 columns and 3000 rows, or 600 columns and 500 rows. The R function is 2-3 times slower.
QR is just one matrix factorization and LS is just one application of the QR. Hopefully, the above discussion demonstrates how important and useful Linear Algebra is for data science. In the future, I’ll cover another application of the QR factorization and move on to some other important factorizations like the Eigenvalue and the SVD decompositions.
Additionally, you can tell that I am using R and C++ to implement these methods computationally. I hope that this is useful and will inspire other R users like myself to learn C++ and Rcpp and have that in their toolkit to make their R work even more powerful.
Thanks for reading!
|
[
{
"code": null,
"e": 614,
"s": 172,
"text": "There are a couple of matrix factorizations, also called decomposition, that every Data Scientist should be very familiar with. These are important because they help find methods for actually computing and estimating results for the models and algorithms we use. In some cases, a particular form of factorization is the algorithm (ex. PCA and SVD). In all cases, matrix factorizations help develop intuition and the ability to be analytical."
},
{
"code": null,
"e": 873,
"s": 614,
"text": "The QR factorization is one of these matrix factorizations that is very useful and has very important applications in Data Science, Statistics, and Data Analysis. One of these applications is the computation of the solution to the Least Squares (LS) Problem."
},
{
"code": null,
"e": 905,
"s": 873,
"text": "Recap the Least Squares Problem"
},
{
"code": null,
"e": 943,
"s": 905,
"text": "Introduce the QR matrix factorization"
},
{
"code": null,
"e": 965,
"s": 943,
"text": "Solve the LS using QR"
},
{
"code": null,
"e": 1022,
"s": 965,
"text": "Implement the QR computation with R and C++ and compare."
},
{
"code": null,
"e": 1315,
"s": 1022,
"text": "The QR matrix decomposition allows us to compute the solution to the Least Squares problem. I emphasize compute because OLS gives us the closed from solution in the form of the normal equations. That is great, but when you want to find the actual numerical solution they aren’t really useful."
},
{
"code": null,
"e": 1397,
"s": 1315,
"text": "Here is a recap of the Least Squares problem. We want to solve the equation below"
},
{
"code": null,
"e": 1557,
"s": 1397,
"text": "The problem is that we can’t solve for β because usually if we have more observations than variables X doesn’t have an inverse and the following can’t be done:"
},
{
"code": null,
"e": 1764,
"s": 1557,
"text": "Instead, we try to find some β̂ that solves this equation, not perfectly, but with as little error as possible. One way to do that is to minimize the following objective function, which is a function of β̂."
},
{
"code": null,
"e": 2003,
"s": 1764,
"text": "Minimizing this sum of squared deviations is why the problem is called the Least Squares problem. Taking derivatives with respect to β̂ and setting to zero will lead you to the normal equations and provide you with a closed-form solution."
},
{
"code": null,
"e": 2142,
"s": 2003,
"text": "That is one way to do it. But we could also just use Linear Algebra. This is where the QR matrix decomposition comes in and saves the day."
},
{
"code": null,
"e": 2318,
"s": 2142,
"text": "First, let’s just go ahead and describe what this decomposition is. The QR matrix decomposition allows one to express a matrix as a product of two separate matrices, Q, and R."
},
{
"code": null,
"e": 2393,
"s": 2318,
"text": "Q in an orthogonal matrix and R is a square upper/right triangular matrix."
},
{
"code": null,
"e": 2409,
"s": 2393,
"text": "This means that"
},
{
"code": null,
"e": 2827,
"s": 2409,
"text": "And since R is square, as long as the diagonal entries don’t have a zero, it is also invertible. If columns of X are linearly independent then this will always be the case. Though if there is collinearity in the data then problems can still arise. That aside though, what this QR factorization implies is that a rectangular and non-invertible X can be expressed as two invertible matrices! This is bound to be useful."
},
{
"code": null,
"e": 2964,
"s": 2827,
"text": "Now that we know about the QR factorization, once we can actually find it, we will be able to solve the LS problem in the following way:"
},
{
"code": null,
"e": 2967,
"s": 2964,
"text": "so"
},
{
"code": null,
"e": 3228,
"s": 2967,
"text": "This means that all we need to do is find an inverse of R, transpose Q, and take the product. That will produce the OLS coefficients. We don’t even need to compute the Variance-Covariance matrix and its inverse which is how OLS solutions are usually presented."
},
{
"code": null,
"e": 3555,
"s": 3228,
"text": "The way to find the QR factors of a matrix is to use the Gram-Schmidt process to first find Q. Then to find R we just multiply the original matrix by the transpose of Q. Let’s go ahead and do the QR using functions implemented in R and C++. Later we can look inside these functions to get a better picture of what is going on."
},
{
"code": null,
"e": 3798,
"s": 3555,
"text": "I am loading in two functions. myQRR and myQRCpp that use the Gram-Schmidt process to do the QR factorization. One function is written in Rand the other in C++ and loaded into the Renvironment via Rcpp. Later I will compare their performance."
},
{
"code": null,
"e": 3922,
"s": 3798,
"text": "library(Rcpp)library(tidyverse)library(microbenchmark)sourceCpp(\"../source-code/myQRC.cpp\")source(\"../source-code/myQRR.R\")"
},
{
"code": null,
"e": 4128,
"s": 3922,
"text": "Let’s begin with a small example where we simulate y and X and then solve it using the QR decomposition. We can also double check that the QR decomposition actually works and gives back the X we simulated."
},
{
"code": null,
"e": 4169,
"s": 4128,
"text": "Here is our simulated response variable."
},
{
"code": null,
"e": 4249,
"s": 4169,
"text": "y = rnorm(6)y## [1] 0.6914727 2.4810138 0.4049580 0.3117301 0.6084374 1.4778950"
},
{
"code": null,
"e": 4354,
"s": 4249,
"text": "Here is the data that we will use to solve for the LS coefficients. We have 3 variables at our disposal."
},
{
"code": null,
"e": 4616,
"s": 4354,
"text": "X = matrix(c(3, 2, 3, 2, -1, 4, 5, 1, -5, 4, 2, 1, 9, -3, 2 , -1, 8, 1), ncol = 3)X## [,1] [,2] [,3]## [1,] 3 5 9## [2,] 2 1 -3## [3,] 3 -5 2## [4,] 2 4 -1## [5,] -1 2 8## [6,] 4 1 1"
},
{
"code": null,
"e": 4668,
"s": 4616,
"text": "Now I will use the myQRCpp to find the Q and the R."
},
{
"code": null,
"e": 4715,
"s": 4668,
"text": "You can see that R is indeed upper triangular."
},
{
"code": null,
"e": 4762,
"s": 4715,
"text": "You can see that R is indeed upper triangular."
},
{
"code": null,
"e": 5243,
"s": 4762,
"text": "Q = myQRCpp(X)$QR = t(Q) %*% X %>% round(14)R## [,1] [,2] [,3]## [1,] 6.557439 1.829983 3.202470## [2,] 0.000000 8.285600 4.723802## [3,] 0.000000 0.000000 11.288484Q## [,1] [,2] [,3]## [1,] 0.4574957 0.50241272 0.45724344## [2,] 0.3049971 0.05332872 -0.37459932## [3,] 0.4574957 -0.70450052 0.34218986## [4,] 0.3049971 0.41540270 -0.34894183## [5,] -0.1524986 0.27506395 0.63684585## [6,] 0.6099943 -0.01403387 -0.07859294"
},
{
"code": null,
"e": 5295,
"s": 5243,
"text": "2. Here we can verify that Q is in fact Orthogonal."
},
{
"code": null,
"e": 5406,
"s": 5295,
"text": "t(Q)%*%Q %>% round(14)## [,1] [,2] [,3]## [1,] 1 0 0## [2,] 0 1 0## [3,] 0 0 1"
},
{
"code": null,
"e": 5466,
"s": 5406,
"text": "3. And that QR really does give back the original X matrix."
},
{
"code": null,
"e": 5628,
"s": 5466,
"text": "Q %*% R## [,1] [,2] [,3]## [1,] 3 5 9## [2,] 2 1 -3## [3,] 3 -5 2## [4,] 2 4 -1## [5,] -1 2 8## [6,] 4 1 1"
},
{
"code": null,
"e": 5672,
"s": 5628,
"text": "Now, let’s compute the actual coefficients."
},
{
"code": null,
"e": 5789,
"s": 5672,
"text": "beta_qr = solve(R) %*% t(Q) %*% ybeta_qr## [,1]## [1,] 0.32297414## [2,] 0.07255123## [3,] -0.02764668"
},
{
"code": null,
"e": 5897,
"s": 5789,
"text": "To check that this is the correct solution we can compare the computed β̂ to what the lm function gives us."
},
{
"code": null,
"e": 6026,
"s": 5897,
"text": "coef(lm(y ~ -1 + ., data = data.frame(cbind(y,X))))## V2 V3 V4 ## 0.32297414 0.07255123 -0.02764668"
},
{
"code": null,
"e": 6097,
"s": 6026,
"text": "Clearly we get the exact same solution for the estimated coefficients."
},
{
"code": null,
"e": 6277,
"s": 6097,
"text": "The Gram–Schmidt process is a method for computing an orthogonal matrix Q that is made up of orthogonal/independent unit vectors and spans the same space as the original matrix X."
},
{
"code": null,
"e": 6364,
"s": 6277,
"text": "This algorithm involves picking a column vector of X, say x1 = u1 as the initial step."
},
{
"code": null,
"e": 6621,
"s": 6364,
"text": "Then we find a vector orthogonal to u1 by projecting the next column of X, say x2 onto it and, subtracting the projection from it u2 = x2 − proj u1x2. Now we have a set of two orthogonal vectors. In a previous post, I covered the details of why this works."
},
{
"code": null,
"e": 6755,
"s": 6621,
"text": "The next step is to proceed in the same way but subtract the sum of projections onto each vector in the set of orthogonal vectors uk."
},
{
"code": null,
"e": 6787,
"s": 6755,
"text": "We can express this as follows:"
},
{
"code": null,
"e": 6900,
"s": 6787,
"text": "ref. Once we have the full set of orthogonal vectors we simply divide each by its norm and put them in a matrix:"
},
{
"code": null,
"e": 6941,
"s": 6900,
"text": "Once we have Q we can solve for R easily"
},
{
"code": null,
"e": 7251,
"s": 6941,
"text": "Of course, there is a built-in function in R that will do the QR matrix decomposition for you. Because the GS algorithm above is iterative in nature I decided to implement it in C++ which is a good tool for something like this, and compare it to an equivalent R function. Here is what my R version looks like:"
},
{
"code": null,
"e": 7600,
"s": 7251,
"text": "myQR = function(A){ dimU = dim(A) U = matrix(nrow = dimU[1], ncol = dimU[2]) U[,1] = A[,1] for(k in 2:dimU[2]){ subt = 0 j = 1 while(j < k){ subt = subt + proj(U[,j], A[,k]) j = j + 1 } U[,k] = A[,k] - subt } Q = apply(U, 2, function(x) x/sqrt(sum(x^2))) R = round(t(Q) %*% A, 10) return(list(Q = Q, R = R, U = U))}"
},
{
"code": null,
"e": 7735,
"s": 7600,
"text": "It is very literal. There is a while loop inside a for loop, and the projection function being called is also a function written in R."
},
{
"code": null,
"e": 7879,
"s": 7735,
"text": "This is what my C++ version looks like. The logic is essentially the same except there is another for loop to normalize the orthogonal columns."
},
{
"code": null,
"e": 8758,
"s": 7879,
"text": "// [[Rcpp::export]]List myQRCpp(NumericMatrix A) { int a = A.rows(); int b = A.cols(); NumericMatrix U(a,b); NumericMatrix Q(a,b); NumericMatrix R(a,b); NumericMatrix::Column Ucol1 = U(_ , 0); NumericMatrix::Column Acol1 = A(_ , 0); Ucol1 = Acol1; for(int i = 1; i < b; i++){ NumericMatrix::Column Ucol = U(_ , i); NumericMatrix::Column Acol = A(_ , i); NumericVector subt(a); int j = 0; while(j < i){ NumericVector uj = U(_ , j); NumericVector ai = A(_ , i); subt = subt + projC(uj, ai); j++; } Ucol = Acol - subt; } for(int i = 0; i < b; i++){ NumericMatrix::Column ui = U(_ , i); NumericMatrix::Column qi = Q(_ , i); double sum2_ui = 0; for(int j = 0; j < a; j++){ sum2_ui = sum2_ui + ui[j]*ui[j]; } qi = ui/sqrt(sum2_ui); } List L = List::create(Named(\"Q\") = Q , _[\"U\"] = U); return L;}"
},
{
"code": null,
"e": 9058,
"s": 8758,
"text": "In addition to the two functions above, I have a third function that is identical to the R one except that it calls projC instead of proj. I name this function myQRC. (projC is written in C++ while proj is written in R). Otherwise, we have a purely C++ function myQRCpp and a purely R function myQR."
},
{
"code": null,
"e": 9222,
"s": 9058,
"text": "To compare how quickly these three functions perform the QR factorization I put them in a function QR_comp that calls and times each with the same matrix argument."
},
{
"code": null,
"e": 9539,
"s": 9222,
"text": "QR_comp = function(A){ t0 = Sys.time() myQR(A) tQR = Sys.time() - t0 t0 = Sys.time() myQRC(A) tQRC = Sys.time() - t0 t0 = Sys.time() myQRCpp(A) tQRCpp = Sys.time() - t0 return(data.frame(tQR = as.numeric(tQR), tQRC = as.numeric(tQRC), tQRCpp = as.numeric(tQRCpp)))}"
},
{
"code": null,
"e": 9675,
"s": 9539,
"text": "We can compare their performance over a grid of n by m random matrices. These matrices are generated when calling the QR_comp function."
},
{
"code": null,
"e": 9853,
"s": 9675,
"text": "grid = expand.grid(n = seq(10, 3010, 500), m = seq(50, 600, 50))tvec = map2(grid$n, grid$m, ~QR_comp(matrix(runif(.x*.y), ncol = .y)))"
},
{
"code": null,
"e": 9900,
"s": 9853,
"text": "Finally, we can visually asses how these vary."
},
{
"code": null,
"e": 10366,
"s": 9900,
"text": "plotly::ggplotly(bind_rows(tvec) %>% gather(\"func\",\"time\") %>% mutate(n = rep(grid$n, 3), m = rep(grid$m, 3)) %>% ggplot(aes(m, n, fill = time)) + geom_tile() + facet_grid(.~func) + scale_fill_gradientn(colours = rainbow(9)) + theme(panel.background = element_blank(), axis.ticks.y = element_blank(), axis.text.y = element_text(angle = 35, size = 5), axis.text.x = element_text(angle = 30, size = 5)), width = 550, heigh = 400)"
},
{
"code": null,
"e": 10606,
"s": 10366,
"text": "Clearly the more C++ involved the faster the QR factorization can be computed. The all C++ function solves in under a minute for matrices with up to 250 columns and 3000 rows or 600 columns and 500 rows. The R function is 2-3 times slower."
},
{
"code": null,
"e": 10957,
"s": 10606,
"text": "QR is just one matrix factorization and LS is just one application of the QR. Hopefully, the above discussion demonstrates how important and useful Linea Algebra is for data science. In the future, I’ll cover another application of the QR factorization and move onto some other important factorizations like the Eigenvalue and the SVD decompositions."
},
{
"code": null,
"e": 11217,
"s": 10957,
"text": "Additionally, you can tell that I am using R and C++ to implement these methods computationally. I hope that this is useful and will inspire other R users like myself to learn C++ and Rcpp and have that in their toolkit to make their Rwork even more powerful."
}
] |
Stock span problem | Practice | GeeksforGeeks
|
The stock span problem is a financial problem where we have a series of n daily price quotes for a stock and we need to calculate the span of the stock's price for all n days.
The span Si of the stock's price on a given day i is defined as the maximum number of consecutive days just before the given day for which the price of the stock on those days is less than or equal to its price on the given day.
For example, if an array of 7 days' prices is given as {100, 80, 60, 70, 60, 75, 85}, then the span values for the corresponding 7 days are {1, 1, 1, 2, 1, 4, 6}.
Example 1:
Input:
N = 7, price[] = [100 80 60 70 60 75 85]
Output:
1 1 1 2 1 4 6
Explanation:
Traversing the given input span for 100
will be 1, 80 is smaller than 100 so the
span is 1, 60 is smaller than 80 so the
span is 1, 70 is greater than 60 so the
span is 2 and so on. Hence the output will
be 1 1 1 2 1 4 6.
Example 2:
Input:
N = 6, price[] = [10 4 5 90 120 80]
Output:
1 1 2 4 5 1
Explanation:
Traversing the given input span for 10
will be 1, 4 is smaller than 10 so the
span will be 1, 5 is greater than 4 so
the span will be 2 and so on. Hence, the
output will be 1 1 2 4 5 1.
User Task:
The task is to complete the function calculateSpan() which takes two parameters, an array price[] denoting the price of stocks, and an integer N denoting the size of the array and number of days. This function finds the span of stock's price for all N days and returns an array of length N denoting the span for the i-th day.
Expected Time Complexity: O(N).
Expected Auxiliary Space: O(N).
Constraints:
1 ≤ N ≤ 105
1 ≤ C[i] ≤ 105
0
20bd5a052019 hours ago
class Node
{
int ele;
int idx;
Node(int ele,int idx)
{
this.ele=ele;
this.idx=idx;
}
}
class Solution
{
//Function to calculate the span of stock's price for all n days.
public static int[] calculateSpan(int stocks[], int n)
{
// Your code here
Stack<Node> st=new Stack<>();
int span[]=new int[stocks.length];
span[0]=1;
st.push(new Node(stocks[0],1));
for(int i=1;i<span.length;i++)
{
if(st.peek().ele>stocks[i])
{
span[i]=1;
st.push(new Node(stocks[i],span[i]));
}
else
{
while(!st.isEmpty()&&st.peek().ele<=stocks[i])
{
span[i]+=st.pop().idx;
}
span[i]+=1;
st.push(new Node(stocks[i],span[i]));
}
}
return span;
}
}
0
gfgraj2 days ago
vector <int> calculateSpan(int a[], int n)
{
    vector<int> v;
    v.push_back(1);
    stack<int> s;
    stack<int> ind;
    ind.push(0);
    s.push(a[0]);
    for(int i=1;i<n;i++){
        int flag=1;
        if(a[i]<s.top()){
            v.push_back(flag);
            s.push(a[i]);
            ind.push(i);
            continue;
        }
        while(!s.empty() && a[i]>=s.top()){
            s.pop();
            flag+=v[ind.top()];
            ind.pop();
        }
        s.push(a[i]);
        ind.push(i);
        v.push_back(flag);
    }
    return v;
}
0
hharshit81182 weeks ago
class Solution{
public:
    //Function to calculate the span of stock's price for all n days.
    vector <int> calculateSpan(int price[], int n)
    {
        // Your code here
        stack<int> s;
        s.push(0);
        vector<int> v;
        v.push_back(1);
        int i = 1;
        while(i<n){
            while(s.empty()==false && price[s.top()] <= price[i]){
                s.pop();
            }
            if(s.empty()==true) v.push_back(i+1);
            else v.push_back(i-s.top());
            s.push(i);
            i++;
        }
        return v;
    }
};
0
mamoonakhterrock20022 weeks ago
public static int[] calculateSpan(int price[], int n) {
    int S[] = new int[n];
    Stack<Integer> st = new Stack<>();
    st.push(0);

    // Span value of first element is always 1
    S[0] = 1;

    // Calculate span values for rest of the elements
    for (int i = 1; i < n; i++) {
        // Pop elements from stack while stack is not
        // empty and top of stack is smaller than price[i]
        while (!st.empty() && price[st.peek()] <= price[i])
            st.pop();

        // If stack becomes empty, then price[i] is greater than all
        // elements on left of it, i.e., price[0], price[1], ..price[i-1].
        // Else price[i] is greater than elements after top of stack
        S[i] = (st.empty()) ? (i + 1) : (i - st.peek());

        // Push this element to stack
        st.push(i);
    }
    return S;
}
0
pkb9825112 weeks ago
stack<pair<int, int>> s;
vector<int> ans;
for (int i = 0; i < n; i++) {
    int days = 1;
    while (!s.empty() && s.top().first <= price[i]) {
        days += s.top().second;
        s.pop();
    }
    s.push({price[i], days});
    ans.push_back(days);
}
return ans;
+1
archit232002 weeks ago
JAVA Solution Based on Aditya Verma's approach:-
[IMP]: If your original solution is not passing the test cases, it is most probably because you took "<" instead of "<=" in the conditions, which is very important for the correct result.
class Solution
{
    //Function to calculate the span of stock's price for all n days.
static class ValueNIndex{
int value;
int index;
ValueNIndex(){}
ValueNIndex(int value,int index){
this.value=value;
this.index=index;
}
int getValue(){
return this.value;
}
int getIndex(){
return this.index;
}
void setValInd(int value,int index){
this.value=value;
this.index=index;
}
}
public static int[] calculateSpan(int arr[], int n)
{
if(arr.length==0){
int ans[]={};
return ans;
}
if(arr.length==1){
int ans[]={1};
return ans;
}
Stack<ValueNIndex> st=new Stack<>();
ArrayList<Integer> ans =new ArrayList<>();
for(int i=0;i<arr.length;i++){
if(st.isEmpty()==false && st.peek().value<=arr[i]){
while(st.isEmpty()==false && st.peek().value<=arr[i]){
st.pop();
}
}
if(st.isEmpty()==false && st.peek().value>arr[i]){
ans.add(i-st.peek().index);
}
if(st.isEmpty()==true){
ans.add(i+1);
}
ValueNIndex el=new ValueNIndex(arr[i],i);
st.push(el);
}
int result[]=new int[ans.size()];
for(int i=0;i<ans.size();i++){
result[i]=ans.get(i);
}
return result;
}
}
+1
aloksinghbais022 weeks ago
C++ solution having time complexity as O(N) and space complexity as O(N) is as follows :-
Execution Time :- 1.04 / 2.32 sec
vector<int> calculateSpan(int price[], int n) {
    stack<int> stk;
    vector<int> ngel(n);
    for (int i = 0; i < n; i++) {
        while (!stk.empty() && price[stk.top()] <= price[i]) {
            stk.pop();
        }
        if (stk.empty()) ngel[i] = -1;
        else ngel[i] = stk.top();
        stk.push(i);
    }
    vector<int> ans(n);
    ans[0] = 1;
    for (int i = 1; i < n; i++) {
        if (ngel[i] == -1) ans[i] = i + 1;
        else {
            ans[i] = i - ngel[i];
        }
    }
    return (ans);
}
0
adityagagtiwari3 weeks ago
Simple solution using stacks and a pair for keeping the state!
class Pair
{
int value;
int span;
Pair(int value,int span)
{
this.value = value;
this.span = span;
}
}
class Solution
{
    //Function to calculate the span of stock's price for all n days.
public static int[] calculateSpan(int arr[], int n)
{
// Your code here
Stack<Pair> stack = new Stack<>();
int[] span =new int[n];
stack.push(new Pair(arr[0],1));
span[0] = 1;
for(int i=1;i<n;i++)
{
if(stack.peek().value>arr[i])
{
span[i] = 1;
stack.push(new Pair(arr[i],span[i]));
}
else
{
while(stack.size()>0 && arr[i]>=stack.peek().value)
{
Pair removed = stack.pop();
span[i]+=removed.span;
}
                //for including itself in the span
span[i]+=1;
stack.push(new Pair(arr[i],span[i]));
}
}
return span;
}
}
0
aakashsoni24863 weeks ago
vector<int> calculateSpan(int price[], int n) {
    vector<int> ans(n);
    stack<int> st;
    ans[0] = 1;
    st.push(0);
    for (int i = 1; i < n; i++) {
        while (!st.empty() && price[st.top()] <= price[i]) {
            st.pop();
        }
        if (st.empty()) {
            ans[i] = i + 1;
        }
        else {
            ans[i] = i - st.top();
        }
        st.push(i);
    }
    return ans;
}
0
himanshukug19cs3 weeks ago
java solution
int[] ans = new int[n];
Stack<Integer> st = new Stack<>();
st.push(0);
ans[0] = 1;
for (int i = 1; i < n; i++) {
    while (!st.empty() && price[i] >= price[st.peek()]) {
        st.pop();
    }
    if (st.empty())
        ans[i] = i + 1;
    else {
        ans[i] = i - st.peek();
    }
    st.push(i);
}
return ans;
|
[
{
"code": null,
"e": 802,
"s": 238,
"text": "The stock span problem is a financial problem where we have a series of n daily price quotes for a stock and we need to calculate the span of stocks price for all n days. \nThe span Si of the stocks price on a given day i is defined as the maximum number of consecutive days just before the given day, for which the price of the stock on the current day is less than or equal to its price on the given day.\nFor example, if an array of 7 days prices is given as {100, 80, 60, 70, 60, 75, 85}, then the span values for corresponding 7 days are {1, 1, 1, 2, 1, 4, 6}."
},
{
"code": null,
"e": 813,
"s": 802,
"text": "Example 1:"
},
{
"code": null,
"e": 1125,
"s": 813,
"text": "Input: \nN = 7, price[] = [100 80 60 70 60 75 85]\nOutput:\n1 1 1 2 1 4 6\nExplanation:\nTraversing the given input span for 100 \nwill be 1, 80 is smaller than 100 so the \nspan is 1, 60 is smaller than 80 so the \nspan is 1, 70 is greater than 60 so the \nspan is 2 and so on. Hence the output will \nbe 1 1 1 2 1 4 6.\n"
},
{
"code": null,
"e": 1136,
"s": 1125,
"text": "Example 2:"
},
{
"code": null,
"e": 1403,
"s": 1136,
"text": "Input: \nN = 6, price[] = [10 4 5 90 120 80]\nOutput:\n1 1 2 4 5 1\nExplanation:\nTraversing the given input span for 10 \nwill be 1, 4 is smaller than 10 so the \nspan will be 1, 5 is greater than 4 so \nthe span will be 2 and so on. Hence, the \noutput will be 1 1 2 4 5 1."
},
{
"code": null,
"e": 1740,
"s": 1403,
"text": "User Task:\nThe task is to complete the function calculateSpan() which takes two parameters, an array price[] denoting the price of stocks, and an integer N denoting the size of the array and number of days. This function finds the span of stock's price for all N days and returns an array of length N denoting the span for the i-th day."
},
{
"code": null,
"e": 1804,
"s": 1740,
"text": "Expected Time Complexity: O(N).\nExpected Auxiliary Space: O(N)."
},
{
"code": null,
"e": 1844,
"s": 1804,
"text": "Constraints:\n1 ≤ N ≤ 105\n1 ≤ C[i] ≤ 105"
},
{
"code": null,
"e": 1846,
"s": 1844,
"text": "0"
},
{
"code": null,
"e": 1869,
"s": 1846,
"text": "20bd5a052019 hours ago"
},
{
"code": null,
"e": 2765,
"s": 1869,
"text": "class Node\n{\n int ele;\n int idx;\n Node(int ele,int idx)\n {\n this.ele=ele;\n this.idx=idx;\n }\n}\nclass Solution\n{\n //Function to calculate the span of stockâ€TMs price for all n days.\n public static int[] calculateSpan(int stocks[], int n)\n {\n // Your code here\n Stack<Node> st=new Stack<>();\n\t int span[]=new int[stocks.length];\n\t span[0]=1;\n\t st.push(new Node(stocks[0],1));\n\t for(int i=1;i<span.length;i++)\n\t {\n\t if(st.peek().ele>stocks[i])\n\t {\n\t span[i]=1;\n\t st.push(new Node(stocks[i],span[i]));\n\t }\n\t else \n\t {\n\t while(!st.isEmpty()&&st.peek().ele<=stocks[i])\n\t {\n\t span[i]+=st.pop().idx;\n\t }\n\t span[i]+=1;\n\t st.push(new Node(stocks[i],span[i]));\n\t }\n\t }\n\t return span;\n \n \n }\n \n}"
},
{
"code": null,
"e": 2767,
"s": 2765,
"text": "0"
},
{
"code": null,
"e": 2784,
"s": 2767,
"text": "gfgraj2 days ago"
},
{
"code": null,
"e": 3436,
"s": 2784,
"text": " vector <int> calculateSpan(int a[], int n) { vector<int>v; v.push_back(1); stack<int>s; stack<int>ind; ind.push(0); s.push(a[0]); for(int i=1;i<n;i++){ int flag=1; if(a[i]<s.top()){ v.push_back(flag); s.push(a[i]); ind.push(i); continue; } while(!s.empty() && a[i]>=s.top()){ s.pop(); flag+=v[ind.top()]; ind.pop(); } s.push(a[i]); ind.push(i); v.push_back(flag); } return v; }"
},
{
"code": null,
"e": 3438,
"s": 3436,
"text": "0"
},
{
"code": null,
"e": 3462,
"s": 3438,
"text": "hharshit81182 weeks ago"
},
{
"code": null,
"e": 4006,
"s": 3462,
"text": "class Solution{ public: //Function to calculate the span of stockâ€TMs price for all n days. vector <int> calculateSpan(int price[], int n) { // Your code here stack<int> s; s.push(0); vector<int> v; v.push_back(1); int i =1; while(i<n){ while(s.empty()==false && price[s.top()] <= price[i]){ s.pop(); } if(s.empty()==true) v.push_back(i+1); else v.push_back(i-s.top()); s.push(i); i++; } return v; }};"
},
{
"code": null,
"e": 4008,
"s": 4006,
"text": "0"
},
{
"code": null,
"e": 4040,
"s": 4008,
"text": "mamoonakhterrock20022 weeks ago"
},
{
"code": null,
"e": 4956,
"s": 4040,
"text": " public static int[] calculateSpan(int price[], int n) { int S[] = new int[n]; Stack<Integer> st = new Stack<>(); st.push(0); // Span value of first element is always 1 S[0] = 1; // Calculate span values for rest of the elements for (int i = 1; i < n; i++) { // Pop elements from stack while stack is not // empty and top of stack is smaller than // price[i] while (!st.empty() && price[st.peek()] <= price[i]) st.pop(); // If stack becomes empty, then price[i] is // greater than all elements on left of it, i.e., // price[0], price[1], ..price[i-1]. Else price[i] // is greater than elements after top of stack S[i] = (st.empty()) ? (i + 1) : (i - st.peek()); // Push this element to stack st.push(i); } return S; }"
},
{
"code": null,
"e": 4958,
"s": 4956,
"text": "0"
},
{
"code": null,
"e": 4979,
"s": 4958,
"text": "pkb9825112 weeks ago"
},
{
"code": null,
"e": 5282,
"s": 4979,
"text": " stack<pair<int,int>>s; vector<int>ans; for(int i=0;i<n;i++){ int days=1; while(!s.empty() && s.top().first<=price[i] ){ days+=s.top().second; s.pop(); } s.push({price[i],days}); ans.push_back(days); } return ans;"
},
{
"code": null,
"e": 5285,
"s": 5282,
"text": "+1"
},
{
"code": null,
"e": 5308,
"s": 5285,
"text": "archit232002 weeks ago"
},
{
"code": null,
"e": 5357,
"s": 5308,
"text": "JAVA Solution Based on Aditya Verma's approach:-"
},
{
"code": null,
"e": 5547,
"s": 5359,
"text": "[IMP]:-If your original solution is not passing the test cases,it is most probably because you took “< “ instead of ”< =” in the conditions which is very important for the correct result."
},
{
"code": null,
"e": 7115,
"s": 5549,
"text": "class Solution\n{\n //Function to calculate the span of stockâ€TMs price for all n days.\n static class ValueNIndex{\n int value;\n int index;\n ValueNIndex(){}\n ValueNIndex(int value,int index){\n this.value=value;\n this.index=index;\n }\n int getValue(){\n return this.value;\n }\n int getIndex(){\n return this.index;\n }\n void setValInd(int value,int index){\n this.value=value;\n this.index=index;\n }\n }\n public static int[] calculateSpan(int arr[], int n)\n {\n if(arr.length==0){\n int ans[]={};\n return ans;\n }\n if(arr.length==1){\n int ans[]={1};\n return ans;\n }\n Stack<ValueNIndex> st=new Stack<>();\n ArrayList<Integer> ans =new ArrayList<>();\n for(int i=0;i<arr.length;i++){\n if(st.isEmpty()==false && st.peek().value<=arr[i]){\n while(st.isEmpty()==false && st.peek().value<=arr[i]){\n st.pop();\n }\n }\n if(st.isEmpty()==false && st.peek().value>arr[i]){\n ans.add(i-st.peek().index);\n }\n if(st.isEmpty()==true){\n ans.add(i+1);\n }\n ValueNIndex el=new ValueNIndex(arr[i],i);\n st.push(el);\n }\n int result[]=new int[ans.size()];\n for(int i=0;i<ans.size();i++){\n result[i]=ans.get(i);\n }\n return result;\n }\n \n}"
},
{
"code": null,
"e": 7118,
"s": 7115,
"text": "+1"
},
{
"code": null,
"e": 7145,
"s": 7118,
"text": "aloksinghbais022 weeks ago"
},
{
"code": null,
"e": 7236,
"s": 7145,
"text": "C++ solution having time complexity as O(N) and space complexity as O(N) is as follows :- "
},
{
"code": null,
"e": 7272,
"s": 7238,
"text": "Execution Time :- 1.04 / 2.32 sec"
},
{
"code": null,
"e": 7842,
"s": 7274,
"text": "vector <int> calculateSpan(int price[], int n){ stack<int> stk; vector<int> ngel(n); for(int i = 0; i < n; i++){ while(!stk.empty() && price[stk.top()] <= price[i]){ stk.pop(); } if(stk.empty()) ngel[i] = -1; else ngel[i] = stk.top(); stk.push(i); } vector<int> ans(n); ans[0] = 1; for(int i = 1; i < n; i++){ if(ngel[i] == -1) ans[i] = i+1; else{ ans[i] = i - ngel[i]; } } return (ans); }"
},
{
"code": null,
"e": 7844,
"s": 7842,
"text": "0"
},
{
"code": null,
"e": 7871,
"s": 7844,
"text": "adityagagtiwari3 weeks ago"
},
{
"code": null,
"e": 7934,
"s": 7871,
"text": "Simple solution using stacks and a pair for keeping the state!"
},
{
"code": null,
"e": 8978,
"s": 7934,
"text": "class Pair\n{\n int value;\n int span;\n Pair(int value,int span)\n {\n this.value = value;\n this.span = span;\n }\n}\nclass Solution\n{\n //Function to calculate the span of stockâ€TMs price for all n days.\n public static int[] calculateSpan(int arr[], int n)\n {\n // Your code here\n Stack<Pair> stack = new Stack<>();\n int[] span =new int[n];\n stack.push(new Pair(arr[0],1));\n span[0] = 1;\n for(int i=1;i<n;i++)\n {\n if(stack.peek().value>arr[i])\n {\n span[i] = 1;\n stack.push(new Pair(arr[i],span[i]));\n }\n else \n {\n while(stack.size()>0 && arr[i]>=stack.peek().value)\n {\n Pair removed = stack.pop();\n span[i]+=removed.span;\n }\n //for including itlself in the span\n span[i]+=1;\n stack.push(new Pair(arr[i],span[i])); \n }\n }\n return span;\n }\n \n}"
},
{
"code": null,
"e": 8980,
"s": 8978,
"text": "0"
},
{
"code": null,
"e": 9006,
"s": 8980,
"text": "aakashsoni24863 weeks ago"
},
{
"code": null,
"e": 9446,
"s": 9006,
"text": "vector <int> calculateSpan(int price[], int n) { // Your code here vector<int>ans(n); stack<int>st; ans[0]=1; st.push(0); for(int i=1;i<n;i++){ while(!st.empty()&&price[st.top()]<=price[i]){ st.pop(); } if(st.empty()){ ans[i]=i+1; } else{ ans[i]=i-st.top(); } st.push(i); } return ans; }"
},
{
"code": null,
"e": 9448,
"s": 9446,
"text": "0"
},
{
"code": null,
"e": 9475,
"s": 9448,
"text": "himanshukug19cs3 weeks ago"
},
{
"code": null,
"e": 9489,
"s": 9475,
"text": "java solution"
},
{
"code": null,
"e": 9770,
"s": 9491,
"text": " int[] ans = new int[n];Stack<Integer> st = new Stack<>();st.push(0);ans[0]=1;for(int i=1;i<n;i++){ while(!st.empty()&&price[i]>=price[st.peek()]){ st.pop(); } if(st.empty()) ans[i]=i+1; else{ ans[i]=i-st.peek(); } st.push(i);}return ans;"
},
{
"code": null,
"e": 9916,
"s": 9770,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 9952,
"s": 9916,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 9962,
"s": 9952,
"text": "\nProblem\n"
},
{
"code": null,
"e": 9972,
"s": 9962,
"text": "\nContest\n"
},
{
"code": null,
"e": 10035,
"s": 9972,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 10183,
"s": 10035,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 10391,
"s": 10183,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 10497,
"s": 10391,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
Scala Map get() method with example - GeeksforGeeks
|
13 Aug, 2019
The get() method is utilized to give the value associated with a given key of the map. The value is returned as an Option, i.e. either in the form of Some or None.
Method Definition: def get(key: A): Option[B]
Return Type: It returns the value corresponding to the key given in the method as argument, wrapped in an Option (Some if the key is present, None otherwise).
Example #1:
// Scala program of get()
// method

// Creating object
object GfG
{
    // Main method
    def main(args: Array[String])
    {
        // Creating a map
        val m1 = Map("geeks" -> 5, "for" -> 3, "cs" -> 2)

        // Applying get method
        val result = m1.get("for")

        // Displays output
        println(result)
    }
}
Some(3)
Here, the key in the argument, i.e. "for", is present in the map stated above, so the value of that key is returned in the Some form.
Example #2:
// Scala program of get()
// method

// Creating object
object GfG
{
    // Main method
    def main(args: Array[String])
    {
        // Creating a map
        val m1 = Map("geeks" -> 5, "for" -> 3, "cs" -> 2)

        // Applying get method
        val result = m1.get("portal")

        // Displays output
        println(result)
    }
}
None
Here, the key in the argument is not present in the map, so None is returned.
Scala
Scala-Map
Scala-Method
Scala
Scala Tutorial – Learn Scala with Step By Step Guide
Type Casting in Scala
Scala Lists
Class and Object in Scala
Scala String substring() method with example
Break statement in Scala
Lambda Expression in Scala
Scala String replace() method with example
Operators in Scala
|
[
{
"code": null,
"e": 23647,
"s": 23619,
"text": "\n13 Aug, 2019"
},
{
"code": null,
"e": 23810,
"s": 23647,
"text": "The get() method is utilized to give the value associated with the keys of the map. The values are returned here as an Option i.e, either in form of Some or None."
},
{
"code": null,
"e": 23855,
"s": 23810,
"text": "Method Definition:def get(key: A): Option[B]"
},
{
"code": null,
"e": 23949,
"s": 23855,
"text": "Return Type: It returns the keys corresponding to the values given in the method as argument."
},
{
"code": null,
"e": 23961,
"s": 23949,
"text": "Example #1:"
},
{
"code": "// Scala program of get()// method // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating a map val m1 = Map(\"geeks\" -> 5, \"for\" -> 3, \"cs\" -> 2) // Applying get method val result = m1.get(\"for\") // Displays output println(result) }}",
"e": 24319,
"s": 23961,
"text": null
},
{
"code": null,
"e": 24328,
"s": 24319,
"text": "Some(3)\n"
},
{
"code": null,
"e": 24468,
"s": 24328,
"text": "Here, the key in the argument i.e, for is present in the map stated above so, the value of the key is returned in the Some form.Example #2:"
},
{
"code": "// Scala program of get()// method // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating a map val m1 = Map(\"geeks\" -> 5, \"for\" -> 3, \"cs\" -> 2) // Applying get method val result = m1.get(\"portal\") // Displays output println(result) }}",
"e": 24829,
"s": 24468,
"text": null
},
{
"code": null,
"e": 24835,
"s": 24829,
"text": "None\n"
},
{
"code": null,
"e": 24913,
"s": 24835,
"text": "Here, the key in the argument is not present in the map so, None is returned."
},
{
"code": null,
"e": 24919,
"s": 24913,
"text": "Scala"
},
{
"code": null,
"e": 24929,
"s": 24919,
"text": "Scala-Map"
},
{
"code": null,
"e": 24942,
"s": 24929,
"text": "Scala-Method"
},
{
"code": null,
"e": 24948,
"s": 24942,
"text": "Scala"
},
{
"code": null,
"e": 25046,
"s": 24948,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25055,
"s": 25046,
"text": "Comments"
},
{
"code": null,
"e": 25068,
"s": 25055,
"text": "Old Comments"
},
{
"code": null,
"e": 25121,
"s": 25068,
"text": "Scala Tutorial – Learn Scala with Step By Step Guide"
},
{
"code": null,
"e": 25143,
"s": 25121,
"text": "Type Casting in Scala"
},
{
"code": null,
"e": 25155,
"s": 25143,
"text": "Scala Lists"
},
{
"code": null,
"e": 25181,
"s": 25155,
"text": "Class and Object in Scala"
},
{
"code": null,
"e": 25226,
"s": 25181,
"text": "Scala String substring() method with example"
},
{
"code": null,
"e": 25251,
"s": 25226,
"text": "Break statement in Scala"
},
{
"code": null,
"e": 25278,
"s": 25251,
"text": "Lambda Expression in Scala"
},
{
"code": null,
"e": 25321,
"s": 25278,
"text": "Scala String replace() method with example"
}
] |
Arrangement of words without changing the relative position of vowel and consonants - GeeksforGeeks
|
07 May, 2021
Given a word of length less than 10, the task is to find the number of ways in which it can be arranged without changing the relative positions of vowels and consonants.
Examples:
Input: "GEEKS"
Output: 6
Input: "COMPUTER"
Output: 720
Approach:
Count the vowels and consonants in the word.
Now find the total number of ways to arrange the vowels only.
Then find the ways to arrange the consonants only.
Multiply both answers to get the total ways = (no. of ways to arrange vowels only) * (no. of ways to arrange consonants only); a worked example for "COMPUTER" follows below.
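For the example word "COMPUTER", the vowels are O, U and E (3 vowels, each occurring once) and the consonants are C, M, P, T and R (5 consonants, each occurring once), so the count is 3! * 5! = 6 * 120 = 720, which matches the expected output. When a letter repeats, its arrangements are divided out by the factorial of its frequency, exactly as the implementations below do.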
Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ program for Arrangement of words// without changing the relative position of// vowel and consonants#include <bits/stdc++.h>using namespace std; #define ll long int // this function return n!ll factorial(ll n){ ll res = 1; for (int i = 1; i <= n; i++) res = res * i; return res;} // this will return total number of waysll count(string word){ // freq maintains frequency // of each character in word ll freq[27] = { 0 }; ll vowel = 0, consonant = 0; for (int i = 0; i < word.length(); i++) { freq[word[i] - 'A']++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel ll vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); ll consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent ll total = vowelArrange * consonantArrange; return total;} // Driver functionint main(){ // string contains only // capital letters string word = "COMPUTER"; // this will contain ans ll ans = count(word); cout << ans << endl; return 0;}
// Java program for Arrangement of words// without changing the relative position of// vowel and consonants class GFG{ // this function return n! static long factorial(long n) { long res = 1; for (int i = 1; i <= n; i++) res = res * i; return res; } // this will return total number of ways static long count(String word) { // freq maintains frequency // of each character in word int freq[] =new int[27]; for(int i=0;i<27;i++) freq[i]=0; long vowel = 0, consonant = 0; for (int i = 0; i < word.length(); i++) { freq[word.charAt(i) - 'A']++; // check character is vowel or not if (word.charAt(i) == 'A' || word.charAt(i) == 'E' || word.charAt(i) == 'I' || word.charAt(i) == 'O' || word.charAt(i) == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel long vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); long consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent long total = vowelArrange * consonantArrange; return total; } // Driver function public static void main(String []args) { // string contains only // capital letters String word = "COMPUTER"; // this will contain ans long ans = count(word); System.out.println(ans); } } // This code is contributed by ihritik
# Python3 program for Arrangement of words
# without changing the relative position of
# vowel and consonants

# this function return n!
def factorial(n):
    res = 1
    for i in range(1, n + 1):
        res = res * i
    return res

# this will return total number of ways
def count(word):
    # freq maintains frequency
    # of each character in word
    freq = [0 for i in range(30)]
    vowel = 0
    consonant = 0
    for i in range(len(word)):
        freq[ord(word[i]) - 65] += 1

        # check character is vowel or not
        if (word[i] == 'A' or word[i] == 'E' or word[i] == 'I' or
                word[i] == 'O' or word[i] == 'U'):
            vowel += 1

        # the characters that are not
        # vowel must be consonant
        else:
            consonant += 1

    # number of ways to arrange vowel
    vowelArrange = factorial(vowel)
    vowelArrange //= factorial(freq[0])
    vowelArrange //= factorial(freq[4])
    vowelArrange //= factorial(freq[8])
    vowelArrange //= factorial(freq[14])
    vowelArrange //= factorial(freq[20])

    consonantArrange = factorial(consonant)
    for i in range(26):
        if (i != 0 and i != 4 and i != 8 and i != 14 and i != 20):
            consonantArrange //= factorial(freq[i])

    # multiply both as these are independent
    total = vowelArrange * consonantArrange
    return total

# Driver code

# string contains only
# capital letters
word = "COMPUTER"

# this will contain ans
ans = count(word)
print(ans)

# This code is contributed
# by sahilshelangia
// C# program for Arrangement of words// without changing the relative position of// vowel and consonantsusing System;class GFG{ // this function return n! static long factorial(long n) { long res = 1; for (int i = 1; i <= n; i++) res = res * i; return res; } // this will return total number of ways static long count(string word) { // freq maintains frequency // of each character in word int []freq =new int[27]; for(int i=0;i<27;i++) freq[i]=0; long vowel = 0, consonant = 0; for (int i = 0; i < word.Length; i++) { freq[word[i] - 'A']++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel long vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); long consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent long total = vowelArrange * consonantArrange; return total; } // Driver function public static void Main() { // string contains only // capital letters string word = "COMPUTER"; // this will contain ans long ans = count(word); Console.WriteLine(ans); } }// This code is contributed by ihritik
<?php// PHP program for Arrangement of words// without changing the relative position// of vowel and consonants // this function return n!function factorial($n){ $res = 1; for ($i = 1; $i <= $n; $i++) $res = $res * $i; return $res;} // this will return total// number of waysfunction count1($word){ // freq maintains frequency // of each character in word $freq = array_fill(0, 27, 0); for($i = 0; $i < 27; $i++) $freq[$i] = 0; $vowel = 0; $consonant = 0; for ($i = 0; $i < strlen($word); $i++) { $freq[ord($word[$i]) - 65]++; // check character is vowel or not if ($word[$i] == 'A' || $word[$i] == 'E' || $word[$i] == 'I' || $word[$i] == 'O' || $word[$i] == 'U') { $vowel++; } // the characters that are not // vowel must be consonant else $consonant++; } // number of ways to arrange vowel $vowelArrange = factorial($vowel); $vowelArrange /= factorial($freq[0]); $vowelArrange /= factorial($freq[4]); $vowelArrange /= factorial($freq[8]); $vowelArrange /= factorial($freq[14]); $vowelArrange /= factorial($freq[20]); $consonantArrange = factorial($consonant); for ($i = 0; $i < 26; $i++) { if ($i != 0 && $i != 4 && $i != 8 && $i != 14 && $i != 20) $consonantArrange /= factorial($freq[$i]); } // multiply both as these // are independent $total = $vowelArrange * $consonantArrange; return $total;} // Driver Code // string contains only// capital letters$word = "COMPUTER"; // this will contain ans$ans = count1($word);echo ($ans); // This code is contributed by mits?>
// Javascript program for Arrangement of words// without changing the relative position// of vowel and consonants // this function return n!function factorial(n){ let res = 1; for (let i = 1; i <= n; i++) res = res * i; return res;} // this will return total// number of waysfunction count1(word){ // freq maintains frequency // of each character in word let freq = new Array(27).fill(0); for(let i = 0; i < 27; i++) freq[i] = 0; let vowel = 0; let consonant = 0; for (let i = 0; i < word.length; i++) { freq[word.charCodeAt(i) - 65]++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not // vowel must be consonant else consonant++; } // number of ways to arrange vowel vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); consonantArrange = factorial(consonant); for (let i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these // are independent let total = vowelArrange * consonantArrange; return total;} // Driver Code // string contains only// capital letterslet word = "COMPUTER"; // this will contain anslet ans = count1(word);document.write(ans); // This code is contributed by gfgking
720
sahilshelangia
ihritik
Mithun Kumar
gfgking
factorial
vowel-consonant
Hash
Mathematical
Strings
Hash
Strings
Mathematical
factorial
Quadratic Probing in Hashing
Rearrange an array such that arr[i] = i
Hashing in Java
Non-Repeating Element
What are Hash Functions and How to choose a good Hash Function?
Program for Fibonacci numbers
Write a program to print all permutations of a given string
C++ Data Types
Set in C++ Standard Template Library (STL)
Coin Change | DP-7
|
[
{
"code": null,
"e": 25132,
"s": 25104,
"text": "\n07 May, 2021"
},
{
"code": null,
"e": 25309,
"s": 25132,
"text": "Given a word of length less than 10, the task is to find a number of ways in which it can be arranged without changing the relative position of vowel and consonants.Examples: "
},
{
"code": null,
"e": 25365,
"s": 25309,
"text": "Input: \"GEEKS\"\nOutput: 6\n\nInput: \"COMPUTER\"\nOutput: 720"
},
{
"code": null,
"e": 25378,
"s": 25367,
"text": "Approach: "
},
{
"code": null,
"e": 25632,
"s": 25378,
"text": "Count the vowels and consonants in the wordNow find total number of ways to arrange vowel onlyThen find ways to arrange consonant only.Multiply both answer to get the Total ways = (no of ways to arrange vowel only)*(no of ways to arrange consonant only)"
},
{
"code": null,
"e": 25676,
"s": 25632,
"text": "Count the vowels and consonants in the word"
},
{
"code": null,
"e": 25728,
"s": 25676,
"text": "Now find total number of ways to arrange vowel only"
},
{
"code": null,
"e": 25770,
"s": 25728,
"text": "Then find ways to arrange consonant only."
},
{
"code": null,
"e": 25889,
"s": 25770,
"text": "Multiply both answer to get the Total ways = (no of ways to arrange vowel only)*(no of ways to arrange consonant only)"
},
{
"code": null,
"e": 25941,
"s": 25889,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 25945,
"s": 25941,
"text": "C++"
},
{
"code": null,
"e": 25950,
"s": 25945,
"text": "Java"
},
{
"code": null,
"e": 25958,
"s": 25950,
"text": "Python3"
},
{
"code": null,
"e": 25961,
"s": 25958,
"text": "C#"
},
{
"code": null,
"e": 25965,
"s": 25961,
"text": "PHP"
},
{
"code": null,
"e": 25976,
"s": 25965,
"text": "Javascript"
},
{
"code": "// C++ program for Arrangement of words// without changing the relative position of// vowel and consonants#include <bits/stdc++.h>using namespace std; #define ll long int // this function return n!ll factorial(ll n){ ll res = 1; for (int i = 1; i <= n; i++) res = res * i; return res;} // this will return total number of waysll count(string word){ // freq maintains frequency // of each character in word ll freq[27] = { 0 }; ll vowel = 0, consonant = 0; for (int i = 0; i < word.length(); i++) { freq[word[i] - 'A']++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel ll vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); ll consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent ll total = vowelArrange * consonantArrange; return total;} // Driver functionint main(){ // string contains only // capital letters string word = \"COMPUTER\"; // this will contain ans ll ans = count(word); cout << ans << endl; return 0;}",
"e": 27670,
"s": 25976,
"text": null
},
{
"code": "// Java program for Arrangement of words// without changing the relative position of// vowel and consonants class GFG{ // this function return n! static long factorial(long n) { long res = 1; for (int i = 1; i <= n; i++) res = res * i; return res; } // this will return total number of ways static long count(String word) { // freq maintains frequency // of each character in word int freq[] =new int[27]; for(int i=0;i<27;i++) freq[i]=0; long vowel = 0, consonant = 0; for (int i = 0; i < word.length(); i++) { freq[word.charAt(i) - 'A']++; // check character is vowel or not if (word.charAt(i) == 'A' || word.charAt(i) == 'E' || word.charAt(i) == 'I' || word.charAt(i) == 'O' || word.charAt(i) == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel long vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); long consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent long total = vowelArrange * consonantArrange; return total; } // Driver function public static void main(String []args) { // string contains only // capital letters String word = \"COMPUTER\"; // this will contain ans long ans = count(word); System.out.println(ans); } } // This code is contributed by ihritik",
"e": 29783,
"s": 27670,
"text": null
},
{
"code": "# Python3 program for Arrangement of words# without changing the relative position of# vowel and consonants # this function return n!def factorial(n): res = 1 for i in range(1, n + 1): res = res * i return res # this will return total number of waysdef count(word): # freq maintains frequency # of each character in word freq = [0 for i in range(30)] vowel = 0 consonant = 0 for i in range(len(word)): freq[ord(word[i]) -65 ] += 1 # check character is vowel or not if(word[i] == 'A'or word[i] == 'E' or word[i] == 'I' or word[i] == 'O'or word[i] == 'U'): vowel += 1 # the characters that are not # vowel must be consonant else: consonant += 1 # number of ways to arrange vowel vowelArrange = factorial(vowel) vowelArrange //= factorial(freq[0]) vowelArrange //= factorial(freq[4]) vowelArrange //= factorial(freq[8]) vowelArrange //= factorial(freq[14]) vowelArrange //= factorial(freq[20]) consonantArrange = factorial(consonant) for i in range(26): if(i != 0 and i != 4 and i != 8 and i != 14 and i != 20): consonantArrange//= factorial(freq[i]) # multiply both as these are independent total = vowelArrange * consonantArrange return total # Driver code # string contains only# capital lettersword = \"COMPUTER\" # this will contain ansans = count(word)print(ans) # This code is contributed# by sahilshelangia",
"e": 31293,
"s": 29783,
"text": null
},
{
"code": "// C# program for Arrangement of words// without changing the relative position of// vowel and consonantsusing System;class GFG{ // this function return n! static long factorial(long n) { long res = 1; for (int i = 1; i <= n; i++) res = res * i; return res; } // this will return total number of ways static long count(string word) { // freq maintains frequency // of each character in word int []freq =new int[27]; for(int i=0;i<27;i++) freq[i]=0; long vowel = 0, consonant = 0; for (int i = 0; i < word.Length; i++) { freq[word[i] - 'A']++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not vowel // must be consonant else consonant++; } // number of ways to arrange vowel long vowelArrange; vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); long consonantArrange; consonantArrange = factorial(consonant); for (int i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these are independent long total = vowelArrange * consonantArrange; return total; } // Driver function public static void Main() { // string contains only // capital letters string word = \"COMPUTER\"; // this will contain ans long ans = count(word); Console.WriteLine(ans); } }// This code is contributed by ihritik",
"e": 33357,
"s": 31293,
"text": null
},
{
"code": "<?php// PHP program for Arrangement of words// without changing the relative position// of vowel and consonants // this function return n!function factorial($n){ $res = 1; for ($i = 1; $i <= $n; $i++) $res = $res * $i; return $res;} // this will return total// number of waysfunction count1($word){ // freq maintains frequency // of each character in word $freq = array_fill(0, 27, 0); for($i = 0; $i < 27; $i++) $freq[$i] = 0; $vowel = 0; $consonant = 0; for ($i = 0; $i < strlen($word); $i++) { $freq[ord($word[$i]) - 65]++; // check character is vowel or not if ($word[$i] == 'A' || $word[$i] == 'E' || $word[$i] == 'I' || $word[$i] == 'O' || $word[$i] == 'U') { $vowel++; } // the characters that are not // vowel must be consonant else $consonant++; } // number of ways to arrange vowel $vowelArrange = factorial($vowel); $vowelArrange /= factorial($freq[0]); $vowelArrange /= factorial($freq[4]); $vowelArrange /= factorial($freq[8]); $vowelArrange /= factorial($freq[14]); $vowelArrange /= factorial($freq[20]); $consonantArrange = factorial($consonant); for ($i = 0; $i < 26; $i++) { if ($i != 0 && $i != 4 && $i != 8 && $i != 14 && $i != 20) $consonantArrange /= factorial($freq[$i]); } // multiply both as these // are independent $total = $vowelArrange * $consonantArrange; return $total;} // Driver Code // string contains only// capital letters$word = \"COMPUTER\"; // this will contain ans$ans = count1($word);echo ($ans); // This code is contributed by mits?>",
"e": 35082,
"s": 33357,
"text": null
},
{
"code": "// Javascript program for Arrangement of words// without changing the relative position// of vowel and consonants // this function return n!function factorial(n){ let res = 1; for (let i = 1; i <= n; i++) res = res * i; return res;} // this will return total// number of waysfunction count1(word){ // freq maintains frequency // of each character in word let freq = new Array(27).fill(0); for(let i = 0; i < 27; i++) freq[i] = 0; let vowel = 0; let consonant = 0; for (let i = 0; i < word.length; i++) { freq[word.charCodeAt(i) - 65]++; // check character is vowel or not if (word[i] == 'A' || word[i] == 'E' || word[i] == 'I' || word[i] == 'O' || word[i] == 'U') { vowel++; } // the characters that are not // vowel must be consonant else consonant++; } // number of ways to arrange vowel vowelArrange = factorial(vowel); vowelArrange /= factorial(freq[0]); vowelArrange /= factorial(freq[4]); vowelArrange /= factorial(freq[8]); vowelArrange /= factorial(freq[14]); vowelArrange /= factorial(freq[20]); consonantArrange = factorial(consonant); for (let i = 0; i < 26; i++) { if (i != 0 && i != 4 && i != 8 && i != 14 && i != 20) consonantArrange /= factorial(freq[i]); } // multiply both as these // are independent let total = vowelArrange * consonantArrange; return total;} // Driver Code // string contains only// capital letterslet word = \"COMPUTER\"; // this will contain anslet ans = count1(word);document.write(ans); // This code is contributed by gfgking",
"e": 36795,
"s": 35082,
"text": null
},
{
"code": null,
"e": 36799,
"s": 36795,
"text": "720"
},
{
"code": null,
"e": 36816,
"s": 36801,
"text": "sahilshelangia"
},
{
"code": null,
"e": 36824,
"s": 36816,
"text": "ihritik"
},
{
"code": null,
"e": 36837,
"s": 36824,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 36845,
"s": 36837,
"text": "gfgking"
},
{
"code": null,
"e": 36855,
"s": 36845,
"text": "factorial"
},
{
"code": null,
"e": 36871,
"s": 36855,
"text": "vowel-consonant"
},
{
"code": null,
"e": 36876,
"s": 36871,
"text": "Hash"
},
{
"code": null,
"e": 36889,
"s": 36876,
"text": "Mathematical"
},
{
"code": null,
"e": 36897,
"s": 36889,
"text": "Strings"
},
{
"code": null,
"e": 36902,
"s": 36897,
"text": "Hash"
},
{
"code": null,
"e": 36910,
"s": 36902,
"text": "Strings"
},
{
"code": null,
"e": 36923,
"s": 36910,
"text": "Mathematical"
},
{
"code": null,
"e": 36933,
"s": 36923,
"text": "factorial"
},
{
"code": null,
"e": 37031,
"s": 36933,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 37060,
"s": 37031,
"text": "Quadratic Probing in Hashing"
},
{
"code": null,
"e": 37100,
"s": 37060,
"text": "Rearrange an array such that arr[i] = i"
},
{
"code": null,
"e": 37116,
"s": 37100,
"text": "Hashing in Java"
},
{
"code": null,
"e": 37138,
"s": 37116,
"text": "Non-Repeating Element"
},
{
"code": null,
"e": 37202,
"s": 37138,
"text": "What are Hash Functions and How to choose a good Hash Function?"
},
{
"code": null,
"e": 37232,
"s": 37202,
"text": "Program for Fibonacci numbers"
},
{
"code": null,
"e": 37292,
"s": 37232,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 37307,
"s": 37292,
"text": "C++ Data Types"
},
{
"code": null,
"e": 37350,
"s": 37307,
"text": "Set in C++ Standard Template Library (STL)"
}
] |
How to move specific item in array list to the first item in Java?
|
To move an item within an ArrayList to the first position, you need to:
Get the position (index) of the item using the indexOf() method of the ArrayList class.
Remove it using the remove() method of the ArrayList class.
Finally, add it to the index 0 using the add() method of the ArrayList class.
Live Demo
import java.util.ArrayList;
public class ArrayListSample {
public static void main(String args[]) {
      ArrayList<String> al = new ArrayList<String>();
al.add("JavaFX");
al.add("HBase");
al.add("WebGL");
al.add("OpenCV");
System.out.println(al);
String item = "WebGL";
int itemPos = al.indexOf(item);
al.remove(itemPos);
al.add(0, item );
System.out.println(al);
}
}
[JavaFX, HBase, WebGL, OpenCV]
[WebGL, JavaFX, HBase, OpenCV]
|
[
{
"code": null,
"e": 1143,
"s": 1062,
"text": "To move an item from an ArrayList and add it to the first position you need to -"
},
{
"code": null,
"e": 1231,
"s": 1143,
"text": "Get the position (index) of the item using the indexOf() method of the ArrayList class."
},
{
"code": null,
"e": 1291,
"s": 1231,
"text": "Remove it using the remove() method of the ArrayList class."
},
{
"code": null,
"e": 1369,
"s": 1291,
"text": "Finally, add it to the index 0 using the add() method of the ArrayList class."
},
{
"code": null,
"e": 1379,
"s": 1369,
"text": "Live Demo"
},
{
"code": null,
"e": 1799,
"s": 1379,
"text": "import java.util.ArrayList;\n\npublic class ArrayListSample {\n public static void main(String args[]) {\n ArrayList al = new ArrayList();\n al.add(\"JavaFX\");\n al.add(\"HBase\");\n al.add(\"WebGL\");\n al.add(\"OpenCV\");\n System.out.println(al);\n String item = \"WebGL\";\n int itemPos = al.indexOf(item);\n al.remove(itemPos);\n al.add(0, item );\n System.out.println(al);\n }\n}"
},
{
"code": null,
"e": 1861,
"s": 1799,
"text": "[JavaFX, HBase, WebGL, OpenCV]\n[WebGL, JavaFX, HBase, OpenCV]"
}
] |
Check if a number has bits in alternate pattern | Set 1 - GeeksforGeeks
|
25 Nov, 2021
Given an integer n > 0, the task is to find whether this integer has an alternate pattern in its binary representation. For example, 5 has an alternate pattern, i.e. 101. Print "Yes" if it has an alternate pattern, otherwise "No". Here an alternate pattern can be like 0101 or 1010.
Examples:
Input : 15
Output : No
Explanation: Binary representation of 15 is 1111.
Input : 10
Output : Yes
Explanation: Binary representation of 10 is 1010.
A simple approach is to find its binary equivalent and then check its bits.
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find if a number has alternate// bit pattern#include<bits/stdc++.h>using namespace std; // Returns true if n has alternate bit pattern// else falsebool findPattern(int n){ // Store last bit int prev = n % 2; n = n/2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true;} // Driver codeint main(){ int n = 10; if (findPattern(n)) cout << "Yes"; else cout << "No"; return 0;}
// Java program to find if a number has alternate// bit pattern class Test{ // Returns true if n has alternate bit pattern // else false static boolean findPattern(int n) { // Store last bit int prev = n % 2; n = n/2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true; } // Driver method public static void main(String args[]) { int n = 10; System.out.println(findPattern(n) ? "Yes" : "No"); }}
# Python3 program to find if a number
# has alternate bit pattern

# Returns true if n has alternate
# bit pattern else false
def findPattern(n):
    # Store last bit
    prev = n % 2
    n = n // 2

    # Traverse through remaining bits
    while (n > 0):
        curr = n % 2

        # If current bit is same as previous
        if (curr == prev):
            return False
        prev = curr
        n = n // 2
    return True

# Driver code
n = 10
print("Yes") if (findPattern(n)) else print("No")

# This code is contributed by Anant Agarwal.
// Program to find if a number// has alternate bit patternusing System; class Test { // Returns true if n has alternate // bit pattern else returns false static bool findPattern(int n) { // Store last bit int prev = n % 2; n = n / 2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true; } // Driver method public static void Main() { int n = 10; Console.WriteLine(findPattern(n) ? "Yes" : "No"); }} // This code is contributed by Anant Agarwal.
<?php// PHP program to find if a// number has alternate// bit pattern // Returns true if n has// alternate bit pattern// else falsefunction findPattern($n){ // Store last bit $prev = $n % 2; $n = $n / 2; // Traverse through // remaining bits while ($n > 0) { $curr = $n % 2; // If current bit is // same as previous if ($curr == $prev) return false; $prev = $curr; $n = floor($n / 2); } return true;} // Driver code $n = 10; if (findPattern($n)) echo "Yes"; else echo "No"; return 0; // This code is contributed by nitin mittal.?>
<script> // Javascript program to find if a number// has alternate bit pattern // Returns true if n has alternate// bit pattern else falsefunction findPattern(n){ // Store last bit let prev = n % 2; n = Math.floor(n / 2); // Traverse through remaining bits while (n > 0) { let curr = n % 2; // If current bit is // same as previous if (curr == prev) return false; prev = curr; n = Math.floor(n / 2); } return true;} // Driver codelet n = 10;if (findPattern(n)) document.write("Yes");else document.write("No"); // This code is contributed by gfgking </script>
Output:
Yes
Time Complexity: O(log2 n)
Auxiliary Space: O(1)
Reference: http://stackoverflow.com/questions/38690278/program-to-check-whether-the-given-integer-has-an-alternate-pattern
This article is contributed by Sahil Chhabra (akku).
nitin mittal
gfgking
samim2000
Bit Magic
Bit Magic
Set, Clear and Toggle a given bit of a number in C
Check whether K-th bit is set or not
Write an Efficient Method to Check if a Number is Multiple of 3
Reverse actual bits of the given number
Program to find parity
Swap bits in a given number
Check if a Number is Odd or Even using Bitwise Operators
Generate n-bit Gray Codes
Builtin functions of GCC compiler
Check for Integer Overflow
|
[
{
"code": null,
"e": 24621,
"s": 24593,
"text": "\n25 Nov, 2021"
},
{
"code": null,
"e": 24897,
"s": 24621,
"text": "Given an integer n > 0, the task is to find whether this integer has an alternate pattern in its bits representation. For example- 5 has an alternate pattern i.e. 101. Print “Yes” if it has an alternate pattern otherwise “No”. Here alternate pattern can be like 0101 or 1010."
},
{
"code": null,
"e": 24908,
"s": 24897,
"text": "Examples: "
},
{
"code": null,
"e": 25060,
"s": 24908,
"text": "Input : 15\nOutput : No\nExplanation: Binary representation of 15 is 1111.\n\nInput : 10\nOutput : Yes\nExplanation: Binary representation of 10 is 1010."
},
{
"code": null,
"e": 25138,
"s": 25060,
"text": "A simple approach is to find its binary equivalent and then check its bits. "
},
{
"code": null,
"e": 25142,
"s": 25138,
"text": "C++"
},
{
"code": null,
"e": 25147,
"s": 25142,
"text": "Java"
},
{
"code": null,
"e": 25155,
"s": 25147,
"text": "Python3"
},
{
"code": null,
"e": 25158,
"s": 25155,
"text": "C#"
},
{
"code": null,
"e": 25162,
"s": 25158,
"text": "PHP"
},
{
"code": null,
"e": 25173,
"s": 25162,
"text": "Javascript"
},
{
"code": "// C++ program to find if a number has alternate// bit pattern#include<bits/stdc++.h>using namespace std; // Returns true if n has alternate bit pattern// else falsebool findPattern(int n){ // Store last bit int prev = n % 2; n = n/2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true;} // Driver codeint main(){ int n = 10; if (findPattern(n)) cout << \"Yes\"; else cout << \"No\"; return 0;}",
"e": 25790,
"s": 25173,
"text": null
},
{
"code": "// Java program to find if a number has alternate// bit pattern class Test{ // Returns true if n has alternate bit pattern // else false static boolean findPattern(int n) { // Store last bit int prev = n % 2; n = n/2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true; } // Driver method public static void main(String args[]) { int n = 10; System.out.println(findPattern(n) ? \"Yes\" : \"No\"); }}",
"e": 26513,
"s": 25790,
"text": null
},
{
"code": "# Python3 program to find if a number# has alternate bit pattern # Returns true if n has alternate# bit pattern else falsedef findPattern(n): # Store last bit prev = n % 2 n = n // 2 # Traverse through remaining bits while (n > 0): curr = n % 2 # If current bit is same as previous if (curr == prev): return False prev = curr n = n // 2 return True # Driver coden = 10print(\"Yes\") if(findPattern(n)) else print(\"No\") # This code is contributed by Anant Agarwal.",
"e": 27053,
"s": 26513,
"text": null
},
{
"code": "// Program to find if a number// has alternate bit patternusing System; class Test { // Returns true if n has alternate // bit pattern else returns false static bool findPattern(int n) { // Store last bit int prev = n % 2; n = n / 2; // Traverse through remaining bits while (n > 0) { int curr = n % 2; // If current bit is same as previous if (curr == prev) return false; prev = curr; n = n / 2; } return true; } // Driver method public static void Main() { int n = 10; Console.WriteLine(findPattern(n) ? \"Yes\" : \"No\"); }} // This code is contributed by Anant Agarwal.",
"e": 27789,
"s": 27053,
"text": null
},
{
"code": "<?php// PHP program to find if a// number has alternate// bit pattern // Returns true if n has// alternate bit pattern// else falsefunction findPattern($n){ // Store last bit $prev = $n % 2; $n = $n / 2; // Traverse through // remaining bits while ($n > 0) { $curr = $n % 2; // If current bit is // same as previous if ($curr == $prev) return false; $prev = $curr; $n = floor($n / 2); } return true;} // Driver code $n = 10; if (findPattern($n)) echo \"Yes\"; else echo \"No\"; return 0; // This code is contributed by nitin mittal.?>",
"e": 28444,
"s": 27789,
"text": null
},
{
"code": "<script> // Javascript program to find if a number// has alternate bit pattern // Returns true if n has alternate// bit pattern else falsefunction findPattern(n){ // Store last bit let prev = n % 2; n = Math.floor(n / 2); // Traverse through remaining bits while (n > 0) { let curr = n % 2; // If current bit is // same as previous if (curr == prev) return false; prev = curr; n = Math.floor(n / 2); } return true;} // Driver codelet n = 10;if (findPattern(n)) document.write(\"Yes\");else document.write(\"No\"); // This code is contributed by gfgking </script>",
"e": 29098,
"s": 28444,
"text": null
},
{
"code": null,
"e": 29107,
"s": 29098,
"text": "Output: "
},
{
"code": null,
"e": 29111,
"s": 29107,
"text": "Yes"
},
{
"code": null,
"e": 29137,
"s": 29111,
"text": "Time Complexity: O(log2n)"
},
{
"code": null,
"e": 29159,
"s": 29137,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 29282,
"s": 29159,
"text": "Reference: http://stackoverflow.com/questions/38690278/program-to-check-whether-the-given-integer-has-an-alternate-pattern"
},
{
"code": null,
"e": 29711,
"s": 29282,
"text": "This article is contributed by Sahil Chhabra (akku). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 29724,
"s": 29711,
"text": "nitin mittal"
},
{
"code": null,
"e": 29732,
"s": 29724,
"text": "gfgking"
},
{
"code": null,
"e": 29742,
"s": 29732,
"text": "samim2000"
},
{
"code": null,
"e": 29752,
"s": 29742,
"text": "Bit Magic"
},
{
"code": null,
"e": 29762,
"s": 29752,
"text": "Bit Magic"
},
{
"code": null,
"e": 29860,
"s": 29762,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29869,
"s": 29860,
"text": "Comments"
},
{
"code": null,
"e": 29882,
"s": 29869,
"text": "Old Comments"
},
{
"code": null,
"e": 29933,
"s": 29882,
"text": "Set, Clear and Toggle a given bit of a number in C"
},
{
"code": null,
"e": 29970,
"s": 29933,
"text": "Check whether K-th bit is set or not"
},
{
"code": null,
"e": 30034,
"s": 29970,
"text": "Write an Efficient Method to Check if a Number is Multiple of 3"
},
{
"code": null,
"e": 30074,
"s": 30034,
"text": "Reverse actual bits of the given number"
},
{
"code": null,
"e": 30097,
"s": 30074,
"text": "Program to find parity"
},
{
"code": null,
"e": 30125,
"s": 30097,
"text": "Swap bits in a given number"
},
{
"code": null,
"e": 30182,
"s": 30125,
"text": "Check if a Number is Odd or Even using Bitwise Operators"
},
{
"code": null,
"e": 30208,
"s": 30182,
"text": "Generate n-bit Gray Codes"
},
{
"code": null,
"e": 30242,
"s": 30208,
"text": "Builtin functions of GCC compiler"
}
] |
Creating Knowledge Graphs from Resumes and Traversing them | by Priya Dwivedi | Towards Data Science
|
Knowledge graphs are gaining popularity as a data structure for storing unstructured data. In this blog, we show how key elements of resumes can be stored and visualized as knowledge graphs. We then walk through the knowledge graph of resumes to answer questions. Our code is available on this Github link.
A Knowledge Graph is a type of graph which enables us to model knowledge of a particular domain by organizing it in an ontology through data interlinking. Machine learning can then be applied on a knowledge graph to get insights. Knowledge Graphs have all the major components of a graph — nodes, edges and their respective attributes.
Let’s look at the definitions of a node and an edge first:
A node is a point in the graph at which lines intersect or branch out.
An edge is a line joining two nodes in a graph.
As seen in the knowledge graph below, Spock, Star Trek, Star Wars, Leonard Nimoy, Science Fiction, Obi-Wan Kenobi, Alec Guinness are nodes and played, characterIn, starredIn and genre are edges of the graph.
The representation above is an example of a Knowledge Graph which stores information of movies and their characters along with the actors who played these characters. Storing information in this form enables us to model complicated pieces of knowledge of particular domains so that we can access specific information when needed through a structured traversal of the given graph.
Information to be structured in knowledge graphs is modeled in the form of triplets. A triplet typically contains two entities and the relation between them. In the above graph, for example, played(Spock, Leonard Nimoy) is a triplet where the entities are Spock and Leonard Nimoy and the relation is played.
In this blog, we will be creating a Knowledge graph of people and the programming skills they mention on their resume. The idea is to extract skills from the resume and model it in a graph format, so that it becomes easier to navigate and extract specific information from.
For the purpose of this blog, we will be using 3 dummy resumes.
So, let’s dive into code now! The full code and the resumes are shared on my Github repo here.
For the purpose of simplification, we will just be focusing on programming languages that the different candidates know.
First, we extract the programming languages mentioned in the resume using regex. The code for this is shared below.
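The extraction code itself is embedded as a gist in the original post and is not reproduced in this text, so the snippet below is only a plausible sketch of the idea; the "Programming Languages" label, the helper name and the comma-separated format are assumptions about how the dummy resumes are written:

import re

def extract_languages(resume_text):
    # Grab the line that lists the languages, then split it on commas
    match = re.search(r"Programming Languages\s*:\s*(.+)", resume_text, re.IGNORECASE)
    if not match:
        return []
    return [skill.strip() for skill in match.group(1).split(",")]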
The output from this is the languages from the 3 programmers:
Output:
['JavaScript', 'HTML5', 'PHP OOP', 'CSS', 'SQL', 'MySQL']
['Python,R,CSS,PHP,Machine Learning']
['HTML5', 'CSS', 'SQL', 'Jquery', 'PHP']
Here, for the sake of simplification, we assume that the name of the file is the candidate name and that the programming languages line is of the same format.
Now, once we have these, we will construct a knowledge graph from this information using the networkx library. NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. The library contains a wide range of functions to create different types of graphs, traverse them and perform a range of manipulations and calculations on top of them.
In our case, we would be using the from_dict_of_lists() function, which creates a directed graph from a dictionary containing the names as the keys and the corresponding programming languages as the values.
First, let’s create individual graphs for each candidate just for our understanding:
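The notebook cell is again embedded as a gist in the original post; a minimal sketch of what it might look like is shown below. The variable names are assumptions, and the snippet is arranged so that the fourth line is the call that builds the networkx object from the dictionary, which is what the next sentence refers to:

import networkx as nx
languages_mathew = ['JavaScript', 'HTML5', 'PHP OOP', 'CSS', 'SQL', 'MySQL']
mathew_dict = {'Mathew Elliot': languages_mathew}
G_mathew = nx.from_dict_of_lists(mathew_dict, create_using=nx.MultiDiGraph())  # networkx object built from the dictionary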
Here, in the fourth line, we are creating a networkx object from the dictionary that we created earlier. The create_using parameter here specifies the type of graph we need to create.
The different types of graphs and their respective networkx classes are specified in the networkx documentation as follows: Graph (undirected), DiGraph (directed), MultiGraph (undirected, parallel edges allowed) and MultiDiGraph (directed, parallel edges allowed).
We will be using the MultiDiGraph since we need to represent more than one resume in a single graph and need to show directed edges for a clearer representation.
Next, we will use matplotlib to create a diagram and draw the graphs using networkx. Before drawing the knowledge graph, we have to first specify the layout of the graph. The networkx library provides about 7 layouts using which you can draw your graph depending on your use case. You can explore these layouts here.
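Continuing the sketch above (again an illustration rather than the original gist), a layout can be chosen and the graph for a single candidate drawn like this:

import matplotlib.pyplot as plt

pos = nx.spring_layout(G_mathew)  # spring_layout is one of the layouts networkx provides
plt.figure(figsize=(8, 6))
nx.draw(G_mathew, pos=pos, with_labels=True,
        node_color='lightgreen', node_size=2000, font_size=9)
plt.show()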
The above code will create a graph for Mathew and his skills as shown below:
Similarly, we can create knowledge graphs for the other 2 candidates.
Now, let’s try and visualize the graphs as one connected graph:
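The combined graph can be built from a single dictionary with one key per candidate. In the sketch below the second and third candidate names are placeholders (only 'Mathew Elliot' is confirmed by the outputs in this post), and the comma-separated skills from the second resume are split into individual entries for readability:

languages_john = ['Python', 'R', 'CSS', 'PHP', 'Machine Learning']   # hypothetical owner of the second resume
languages_max = ['HTML5', 'CSS', 'SQL', 'Jquery', 'PHP']             # hypothetical owner of the third resume

all_resumes = {
    'Mathew Elliot': languages_mathew,
    'John Smith': languages_john,
    'Max Taylor': languages_max,
}
G = nx.from_dict_of_lists(all_resumes, create_using=nx.MultiDiGraph())
pos = nx.spring_layout(G)

plt.figure(figsize=(10, 8))
nx.draw(G, pos=pos, with_labels=True, node_color='skyblue',
        edge_cmap=plt.cm.Blues, node_size=2500, font_size=9)
plt.show()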
Here, we use the draw() function to draw the graph using the specified parameters:
with_labels - Using this flag, one can specify whether to show labels in the graph or not.
node_color - Using this parameter, the color of the nodes can be specified.
edge_cmap - Using this parameter, the color map of the edges can be specified.
pos - Using this parameter, the layout of the graph can be specified.
node_size - Using this parameter, the size of the nodes can be specified.
font_size - Using this parameter, we define the font size of the text used on the nodes.
The resultant graph will look like this:
As you can see in the above graph, we have now created a knowledge graph with each candidate and the programming languages they know.
Now that we have the knowledge graph ready, we can use this graph to extract information using graph based operations. There is a wide range of mathematical graph operations that can be used to extract data from knowledge graphs.
Traversing a knowledge graph can be tedious if code is written from scratch. To simplify this, the best practice is to store the graph in a graph database such as Neo4j and write queries in a graph query language (Cypher for Neo4j, or SPARQL for RDF triple stores). For the purpose of this blog, we will showcase some simple extraction techniques using the networkx graph we created in Figure 4 above.
Suppose you are a recruiter of a company and you have received these resumes and have created the knowledge graph above. Now, you want to know the most popular programming language candidates have.
Let’s try it out!
In graph terms, we can convert the problem of getting the most popular programming language, to a node with most number of edges pointing to it. Think about it, the more the number of edges connected to a node, the more the number of candidates who know the skill. So we just have to extract the skill node with the maximum number of edges connected to it. That makes it easy!
Let’s see how we can achieve this using the networkx library:
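The helper itself comes from another gist that is not reproduced here. A sketch that is consistent with the outputs shown below, reusing the combined graph G and the placeholder candidate names from the earlier sketch, could look like this: it takes a list of nodes to ignore and returns the highest degree found among the remaining nodes, together with that node.

def get_max_degree_node(nodes_to_exclude, G):
    # Scan every node that is not excluded and keep the one with the most edges
    max_degree, max_node = 0, None
    for node in G.nodes():
        if node in nodes_to_exclude:
            continue
        if G.degree(node) > max_degree:
            max_degree, max_node = G.degree(node), node
    return max_degree, max_node

name_list = ['Mathew Elliot', 'John Smith', 'Max Taylor']
max_skill_degree, max_skill_node = get_max_degree_node(name_list, G)
print(max_skill_node)
print(max_skill_degree)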
Output:
'CSS'
3
Here, we are just using one function from the networkx library called degree(). This function returns the number of edges connected to a particular node. In the function defined above get_max_degree_node(), we are iterating through all skill nodes(all nodes except the name nodes) and getting the maximum degree in the skill nodes.
Similarly, we can use the same function to get the candidate who knows the most programming languages: instead of passing the name list, we pass the skill list, so that the skill nodes are excluded and only the candidate nodes are considered.
skill_list = languages_mathew + languages_john + languages_max
max_languages_degree, max_languages_node = get_max_degree_node(skill_list, G)
print(max_languages_node)
print(max_languages_degree)

Output:
Mathew Elliot
6
These were a couple of examples of how we can use basic code to traverse the knowledge graph and extract information from it. Like mentioned earlier, for more advanced queries, it is recommended to store the knowledge graph in a graph database and then use querying languages such as SPARQL to query.
In the blog, we explain what knowledge graphs are, how they can be constructed and traversed. Knowledge graphs have a wide range of application ranging from conversational agents, search engines, question answering systems etc. They are ideal in a system where a large amount of interconnected data needs to be stored. Google, IBM, Facebook and other technology companies heavily use knowledge graphs in their systems to tie information together and traverse it very efficiently.
At Deep Learning Analytics, we are extremely passionate about using Machine Learning to solve real-world problems. We have helped many businesses deploy innovative AI-based solutions. Contact us through our website here if you see an opportunity to collaborate. This blog was written by Priya Dwivedi and Faizan Khan.
NetworkX Library — https://networkx.github.io/
Intro to Knowledge Graphs — https://towardsdatascience.com/knowledge-graph-bb78055a7884
|
[
{
"code": null,
"e": 479,
"s": 172,
"text": "Knowledge graphs are gaining popularity as a data structure for storing unstructured data. In this blog, we show how key elements of resumes can be stored and visualized as knowledge graphs. We then walk through the knowledge graph of resumes to answer questions. Our code is available on this Github link."
},
{
"code": null,
"e": 815,
"s": 479,
"text": "A Knowledge Graph is a type of graph which enables us to model knowledge of a particular domain by organizing it in an ontology through data interlinking. Machine learning can then be applied on a knowledge graph to get insights. Knowledge Graphs have all the major components of a graph — nodes, edges and their respective attributes."
},
{
"code": null,
"e": 874,
"s": 815,
"text": "Let’s look at the definitions of a node and an edge first:"
},
{
"code": null,
"e": 945,
"s": 874,
"text": "A node is a point in the graph at which lines intersect or branch out."
},
{
"code": null,
"e": 993,
"s": 945,
"text": "An edge is a line joining two nodes in a graph."
},
{
"code": null,
"e": 1201,
"s": 993,
"text": "As seen in the knowledge graph below, Spock, Star Trek, Star Wars, Leonard Nimoy, Science Fiction, Obi-Wan Kenobi, Alec Guinness are nodes and played, characterIn, starredIn and genre are edges of the graph."
},
{
"code": null,
"e": 1581,
"s": 1201,
"text": "The representation above is an example of a Knowledge Graph which stores information of movies and their characters along with the actors who played these characters. Storing information in this form enables us to model complicated pieces of knowledge of particular domains so that we can access specific information when needed through a structured traversal of the given graph."
},
{
"code": null,
"e": 1919,
"s": 1581,
"text": "Information to be structured in knowledge graphs are modeled in the form of triplets. In the triplets, typically there are two entities and a relation which is there between the entities. In the above graph, for example, played(Spock, Leonard Nimoy) is a triplet where the entities are Spock and Leonard Nimoy and the relation is played."
},
{
"code": null,
"e": 2193,
"s": 1919,
"text": "In this blog, we will be creating a Knowledge graph of people and the programming skills they mention on their resume. The idea is to extract skills from the resume and model it in a graph format, so that it becomes easier to navigate and extract specific information from."
},
{
"code": null,
"e": 2257,
"s": 2193,
"text": "For the purpose of this blog, we will be using 3 dummy resumes."
},
{
"code": null,
"e": 2352,
"s": 2257,
"text": "So, let’s dive into code now! The full code and the resumes are shared on my Github repo here."
},
{
"code": null,
"e": 2473,
"s": 2352,
"text": "For the purpose of simplification, we will just be focusing on programming languages that the different candidates know."
},
{
"code": null,
"e": 2589,
"s": 2473,
"text": "First, we extract the programming languages mentioned in the resume using regex. The code for this is shared below."
},
{
"code": null,
"e": 2651,
"s": 2589,
"text": "The output from this is the languages from the 3 programmers:"
},
{
"code": null,
"e": 2793,
"s": 2651,
"text": "Output:['JavaScript', 'HTML5', 'PHP OOP', 'CSS', 'SQL', 'MySQL']['Python,R,CSS,PHP,Machine Learning']['HTML5', 'CSS', 'SQL', 'Jquery’, ‘PHP']"
},
{
"code": null,
"e": 2952,
"s": 2793,
"text": "Here, for the sake of simplification, we assume that the name of the file is the candidate name and that the programming languages line is of the same format."
},
{
"code": null,
"e": 3365,
"s": 2952,
"text": "Now, once we have these, we will construct a knowledge graph from this information using the networkx library. NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. The library contains a wide range of functions to create different types of graphs, traverse them and perform a range of manipulations and calculations on top of them."
},
{
"code": null,
"e": 3585,
"s": 3365,
"text": "In our case, we would be using the from_dict_of_lists() function using which we will create a directed graph from a dictionary containing the names as the keys, and the corresponding programming languages as the values."
},
{
"code": null,
"e": 3670,
"s": 3585,
"text": "First, let’s create individual graphs for each candidate just for our understanding:"
},
{
"code": null,
"e": 3854,
"s": 3670,
"text": "Here, in the fourth line, we are creating a networkx object from the dictionary that we created earlier. The create_using parameter here specifies the type of graph we need to create."
},
{
"code": null,
"e": 3978,
"s": 3854,
"text": "The different types of graphs and their respective networkx classes are specified in the networkx documentation as follows:"
},
{
"code": null,
"e": 4140,
"s": 3978,
"text": "We will be using the MultiDiGraph since we need to represent more than one resume in a single graph and need to show directed edges for a clearer representation."
},
{
"code": null,
"e": 4457,
"s": 4140,
"text": "Next, we will use matplotlib to create a diagram and draw the graphs using networkx. Before drawing the knowledge graph, we have to first specify the layout of the graph. The networkx library provides about 7 layouts using which you can draw your graph depending on your use case. You can explore these layouts here."
},
{
"code": null,
"e": 4534,
"s": 4457,
"text": "The above code will create a graph for Mathew and his skills as shown below:"
},
{
"code": null,
"e": 4603,
"s": 4534,
"text": "Similarly, we can create knowledge graph for the other 2 candidates."
},
{
"code": null,
"e": 4667,
"s": 4603,
"text": "Now, let’s try and visualize the graphs as one connected graph:"
},
{
"code": null,
"e": 4750,
"s": 4667,
"text": "Here, we use the draw() function to draw the graph using the specified parameters:"
},
{
"code": null,
"e": 4842,
"s": 4750,
"text": "With_labels — Using this flag, one can specify whether to show labels in the graphs or not."
},
{
"code": null,
"e": 4918,
"s": 4842,
"text": "Node_color — Using this parameter, the color of the nodes can be specified."
},
{
"code": null,
"e": 4996,
"s": 4918,
"text": "Edge_cmap -Using this parameter, the color map of the graph can be specified."
},
{
"code": null,
"e": 5065,
"s": 4996,
"text": "Pos -Using this parameter, the layout of the graph can be specified."
},
{
"code": null,
"e": 5139,
"s": 5065,
"text": "Node_size — Using this parameter, the size of the nodes can be specified."
},
{
"code": null,
"e": 5228,
"s": 5139,
"text": "Font_size — Using this parameter, we define the font size of the text used on the nodes."
},
{
"code": null,
"e": 5269,
"s": 5228,
"text": "The resultant graph will look like this:"
},
{
"code": null,
"e": 5403,
"s": 5269,
"text": "As you can see in the above graph, we have now created a knowledge graph with each candidate and the programming languages they know."
},
{
"code": null,
"e": 5633,
"s": 5403,
"text": "Now that we have the knowledge graph ready, we can use this graph to extract information using graph based operations. There is a wide range of mathematical graph operations that can be used to extract data from knowledge graphs."
},
{
"code": null,
"e": 5964,
"s": 5633,
"text": "Traversing a knowledge graph can be tedious if code is written from scratch. To simplify this, the best practice is to store the graph in a graph database like Neo4j and write queries in SPARQL. For the purpose of this blog, we will showcase some simple extraction techniques using the networkx graph we created in Figure 4 above."
},
{
"code": null,
"e": 6162,
"s": 5964,
"text": "Suppose you are a recruiter of a company and you have received these resumes and have created the knowledge graph above. Now, you want to know the most popular programming language candidates have."
},
{
"code": null,
"e": 6180,
"s": 6162,
"text": "Let’s try it out!"
},
{
"code": null,
"e": 6557,
"s": 6180,
"text": "In graph terms, we can convert the problem of getting the most popular programming language, to a node with most number of edges pointing to it. Think about it, the more the number of edges connected to a node, the more the number of candidates who know the skill. So we just have to extract the skill node with the maximum number of edges connected to it. That makes it easy!"
},
{
"code": null,
"e": 6619,
"s": 6557,
"text": "Let’s see how we can achieve this using the networkx library:"
},
{
"code": null,
"e": 6633,
"s": 6619,
"text": "Output:'CSS'3"
},
{
"code": null,
"e": 6965,
"s": 6633,
"text": "Here, we are just using one function from the networkx library called degree(). This function returns the number of edges connected to a particular node. In the function defined above get_max_degree_node(), we are iterating through all skill nodes(all nodes except the name nodes) and getting the maximum degree in the skill nodes."
},
{
"code": null,
"e": 7176,
"s": 6965,
"text": "Similarly, we can use the same function to get the candidate who knows the most number of programming languages, instead of passing the name list, we pass the skill list, which has to be removed from all nodes."
},
{
"code": null,
"e": 7384,
"s": 7176,
"text": "skill_list = languages_mathew+languages_john+languages_maxmax_languages_degree, max_languages_node = get_max_degree_node(skill_list,G)print(max_languages_node)print(max_languages_degree)Output:Mathew Elliot6"
},
{
"code": null,
"e": 7685,
"s": 7384,
"text": "These were a couple of examples of how we can use basic code to traverse the knowledge graph and extract information from it. Like mentioned earlier, for more advanced queries, it is recommended to store the knowledge graph in a graph database and then use querying languages such as SPARQL to query."
},
{
"code": null,
"e": 8165,
"s": 7685,
"text": "In the blog, we explain what knowledge graphs are, how they can be constructed and traversed. Knowledge graphs have a wide range of application ranging from conversational agents, search engines, question answering systems etc. They are ideal in a system where a large amount of interconnected data needs to be stored. Google, IBM, Facebook and other technology companies heavily use knowledge graphs in their systems to tie information together and traverse it very efficiently."
},
{
"code": null,
"e": 8483,
"s": 8165,
"text": "At Deep Learning Analytics, we are extremely passionate about using Machine Learning to solve real-world problems. We have helped many businesses deploy innovative AI-based solutions. Contact us through our website here if you see an opportunity to collaborate. This blog was written by Priya Dwivedi and Faizan Khan."
},
{
"code": null,
"e": 8530,
"s": 8483,
"text": "NetworkX Library — https://networkx.github.io/"
}
] |
Program to find goal parser interpretation command in Python
|
Suppose we have a Goal Parser that can interpret a given string command. A command consists of
An alphabet "G",
Opening and closing parenthesis "()"
and/or "(al)" in some order.
Our Goal Parser will interpret "G" as the string "G", "()" as "o", and "(al)" as the string "al". Finally interpreted strings are then concatenated in the original order. So if we have string command, we have to find the Goal Parser's interpretation of command.
So, if the input is like command = "G()()()(al)(al)", then the output will be Goooalal.
To solve this, we will follow these steps −
s := blank string
for i in range 0 to size of command - 1, do
   if command[i] is not same as "(" and command[i] is not same as ")", then
      s := s concatenate command[i]
   if i+1 < size of command and command[i] is same as "(" and command[i+1] is same as ")", then
      s := s concatenate 'o'
   if command[i] is same as "(", then
      go for next iteration
   if command[i] is same as ")", then
      go for next iteration
return s
Let us see the following implementation to get better understanding −
def solve(command):
   # Build the interpreted string character by character
   s=""
   for i in range(len(command)):
      # Any character other than the parentheses ("G", "a", "l") is kept as is
      if command[i]!="(" and command[i]!=")":
         s+=command[i]
      # A "()" pair is interpreted as "o" (check the bound before reading i+1)
      if i+1<len(command) and command[i]=="(" and command[i+1]==")":
         s+='o'
      # Parentheses themselves are never copied to the output
      if command[i]=="(":
         continue
      if command[i]==")":
         continue
   return s
command = "G()()()(al)(al)"
print(solve(command))
"G()()()(al)(al)"
Goooalal
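Because the three tokens "G", "()" and "(al)" never overlap, the same interpretation can also be produced with two string replacements. This is not part of the solution above, just a short alternative sketch for comparison:

def solve_with_replace(command):
    # "()" becomes "o" and "(al)" becomes "al"; every other character ("G") stays as is
    return command.replace("()", "o").replace("(al)", "al")

print(solve_with_replace("G()()()(al)(al)"))   # prints Goooalal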
|
[
{
"code": null,
"e": 1157,
"s": 1062,
"text": "Suppose we have a Goal Parser that can interpret a given string command. A command consists of"
},
{
"code": null,
"e": 1174,
"s": 1157,
"text": "An alphabet \"G\","
},
{
"code": null,
"e": 1191,
"s": 1174,
"text": "An alphabet \"G\","
},
{
"code": null,
"e": 1228,
"s": 1191,
"text": "Opening and closing parenthesis \"()\""
},
{
"code": null,
"e": 1265,
"s": 1228,
"text": "Opening and closing parenthesis \"()\""
},
{
"code": null,
"e": 1294,
"s": 1265,
"text": "and/or \"(al)\" in some order."
},
{
"code": null,
"e": 1323,
"s": 1294,
"text": "and/or \"(al)\" in some order."
},
{
"code": null,
"e": 1585,
"s": 1323,
"text": "Our Goal Parser will interpret \"G\" as the string \"G\", \"()\" as \"o\", and \"(al)\" as the string \"al\". Finally interpreted strings are then concatenated in the original order. So if we have string command, we have to find the Goal Parser's interpretation of command."
},
{
"code": null,
"e": 1673,
"s": 1585,
"text": "So, if the input is like command = \"G()()()(al)(al)\", then the output will be Goooalal."
},
{
"code": null,
"e": 1717,
"s": 1673,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1734,
"s": 1717,
"text": "s:= blank string"
},
{
"code": null,
"e": 1751,
"s": 1734,
"text": "s:= blank string"
},
{
"code": null,
"e": 2116,
"s": 1751,
"text": "for i in range 0 to size of command - 1, doif command[i] is not same as \"(\" and command[i] is not same as \")\", thens := s concatenate command[i]if command[i] is same as \"(\" and command[i+1] is same as \")\" and i+1<len(command) , thens := s concatenate 'o'if command[i] is same as \"(\", thengo for next iterationif command[i] is same as \")\", thengo for next iteration"
},
{
"code": null,
"e": 2160,
"s": 2116,
"text": "for i in range 0 to size of command - 1, do"
},
{
"code": null,
"e": 2262,
"s": 2160,
"text": "if command[i] is not same as \"(\" and command[i] is not same as \")\", thens := s concatenate command[i]"
},
{
"code": null,
"e": 2335,
"s": 2262,
"text": "if command[i] is not same as \"(\" and command[i] is not same as \")\", then"
},
{
"code": null,
"e": 2365,
"s": 2335,
"text": "s := s concatenate command[i]"
},
{
"code": null,
"e": 2395,
"s": 2365,
"text": "s := s concatenate command[i]"
},
{
"code": null,
"e": 2506,
"s": 2395,
"text": "if command[i] is same as \"(\" and command[i+1] is same as \")\" and i+1<len(command) , thens := s concatenate 'o'"
},
{
"code": null,
"e": 2595,
"s": 2506,
"text": "if command[i] is same as \"(\" and command[i+1] is same as \")\" and i+1<len(command) , then"
},
{
"code": null,
"e": 2618,
"s": 2595,
"text": "s := s concatenate 'o'"
},
{
"code": null,
"e": 2641,
"s": 2618,
"text": "s := s concatenate 'o'"
},
{
"code": null,
"e": 2697,
"s": 2641,
"text": "if command[i] is same as \"(\", thengo for next iteration"
},
{
"code": null,
"e": 2732,
"s": 2697,
"text": "if command[i] is same as \"(\", then"
},
{
"code": null,
"e": 2754,
"s": 2732,
"text": "go for next iteration"
},
{
"code": null,
"e": 2776,
"s": 2754,
"text": "go for next iteration"
},
{
"code": null,
"e": 2832,
"s": 2776,
"text": "if command[i] is same as \")\", thengo for next iteration"
},
{
"code": null,
"e": 2867,
"s": 2832,
"text": "if command[i] is same as \")\", then"
},
{
"code": null,
"e": 2889,
"s": 2867,
"text": "go for next iteration"
},
{
"code": null,
"e": 2911,
"s": 2889,
"text": "go for next iteration"
},
{
"code": null,
"e": 2920,
"s": 2911,
"text": "return s"
},
{
"code": null,
"e": 2929,
"s": 2920,
"text": "return s"
},
{
"code": null,
"e": 2999,
"s": 2929,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 3010,
"s": 2999,
"text": " Live Demo"
},
{
"code": null,
"e": 3376,
"s": 3010,
"text": "def solve(command):\n s=\"\"\n for i in range(len(command)):\n if command[i]!=\"(\" and command[i]!=\")\":\n s+=command[i]\n if command[i]==\"(\" and command[i+1]==\")\" and i+1<len(command):\n s+='o'\n if command[i]==\"(\":\n continue\n if command[i]==\")\":\n continue\n return s\n\ncommand = \"G()()()(al)(al)\"\nprint(solve(command))"
},
{
"code": null,
"e": 3394,
"s": 3376,
"text": "\"G()()()(al)(al)\""
},
{
"code": null,
"e": 3403,
"s": 3394,
"text": "Goooalal"
}
] |
Python 3 - String split() Method
|
The split() method returns a list of all the words in the string, using str as the separator (splits on all whitespace if left unspecified), optionally limiting the number of splits to num.
Following is the syntax for split() method −
str.split(str="", num = string.count(str)).
str − This is any delimiter; by default it is whitespace.
num − This is the maximum number of splits to be made.
This method returns a list of substrings.
The following example shows the usage of split() method.
#!/usr/bin/python3
str = "this is string example....wow!!!"
print (str.split( ))
print (str.split('i',1))
print (str.split('w'))
When we run above program, it produces the following result −
['this', 'is', 'string', 'example....wow!!!']
['th', 's is string example....wow!!!']
['this is string example....', 'o', '!!!']
|
[
{
"code": null,
"e": 2530,
"s": 2340,
"text": "The split() method returns a list of all the words in the string, using str as the separator (splits on all whitespace if left unspecified), optionally limiting the number of splits to num."
},
{
"code": null,
"e": 2575,
"s": 2530,
"text": "Following is the syntax for split() method −"
},
{
"code": null,
"e": 2620,
"s": 2575,
"text": "str.split(str=\"\", num = string.count(str)).\n"
},
{
"code": null,
"e": 2673,
"s": 2620,
"text": "str − This is any delimeter, by default it is space."
},
{
"code": null,
"e": 2726,
"s": 2673,
"text": "str − This is any delimeter, by default it is space."
},
{
"code": null,
"e": 2767,
"s": 2726,
"text": "num − this is number of lines to be made"
},
{
"code": null,
"e": 2808,
"s": 2767,
"text": "num − this is number of lines to be made"
},
{
"code": null,
"e": 2845,
"s": 2808,
"text": "This method returns a list of lines."
},
{
"code": null,
"e": 2902,
"s": 2845,
"text": "The following example shows the usage of split() method."
},
{
"code": null,
"e": 3032,
"s": 2902,
"text": "#!/usr/bin/python3\n\nstr = \"this is string example....wow!!!\"\nprint (str.split( ))\nprint (str.split('i',1))\nprint (str.split('w'))"
},
{
"code": null,
"e": 3094,
"s": 3032,
"text": "When we run above program, it produces the following result −"
},
{
"code": null,
"e": 3224,
"s": 3094,
"text": "['this', 'is', 'string', 'example....wow!!!']\n['th', 's is string example....wow!!!']\n['this is string example....', 'o', '!!!']\n"
},
{
"code": null,
"e": 3261,
"s": 3224,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 3277,
"s": 3261,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 3310,
"s": 3277,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3329,
"s": 3310,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3364,
"s": 3329,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 3386,
"s": 3364,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3420,
"s": 3386,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3448,
"s": 3420,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3483,
"s": 3448,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3497,
"s": 3483,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3530,
"s": 3497,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3547,
"s": 3530,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 3554,
"s": 3547,
"text": " Print"
},
{
"code": null,
"e": 3565,
"s": 3554,
"text": " Add Notes"
}
] |
What is Cross Site Scripting (XSS) ?
|
28 Mar, 2022
Cross Site Scripting (XSS) is a vulnerability in a web application that allows a third party to execute a script in the user’s browser on behalf of the web application. Cross-site Scripting is one of the most prevalent vulnerabilities present on the web today. The exploitation of XSS against a user can lead to various consequences such as account compromise, account deletion, privilege escalation, malware infection and many more. In its initial days, it was called CSS and it was not exactly what it is today. Initially, it was discovered that a malicious website could utilize JavaScript to read data from other website’s responses by embedding them in an iframe, run scripts and modify page contents. It was called CSS (Cross Site Scripting) then. The definition changed when Netscape introduced the Same Origin Policy and cross-site scripting was restricted from enabling cross-origin response reading. Soon it was recommended to call this vulnerability as XSS to avoid confusion with Cascading Style Sheets(CSS). The possibility of getting XSSed arises when a website does not properly handle the input provided to it from a user before inserting it into the response. In such a case, a crafted input can be given that when embedded in the response acts as a JS code block and is executed by the browser. Depending on the context, there are two types of XSS –
Reflected XSS: If the input has to be provided each time to execute, such XSS is called reflected. These attacks are mostly carried out by delivering a payload directly to the victim. Victim requests a page with a request containing the payload and the payload comes embedded in the response as a script. An example of reflected XSS is XSS in the search field.
Stored XSS: When the response containing the payload is stored on the server in such a way that the script gets executed on every visit without submission of payload, then it is identified as stored XSS. An example of stored XSS is XSS in the comment thread.
There is another type of XSS called DOM based XSS and its instances are either reflected or stored. DOM-based XSS arises when user-supplied data is provided to the DOM objects without proper sanitizing. An example of code vulnerable to XSS is below, notice the variables firstname and lastname :
php
<?php
if(isset($_GET["firstname"]) && isset($_GET["lastname"]))
{
    $firstname = $_GET["firstname"];
    $lastname = $_GET["lastname"];

    if($firstname == "" or $lastname == "")
    {
        echo "<font color=\"red\">Please enter both fields...</font>";
    }
    else
    {
        echo "Welcome " . $firstname. " " . $lastname;
    }
}
?>
User-supplied input is directly added to the response without any sanity check. An attacker can input something like –
html
<script> alert(1) </script>
and it will be rendered as JavaScript. There are two aspects of XSS (and any security issue) –
Developer: If you are a developer, the focus would be secure development to avoid having any security holes in the product. You do not need to dive very deep into the exploitation aspect, just have to use tools and libraries while applying the best practices for secure code development as prescribed by security researchers. Some resources for developers are – a). OWASP Encoding Project : It is a library written in Java that is developed by the Open Web Application Security Project(OWASP). It is free, open source and easy to use. b). The “X-XSS-Protection” Header : This header instructs the browser to activate the inbuilt XSS auditor to identify and block any XSS attempts against the user. c). The XSS Protection Cheat Sheet by OWASP : This resource enlists rules to be followed during development with proper examples. The rules cover a large variety of cases where a developer can miss something that can lead to the website being vulnerable to XSS. d). Content Security Policy : It is a stand-alone solution for XSS like problems, it instructs the browser about “safe” sources apart from which no script should be executed from any origin.
Security researchers: Security researchers, on the other hand, would like similar resources to help them hunt down instances where the developer became lousy and left an entry point. Researchers can make use of – a). CheatSheets – 1. XSS filter evasion cheat sheet by OWASP. 2. XSS cheat sheet by Rodolfo Assis. 3. XSS cheat sheet by Veracode. b). Practice Labs – 1. bWAPP 2. DVWA(Damn vulnerable Web Application) 3. prompt.ml 4. CTFs c). Reports – 1. Hackerone Hactivity 2. Personal blogs of eminent security researchers like Jason Haddix, Geekboy, Prakhar Prasad, Dafydd Stuttard(Portswigger) etc.
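To make the output-encoding advice above concrete, here is a minimal sketch (not the author's code, and not a substitute for the OWASP guidance) of how the vulnerable snippet shown earlier could encode user input before echoing it back. htmlspecialchars() converts characters such as <, >, & and quotes into HTML entities so the browser renders them as text instead of executing them as script:

<?php
if(isset($_GET["firstname"]) && isset($_GET["lastname"]))
{
    // Encode the user-controlled values before they are reflected
    $firstname = htmlspecialchars($_GET["firstname"], ENT_QUOTES, 'UTF-8');
    $lastname  = htmlspecialchars($_GET["lastname"], ENT_QUOTES, 'UTF-8');

    if($firstname == "" or $lastname == "")
    {
        echo "<font color=\"red\">Please enter both fields...</font>";
    }
    else
    {
        echo "Welcome " . $firstname . " " . $lastname;
    }
}
?>

Encoding on output addresses the reflected case shown above; input validation and a Content Security Policy, as mentioned in the developer resources, remain useful additional layers.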
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n28 Mar, 2022"
},
{
"code": null,
"e": 1397,
"s": 28,
"text": "Cross Site Scripting (XSS) is a vulnerability in a web application that allows a third party to execute a script in the user’s browser on behalf of the web application. Cross-site Scripting is one of the most prevalent vulnerabilities present on the web today. The exploitation of XSS against a user can lead to various consequences such as account compromise, account deletion, privilege escalation, malware infection and many more. In its initial days, it was called CSS and it was not exactly what it is today. Initially, it was discovered that a malicious website could utilize JavaScript to read data from other website’s responses by embedding them in an iframe, run scripts and modify page contents. It was called CSS (Cross Site Scripting) then. The definition changed when Netscape introduced the Same Origin Policy and cross-site scripting was restricted from enabling cross-origin response reading. Soon it was recommended to call this vulnerability as XSS to avoid confusion with Cascading Style Sheets(CSS). The possibility of getting XSSed arises when a website does not properly handle the input provided to it from a user before inserting it into the response. In such a case, a crafted input can be given that when embedded in the response acts as a JS code block and is executed by the browser. Depending on the context, there are two types of XSS –"
},
{
"code": null,
"e": 2018,
"s": 1397,
"text": "Reflected XSS: If the input has to be provided each time to execute, such XSS is called reflected. These attacks are mostly carried out by delivering a payload directly to the victim. Victim requests a page with a request containing the payload and the payload comes embedded in the response as a script. An example of reflected XSS is XSS in the search field. Stored XSS: When the response containing the payload is stored on the server in such a way that the script gets executed on every visit without submission of payload, then it is identified as stored XSS. An example of stored XSS is XSS in the comment thread. "
},
{
"code": null,
"e": 2380,
"s": 2018,
"text": "Reflected XSS: If the input has to be provided each time to execute, such XSS is called reflected. These attacks are mostly carried out by delivering a payload directly to the victim. Victim requests a page with a request containing the payload and the payload comes embedded in the response as a script. An example of reflected XSS is XSS in the search field. "
},
{
"code": null,
"e": 2640,
"s": 2380,
"text": "Stored XSS: When the response containing the payload is stored on the server in such a way that the script gets executed on every visit without submission of payload, then it is identified as stored XSS. An example of stored XSS is XSS in the comment thread. "
},
{
"code": null,
"e": 2937,
"s": 2640,
"text": "There is another type of XSS called DOM based XSS and its instances are either reflected or stored. DOM-based XSS arises when user-supplied data is provided to the DOM objects without proper sanitizing. An example of code vulnerable to XSS is below, notice the variables firstname and lastname : "
},
{
"code": null,
"e": 2941,
"s": 2937,
"text": "php"
},
{
"code": "<?php if(isset($_GET[\"firstname\"]) && isset($_GET[\"lastname\"])) { $firstname = $_GET[\"firstname\"]; $lastname = $_GET[\"lastname\"]; if($firstname == \"\" or $lastname == \"\") { echo \"<font color=\\\"red\\\">Please enter both fields...</font>\"; } else { echo \"Welcome \" . $firstname. \" \" . $lastname; } } ?>",
"e": 3347,
"s": 2941,
"text": null
},
{
"code": null,
"e": 3463,
"s": 3347,
"text": "User-supplied input is directly added in the response without any sanity check. Attacker an input something like – "
},
{
"code": null,
"e": 3468,
"s": 3463,
"text": "html"
},
{
"code": "<script> alert(1) </script>",
"e": 3496,
"s": 3468,
"text": null
},
{
"code": null,
"e": 3593,
"s": 3496,
"text": "and it will be rendered as JavaScript. There are two aspects of XSS (and any security issue) –"
},
{
"code": null,
"e": 5343,
"s": 3593,
"text": "Developer: If you are a developer, the focus would be secure development to avoid having any security holes in the product. You do not need to dive very deep into the exploitation aspect, just have to use tools and libraries while applying the best practices for secure code development as prescribed by security researchers. Some resources for developers are – a). OWASP Encoding Project : It is a library written in Java that is developed by the Open Web Application Security Project(OWASP). It is free, open source and easy to use. b). The “X-XSS-Protection” Header : This header instructs the browser to activate the inbuilt XSS auditor to identify and block any XSS attempts against the user. c). The XSS Protection Cheat Sheet by OWASP : This resource enlists rules to be followed during development with proper examples. The rules cover a large variety of cases where a developer can miss something that can lead to the website being vulnerable to XSS. d). Content Security Policy : It is a stand-alone solution for XSS like problems, it instructs the browser about “safe” sources apart from which no script should be executed from any origin.Security researchers: Security researchers, on the other hand, would like similar resources to help them hunt down instances where the developer became lousy and left an entry point. Researchers can make use of – a). CheatSheets – 1. XSS filter evasion cheat sheet by OWASP. 2. XSS cheat sheet by Rodolfo Assis. 3. XSS cheat sheet by Veracode. b). Practice Labs – 1. bWAPP 2. DVWA(Damn vulnerable Web Application) 3. prompt.ml 4. CTFs c). Reports – 1. Hackerone Hactivity 2. Personal blogs of eminent security researchers like Jason Haddix, Geekboy, Prakhar Prasad, Dafydd Stuttard(Portswigger) etc."
},
{
"code": null,
"e": 6494,
"s": 5343,
"text": "Developer: If you are a developer, the focus would be secure development to avoid having any security holes in the product. You do not need to dive very deep into the exploitation aspect, just have to use tools and libraries while applying the best practices for secure code development as prescribed by security researchers. Some resources for developers are – a). OWASP Encoding Project : It is a library written in Java that is developed by the Open Web Application Security Project(OWASP). It is free, open source and easy to use. b). The “X-XSS-Protection” Header : This header instructs the browser to activate the inbuilt XSS auditor to identify and block any XSS attempts against the user. c). The XSS Protection Cheat Sheet by OWASP : This resource enlists rules to be followed during development with proper examples. The rules cover a large variety of cases where a developer can miss something that can lead to the website being vulnerable to XSS. d). Content Security Policy : It is a stand-alone solution for XSS like problems, it instructs the browser about “safe” sources apart from which no script should be executed from any origin."
},
{
"code": null,
"e": 7094,
"s": 6494,
"text": "Security researchers: Security researchers, on the other hand, would like similar resources to help them hunt down instances where the developer became lousy and left an entry point. Researchers can make use of – a). CheatSheets – 1. XSS filter evasion cheat sheet by OWASP. 2. XSS cheat sheet by Rodolfo Assis. 3. XSS cheat sheet by Veracode. b). Practice Labs – 1. bWAPP 2. DVWA(Damn vulnerable Web Application) 3. prompt.ml 4. CTFs c). Reports – 1. Hackerone Hactivity 2. Personal blogs of eminent security researchers like Jason Haddix, Geekboy, Prakhar Prasad, Dafydd Stuttard(Portswigger) etc."
},
{
"code": null,
"e": 7109,
"s": 7094,
"text": "sagar0719kumar"
},
{
"code": null,
"e": 7115,
"s": 7109,
"text": "GBlog"
},
{
"code": null,
"e": 7213,
"s": 7115,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 7268,
"s": 7213,
"text": "GEEK-O-LYMPICS 2022 - May The Geeks Force Be With You!"
},
{
"code": null,
"e": 7305,
"s": 7268,
"text": "Geek Streak - 24 Days POTD Challenge"
},
{
"code": null,
"e": 7344,
"s": 7305,
"text": "How to Learn Data Science in 10 weeks?"
},
{
"code": null,
"e": 7382,
"s": 7344,
"text": "What is Hashing | A Complete Tutorial"
},
{
"code": null,
"e": 7446,
"s": 7382,
"text": "What is Data Structure: Types, Classifications and Applications"
},
{
"code": null,
"e": 7512,
"s": 7446,
"text": "GeeksforGeeks Jobathon - Are You Ready For This Hiring Challenge?"
},
{
"code": null,
"e": 7583,
"s": 7512,
"text": "GeeksforGeeks Job-A-Thon Exclusive - Hiring Challenge For Amazon Alexa"
},
{
"code": null,
"e": 7625,
"s": 7583,
"text": "Roadmap to Learn JavaScript For Beginners"
},
{
"code": null,
"e": 7663,
"s": 7625,
"text": "Axios in React: A Guide for Beginners"
}
] |
Hash-Buster v3.0 – Crack Hashes In Seconds
|
23 Sep, 2021
Hashing is a cryptographic method that can be used to authenticate the authenticity and integrity of various types of information. It is widely used in authentication systems to circumvent storing plaintext passwords in databases but is also used to authenticate files, documents, and other types of data. This makes the communication between sender and receiver more secure. Although it’s not totally secure, there are various tools that can crack the hashes and get the results in plain text format. Hash-Buster is an automated tool developed in the Python Language which cracks all types of hashes in seconds.
The Hash-Buster tool can automatically detect the type of hash, and it can also identify hashes from a directory recursively. Hash-Buster is available on the GitHub platform; it is free and open source to use.
Step 1: Use the following command to install the tool in your Kali Linux operating system.
git clone https://github.com/s0md3v/Hash-Buster.git
Step 2: Now use the following command to move into the directory of the tool. You have to move in the directory in order to run the tool.
cd Hash-Buster
Step 3: You are in the directory of the Hash-Buster. Now install the tool by using the following command.
sudo make install
Step 4: Now use the following command to run the tool and check the help section.
buster -h
Example 1: Cracking a single hash
buster -s a6eb56f80be8a120436d6f1c9b8d87ca
In this example, we are cracking the single hash which is inputted by using the -s tag.
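If you want a hash of your own to experiment with instead of the sample value above, Python's standard hashlib module can generate one. A quick sketch follows; the plaintext is arbitrary, and whether Hash-Buster cracks it depends on whether that plaintext is known to the lookup sources the tool relies on:

import hashlib

# Generate digests for an arbitrary plaintext; feed the hex string to "buster -s"
plaintext = "geeksforgeeks"
print(hashlib.md5(plaintext.encode()).hexdigest())     # 32 hex characters
print(hashlib.sha256(plaintext.encode()).hexdigest())  # 64 hex characters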
Example 2: Finding hashes from a directory
buster -d /home/kali/Videos/
In this example, the tool will search the hashes in the directory and it will be cracked.
Cracked hash results are displayed in the below screenshot.
Example 3: Cracking hashes from a file
buster -f hashes.txt
In this example, we will be cracking multiple hashes which are specified in the hashes.txt file.
Results are displayed in the below screenshot.
Example 4: Specifying number of threads
buster -s a6eb56f80be8a120436d6f1c9b8d87ca -t 10
In this example, we are specifying the value of threads for faster execution.
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n23 Sep, 2021"
},
{
"code": null,
"e": 643,
"s": 28,
"text": "Hashing is a cryptographic method that can be used to authenticate the authenticity and integrity of various types of information. It is widely used in authentication systems to circumvent storing plaintext passwords in databases but is also used to authenticate files, documents, and other types of data. This makes the communication between sender and receiver more secure. Although it’s not totally secure, there are various tools that can crack the hashes and get the results in plain text format. Hash-Buster is an automated tool developed in the Python Language which cracks all types of hashes in seconds. "
},
{
"code": null,
"e": 851,
"s": 643,
"text": "Hash-Buster tool can automatically detect the type of hash also tool can identify hashes from a directory, recursively. Hash-buster tool is available on the GitHub platform, it’s free and open-source to use."
},
{
"code": null,
"e": 942,
"s": 851,
"text": "Step 1: Use the following command to install the tool in your Kali Linux operating system."
},
{
"code": null,
"e": 994,
"s": 942,
"text": "git clone https://github.com/s0md3v/Hash-Buster.git"
},
{
"code": null,
"e": 1132,
"s": 994,
"text": "Step 2: Now use the following command to move into the directory of the tool. You have to move in the directory in order to run the tool."
},
{
"code": null,
"e": 1147,
"s": 1132,
"text": "cd Hash-Buster"
},
{
"code": null,
"e": 1253,
"s": 1147,
"text": "Step 3: You are in the directory of the Hash-Buster. Now install the tool by using the following command."
},
{
"code": null,
"e": 1271,
"s": 1253,
"text": "sudo make install"
},
{
"code": null,
"e": 1353,
"s": 1271,
"text": "Step 4: Now use the following command to run the tool and check the help section."
},
{
"code": null,
"e": 1363,
"s": 1353,
"text": "buster -h"
},
{
"code": null,
"e": 1397,
"s": 1363,
"text": "Example 1: Cracking a single hash"
},
{
"code": null,
"e": 1440,
"s": 1397,
"text": "buster -s a6eb56f80be8a120436d6f1c9b8d87ca"
},
{
"code": null,
"e": 1528,
"s": 1440,
"text": "In this example, we are cracking the single hash which is inputted by using the -s tag."
},
{
"code": null,
"e": 1571,
"s": 1528,
"text": "Example 2: Finding hashes from a directory"
},
{
"code": null,
"e": 1600,
"s": 1571,
"text": "buster -d /home/kali/Videos/"
},
{
"code": null,
"e": 1690,
"s": 1600,
"text": "In this example, the tool will search the hashes in the directory and it will be cracked."
},
{
"code": null,
"e": 1750,
"s": 1690,
"text": "Cracked hash results are displayed in the below screenshot."
},
{
"code": null,
"e": 1789,
"s": 1750,
"text": "Example 3: Cracking hashes from a file"
},
{
"code": null,
"e": 1810,
"s": 1789,
"text": "buster -f hashes.txt"
},
{
"code": null,
"e": 1907,
"s": 1810,
"text": "In this example, we will be cracking multiple hashes which are specified in the hashes.txt file."
},
{
"code": null,
"e": 1954,
"s": 1907,
"text": "Results are displayed in the below screenshot."
},
{
"code": null,
"e": 1994,
"s": 1954,
"text": "Example 4: Specifying number of threads"
},
{
"code": null,
"e": 2043,
"s": 1994,
"text": "buster -s a6eb56f80be8a120436d6f1c9b8d87ca -t 10"
},
{
"code": null,
"e": 2121,
"s": 2043,
"text": "In this example, we are specifying the value of threads for faster execution."
},
{
"code": null,
"e": 2132,
"s": 2121,
"text": "Kali-Linux"
},
{
"code": null,
"e": 2144,
"s": 2132,
"text": "Linux-Tools"
},
{
"code": null,
"e": 2155,
"s": 2144,
"text": "Linux-Unix"
}
] |
Remove minimum numbers from the array to get minimum OR value
|
03 Mar, 2022
Given an array arr[] of N positive integers, the task is to find the minimum number of elements to be deleted from the array so that the bitwise OR of the array elements gets minimized. You are not allowed to remove all the elements, i.e. at least one element must remain in the array. Examples:
Input: arr[] = {1, 2, 3}
Output: 2
All possible subsets and their OR values are:
a) {1, 2, 3} = 3
b) {1, 2} = 3
c) {2, 3} = 3
d) {1, 3} = 3
e) {1} = 1
f) {2} = 2
g) {3} = 3
The minimum possible OR will be 1 from the subset {1}.
So, we will need to remove 2 elements.

Input: arr[] = {3, 3, 3}
Output: 0
Naive approach: Generate all possible sub-sequences and test which one gives the minimum OR value. Let the length of the largest sub-sequence with the minimum possible OR be L; then the answer will be N – L. This will take exponential time.
Better approach: The minimum value will always be equal to the smallest number present in the array. If this number gets bitwise ORed with any number other than itself, the value of the OR will change and it won't stay minimum anymore. Thus, we need to remove all the elements that are not equal to this minimum element.
Find the smallest number in the array.
Find the frequency of this element in the array say cnt.
The final answer will be N – cnt.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function to return the minimum
// deletions to get minimum OR
int findMinDel(int* arr, int n)
{
    // To store the minimum element
    int min_num = INT_MAX;

    // Find the minimum element
    // from the array
    for (int i = 0; i < n; i++)
        min_num = min(arr[i], min_num);

    // To store the frequency of
    // the minimum element
    int cnt = 0;

    // Find the frequency of the
    // minimum element
    for (int i = 0; i < n; i++)
        if (arr[i] == min_num)
            cnt++;

    // Return the final answer
    return n - cnt;
}

// Driver code
int main()
{
    int arr[] = { 3, 3, 2 };
    int n = sizeof(arr) / sizeof(int);

    cout << findMinDel(arr, n);

    return 0;
}
// Java implementation of the approach
class GFG{

// Function to return the minimum
// deletions to get minimum OR
static int findMinDel(int []arr, int n)
{
    // To store the minimum element
    int min_num = Integer.MAX_VALUE;

    // Find the minimum element
    // from the array
    for (int i = 0; i < n; i++)
        min_num = Math.min(arr[i], min_num);

    // To store the frequency of
    // the minimum element
    int cnt = 0;

    // Find the frequency of the
    // minimum element
    for (int i = 0; i < n; i++)
        if (arr[i] == min_num)
            cnt++;

    // Return the final answer
    return n - cnt;
}

// Driver code
public static void main(String[] args)
{
    int arr[] = { 3, 3, 2 };
    int n = arr.length;

    System.out.print(findMinDel(arr, n));
}
}

// This code is contributed by PrinciRaj1992
# Python3 implementation of the approach
import sys

# Function to return the minimum
# deletions to get minimum OR
def findMinDel(arr, n) :

    # To store the minimum element
    min_num = sys.maxsize;

    # Find the minimum element
    # from the array
    for i in range(n) :
        min_num = min(arr[i], min_num);

    # To store the frequency of
    # the minimum element
    cnt = 0;

    # Find the frequency of the
    # minimum element
    for i in range(n) :
        if (arr[i] == min_num) :
            cnt += 1;

    # Return the final answer
    return n - cnt;

# Driver code
if __name__ == "__main__" :

    arr = [ 3, 3, 2 ];
    n = len(arr);

    print(findMinDel(arr, n));

# This code is contributed by AnkitRai01
// C# implementation of the approach
using System;

class GFG{

// Function to return the minimum
// deletions to get minimum OR
static int findMinDel(int []arr, int n)
{
    // To store the minimum element
    int min_num = int.MaxValue;

    // Find the minimum element
    // from the array
    for (int i = 0; i < n; i++)
        min_num = Math.Min(arr[i], min_num);

    // To store the frequency of
    // the minimum element
    int cnt = 0;

    // Find the frequency of the
    // minimum element
    for (int i = 0; i < n; i++)
        if (arr[i] == min_num)
            cnt++;

    // Return the final answer
    return n - cnt;
}

// Driver code
public static void Main(String[] args)
{
    int []arr = { 3, 3, 2 };
    int n = arr.Length;

    Console.Write(findMinDel(arr, n));
}
}

// This code is contributed by 29AjayKumar
<script> // Javascript implementation of the approach // Function to return the minimum// deletions to get minimum ORfunction findMinDel(arr, n){ // To store the minimum element var min_num = 1000000000; // Find the minimum element // from the array for (var i = 0; i < n; i++) min_num = Math.min(arr[i], min_num); // To store the frequency of // the minimum element var cnt = 0; // Find the frequency of the // minimum element for (var i = 0; i < n; i++) if (arr[i] == min_num) cnt++; // Return the final answer return n - cnt;} // Driver codevar arr = [3, 3, 2];var n = arr.length;document.write( findMinDel(arr, n)); // This code is contributed by noob2000.</script>
2
Time Complexity: O(N)
Auxiliary Space: O(1)
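For small arrays, the formula above can be cross-checked against a brute force over every non-empty subset. The Python sketch below is only an illustration of that check; the helper name brute_min_deletions is hypothetical and not part of the implementations above.

# Brute-force cross-check (exponential): try every non-empty subset,
# find the smallest achievable OR, and keep the largest subset that
# reaches it; the answer is how many elements must be removed.
from itertools import combinations
from functools import reduce

def brute_min_deletions(arr):
    n = len(arr)
    best_or = None   # smallest OR value seen over all non-empty subsets
    best_keep = 0    # size of the largest subset achieving best_or
    for k in range(1, n + 1):
        for subset in combinations(arr, k):
            val = reduce(lambda a, b: a | b, subset)
            if best_or is None or val < best_or:
                best_or, best_keep = val, k
            elif val == best_or and k > best_keep:
                best_keep = k
    return n - best_keep

if __name__ == "__main__":
    print(brute_min_deletions([3, 3, 2]))   # 2, matching findMinDel above
    print(brute_min_deletions([1, 2, 3]))   # 2
    print(brute_min_deletions([3, 3, 3]))   # 0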
Introduction to Syntax Analysis in Compiler Design
|
15 Jun, 2022
When an input string (source code or a program in some language) is given to a compiler, the compiler processes it in several phases, starting from lexical analysis (scans the input and divides it into tokens) to target code generation.
Syntax Analysis or Parsing is the second phase, i.e. after lexical analysis. It checks the syntactical structure of the given input, i.e. whether the given input is in the correct syntax (of the language in which the input has been written) or not. It does so by building a data structure called a parse tree or syntax tree. The parse tree is constructed by using the pre-defined grammar of the language and the input string. If the given input string can be produced with the help of the syntax tree (in the derivation process), the input string is found to be in the correct syntax. If not, an error is reported by the syntax analyzer.
A pushdown automaton (PDA) is used to design the syntax analysis phase.
The Grammar for a Language consists of Production rules.
Example: Suppose Production rules for the Grammar of a language are:
S -> cAd
A -> bc|a
And the input string is “cad”.
Now the parser attempts to construct a syntax tree from this grammar for the given input string. It uses the given production rules and applies those as needed to generate the string. To generate string “cad” it uses the rules as shown in the given diagram:
In step (iii) above, the production rule A->bc was not a suitable one to apply (because the string produced is “cbcd” not “cad”), here the parser needs to backtrack, and apply the next production rule available with A which is shown in step (iv), and the string “cad” is produced.
Thus, the given input can be produced by the given grammar, therefore the input is correct in syntax. But backtrack was needed to get the correct syntax tree, which is really a complex process to implement.
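To make the backtracking idea concrete, here is a minimal Python sketch of a recursive-descent recognizer for the grammar S -> cAd, A -> bc | a. It is only an illustration: trying A's alternatives in order stands in for the parser's backtracking, and the function names are made up for this example rather than taken from any real compiler.

# Minimal recursive-descent recognizer for:
#   S -> c A d
#   A -> bc | a
# parse_A tries A -> bc first and falls back to A -> a, mirroring the
# backtracking step described above (step iii -> step iv).
def parse_A(s, pos):
    for alt in ("bc", "a"):            # alternatives of A, tried in order
        if s.startswith(alt, pos):
            return pos + len(alt)      # position in s right after A
    return -1                          # no alternative of A matches

def parse_S(s):
    if not s.startswith("c"):          # S must begin with 'c'
        return False
    pos = parse_A(s, 1)                # derive A after the leading 'c'
    if pos == -1:
        return False
    return s[pos:] == "d"              # the rest of the input must be 'd'

print(parse_S("cad"))    # True  -- A -> bc fails, parser falls back to A -> a
print(parse_S("cbcd"))   # True  -- A -> bc succeeds directly
print(parse_S("cbd"))    # False -- neither alternative of A fits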
There can be an easier way to solve this, which we shall see in the next article “Concepts of FIRST and FOLLOW sets in Compiler Design”.
Quiz on Syntax Analysis
This article is compiled by Vaibhav Bajpai. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above
How to display bootstrap carousel with three post in each slide?
|
16 Jul, 2019
A Bootstrap Carousel is a slideshow for rotating through a series of contents. It is built with CSS and JavaScript. It works with a series of photos, images, texts, etc. It can be used as an image slider for showcasing a huge amount of content within a small space on the web page, as it works on the principle of dynamic presentations.
Syntax:
<div class="container"> Bootstrap image contents... <div>
Following are the steps to create a Bootstrap carousel:
1. Include the Bootstrap JavaScript, CSS and jQuery library files in the head section; these are pre-loaded and pre-compiled.

<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script src="https://netdna.bootstrapcdn.com/bootstrap/3.0.3/js/bootstrap.min.js"></script>

2. Apply CSS to resize the .carousel Bootstrap card body by using the code segment below.

<style>
    .carousel {
        width: 200px;
        height: 200px;
    }
</style>

3. In the body section, create a division class with the carousel slider using the syntax below.

<div id="carousel-demo" class="carousel slide" data-ride="carousel">

4. In this step, the sliding images are defined in the division tag as under.

<div class="carousel-inner">
    <div class="item">
        <img src="..URL of image">

5. The last step is to add controls to slide the images using the carousel-control class as below.

<a class="left carousel-control" href="#carousel-demo2" data-slide="prev">
    <span class="icon-prev"></span>
</a>
<a class="right carousel-control" href="#carousel-demo2" data-slide="next">
    <span class="icon-next"></span>
</a>
Note: We repeat step 4 as many times as needed, depending on how many images we are providing inside the carousel slider, and step 3 exactly twice to display two sections in the Bootstrap card with the .carousel image slider.
Example 1: Let us implement the above approach and create a Bootstrap card using HTML, CSS and JS with an image slider first, and then move further in the next example to add multiple rows and columns.
<!DOCTYPE html><html> <head> <!--Add pre compiled library files --> <!--Automatics css and js adder--> <!--auto compiled css & Js--> <script type="text/javascript" src="//code.jquery.com/jquery-1.9.1.js"> </script> <link rel="stylesheet" type="text/css" href="/css/result-light.css"> <script type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"> </script> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css"> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> </head> <body> <!-- create a bootstrap card in a container--> <div class="container"> <!--Bootstrap card with slider class--> <div id="carousel-demo" class="carousel slide" data-ride="carousel"> <div class="carousel-inner"> <div class="item"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143850/1382.png"> </div> <div class="item"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143855/223-1.png"> </div> <div class="item active"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143904/391.png"> </div> </div> <!--slider control for card--> <a class="left carousel-control" href="#carousel-demo" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"> </span> </a> <a class="right carousel-control" href="#carousel-demo" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"> </span> </a> </div> </div></body> </html>
Output:
Example 2: Now we extend the implementation of Example 1 to show multiple images in a Bootstrap Carousel all at once, with the slider controls at the ends. Below is the implementation of a styled HTML code fragment.
<!DOCTYPE html><html> <head> <!--auto compiled css & Js--> <script type="text/javascript" src="//code.jquery.com/jquery-1.9.1.js"> </script> <link rel="stylesheet" type="text/css" href="/css/result-light.css"> <script type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"> </script> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css"> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> <!-- JavaScript for adding slider for multiple images shown at once--> <script type="text/javascript"> $(window).load(function() { $(".carousel .item").each(function() { var i = $(this).next(); i.length || (i = $(this).siblings(":first")), i.children(":first-child").clone().appendTo($(this)); for (var n = 0; n < 4; n++)(i = i.next()).length || (i = $(this).siblings(":first")), i.children(":first-child").clone().appendTo($(this)) }) }); </script> </head> <body> <!-- container class for bootstrap card--> <div class="container"> <!-- bootstrap card with row name myCarousel as row 1--> <div class="carousel slide" id="myCarousel"> <div class="carousel-inner"> <div class="item active"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143904/391.png" class="img-responsive"> </a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143850/1382.png" class="img-responsive"> </a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143855/223-1.png" class="img-responsive"> </a> </div> </div> </div> <a class="left carousel-control" href="#myCarousel" data-slide="prev"> <i class="glyphicon glyphicon-chevron-left"> </i> </a> <a class="right carousel-control" href="#myCarousel" data-slide="next"> <i class="glyphicon glyphicon-chevron-right"> </i> </a> </div> </div></body> </html>
Output:
Example 3: Now we create a Bootstrap card with multiple images stacked in rows and columns using a slider. We display multiple posts in each Bootstrap carousel, that is, we display multiple images arranged as a matrix. Below is the implementation of a styled HTML code fragment.
<!DOCTYPE html><html> <head> <!-- auto compiled css and js library files--> <script type="text/javascript" src="//code.jquery.com/jquery-1.9.1.js"> </script> <link rel="stylesheet" type="text/css" href="/css/result-light.css"> <script type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"> </script> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css"> <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> <script type="text/javascript"> <!-- JavaScript to slide images horizontally--> $(window).load(function() { $(".carousel .item").each(function() { var i = $(this).next(); i.length || (i = $(this).siblings(":first")), i.children(":first-child").clone().appendTo($(this)); for (var n = 0; n < 4; n++)(i = i.next()).length || (i = $(this).siblings(":first")), i.children(":first-child").clone().appendTo($(this)) }) }); </script> </head> <body> <!--container class--> <div class="container"> <!-- myCarousel as row 1 in bootstrap card named container--> <div class="carousel slide" id="myCarousel"> <div class="carousel-inner"> <div class="item active"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143855/223-1.png" class="img-responsive"> </a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143904/391.png" class="img-responsive"> </a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143910/457.png" class="img-responsive"> </a> </div> </div> </div> <!-- row 1 of bootstrap card control--> <a class="left carousel-control" href="#myCarousel" data-slide="prev"> <i class="glyphicon glyphicon-chevron-left"> </i> </a> <a class="right carousel-control" href="#myCarousel" data-slide="next"> <i class="glyphicon glyphicon-chevron-right"> </i> </a> </div> <!-- myCarousel as row 2 of bootstrap card named container--> <div class="carousel slide" id="myCarousel2"> <div class="carousel-inner"> <div class="item active"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143910/457.png" class="img-responsive"> </a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143904/391.png" class="img-responsive"></a> </div> </div> <div class="item"> <div class="col-xs-2"> <a href="#"> <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190709143850/1382.png" class="img-responsive"> </a> </div> </div> </div> <!-- myCarousel2, control of row 2 of container class bootstrap card--> <a class="left carousel-control" href="#myCarousel2" data-slide="prev"> <i class="glyphicon glyphicon-chevron-left"> </i> </a> <a class="right carousel-control" href="#myCarousel2" data-slide="next"> <i class="glyphicon glyphicon-chevron-right"> </i> </a> </div> </div></body> </html>
Output:
Permute a string by changing case
|
06 Jul, 2022
Print all permutations of a string keeping the sequence but changing cases. Examples:
Input : ab
Output : AB Ab ab aB
Input : ABC
Output : abc Abc aBc ABc abC AbC aBC ABC
Method 1 (Naive): The naive approach is to traverse the whole string and, for every character, consider two cases: (1) change the case and recur, (2) do not change the case and recur.
C++
Python3
Javascript
// CPP code to print all permutations
// with respect to cases
#include <bits/stdc++.h>
using namespace std;

// Function to generate permutations
void permute(string ip, string op)
{
    // base case
    if (ip.size() == 0) {
        cout << op << " ";
        return;
    }

    // pick lower and uppercase
    char ch = tolower(ip[0]);
    char ch2 = toupper(ip[0]);
    ip = ip.substr(1);

    permute(ip, op + ch);
    permute(ip, op + ch2);
}

// Driver code
int main()
{
    string ip = "aB";
    permute(ip, "");
    return 0;
}
# Python code to print all permutations
# with respect to cases

# function to generate permutations
def permute(ip, op):

    # base case
    if len(ip) == 0:
        print(op, end=" ")
        return

    # pick lower and uppercase
    ch = ip[0].lower()
    ch2 = ip[0].upper()
    ip = ip[1:]

    permute(ip, op + ch)
    permute(ip, op + ch2)

# driver code
def main():
    ip = "AB"
    permute(ip, "")

main()

# This Code is Contributed by Vivek Maddeshiya
<script> // JavaScript code to print all permutations// with respect to cases // Function to generate permutationsfunction permute(ip, op){ // base case if(ip.length == 0){ document.write(op," "); return; } // pick lower and uppercase let ch = ip[0].toLowerCase(); let ch2 = ip[0].toUpperCase(); ip = ip.substring(1) ; permute(ip, op + ch); permute(ip, op + ch2);} // Driver code let ip = "aB" ;permute(ip,""); // This code is contributed by shinjanpatra </script>
ab aB Ab AB
Note: Recursion will generate output in this order only.
Time Complexity: O(n * 2^n), since the recursion produces 2^n output strings of length n each.
Auxiliary Space: O(n) for the recursion depth.
Method 2 (Better): For a string of length n there exist at most 2^n combinations. We can represent each combination as a bitmask. The same idea is discussed in Print all subsequences. Below is the implementation of the above idea:
C++
Java
Python
C#
PHP
Javascript
// CPP code to print all permutations
// with respect to cases
#include <bits/stdc++.h>
using namespace std;

// Function to generate permutations
void permute(string input)
{
    int n = input.length();

    // Number of permutations is 2^n
    int max = 1 << n;

    // Converting string to lower case
    transform(input.begin(), input.end(), input.begin(), ::tolower);

    // Using all subsequences and permuting them
    for (int i = 0; i < max; i++) {

        // If j-th bit is set, we convert it to upper case
        string combination = input;
        for (int j = 0; j < n; j++)
            if (((i >> j) & 1) == 1)
                combination[j] = toupper(input.at(j));

        // Printing current combination
        cout << combination << " ";
    }
}

// Driver code
int main()
{
    permute("ABC");
    return 0;
}
// Java program to print all permutations// with respect to cases public class PermuteString { // Function to generate permutations static void permute(String input) { int n = input.length(); // Number of permutations is 2^n int max = 1 << n; // Converting string to lower case input = input.toLowerCase(); // Using all subsequences and permuting them for (int i = 0; i < max; i++) { char combination[] = input.toCharArray(); // If j-th bit is set, we convert it to upper // case for (int j = 0; j < n; j++) { if (((i >> j) & 1) == 1) combination[j] = (char)(combination[j] - 32); } // Printing current combination System.out.print(combination); System.out.print(" "); } } // Driver Program to test above function public static void main(String[] args) { permute("ABC"); }} // This code is contributed by Sumit Ghosh
# Python code to print all permutations
# with respect to cases

# Function to generate permutations
def permute(inp):
    n = len(inp)

    # Number of permutations is 2^n
    mx = 1 << n

    # Converting string to lower case
    inp = inp.lower()

    # Using all subsequences and permuting them
    for i in range(mx):

        # If j-th bit is set, we convert it to upper case
        combination = [k for k in inp]
        for j in range(n):
            if (((i >> j) & 1) == 1):
                combination[j] = inp[j].upper()

        temp = ""
        # Printing current combination
        for i in combination:
            temp += i

        print temp,

# Driver code
permute("ABC")

# This code is contributed by Sachin Bisht
// C# program to print all permutations// with respect to casesusing System; class PermuteString { // Function to generate // permutations static void permute(String input) { int n = input.Length; // Number of permutations is 2^n int max = 1 << n; // Converting string // to lower case input = input.ToLower(); // Using all subsequences // and permuting them for (int i = 0; i < max; i++) { char[] combination = input.ToCharArray(); // If j-th bit is set, we // convert it to upper case for (int j = 0; j < n; j++) { if (((i >> j) & 1) == 1) combination[j] = (char)(combination[j] - 32); } // Printing current combination Console.Write(combination); Console.Write(" "); } } // Driver Code public static void Main() { permute("ABC"); }} // This code is contributed by Nitin Mittal.
<?php// PHP program to print all permutations// with respect to cases // Function to generate permutationsfunction permute($input){ $n = strlen($input); // Number of permutations is 2^n $max = 1 << $n; // Converting string to lower case $input = strtolower($input); // Using all subsequences and permuting them for($i = 0; $i < $max; $i++) { $combination = $input; // If j-th bit is set, we convert // it to upper case for($j = 0; $j < $n; $j++) { if((($i >> $j) & 1) == 1) $combination[$j] = chr(ord($combination[$j]) - 32); } // Printing current combination echo $combination . " "; }} // Driver Codepermute("ABC"); // This code is contributed by mits?>
<script> // javascript program to print all permutations// with respect to cases // Function to generate permutationsfunction permute(input){ var n = input.length; // Number of permutations is 2^n var max = 1 << n; // Converting string to lower case input = input.toLowerCase(); // Using all subsequences and permuting them for(var i = 0;i < max; i++) { var combination = input.split(''); // If j-th bit is set, we convert it to upper case for(var j = 0; j < n; j++) { if(((i >> j) & 1) == 1) combination[j] = String.fromCharCode(combination[j].charCodeAt(0)-32); } // Printing current combination document.write(combination.join('')); document.write(" "); }} // Driver Program to test above functionpermute("ABC"); // This code contributed by Princi Singh</script>
abc Abc aBc ABc abC AbC aBC ABC
Time Complexity: O(n * 2^n), since we are running a nested loop of size n inside a loop of size 2^n.
Auxiliary Space: O(n)
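The same 2^n enumeration can also be written with Python's itertools.product, which independently picks the lowercase or uppercase form of every character. This is only an illustrative alternative, not part of the implementations above; note that it visits the combinations in a different order than the bitmask loop.

# Enumerate all case permutations: for each character choose either its
# lowercase or uppercase form, giving 2^n combinations for n characters.
from itertools import product

def permute_cases(s):
    choices = [(c.lower(), c.upper()) for c in s]
    return ["".join(p) for p in product(*choices)]

print(" ".join(permute_cases("ABC")))
# abc abC aBc aBC Abc AbC ABc ABC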
Asked in: Facebook. This article is contributed by Aarti_Rathi and Rohit Thapliyal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
Tailwind CSS Width
|
23 Mar, 2022
This utility accepts many values in Tailwind CSS, each exposed as a class, and is the utility-class alternative to the CSS width property. It is used to set the width of elements such as text blocks and images, and the width can be assigned in pixels (px), percentages (%), centimeters (cm), etc.
Width classes:
w-0: This class means the width is set to zero.
w-auto: This class means the width is set according to the content.
w-1/2: This class means the width is set to half of the window.
w-1/3: This class means the width is set to one-third of the window.
w-1/4: This class means the width is set to one-fourth of the window.
w-1/5: This class means the width is set to one-fifth of the window.
w-1/6: This class means the width is set to one-sixth of the window.
w-1/12: This class means the width is set to one-twelfth of the window.
w-full: This class means the width is set to full.
w-screen: This class means the width is set to the screen size.
w-min: This class is used to define the min-width.
w-max: This class is used to define the max-width.
Note: The number in a width class can be changed to any valid rem-based value from the spacing scale, or a fractional class can be used to set a percentage-based width.
Syntax:
<element class="w-0">...</element>
Example:
HTML
<!DOCTYPE html>
<head>
    <link href=
"https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css"
        rel="stylesheet">
</head>

<body class="text-center mx-4 space-y-2">
    <h1 class="text-green-600 text-5xl font-bold">
        GeeksforGeeks
    </h1>
    <b>Tailwind CSS Width Class</b>

    <div class="flex">
        <div class="w-1/2 bg-green-600 h-12 rounded-l-lg">w-1/2</div>
        <div class="w-1/2 bg-green-300 h-12 rounded-r-lg">w-1/2</div>
    </div>
    <div class="flex ...">
        <div class="w-2/5 bg-green-600 h-12 rounded-l-lg">w-2/5</div>
        <div class="w-3/5 bg-green-300 h-12 rounded-r-lg">w-3/5</div>
    </div>
    <div class="flex ...">
        <div class="w-1/3 bg-green-600 h-12 rounded-l-lg">w-1/3</div>
        <div class="w-2/3 bg-green-300 h-12 rounded-r-lg">w-2/3</div>
    </div>
    <div class="flex ...">
        <div class="w-1/4 bg-green-600 h-12 rounded-l-lg">w-1/4</div>
        <div class="w-3/4 bg-green-300 h-12 rounded-r-lg">w-3/4</div>
    </div>
    <div class="flex ...">
        <div class="w-1/5 bg-green-600 h-12 rounded-l-lg">w-1/5</div>
        <div class="w-4/5 bg-green-300 h-12 rounded-r-lg">w-4/5</div>
    </div>
    <div class="flex ...">
        <div class="w-1/6 bg-green-600 h-12 rounded-l-lg">w-1/6</div>
        <div class="w-5/6 bg-green-300 h-12 rounded-r-lg">w-5/6</div>
    </div>
    <div class="w-full bg-green-300 h-12 rounded-lg">w-full</div>
</body>

</html>
Output:
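Beyond the fractional classes used above, the fixed (rem-based) and full-width classes mentioned earlier work the same way. A minimal sketch, assuming the default Tailwind v1 spacing scale (where w-16 is 4rem and w-32 is 8rem):
HTML
<!DOCTYPE html>
<head>
    <link href=
"https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css"
        rel="stylesheet">
</head>

<body class="text-center mx-4 space-y-2">
    <!-- Fixed widths from the default spacing scale -->
    <div class="w-16 bg-green-600 h-12 rounded-lg">w-16</div>
    <div class="w-32 bg-green-400 h-12 rounded-lg">w-32</div>

    <!-- w-full stretches to the parent's full width -->
    <div class="w-full bg-green-300 h-12 rounded-lg">w-full</div>
</body>

</html>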
Tailwind CSS
Tailwind-Sizing
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n23 Mar, 2022"
},
{
"code": null,
"e": 354,
"s": 28,
"text": "This class accepts lots of values in tailwind CSS in which all the properties are covered as in class form. It is the alternative to the CSS Width Property. This class is used to set the width of the text, images. The width can be assigned to the text and images in the form of pixels(px), percentage(%), centimeter(cm) etc. "
},
{
"code": null,
"e": 369,
"s": 354,
"text": "Width classes:"
},
{
"code": null,
"e": 417,
"s": 369,
"text": "w-0: This class means the width is set to zero."
},
{
"code": null,
"e": 484,
"s": 417,
"text": "w-auto: This class means the width is set according to the content"
},
{
"code": null,
"e": 548,
"s": 484,
"text": "w-1/2: This class means the width is set to half of the window."
},
{
"code": null,
"e": 617,
"s": 548,
"text": "w-1/3: This class means the width is set to one-third of the window."
},
{
"code": null,
"e": 687,
"s": 617,
"text": "w-1/4: This class means the width is set to one-fourth of the window."
},
{
"code": null,
"e": 756,
"s": 687,
"text": "w-1/5: This class means the width is set to one-fifth of the window."
},
{
"code": null,
"e": 825,
"s": 756,
"text": "w-1/6: This class means the width is set to one-sixth of the window."
},
{
"code": null,
"e": 897,
"s": 825,
"text": "w-1/12: This class means the width is set to one-twelfth of the window."
},
{
"code": null,
"e": 948,
"s": 897,
"text": "w-full: This class means the width is set to full."
},
{
"code": null,
"e": 1012,
"s": 948,
"text": "w-screen: This class means the width is set to the screen size."
},
{
"code": null,
"e": 1064,
"s": 1012,
"text": "w-min: This class is used to define the min-width."
},
{
"code": null,
"e": 1115,
"s": 1064,
"text": "w-max: This class is used to define the max-width."
},
{
"code": null,
"e": 1204,
"s": 1115,
"text": "Note: You can change the number with the valid “rem” values or set the percentage value."
},
{
"code": null,
"e": 1212,
"s": 1204,
"text": "Syntax:"
},
{
"code": null,
"e": 1247,
"s": 1212,
"text": "<element class=\"w-0\">...</element>"
},
{
"code": null,
"e": 1256,
"s": 1247,
"text": "Example:"
},
{
"code": null,
"e": 1261,
"s": 1256,
"text": "HTML"
},
{
"code": "<!DOCTYPE html> <head> <link href=\"https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css\" rel=\"stylesheet\"> </head> <body class=\"text-center mx-4 space-y-2\"> <h1 class=\"text-green-600 text-5xl font-bold\"> GeeksforGeeks </h1> <b>Tailwind CSS Width Class</b> <div class=\"flex\"> <div class=\"w-1/2 bg-green-600 h-12 rounded-l-lg\"> w-1/2 </div> <div class=\"w-1/2 bg-green-300 h-12 rounded-r-lg\"> w-1/2 </div> </div> <div class=\"flex ...\"> <div class=\"w-2/5 bg-green-600 h-12 rounded-l-lg\"> w-2/5 </div> <div class=\"w-3/5 bg-green-300 h-12 rounded-r-lg\"> w-3/5 </div> </div> <div class=\"flex ...\"> <div class=\"w-1/3 bg-green-600 h-12 rounded-l-lg\"> w-1/3 </div> <div class=\"w-2/3 bg-green-300 h-12 rounded-r-lg\"> w-2/3 </div> </div> <div class=\"flex ...\"> <div class=\"w-1/4 bg-green-600 h-12 rounded-l-lg\"> w-1/4 </div> <div class=\"w-3/4 bg-green-300 h-12 rounded-r-lg\"> w-3/4 </div> </div> <div class=\"flex ...\"> <div class=\"w-1/5 bg-green-600 h-12 rounded-l-lg\"> w-1/5 </div> <div class=\"w-4/5 bg-green-300 h-12 rounded-r-lg\"> w-4/5 </div> </div> <div class=\"flex ...\"> <div class=\"w-1/6 bg-green-600 h-12 rounded-l-lg\"> w-1/6 </div> <div class=\"w-5/6 bg-green-300 h-12 rounded-r-lg\"> w-5/6 </div> </div> <div class=\"w-full bg-green-300 h-12 rounded-lg\"> w-full </div></body> </html>",
"e": 2868,
"s": 1261,
"text": null
},
{
"code": null,
"e": 2876,
"s": 2868,
"text": "Output:"
},
{
"code": null,
"e": 2889,
"s": 2876,
"text": "Tailwind CSS"
},
{
"code": null,
"e": 2905,
"s": 2889,
"text": "Tailwind-Sizing"
},
{
"code": null,
"e": 2922,
"s": 2905,
"text": "Web Technologies"
}
] |
MATLAB – Ideal Highpass Filter in Image Processing
|
22 Apr, 2020
In the field of Image Processing, Ideal Highpass Filter (IHPF) is used for image sharpening in the frequency domain. Image Sharpening is a technique to enhance the fine details and highlight the edges in a digital image. It removes low-frequency components from an image and preserves high-frequency components.
This ideal highpass filter is the reverse operation of the ideal lowpass filter. It can be determined using the relation H_hp(u, v) = 1 − H_lp(u, v), where H_hp(u, v) is the transfer function of the highpass filter and H_lp(u, v) is the transfer function of the corresponding lowpass filter.
The transfer function of the IHPF can be specified as H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0. Where,
D0 is a positive constant. IHPF passes all the frequencies outside of a circle of radius D0 from the origin without attenuation and cuts off all the frequencies within the circle.
D0 is the transition point between H(u, v) = 1 and H(u, v) = 0, so it is termed the cutoff frequency.
D(u, v) is the Euclidean distance from any point (u, v) to the origin of the frequency plane, i.e., D(u, v) = √(u² + v²) (in the implementation below, u and v are wrapped so that the origin corresponds to zero frequency).
Approach:
Step 1: Input – Read an image
Step 2: Saving the size of the input image in pixels
Step 3: Get the Fourier Transform of the input_image
Step 4: Assign the Cut-off Frequency
Step 5: Designing filter: Ideal High Pass Filter
Step 6: Convolution between the Fourier Transformed input image and the filtering mask
Step 7: Take Inverse Fourier Transform of the convoluted image
Step 8: Display the resultant image as output
Implementation in MATLAB:
% MATLAB Code | Ideal High Pass Filter

% Reading input image : input_image
input_image = imread('[name of input image file].[file format]');

% Saving the size of the input_image in pixels-
% M : no of rows (height of the image)
% N : no of columns (width of the image)
[M, N] = size(input_image);

% Getting Fourier Transform of the input_image
% using MATLAB library function fft2 (2D fast fourier transform)
FT_img = fft2(double(input_image));

% Assign Cut-off Frequency
D0 = 10; % one can change this value accordingly

% Designing filter
u = 0:(M-1);
idx = find(u>M/2);
u(idx) = u(idx)-M;
v = 0:(N-1);
idy = find(v>N/2);
v(idy) = v(idy)-N;

% MATLAB library function meshgrid(v, u) returns 2D grid
% which contains the coordinates of vectors v and u.
% Matrix V with each row is a copy of v, and matrix U
% with each column is a copy of u
[V, U] = meshgrid(v, u);

% Calculating Euclidean Distance
D = sqrt(U.^2+V.^2);

% Comparing with the cut-off frequency and
% determining the filtering mask
H = double(D > D0);

% Convolution between the Fourier Transformed image and the mask
G = H.*FT_img;

% Getting the resultant image by Inverse Fourier Transform
% of the convoluted image using MATLAB library function
% ifft2 (2D inverse fast fourier transform)
output_image = real(ifft2(double(G)));

% Displaying Input Image and Output Image
subplot(2, 1, 1), imshow(input_image),
subplot(2, 1, 2), imshow(output_image, [ ]);
Input Image –
Output:
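For readers without MATLAB, the same ideal highpass filtering can be sketched in Python with NumPy. This is not part of the original article; the function name and the use of numpy.fft are illustrative choices that mirror the steps of the MATLAB code above.
# Minimal NumPy sketch of the Ideal High Pass Filter
import numpy as np

def ideal_highpass(image, d0=10):
    # image : 2D grayscale array, d0 : cutoff frequency (radius in the frequency plane)
    M, N = image.shape
    F = np.fft.fft2(image.astype(float))

    # Wrapped frequency coordinates, as in the MATLAB code
    u = np.arange(M)
    u[u > M / 2] -= M
    v = np.arange(N)
    v[v > N / 2] -= N
    V, U = np.meshgrid(v, u)

    # Euclidean distance from the origin and the ideal highpass mask
    D = np.sqrt(U**2 + V**2)
    H = (D > d0).astype(float)

    # Apply the mask in the frequency domain and transform back
    return np.real(np.fft.ifft2(H * F))
The returned array can then be displayed alongside the input, for example with matplotlib's imshow, just as the MATLAB code uses subplot and imshow.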
Image-Processing
MATLAB
Advanced Computer Subject
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n22 Apr, 2020"
},
{
"code": null,
"e": 340,
"s": 28,
"text": "In the field of Image Processing, Ideal Highpass Filter (IHPF) is used for image sharpening in the frequency domain. Image Sharpening is a technique to enhance the fine details and highlight the edges in a digital image. It removes low-frequency components from an image and preserves high-frequency components."
},
{
"code": null,
"e": 595,
"s": 340,
"text": "This ideal highpass filter is the reverse operation of the ideal lowpass filter. It can be determined using the following relation- where, is the transfer function of the highpass filter and is the transfer function of the corresponding lowpass filter."
},
{
"code": null,
"e": 669,
"s": 595,
"text": "The transfer function of the IHPF can be specified by the function-Where,"
},
{
"code": null,
"e": 845,
"s": 669,
"text": " is a positive constant. IHPF passes all the frequencies outside of a circle of radius from the origin without attenuation and cuts off all the frequencies within the circle."
},
{
"code": null,
"e": 951,
"s": 845,
"text": "This is the transition point between H(u, v) = 1 and H(u, v) = 0, so this is termed as cutoff frequency."
},
{
"code": null,
"e": 1044,
"s": 951,
"text": " is the Euclidean Distance from any point (u, v) to the origin of the frequency plane, i.e, "
},
{
"code": null,
"e": 1054,
"s": 1044,
"text": "Approach:"
},
{
"code": null,
"e": 1466,
"s": 1054,
"text": "Step 1: Input – Read an imageStep 2: Saving the size of the input image in pixelsStep 3: Get the Fourier Transform of the input_imageStep 4: Assign the Cut-off Frequency Step 5: Designing filter: Ideal High Pass FilterStep 6: Convolution between the Fourier Transformed input image and the filtering maskStep 7: Take Inverse Fourier Transform of the convoluted imageStep 8: Display the resultant image as output"
},
{
"code": null,
"e": 1492,
"s": 1466,
"text": "Implementation in MATLAB:"
},
{
"code": "% MATLAB Code | Ideal High Pass Filter % Reading input image : input_image input_image = imread('[name of input image file].[file format]'); % Saving the size of the input_image in pixels-% M : no of rows (height of the image)% N : no of columns (width of the image)[M, N] = size(input_image); % Getting Fourier Transform of the input_image% using MATLAB library function fft2 (2D fast fourier transform) FT_img = fft2(double(input_image)); % Assign Cut-off Frequency D0 = 10; % one can change this value accordingly % Designing filteru = 0:(M-1);idx = find(u>M/2);u(idx) = u(idx)-M;v = 0:(N-1);idy = find(v>N/2);v(idy) = v(idy)-N; % MATLAB library function meshgrid(v, u) returns 2D grid% which contains the coordinates of vectors v and u. % Matrix V with each row is a copy of v, and matrix U % with each column is a copy of u[V, U] = meshgrid(v, u); % Calculating Euclidean DistanceD = sqrt(U.^2+V.^2); % Comparing with the cut-off frequency and % determining the filtering maskH = double(D > D0); % Convolution between the Fourier Transformed image and the maskG = H.*FT_img; % Getting the resultant image by Inverse Fourier Transform% of the convoluted image using MATLAB library function% ifft2 (2D inverse fast fourier transform) output_image = real(ifft2(double(G))); % Displaying Input Image and Output Imagesubplot(2, 1, 1), imshow(input_image),subplot(2, 1, 2), imshow(output_image, [ ]);",
"e": 2907,
"s": 1492,
"text": null
},
{
"code": null,
"e": 2921,
"s": 2907,
"text": "Input Image –"
},
{
"code": null,
"e": 2929,
"s": 2921,
"text": "Output:"
},
{
"code": null,
"e": 2946,
"s": 2929,
"text": "Image-Processing"
},
{
"code": null,
"e": 2953,
"s": 2946,
"text": "MATLAB"
},
{
"code": null,
"e": 2979,
"s": 2953,
"text": "Advanced Computer Subject"
},
{
"code": null,
"e": 3077,
"s": 2979,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3100,
"s": 3077,
"text": "System Design Tutorial"
},
{
"code": null,
"e": 3123,
"s": 3100,
"text": "ML | Linear Regression"
},
{
"code": null,
"e": 3146,
"s": 3123,
"text": "Reinforcement learning"
},
{
"code": null,
"e": 3172,
"s": 3146,
"text": "Docker - COPY Instruction"
},
{
"code": null,
"e": 3209,
"s": 3172,
"text": "Supervised and Unsupervised learning"
},
{
"code": null,
"e": 3249,
"s": 3209,
"text": "Decision Tree Introduction with example"
},
{
"code": null,
"e": 3285,
"s": 3249,
"text": "ML | Monte Carlo Tree Search (MCTS)"
},
{
"code": null,
"e": 3323,
"s": 3285,
"text": "Getting started with Machine Learning"
},
{
"code": null,
"e": 3364,
"s": 3323,
"text": "How to Run a Python Script using Docker?"
}
] |
numpy.bitwise_xor() in Python
|
29 Nov, 2018
The numpy.bitwise_xor() function is used to compute the bit-wise XOR of two arrays element-wise. It computes the bit-wise XOR of the underlying binary representation of the integers in the input arrays.
Syntax : numpy.bitwise_xor(arr1, arr2, /, out=None, *, where=True, casting=’same_kind’, order=’K’, dtype=None, ufunc ‘bitwise_xor’)
Parameters :
arr1 : [array_like] Input array.
arr2 : [array_like] Input array.
out : [ndarray, optional] A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
**kwargs : Allows you to pass keyword variable length of argument to a function. It is used when we want to handle named argument in a function.
where : [array_like, optional] True value means to calculate the universal functions(ufunc) at that position, False value means to leave the value in the output alone.
Return : [ndarray or scalar] Result. This is a scalar if both x1 and x2 are scalars.
Code #1 : Working
# Python program explaining
# bitwise_xor() function

import numpy as geek

in_num1 = 10
in_num2 = 11

print ("Input number1 : ", in_num1)
print ("Input number2 : ", in_num2)

out_num = geek.bitwise_xor(in_num1, in_num2)

print ("bitwise_xor of 10 and 11 : ", out_num)
Output :
Input number1 : 10
Input number2 : 11
bitwise_xor of 10 and 11 : 1
Code #2 :
# Python program explaining
# bitwise_xor() function

import numpy as geek

in_arr1 = [2, 8, 125]
in_arr2 = [3, 3, 115]

print ("Input array1 : ", in_arr1)
print ("Input array2 : ", in_arr2)

out_arr = geek.bitwise_xor(in_arr1, in_arr2)

print ("Output array after bitwise_xor: ", out_arr)
Output :
Input array1 : [2, 8, 125]
Input array2 : [3, 3, 115]
Output array after bitwise_xor: [ 1 11 14]
Code #3 :
# Python program explaining
# bitwise_xor() function

import numpy as geek

in_arr1 = [True, False, True, False]
in_arr2 = [False, False, True, True]

print ("Input array1 : ", in_arr1)
print ("Input array2 : ", in_arr2)

out_arr = geek.bitwise_xor(in_arr1, in_arr2)

print ("Output array after bitwise_xor: ", out_arr)
Output :
Input array1 : [True, False, True, False]
Input array2 : [False, False, True, True]
Output array after bitwise_xor: [ True False False True]
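As an aside, NumPy also maps the ^ operator to bitwise_xor for integer and boolean arrays, so the second example can be written more compactly:
# ^ dispatches to np.bitwise_xor for ndarrays
import numpy as np

a = np.array([2, 8, 125])
b = np.array([3, 3, 115])

print(a ^ b)
Output :
[ 1 11 14]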
Python numpy-Binary Operation
Python-numpy
Python
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Nov, 2018"
},
{
"code": null,
"e": 237,
"s": 28,
"text": "numpy.bitwise_xor() function is used to Compute the bit-wise XOR of two array element-wise. This function computes the bit-wise XOR of the underlying binary representation of the integers in the input arrays."
},
{
"code": null,
"e": 369,
"s": 237,
"text": "Syntax : numpy.bitwise_xor(arr1, arr2, /, out=None, *, where=True, casting=’same_kind’, order=’K’, dtype=None, ufunc ‘bitwise_xor’)"
},
{
"code": null,
"e": 954,
"s": 369,
"text": "Parameters :arr1 : [array_like] Input array.arr2 : [array_like] Input array.out : [ndarray, optional] A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.**kwargs : Allows you to pass keyword variable length of argument to a function. It is used when we want to handle named argument in a function.where : [array_like, optional] True value means to calculate the universal functions(ufunc) at that position, False value means to leave the value in the output alone."
},
{
"code": null,
"e": 1039,
"s": 954,
"text": "Return : [ndarray or scalar] Result. This is a scalar if both x1 and x2 are scalars."
},
{
"code": null,
"e": 1057,
"s": 1039,
"text": "Code #1 : Working"
},
{
"code": "# Python program explaining# bitwise_xor() function import numpy as geekin_num1 = 10in_num2 = 11 print (\"Input number1 : \", in_num1)print (\"Input number2 : \", in_num2) out_num = geek.bitwise_xor(in_num1, in_num2) print (\"bitwise_xor of 10 and 11 : \", out_num) ",
"e": 1326,
"s": 1057,
"text": null
},
{
"code": null,
"e": 1335,
"s": 1326,
"text": "Output :"
},
{
"code": null,
"e": 1408,
"s": 1335,
"text": "Input number1 : 10\nInput number2 : 11\nbitwise_xor of 10 and 11 : 1\n"
},
{
"code": null,
"e": 1419,
"s": 1408,
"text": " Code #2 :"
},
{
"code": "# Python program explaining# bitwise_xor() function import numpy as geek in_arr1 = [2, 8, 125]in_arr2 = [3, 3, 115] print (\"Input array1 : \", in_arr1) print (\"Input array2 : \", in_arr2) out_arr = geek.bitwise_xor(in_arr1, in_arr2) print (\"Output array after bitwise_xor: \", out_arr) ",
"e": 1710,
"s": 1419,
"text": null
},
{
"code": null,
"e": 1719,
"s": 1710,
"text": "Output :"
},
{
"code": null,
"e": 1820,
"s": 1719,
"text": "Input array1 : [2, 8, 125]\nInput array2 : [3, 3, 115]\nOutput array after bitwise_xor: [ 1 11 14]\n"
},
{
"code": null,
"e": 1831,
"s": 1820,
"text": " Code #3 :"
},
{
"code": "# Python program explaining# bitwise_xor() function import numpy as geek in_arr1 = [True, False, True, False]in_arr2 = [False, False, True, True] print (\"Input array1 : \", in_arr1) print (\"Input array2 : \", in_arr2) out_arr = geek.bitwise_xor(in_arr1, in_arr2) print (\"Output array after bitwise_xor: \", out_arr) ",
"e": 2152,
"s": 1831,
"text": null
},
{
"code": null,
"e": 2161,
"s": 2152,
"text": "Output :"
},
{
"code": null,
"e": 2307,
"s": 2161,
"text": "Input array1 : [True, False, True, False]\nInput array2 : [False, False, True, True]\nOutput array after bitwise_xor: [ True False False True]\n"
},
{
"code": null,
"e": 2337,
"s": 2307,
"text": "Python numpy-Binary Operation"
},
{
"code": null,
"e": 2350,
"s": 2337,
"text": "Python-numpy"
},
{
"code": null,
"e": 2357,
"s": 2350,
"text": "Python"
},
{
"code": null,
"e": 2455,
"s": 2357,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2473,
"s": 2455,
"text": "Python Dictionary"
},
{
"code": null,
"e": 2515,
"s": 2473,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 2537,
"s": 2515,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 2572,
"s": 2537,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 2598,
"s": 2572,
"text": "Python String | replace()"
},
{
"code": null,
"e": 2630,
"s": 2598,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 2659,
"s": 2630,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 2686,
"s": 2659,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 2716,
"s": 2686,
"text": "Iterate over a list in Python"
}
] |
Lodash _.template() Method
|
18 Sep, 2020
Lodash is a JavaScript library that works on the top of underscore.js. Lodash helps in working with arrays, strings, objects, numbers, etc.
The _.template() method is used to create a template function that is compiled and can interpolate properties of data in interpolate delimiters, execute JavaScript in evaluate delimiters, and HTML-escape interpolated properties of data in escape delimiters. Moreover, data properties are retrieved in the template as free variables.
Syntax:
_.template( string, options )
Parameters: This method accepts two parameters as mentioned above and described below:
string: It is a string that would be used as the template. It is an optional value.
options: It is an object that can be used to modify the behavior of the method. It is an optional value.
The options field has the following optional arguments:
options.interpolate: It is a regular expression that specifies the HTML interpolate delimiter.
options.evaluate: It is a regular expression that specifies the HTML evaluate delimiter.
options.escape: It is a regular expression that specifies the HTML escape delimiter.
options.imports: It is an object which is to be imported as free variables into the template.
options.sourceURL: It is a string that denotes the source URL of the compiled template.
options.variable: It is a string that denotes the variable name of the data object.
Return Value: This method returns the compiled template function.
Example 1:
Javascript
// Requiring lodash library
const _ = require('lodash');

// Using the _.template() method to
// create a compiled template using
// the "interpolate" delimiter
var comptempl = _.template('Hi <%= author%>!');

// Assigning the value to the
// interpolate delimiter
let result = comptempl({ 'author': 'Nidhi' });

// Displays output
console.log(result);
Output:
Hi Nidhi!
Example 2:
Javascript
// Requiring lodash library
const _ = require('lodash');

// Using the _.template() method to
// create a compiled template using
// the internal print function in
// the "evaluate" delimiter
var comptempl = _.template(
    '<% print("hey " + geek); %>...');

// Assigning value to the evaluate delimiter
let result = comptempl({ 'geek': 'Nisha' });

// Displays output
console.log(result);
Output:
hey Nisha...
Example 3: The forEach() method is used as the evaluate delimiter in order to generate HTML as output.
Javascript
// Requiring lodash library
const _ = require("lodash");

// Using the template() method with
// additional parameters
let compiled_temp = _.template(
    "<% _.forEach(students, function(students) " +
    "{ %><li><b><%- students %></b></li><% }); %>")({
    students: ["Rahul", "Rohit"]
});

// Displays the output
console.log(compiled_temp);
Output:
<li><b>Rahul</b></li><li><b>Rohit</b></li>
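The options parameter described earlier can also be exercised. The sketch below (the delimiter regex is purely illustrative) swaps the interpolate delimiter for a mustache style:
Javascript
// Requiring lodash library
const _ = require('lodash');

// Using a custom "interpolate" delimiter:
// {{ ... }} instead of <%= ... %>
let compiled = _.template('Hello {{ user }}!', {
    'interpolate': /{{([\s\S]+?)}}/g
});

// Displays output
console.log(compiled({ 'user': 'Geek' }));
Output:
Hello Geek!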
JavaScript-Lodash
JavaScript
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n18 Sep, 2020"
},
{
"code": null,
"e": 168,
"s": 28,
"text": "Lodash is a JavaScript library that works on the top of underscore.js. Lodash helps in working with arrays, strings, objects, numbers, etc."
},
{
"code": null,
"e": 502,
"s": 168,
"text": "The _.template() method is used to create a template function that is compiled and can interpolate properties of data in interpolate delimiters, execute JavaScript in evaluate delimiters, and HTML-escape interpolated properties of data in escape delimiters. Moreover, data properties are retrieved in the template as free variables. "
},
{
"code": null,
"e": 510,
"s": 502,
"text": "Syntax:"
},
{
"code": null,
"e": 541,
"s": 510,
"text": "_.template( string, options )\n"
},
{
"code": null,
"e": 628,
"s": 541,
"text": "Parameters: This method accepts two parameters as mentioned above and described below:"
},
{
"code": null,
"e": 712,
"s": 628,
"text": "string: It is a string that would be used as the template. It is an optional value."
},
{
"code": null,
"e": 817,
"s": 712,
"text": "options: It is an object that can be used to modify the behavior of the method. It is an optional value."
},
{
"code": null,
"e": 873,
"s": 817,
"text": "The options field has the following optional arguments:"
},
{
"code": null,
"e": 968,
"s": 873,
"text": "options.interpolate: It is a regular expression that specifies the HTML interpolate delimiter."
},
{
"code": null,
"e": 1057,
"s": 968,
"text": "options.evaluate: It is a regular expression that specifies the HTML evaluate delimiter."
},
{
"code": null,
"e": 1142,
"s": 1057,
"text": "options.escape: It is a regular expression that specifies the HTML escape delimiter."
},
{
"code": null,
"e": 1236,
"s": 1142,
"text": "options.imports: It is an object which is to be imported as free variables into the template."
},
{
"code": null,
"e": 1324,
"s": 1236,
"text": "options.sourceURL: It is a string that denotes the source URL of the compiled template."
},
{
"code": null,
"e": 1408,
"s": 1324,
"text": "options.variable: It is a string that denotes the variable name of the data object."
},
{
"code": null,
"e": 1474,
"s": 1408,
"text": "Return Value: This method returns the compiled template function."
},
{
"code": null,
"e": 1485,
"s": 1474,
"text": "Example 1:"
},
{
"code": null,
"e": 1496,
"s": 1485,
"text": "Javascript"
},
{
"code": "// Requiring lodash libraryconst _ = require('lodash'); // Using the _.template() method to// create a compiled template using // the \"interpolate\" delimitervar comptempl = _.template('Hi <%= author%>!'); // Assigning the value to the // interpolate delimiterlet result = comptempl({ 'author': 'Nidhi' }); // Displays outputconsole.log(result);",
"e": 1847,
"s": 1496,
"text": null
},
{
"code": null,
"e": 1855,
"s": 1847,
"text": "Output:"
},
{
"code": null,
"e": 1866,
"s": 1855,
"text": "Hi Nidhi!\n"
},
{
"code": null,
"e": 1879,
"s": 1866,
"text": "Example 2: "
},
{
"code": null,
"e": 1890,
"s": 1879,
"text": "Javascript"
},
{
"code": "// Requiring lodash libraryconst _ = require('lodash'); // Using the _.template() method to // create a compiled template using // the internal print function in// the \"evaluate\" delimitervar comptempl = _.template( '<% print(\"hey \" + geek); %>...'); // Assigning value to the evaluate delimiterlet result = comptempl({ 'geek': 'Nisha' }); // Displays outputconsole.log(result);",
"e": 2274,
"s": 1890,
"text": null
},
{
"code": null,
"e": 2282,
"s": 2274,
"text": "Output:"
},
{
"code": null,
"e": 2296,
"s": 2282,
"text": "hey Nisha...\n"
},
{
"code": null,
"e": 2399,
"s": 2296,
"text": "Example 3: The forEach() method is used as the evaluate delimiter in order to generate HTML as output."
},
{
"code": null,
"e": 2410,
"s": 2399,
"text": "Javascript"
},
{
"code": "// Requiring lodash libraryconst _ = require(\"lodash\"); // Using the template() method with// additional parameterslet compiled_temp = _.template( \"<% _.forEach(students, function(students) \" + \"{ %><li><b><%- students %></b></li><% }); %>\")({ students: [\"Rahul\", \"Rohit\"] }); // Displays the outputconsole.log(compiled_temp);",
"e": 2743,
"s": 2410,
"text": null
},
{
"code": null,
"e": 2751,
"s": 2743,
"text": "Output:"
},
{
"code": null,
"e": 2795,
"s": 2751,
"text": "<li><b>Rahul</b></li><li><b>Rohit</b></li>\n"
},
{
"code": null,
"e": 2813,
"s": 2795,
"text": "JavaScript-Lodash"
},
{
"code": null,
"e": 2824,
"s": 2813,
"text": "JavaScript"
},
{
"code": null,
"e": 2841,
"s": 2824,
"text": "Web Technologies"
},
{
"code": null,
"e": 2939,
"s": 2841,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3000,
"s": 2939,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 3040,
"s": 3000,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 3076,
"s": 3040,
"text": "JavaScript String includes() Method"
},
{
"code": null,
"e": 3119,
"s": 3076,
"text": "Implementation of LinkedList in Javascript"
},
{
"code": null,
"e": 3147,
"s": 3119,
"text": "DOM (Document Object Model)"
},
{
"code": null,
"e": 3180,
"s": 3147,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 3230,
"s": 3180,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 3292,
"s": 3230,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 3350,
"s": 3292,
"text": "How to create footer to stay at the bottom of a Web page?"
}
] |
PHP | stristr() Function
|
28 Mar, 2018
The stristr() function is a built-in function in PHP. It searches for the first occurrence of a string inside another string and returns the portion of the searched string starting from that first occurrence (or the portion before it, if requested). The search is case-insensitive.
Syntax :
stristr( $string, $search, $before )
Parameters : This function accepts three parameters as shown in the above syntax out of which the first two parameters must be supplied and the third one is optional. All of these parameters are described below:
$string : It is a mandatory parameter which specifies the string to be searched.
$search : It is a mandatory parameter which specifies the string to search for. If this parameter is a number, it will search for the character matching the ASCII value of the number.
$before : It is an optional parameter. It specifies a boolean value whose default is false. If set to true, it returns the part of the string before the first occurrence of the search parameter.
Return Value : The function returns the rest of the string (from the matching point), or FALSE, if the string to search for is not found.
Examples:
Input : $string = "Hello world!", $search = "WORLD"
Output : world!
Input : $string = "Geeks for Geeks!", $search = "K"
Output : ks for Geeks!
Below programs illustrate the stristr() function in PHP :
Program 1: In this program we will display the portion of $string from the first occurrence of $search.
<?php
echo stristr("Geeks for Geeks!", "K");
?>
Output:
ks for Geeks!
Program 2: In this program we will display the portion of $string before the first occurrence of $search.
<?php
echo stristr("Geeks for Geeks!", "K", true);
?>
Output:
Gee
Program 3: In this program we will pass an integer as $search.
<?php
$string = "Geeks";

// 101 is ASCII value of lowercase e
echo stristr($string, 101);
?>
Output:
eeks
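One more illustrative case that the programs above do not cover: when the search string does not occur at all, stristr() returns FALSE, which var_dump() makes visible.
<?php
// "xyz" does not occur in the string,
// so stristr() returns FALSE
var_dump(stristr("Geeks for Geeks!", "xyz"));
?>
Output:
bool(false)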
Reference: http://php.net/manual/en/function.stristr.php
PHP-string
PHP
Web Technologies
PHP
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n28 Mar, 2018"
},
{
"code": null,
"e": 321,
"s": 28,
"text": "The stristr() function is a built-in function in PHP. It searches for the first occurrence of a string inside another string and displays the portion of the latter starting from the first occurrence of the former in the latter (before if specified). This function is case-insensitive.Syntax :"
},
{
"code": null,
"e": 358,
"s": 321,
"text": "stristr( $string, $search, $before )"
},
{
"code": null,
"e": 570,
"s": 358,
"text": "Parameters : This function accepts three parameters as shown in the above syntax out of which the first two parameters must be supplied and the third one is optional. All of these parameters are described below:"
},
{
"code": null,
"e": 651,
"s": 570,
"text": "$string : It is a mandatory parameter which specifies the string to be searched."
},
{
"code": null,
"e": 834,
"s": 651,
"text": "$search : It is a mandatory parameter which specifies the string to search for. If this parameter is a number, it will search for the character matching the ASCII value of the number"
},
{
"code": null,
"e": 1029,
"s": 834,
"text": "$before : It is an optional parameter. It specifies a boolean value whose default is false. If set to true, it returns the part of the string before the first occurrence of the search parameter."
},
{
"code": null,
"e": 1167,
"s": 1029,
"text": "Return Value : The function returns the rest of the string (from the matching point), or FALSE, if the string to search for is not found."
},
{
"code": null,
"e": 1177,
"s": 1167,
"text": "Examples:"
},
{
"code": null,
"e": 1322,
"s": 1177,
"text": "Input : $string = \"Hello world!\", $search = \"WORLD\"\nOutput : world!\n\nInput : $string = \"Geeks for Geeks!\", $search = \"K\"\nOutput : ks for Geeks!\n"
},
{
"code": null,
"e": 1380,
"s": 1322,
"text": "Below programs illustrate the stristr() function in PHP :"
},
{
"code": null,
"e": 1484,
"s": 1380,
"text": "Program 1: In this program we will display the portion of $string from the first occurrence of $search."
},
{
"code": "<?phpecho stristr(\"Geeks for Geeks!\", \"K\");?> ",
"e": 1532,
"s": 1484,
"text": null
},
{
"code": null,
"e": 1540,
"s": 1532,
"text": "Output:"
},
{
"code": null,
"e": 1555,
"s": 1540,
"text": "ks for Geeks! "
},
{
"code": null,
"e": 1661,
"s": 1555,
"text": "Program 2: In this program we will display the portion of $string before the first occurrence of $search."
},
{
"code": "<?phpecho stristr(\"Geeks for Geeks!\", \"K\", true);?> ",
"e": 1718,
"s": 1661,
"text": null
},
{
"code": null,
"e": 1726,
"s": 1718,
"text": "Output:"
},
{
"code": null,
"e": 1730,
"s": 1726,
"text": "Gee"
},
{
"code": null,
"e": 1793,
"s": 1730,
"text": "Program 3: In this program we will pass an integer as $search."
},
{
"code": "<?php $string = \"Geeks\"; echo stristr($string, 101); // 101 is ASCII value of lowercase e?> ",
"e": 1892,
"s": 1793,
"text": null
},
{
"code": null,
"e": 1900,
"s": 1892,
"text": "Output:"
},
{
"code": null,
"e": 1905,
"s": 1900,
"text": "eeks"
},
{
"code": null,
"e": 1961,
"s": 1905,
"text": "Reference:http://php.net/manual/en/function.stristr.php"
},
{
"code": null,
"e": 1972,
"s": 1961,
"text": "PHP-string"
},
{
"code": null,
"e": 1976,
"s": 1972,
"text": "PHP"
},
{
"code": null,
"e": 1993,
"s": 1976,
"text": "Web Technologies"
},
{
"code": null,
"e": 1997,
"s": 1993,
"text": "PHP"
}
] |
Python | Minimum number of subsets with distinct elements using Counter
|
21 Nov, 2018
You are given an array of n elements. You have to form subsets from the array such that no subset contains duplicate elements. Find the minimum number of subsets required.
Examples:
Input : arr[] = {1, 2, 3, 4}
Output :1
Explanation : A single subset can contains all
values and all values are distinct
Input : arr[] = {1, 2, 3, 3}
Output : 2
Explanation : We need to create two subsets
{1, 2, 3} and {3} [or {1, 3} and {2, 3}] such
that both subsets have distinct elements.
An existing solution for this problem is covered in the article Minimum number of subsets with distinct elements. Here we solve it quickly in Python using the Counter(iterable) method. The approach is simple: count the frequency of each element in the array and print the maximum frequency. Since every subset must contain distinct elements, each copy of a repeated element must go into a different subset, so at least max-frequency subsets are needed.
# Python program to find Minimum number of
# subsets with distinct elements using Counter

# function to find Minimum number of subsets
# with distinct elements
from collections import Counter

def minSubsets(input):

    # calculate frequency of each element
    freqDict = Counter(input)

    # get list of all frequency values
    # print maximum from it
    print (max(freqDict.values()))

# Driver program
if __name__ == "__main__":
    input = [1, 2, 3, 3]
    minSubsets(input)
Output:
2
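The same answer can also be read off with Counter.most_common(), which returns (element, count) pairs sorted by descending count; a small alternative sketch:
# Alternative using Counter.most_common()
from collections import Counter

def minSubsets(arr):
    # most_common(1) gives [(element, highest_frequency)]
    return Counter(arr).most_common(1)[0][1]

print(minSubsets([1, 2, 3, 3]))
Output:
2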
Python dictionary-programs
Python set-programs
python-dict
python-set
Python
python-dict
python-set
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n21 Nov, 2018"
},
{
"code": null,
"e": 221,
"s": 52,
"text": "You are given an array of n-element. You have to make subsets from the array such that no subset contain duplicate elements. Find out minimum number of subset possible."
},
{
"code": null,
"e": 231,
"s": 221,
"text": "Examples:"
},
{
"code": null,
"e": 527,
"s": 231,
"text": "Input : arr[] = {1, 2, 3, 4}\nOutput :1\nExplanation : A single subset can contains all \nvalues and all values are distinct\n\nInput : arr[] = {1, 2, 3, 3}\nOutput : 2\nExplanation : We need to create two subsets\n{1, 2, 3} and {3} [or {1, 3} and {2, 3}] such\nthat both subsets have distinct elements.\n"
},
{
"code": null,
"e": 1023,
"s": 527,
"text": "We have existing solution for this problem please refer Minimum number of subsets with distinct elements link. We will solve this problem quickly in python using Counter(iterable) method. Approach is very simple, calculate frequency of each element in array and print value of maximum frequency because we want each subset to be different and we have to put any repeated element in different subset, so to get minimum number of subset we should have at least maximum frequency number of subsets."
},
{
"code": "# Python program to find Minimum number of # subsets with distinct elements using Counter # function to find Minimum number of subsets # with distinct elementsfrom collections import Counter def minSubsets(input): # calculate frequency of each element freqDict = Counter(input) # get list of all frequency values # print maximum from it print (max(freqDict.values())) # Driver programif __name__ == \"__main__\": input = [1, 2, 3, 3] minSubsets(input)",
"e": 1507,
"s": 1023,
"text": null
},
{
"code": null,
"e": 1515,
"s": 1507,
"text": "Output:"
},
{
"code": null,
"e": 1518,
"s": 1515,
"text": "2\n"
},
{
"code": null,
"e": 1545,
"s": 1518,
"text": "Python dictionary-programs"
},
{
"code": null,
"e": 1565,
"s": 1545,
"text": "Python set-programs"
},
{
"code": null,
"e": 1577,
"s": 1565,
"text": "python-dict"
},
{
"code": null,
"e": 1588,
"s": 1577,
"text": "python-set"
},
{
"code": null,
"e": 1595,
"s": 1588,
"text": "Python"
},
{
"code": null,
"e": 1607,
"s": 1595,
"text": "python-dict"
},
{
"code": null,
"e": 1618,
"s": 1607,
"text": "python-set"
},
{
"code": null,
"e": 1716,
"s": 1618,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1734,
"s": 1716,
"text": "Python Dictionary"
},
{
"code": null,
"e": 1776,
"s": 1734,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 1798,
"s": 1776,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 1833,
"s": 1798,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 1859,
"s": 1833,
"text": "Python String | replace()"
},
{
"code": null,
"e": 1891,
"s": 1859,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 1920,
"s": 1891,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 1947,
"s": 1920,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 1977,
"s": 1947,
"text": "Iterate over a list in Python"
}
] |
Matplotlib.axes.Axes.get_xlim() in Python
|
19 Apr, 2020
Matplotlib is a Python library that serves as a numerical and mathematical extension of the NumPy library. The Axes class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. Instances of Axes support callbacks through a callbacks attribute.
The Axes.get_xlim() function in axes module of matplotlib library is used to get the x-axis view limits.
Syntax: Axes.get_xlim(self)
Returns:This method returns the following
left, right :This returns the current x-axis limits in data coordinates.
Below examples illustrate the matplotlib.axes.Axes.get_xlim() function in matplotlib.axes:
Example 1:
# Implementation of matplotlib function
from matplotlib.widgets import Cursor
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(19680801)

fig, (ax, ax1) = plt.subplots(1, 2, facecolor='#A0F0CC')

x, y = 4*(np.random.rand(2, 100) - .5)
ax.plot(x, y, 'g')
ax.set_xlim(-3, 3)

xmin, xmax = ax.get_xlim()
ax.set_title('Original Window', fontsize=10, fontweight='bold')

ax1.plot(x, y, 'g')
ax1.set_xlim(xmin, 2*xmax)
ax1.set_title('Using get_xlim() function', fontsize=10, fontweight='bold')

fig.suptitle('matplotlib.axes.Axes.get_xlim() Example\n',
             fontsize=10, fontweight='bold')
plt.show()
Output:
Example 2:
# Implementation of matplotlib function
import matplotlib.pyplot as plt
import numpy as np

fig1, (ax1, ax11) = plt.subplots(1, 2)
fig2, (ax2, ax22) = plt.subplots(1, 2)
ax1.set(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), autoscale_on=False)
ax2.set(xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), autoscale_on=False)
ax11.set(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), autoscale_on=False)
ax22.set(xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), autoscale_on=False)

x, y, s, c = np.random.rand(4, 200)
s *= 200

ax1.scatter(x, y, s, c)
ax2.scatter(x, y, s, c)
ax11.scatter(x, y, s, c)
ax22.scatter(x, y, s, c)

def GFG(event):
    if event.button != 1:
        return
    x, y = event.xdata, event.ydata
    ax2.set_xlim(x - 0.5, x + 0.5)
    ax2.set_ylim(y - 0.5, y + 0.5)
    ax22.set_xlim(x - 0.5, x + 0.5)
    ax22.set_ylim(y - 0.5, y + 0.5)
    fig2.canvas.draw()

fig1.canvas.mpl_connect('button_press_event', GFG)

xmin, xmax = ax1.get_xlim()
ax1.set_title('Original Window', fontsize=10, fontweight='bold')

ax11.set_xlim(xmin, 2*xmax)
ax11.set_title('After Using get_xlim() function',
               fontsize=10, fontweight='bold')

xmin, xmax = ax2.get_xlim()
ax2.set_title('Zoomed Window', fontsize=10, fontweight='bold')

ax22.set_xlim(xmin, 2*xmax)
ax22.set_title('After Using get_xlim() function',
               fontsize=10, fontweight='bold')

fig1.suptitle('matplotlib.axes.Axes.get_xlim() Example\n',
              fontsize=10, fontweight='bold')
plt.show()
Output:
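A stripped-down sketch of the return value itself (the exact numbers depend on the plotted data and on Matplotlib's default axis margins):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])

# get_xlim() returns the current (left, right)
# view limits in data coordinates
left, right = ax.get_xlim()
print(left, right)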
Python-matplotlib
Python
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n19 Apr, 2020"
},
{
"code": null,
"e": 328,
"s": 28,
"text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute."
},
{
"code": null,
"e": 433,
"s": 328,
"text": "The Axes.get_xlim() function in axes module of matplotlib library is used to get the x-axis view limits."
},
{
"code": null,
"e": 461,
"s": 433,
"text": "Syntax: Axes.get_xlim(self)"
},
{
"code": null,
"e": 503,
"s": 461,
"text": "Returns:This method returns the following"
},
{
"code": null,
"e": 576,
"s": 503,
"text": "left, right :This returns the current x-axis limits in data coordinates."
},
{
"code": null,
"e": 667,
"s": 576,
"text": "Below examples illustrate the matplotlib.axes.Axes.get_xlim() function in matplotlib.axes:"
},
{
"code": null,
"e": 678,
"s": 667,
"text": "Example 1:"
},
{
"code": "#Implementation of matplotlib functionfrom matplotlib.widgets import Cursorimport numpy as npimport matplotlib.pyplot as plt np.random.seed(19680801) fig, (ax,ax1) = plt.subplots(1 , 2 , facecolor='#A0F0CC') x, y = 4*(np.random.rand(2, 100) - .5)ax.plot(x, y, 'g')ax.set_xlim(-3, 3) xmin, xmax = ax.get_xlim()ax.set_title('Original Window', fontsize=10, fontweight='bold') ax1.plot(x, y, 'g')ax1.set_xlim(xmin, 2*xmax)ax1.set_title('Using get_xlim() function', fontsize=10, fontweight='bold') fig.suptitle('matplotlib.axes.Axes.get_xlim()\\ Example\\n',fontsize=10, fontweight='bold')plt.show()",
"e": 1295,
"s": 678,
"text": null
},
{
"code": null,
"e": 1303,
"s": 1295,
"text": "Output:"
},
{
"code": null,
"e": 1314,
"s": 1303,
"text": "Example 2:"
},
{
"code": "#Implementation of matplotlib functionimport matplotlib.pyplot as pltimport numpy as np fig1, (ax1,ax11) = plt.subplots(1,2)fig2, (ax2,ax22) = plt.subplots(1,2)ax1.set(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), autoscale_on=False)ax2.set(xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), autoscale_on=False)ax11.set(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), autoscale_on=False)ax22.set(xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), autoscale_on=False) x, y, s, c = np.random.rand(4, 200)s *= 200 ax1.scatter(x, y, s, c)ax2.scatter(x, y, s, c)ax11.scatter(x, y, s, c)ax22.scatter(x, y, s, c) def GFG(event): if event.button != 1: return x, y = event.xdata, event.ydata ax2.set_xlim(x - 0.5, x + 0.5) ax2.set_ylim(y - 0.5, y + 0.5) ax22.set_xlim(x - 0.5, x + 0.5) ax22.set_ylim(y - 0.5, y + 0.5) fig2.canvas.draw() fig1.canvas.mpl_connect('button_press_event', GFG) xmin, xmax = ax1.get_xlim()ax1.set_title('Original Window', fontsize=10, fontweight='bold') ax11.set_xlim(xmin, 2*xmax)ax11.set_title('After Using get_xlim() function', fontsize=10, fontweight='bold') xmin, xmax = ax2.get_xlim()ax2.set_title('Zoomed Window', fontsize=10, fontweight='bold') ax22.set_xlim(xmin, 2*xmax)ax22.set_title('After Using get_xlim() function', fontsize=10, fontweight='bold') fig1.suptitle('matplotlib.axes.Axes.get_xlim() \\Example\\n',fontsize=10, fontweight='bold')plt.show()",
"e": 2776,
"s": 1314,
"text": null
},
{
"code": null,
"e": 2784,
"s": 2776,
"text": "Output:"
},
{
"code": null,
"e": 2802,
"s": 2784,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 2809,
"s": 2802,
"text": "Python"
},
{
"code": null,
"e": 2907,
"s": 2809,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2939,
"s": 2907,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 2966,
"s": 2939,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 2997,
"s": 2966,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 3020,
"s": 2997,
"text": "Introduction To PYTHON"
},
{
"code": null,
"e": 3041,
"s": 3020,
"text": "Python OOPs Concepts"
},
{
"code": null,
"e": 3097,
"s": 3041,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 3139,
"s": 3097,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 3181,
"s": 3139,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 3220,
"s": 3181,
"text": "Python | Get unique values from a list"
}
] |
How to Convert Local Time to GMT in Java?
|
04 Jul, 2021
Converting IST, or any other local standard time, to GMT is often necessary when working with international clients or systems located overseas. In this article we look at code that converts the standard time of any country to GMT.
Here we use SimpleDateFormat to convert the local time to GMT. It is available in the following class:
java.util.SimpleDateFormat
Methods: Several approaches work here, such as SimpleDateFormat or the Instant class; the Calendar and java.time APIs can also be used. The two approaches covered below are:
Using format() method of SimpleDateFormat class
Using the Instant class of the java.time package
Method 1: Using format() method of SimpleDateFormat class
The format() method of DateFormat class in Java is used to format a given date into a Date/Time string. Basically, the method is used to convert this date and time into a particular format i.e., mm/dd/yyyy.
Syntax:
public final String format(Date date)
Parameters: The method takes one parameter date of Date object type and refers to the date whose string output is to be produced.
Return Value: The method returns Date or time in string format of mm/dd/yyyy.
Procedure:
Here we would simply first print our local time
Then convert it to GMT using SimpleDateFormat
Then print both the time zones.
Example:
Java
// Java Program to convert local time to GMT

// Importing libraries
// 1. Input/output classes
import java.io.*;
// 2. Text classes
import java.text.DateFormat;
import java.text.SimpleDateFormat;
// 3. Utility classes for
// Date and TimeZone
import java.util.Date;
import java.util.TimeZone;

// Class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating a Date object that
        // holds the current local time
        Date localTime = new Date();

        // Creating a DateFormat object to
        // format the local time as GMT
        DateFormat s = new SimpleDateFormat("dd/MM/yyyy" + " " + " HH:mm:ss");

        // setTimeZone() attaches the GMT time zone
        // obtained via TimeZone.getTimeZone();
        // any other zone id can be passed instead
        s.setTimeZone(TimeZone.getTimeZone("GMT"));

        // Printing the local time
        System.out.println("local Time:" + localTime);

        // Printing the GMT time to
        // illustrate the change
        System.out.println("Time IN Gmt : " + s.format(localTime));
    }
}
local Time:Thu Feb 04 11:34:15 UTC 2021
Time IN Gmt : 04/02/2021 11:34:15
Method 2: Using the Instant class of the java.time package
The first method relied on SimpleDateFormat; this one uses the Instant class instead. Instant.now() captures the current moment, which is always expressed in UTC/GMT, and ZonedDateTime can render the same moment in the local zone for comparison.
Procedure:
We will use the Instant class to get the current time.
This requires importing the java.time classes:
import java.time.*;
Example:
Java
// Java Program to convert Local time to
// GMT time

// Importing all input output classes
import java.io.*;
// Importing all time classes
import java.time.*;

// Class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Instant operator helps to note
        // the time and the location of it

        // Creating an object of Instant type
        // using the now() method
        Instant now = Instant.now();

        // Creating an object of ZonedDateTime
        // that expresses the same instant
        // in the system's default zone
        ZonedDateTime zdt = ZonedDateTime.ofInstant(
            now, ZoneId.systemDefault());

        // Printing the local time
        System.out.println(" Local : " + zdt);

        // Creating another object of Instant type
        Instant instant = Instant.now();

        // Printing the GMT/UTC time as a string
        // using the toString() method
        System.out.println(" GMT : " + instant.toString());
    }
}
Local : 2021-02-04T10:40:34.436700Z[Etc/UTC]
GMT : 2021-02-04T10:40:34.547680Z
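For completeness, here is a java.time-only sketch (a variation added for this write-up, not one of the article's two methods) that expresses the same instant in GMT with withZoneSameInstant():
Java
// Java program to express the current local time in GMT
// using ZonedDateTime from java.time

import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Current time in the system's default (local) zone
        ZonedDateTime local = ZonedDateTime.now(ZoneId.systemDefault());

        // Same instant expressed in GMT/UTC
        ZonedDateTime gmt = local.withZoneSameInstant(ZoneOffset.UTC);

        System.out.println(" Local : " + local);
        System.out.println(" GMT : " + gmt);
    }
}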
Java-Date-Time
Picked
Technical Scripter 2020
Java
Technical Scripter
Java
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n04 Jul, 2021"
},
{
"code": null,
"e": 322,
"s": 28,
"text": "Time conversion from IST or any standard time to GMT is necessary for locals to understand and reciprocate their international clients if they are connected overseas in terms of work or any purpose. Today we will have a look at a code where we convert the standard time of any country to GMT. "
},
{
"code": null,
"e": 434,
"s": 322,
"text": "Here we would use the SimpleDateFormat to convert the local Time in GMT. It is available in class as mentioned:"
},
{
"code": null,
"e": 461,
"s": 434,
"text": "java.util.SimpleDateFormat"
},
{
"code": null,
"e": 647,
"s": 461,
"text": "Methods: One can use different methods like SimpleDateFormat or maybe even Instance() method. They are very easy and helpful methods. We can also use calendar and time methods to do so."
},
{
"code": null,
"e": 744,
"s": 647,
"text": "Using format() method of SimpleDateFormat classUsing instance() method of SimpleDateFormat class"
},
{
"code": null,
"e": 792,
"s": 744,
"text": "Using format() method of SimpleDateFormat class"
},
{
"code": null,
"e": 842,
"s": 792,
"text": "Using instance() method of SimpleDateFormat class"
},
{
"code": null,
"e": 900,
"s": 842,
"text": "Method 1: Using format() method of SimpleDateFormat class"
},
{
"code": null,
"e": 1107,
"s": 900,
"text": "The format() method of DateFormat class in Java is used to format a given date into a Date/Time string. Basically, the method is used to convert this date and time into a particular format i.e., mm/dd/yyyy."
},
{
"code": null,
"e": 1115,
"s": 1107,
"text": "Syntax:"
},
{
"code": null,
"e": 1153,
"s": 1115,
"text": "public final String format(Date date)"
},
{
"code": null,
"e": 1283,
"s": 1153,
"text": "Parameters: The method takes one parameter date of Date object type and refers to the date whose string output is to be produced."
},
{
"code": null,
"e": 1361,
"s": 1283,
"text": "Return Value: The method returns Date or time in string format of mm/dd/yyyy."
},
{
"code": null,
"e": 1372,
"s": 1361,
"text": "Procedure:"
},
{
"code": null,
"e": 1420,
"s": 1372,
"text": "Here we would simply first print our local time"
},
{
"code": null,
"e": 1466,
"s": 1420,
"text": "Then convert it to GMT using SimpleDateFormat"
},
{
"code": null,
"e": 1498,
"s": 1466,
"text": "Then print both the time zones."
},
{
"code": null,
"e": 1507,
"s": 1498,
"text": "Example:"
},
{
"code": null,
"e": 1512,
"s": 1507,
"text": "Java"
},
{
"code": "// Java Program to convert local time to GMT // Importing libraries// 1. input output librariesimport java.io.*;// 3. Text classimport java.text.DateFormat;import java.text.SimpleDateFormat;// 2. Utility libraries for// Date and TimeZone classimport java.util.Date;import java.util.TimeZone; // Classclass GFG { // Main driver method public static void main(String[] args) { // Creating a Date class object // to take local time from the user Date localTime = new Date(); // Creating a DateFormat class object to // convert the localtime to GMT DateFormat s = new SimpleDateFormat(\"dd/MM/yyyy\" + \" \" + \" HH:mm:ss\"); // function will helps to get the GMT Timezone // using the getTimeZOne() method s.setTimeZone(TimeZone.getTimeZone(\"GMT\")); // One can get any other time zone also // by passing some other argument to it // Printing the local time System.out.println(\"local Time:\" + localTime); // Printing the GMT time to // illustrate changes in GMT time System.out.println(\"Time IN Gmt : \" + s.format(localTime)); }}",
"e": 2776,
"s": 1512,
"text": null
},
{
"code": null,
"e": 2854,
"s": 2779,
"text": "local Time:Thu Feb 04 11:34:15 UTC 2021\nTime IN Gmt : 04/02/2021 11:34:15"
},
{
"code": null,
"e": 2916,
"s": 2856,
"text": "Method 2: Using instance() method of SimpleDateFormat class"
},
{
"code": null,
"e": 3129,
"s": 2918,
"text": "As in the above method we used SimpleDateFormat, we will now use the instant method to get the time. The SimpleDateFormat can be used in different ways, now the instant method can be used to get the UTC or GMT."
},
{
"code": null,
"e": 3142,
"s": 3131,
"text": "Procedure:"
},
{
"code": null,
"e": 3203,
"s": 3144,
"text": "We will use the Instant method to get the time proper time"
},
{
"code": null,
"e": 3247,
"s": 3203,
"text": "It can be importing the complete time class"
},
{
"code": null,
"e": 3262,
"s": 3247,
"text": " java.time.* ;"
},
{
"code": null,
"e": 3273,
"s": 3264,
"text": "Example:"
},
{
"code": null,
"e": 3280,
"s": 3275,
"text": "Java"
},
{
"code": "// Java Program to convert Local time to// GMT time // Importing all input output classesimport java.io.*;// Importing all time classesimport java.time.*; // Classclass GFG { // Main driver method public static void main(String[] args) { // Instant operator helps to note // the time and the location of it // Creating an object of Instant type // using the now() method Instant now = Instant.now(); // Now with the help of Instant operator // zoned operator is called // Creating an object of ZonedDateTime ZonedDateTime zdt = ZonedDateTime.ofInstant( now, ZoneId.systemDefault()); // Printing the local time System.out.println(\" Local : \" + zdt); // Creating an object of Instant type // taking any other instant method Instant instant = Instant.now(); // Printing the GMT/UTC time by parsing with string // using the toString() method System.out.println(\" GMT : \"+instant.toString()); }}",
"e": 4322,
"s": 3280,
"text": null
},
{
"code": null,
"e": 4406,
"s": 4325,
"text": " Local : 2021-02-04T10:40:34.436700Z[Etc/UTC]\n GMT : 2021-02-04T10:40:34.547680Z"
},
{
"code": null,
"e": 4421,
"s": 4408,
"text": "simmytarika5"
},
{
"code": null,
"e": 4436,
"s": 4421,
"text": "Java-Date-Time"
},
{
"code": null,
"e": 4443,
"s": 4436,
"text": "Picked"
},
{
"code": null,
"e": 4467,
"s": 4443,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 4472,
"s": 4467,
"text": "Java"
},
{
"code": null,
"e": 4491,
"s": 4472,
"text": "Technical Scripter"
},
{
"code": null,
"e": 4496,
"s": 4491,
"text": "Java"
},
{
"code": null,
"e": 4594,
"s": 4496,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4609,
"s": 4594,
"text": "Stream In Java"
},
{
"code": null,
"e": 4630,
"s": 4609,
"text": "Introduction to Java"
},
{
"code": null,
"e": 4651,
"s": 4630,
"text": "Constructors in Java"
},
{
"code": null,
"e": 4670,
"s": 4651,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 4687,
"s": 4670,
"text": "Generics in Java"
},
{
"code": null,
"e": 4717,
"s": 4687,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 4743,
"s": 4717,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 4759,
"s": 4743,
"text": "Strings in Java"
},
{
"code": null,
"e": 4796,
"s": 4759,
"text": "Differences between JDK, JRE and JVM"
}
] |
How to read Emails from Gmail using Gmail API in Python ?
01 Oct, 2020
In this article, we will see how to read Emails from your Gmail using Gmail API in Python. Gmail API is a RESTful API that allows users to interact with your Gmail account and use its features with a Python script.
So, let’s go ahead and write a simple Python script to read emails.
Python (2.6 or higher)
A Google account with Gmail enabled
Beautiful Soup library
Google API client and Google OAuth libraries
Install the required libraries by running these commands:
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
Run this to install Beautiful Soup:
pip install beautifulsoup4
Now, you have to set up your Google Cloud console to interact with the Gmail API. So, follow these steps:
Sign in to Google Cloud console and create a New Project or continue with an existing project.
Go to APIs and Services.
Enable Gmail API for the selected project.
Now, configure the Consent screen by clicking on OAuth Consent Screen if it is not already configured.
Enter the Application name and save it.
Now go to Credentials.
Click on Create credentials, and go to OAuth Client ID.
Choose application type as Desktop Application.
Enter the Application name, and click on the Create button.
The Client ID will be created. Download it to your computer and save it as credentials.json.
Please keep your Client ID and Client Secrets confidential.
Now, everything is set up, and we are ready to write the code. So, let’s go.
Approach :
The file ‘token.pickle‘ contains the User’s access token, so, first, we will check if it exists or not. If it does not exist or is invalid, our program will open up the browser and ask for access to the User’s Gmail and save it for next time. If it exists, we will check if the token needs to be refreshed and refresh if needed.
Now, we will connect to the Gmail API with the access token. Once connected, we will request a list of messages. This will return a list of IDs of the last 100 emails (default value) for that Gmail account. We can ask for any number of Emails by passing an optional argument ‘maxResults‘.
The output of this request is a dictionary in which the value of the key ‘messages‘ is a list of dictionaries. Each dictionary contains the ID of an Email and the Thread ID.
Now, We will go through all of these dictionaries and request the Email’s content through their IDs.
This again returns a dictionary in which the key ‘payload‘ contains the main content of Email in form of Dictionary.
This dictionary contains ‘headers‘, ‘parts‘, ‘filename‘ etc. So, we can now easily find headers such as sender, subject, etc. from here. The key ‘parts‘ which is a list of dictionaries contains all the parts of the Email’s body such as text, HTML, Attached file details, etc. So, we can get the body of the Email from here. It is generally in the first element of the list.
The body is encoded in Base 64 encoding. So, we have to convert it to a readable format. After decoding it, the obtained text is in ‘lxml‘. So, we will parse it using the BeautifulSoup library and convert it to text format.
At last, we will print the Subject, Sender, and Email.
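To make the decoding step above concrete, here is a minimal, self-contained sketch of just the Base64-and-BeautifulSoup part, separate from the full script that follows. The sample_data value is a made-up stand-in, not a real Gmail payload, and the lxml parser is assumed to be installed.
Python3
# Minimal sketch of the decoding step only; sample_data is a made-up placeholder
import base64
from bs4 import BeautifulSoup

sample_data = base64.urlsafe_b64encode(b"<html><body>Hello from Gmail!</body></html>").decode()

# Gmail returns the body in URL-safe Base64, so restore the standard alphabet first
data = sample_data.replace("-", "+").replace("_", "/")
decoded_data = base64.b64decode(data)

# Parse the decoded HTML and pull out the readable text
soup = BeautifulSoup(decoded_data, "lxml")
print(soup.get_text(strip=True))  # Hello from Gmail!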
Python3
# import the required libraries
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import pickle
import os.path
import base64
import email
from bs4 import BeautifulSoup

# Define the SCOPES. If modifying it, delete the token.pickle file.
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']

def getEmails():
    # Variable creds will store the user access token.
    # If no valid token found, we will create one.
    creds = None

    # The file token.pickle contains the user access token.
    # Check if it exists
    if os.path.exists('token.pickle'):

        # Read the token from the file and store it in the variable creds
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)

    # If credentials are not available or are invalid, ask the user to log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)

        # Save the access token in token.pickle file for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    # Connect to the Gmail API
    service = build('gmail', 'v1', credentials=creds)

    # request a list of all the messages
    result = service.users().messages().list(userId='me').execute()

    # We can also pass maxResults to get any number of emails. Like this:
    # result = service.users().messages().list(maxResults=200, userId='me').execute()
    messages = result.get('messages')

    # messages is a list of dictionaries where each dictionary contains a message id.

    # iterate through all the messages
    for msg in messages:
        # Get the message from its id
        txt = service.users().messages().get(userId='me', id=msg['id']).execute()

        # Use try-except to avoid any Errors
        try:
            # Get value of 'payload' from dictionary 'txt'
            payload = txt['payload']
            headers = payload['headers']

            # Look for Subject and Sender Email in the headers
            for d in headers:
                if d['name'] == 'Subject':
                    subject = d['value']
                if d['name'] == 'From':
                    sender = d['value']

            # The Body of the message is in Encrypted format. So, we have to decode it.
            # Get the data and decode it with base 64 decoder.
            parts = payload.get('parts')[0]
            data = parts['body']['data']
            data = data.replace("-", "+").replace("_", "/")
            decoded_data = base64.b64decode(data)

            # Now, the data obtained is in lxml. So, we will parse
            # it with BeautifulSoup library
            soup = BeautifulSoup(decoded_data, "lxml")
            body = soup.body()

            # Printing the subject, sender's email and message
            print("Subject: ", subject)
            print("From: ", sender)
            print("Message: ", body)
            print('\n')
        except:
            pass

getEmails()
Now, save the script (for example, as email_reader.py) and run it with:
python3 email_reader.py
This will attempt to open a new window in your default browser. If it fails, copy the URL from the console and manually open it in your browser.
Now, log in to your Google account if you aren't already logged in. If there are multiple accounts, you will be asked to choose one of them. Then, click on the Allow button.
After the authentication has been completed, your browser will display a message: “The authentication flow has been completed. You may close this window”.
The script will start printing the Email data in the console.
You can also extend this and save the emails in separate text or csv files to make a collection of Emails from a particular sender.
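As a rough sketch of that extension, the values printed inside getEmails() could instead be collected and written to a CSV file. The save_emails_to_csv helper below is hypothetical (the script above prints rather than returns the messages), so treat it as an illustration only.
Python3
# Illustrative sketch: write collected emails to a CSV file.
# Assumes `emails` is a list of (subject, sender, body) tuples gathered by the
# caller; the script above prints instead of collecting, so this is hypothetical.
import csv

def save_emails_to_csv(emails, path="emails.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["subject", "from", "message"])
        for subject, sender, body in emails:
            writer.writerow([subject, sender, body])

save_emails_to_csv([("Hello", "alice@example.com", "Just checking in.")])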
Best Time to Buy and Sell Stock
17 Jun, 2022
Type I: At most one transaction is allowed
Given an array prices[] of length N, representing the prices of the stocks on different days, the task is to find the maximum profit possible for buying and selling the stocks on different days using transactions where at most one transaction is allowed.
Note: Stock must be bought before being sold.
Examples:
Input: prices[] = {7, 1, 5, 3, 6, 4}
Output: 5
Explanation: The lowest price of the stock is on the 2nd day, i.e. price = 1. Starting from the 2nd day, the highest price of the stock is witnessed on the 5th day, i.e. price = 6. Therefore, maximum possible profit = 6 - 1 = 5.
Input: prices[] = {7, 6, 4, 3, 1}
Output: 0
Explanation: Since the prices are strictly decreasing, no profitable transaction is possible.
Approach 1: This problem can be solved using a greedy approach. To maximize the profit, minimize the buy cost and sell at the highest price that occurs after it. Follow the steps below to implement this idea:
Declare a buy variable to store the buy cost and max_profit to store the maximum profit.
Initialize the buy variable to first element of profit array.
Iterate over the prices array and check whether the current price is a new minimum.
If the current price is the minimum so far, buy on this ith day.
If the current price minus the current buy price gives a profit greater than max_profit, update max_profit.
Finally return the max_profit.
Below is the implementation of the above approach:
C++
Java
Python3
// C++ code for the above approach
#include <iostream>
using namespace std;

int maxProfit(int prices[], int n)
{
    int buy = prices[0], max_profit = 0;
    for (int i = 1; i < n; i++) {

        // Checking for lower buy value
        if (buy > prices[i])
            buy = prices[i];

        // Checking for higher profit
        else if (prices[i] - buy > max_profit)
            max_profit = prices[i] - buy;
    }
    return max_profit;
}

// Driver Code
int main()
{
    int prices[] = { 7, 1, 5, 6, 4 };
    int n = sizeof(prices) / sizeof(prices[0]);
    int max_profit = maxProfit(prices, n);
    cout << max_profit << endl;
    return 0;
}
// Java code for the above approach
class GFG {

    static int maxProfit(int prices[], int n)
    {
        int buy = prices[0], max_profit = 0;
        for (int i = 1; i < n; i++) {

            // Checking for lower buy value
            if (buy > prices[i])
                buy = prices[i];

            // Checking for higher profit
            else if (prices[i] - buy > max_profit)
                max_profit = prices[i] - buy;
        }
        return max_profit;
    }

    // Driver Code
    public static void main(String args[])
    {
        int prices[] = { 7, 1, 5, 6, 4 };
        int n = prices.length;
        int max_profit = maxProfit(prices, n);
        System.out.println(max_profit);
    }
}
// This code is contributed by Lovely Jain
## Python program for the above approach:

def maxProfit(prices, n):
    buy = prices[0]
    max_profit = 0
    for i in range(1, n):

        ## Checking for lower buy value
        if (buy > prices[i]):
            buy = prices[i]

        ## Checking for higher profit
        elif (prices[i] - buy > max_profit):
            max_profit = prices[i] - buy

    return max_profit


## Driver code
if __name__ == '__main__':
    prices = [7, 1, 5, 6, 4]
    n = len(prices)
    max_profit = maxProfit(prices, n)
    print(max_profit)
Output:
5
Time Complexity: O(N), where N is the size of the prices array. Auxiliary Space: O(1), as no extra space is used.
Approach 2: The given problem can be solved based on the idea of finding the maximum difference between two array elements with smaller number occurring before the larger number. Therefore, this problem can be reduced to finding max(prices[j]−prices[i]) for every pair of indices i and j, such that j>i.
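That max-difference idea can also be written directly as a single pass that keeps the smallest price seen so far. The sketch below (in Python, with a hypothetical max_diff_profit name) shows that one-pass formulation; the memoized recursive implementation that follows reaches the same answer by a different route.
Python3
# One-pass sketch of the max(prices[j] - prices[i]) formulation, j > i
def max_diff_profit(prices):
    best, min_so_far = 0, prices[0]
    for p in prices[1:]:
        best = max(best, p - min_so_far)   # sell today against the cheapest earlier day
        min_so_far = min(min_so_far, p)    # candidate buy day for later sells
    return best

print(max_diff_profit([7, 1, 5, 3, 6, 4]))  # 5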
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach

#include <bits/stdc++.h>
#include <iostream>
using namespace std;

// Function to find maximum profit possible
// by buying and selling at most one stack
int findMaximumProfit(vector<int>& prices, int i, int k,
                      bool buy, vector<vector<int> >& v)
{
    // If no stock can be chosen
    if (i >= prices.size() || k <= 0)
        return 0;

    if (v[i][buy] != -1)
        return v[i][buy];

    // If a stock is already bought
    if (buy) {
        return v[i][buy]
            = max(-prices[i] + findMaximumProfit(prices, i + 1, k, !buy, v),
                  findMaximumProfit(prices, i + 1, k, buy, v));
    }

    // Otherwise
    else {
        // Buy now
        return v[i][buy]
            = max(prices[i] + findMaximumProfit(prices, i + 1, k - 1, !buy, v),
                  findMaximumProfit(prices, i + 1, k, buy, v));
    }
}

// Function to find the maximum
// profit in the buy and sell stock
int maxProfit(vector<int>& prices)
{
    int n = prices.size();
    vector<vector<int> > v(n, vector<int>(2, -1));

    // buy = 1 because atmost one
    // transaction is allowed
    return findMaximumProfit(prices, 0, 1, 1, v);
}

// Driver Code
int main()
{
    // Given prices
    vector<int> prices = { 7, 1, 5, 3, 6, 4 };

    // Function Call to find the
    // maximum profit possible by
    // buying and selling a single stock
    int ans = maxProfit(prices);

    // Print answer
    cout << ans << endl;

    return 0;
}
// Java code for the above approach
import java.util.*;

class GFG {

    // Function to find maximum profit possible
    // by buying and selling at most one stack
    static int findMaximumProfit(int[] prices, int i, int k, int buy, int[][] v)
    {
        // If no stock can be chosen
        if (i >= prices.length || k <= 0)
            return 0;

        if (v[i][buy] != -1)
            return v[i][buy];

        // If a stock is already bought
        // Buy now
        int nbuy;
        if (buy == 1)
            nbuy = 0;
        else
            nbuy = 1;

        if (buy == 1) {
            return v[i][buy] = Math.max(
                -prices[i] + findMaximumProfit(prices, i + 1, k, nbuy, v),
                findMaximumProfit(prices, i + 1, k, (int)(buy), v));
        }

        // Otherwise
        else {
            // Buy now
            if (buy == 1)
                nbuy = 0;
            else
                nbuy = 1;

            return v[i][buy] = Math.max(
                prices[i] + findMaximumProfit(prices, i + 1, k - 1, nbuy, v),
                findMaximumProfit(prices, i + 1, k, buy, v));
        }
    }

    // Function to find the maximum
    // profit in the buy and sell stock
    static int maxProfit(int[] prices)
    {
        int n = prices.length;
        int[][] v = new int[n][2];

        for (int i = 0; i < v.length; i++) {
            v[i][0] = -1;
            v[i][1] = -1;
        }

        // buy = 1 because atmost one
        // transaction is allowed
        return findMaximumProfit(prices, 0, 1, 1, v);
    }

    // Driver Code
    public static void main(String[] args)
    {
        // Given prices
        int[] prices = { 7, 1, 5, 3, 6, 4 };

        // Function Call to find the
        // maximum profit possible by
        // buying and selling a single stock
        int ans = maxProfit(prices);

        // Print answer
        System.out.println(ans);
    }
}
// This code is contributed by Potta Lokesh
# Python 3 program for the above approach

# Function to find maximum profit possible
# by buying and selling at most one stack
def findMaximumProfit(prices, i, k, buy, v):

    # If no stock can be chosen
    if (i >= len(prices) or k <= 0):
        return 0

    if (v[i][buy] != -1):
        return v[i][buy]

    # If a stock is already bought
    if (buy):
        v[i][buy] = max(-prices[i] + findMaximumProfit(prices, i + 1, k, not buy, v),
                        findMaximumProfit(prices, i + 1, k, buy, v))
        return v[i][buy]

    # Otherwise
    else:
        # Buy now
        v[i][buy] = max(prices[i] + findMaximumProfit(prices, i + 1, k - 1, not buy, v),
                        findMaximumProfit(prices, i + 1, k, buy, v))
        return v[i][buy]

# Function to find the maximum
# profit in the buy and sell stock
def maxProfit(prices):

    n = len(prices)
    v = [[-1 for x in range(2)] for y in range(n)]

    # buy = 1 because atmost one
    # transaction is allowed
    return findMaximumProfit(prices, 0, 1, 1, v)

# Driver Code
if __name__ == "__main__":

    # Given prices
    prices = [7, 1, 5, 3, 6, 4]

    # Function Call to find the
    # maximum profit possible by
    # buying and selling a single stock
    ans = maxProfit(prices)

    # Print answer
    print(ans)
// C# program for above approach
using System;

class GFG {

    // Function to find maximum profit possible
    // by buying and selling at most one stack
    static int findMaximumProfit(int[] prices, int i, int k, int buy, int[,] v)
    {
        // If no stock can be chosen
        if (i >= prices.Length || k <= 0)
            return 0;

        if (v[i, buy] != -1)
            return v[i, buy];

        // If a stock is already bought
        // Buy now
        int nbuy;
        if (buy == 1)
            nbuy = 0;
        else
            nbuy = 1;

        if (buy == 1) {
            return v[i, buy] = Math.Max(
                -prices[i] + findMaximumProfit(prices, i + 1, k, nbuy, v),
                findMaximumProfit(prices, i + 1, k, (int)(buy), v));
        }

        // Otherwise
        else {
            // Buy now
            if (buy == 1)
                nbuy = 0;
            else
                nbuy = 1;

            return v[i, buy] = Math.Max(
                prices[i] + findMaximumProfit(prices, i + 1, k - 1, nbuy, v),
                findMaximumProfit(prices, i + 1, k, buy, v));
        }
    }

    // Function to find the maximum
    // profit in the buy and sell stock
    static int maxProfit(int[] prices)
    {
        int n = prices.Length;
        int[,] v = new int[n, 2];

        for (int i = 0; i < n; i++) {
            v[i, 0] = -1;
            v[i, 1] = -1;
        }

        // buy = 1 because atmost one
        // transaction is allowed
        return findMaximumProfit(prices, 0, 1, 1, v);
    }

    // Driver Code
    public static void Main()
    {
        // Given prices
        int[] prices = { 7, 1, 5, 3, 6, 4 };

        // Function Call to find the
        // maximum profit possible by
        // buying and selling a single stock
        int ans = maxProfit(prices);

        // Print answer
        Console.Write(ans);
    }
}

// This code is contributed by Samim Hossain Mondal.
<script>
// Javascript code for the above approach

// Function to find maximum profit possible
// by buying and selling at most one stack
function findMaximumProfit(prices, i, k, buy, v) {

    // If no stock can be chosen
    if (i >= prices.length || k <= 0)
        return 0;

    if (v[i][buy] != -1)
        return v[i][buy];

    // If a stock is already bought
    // Buy now
    let nbuy;
    if (buy == 1)
        nbuy = 0;
    else
        nbuy = 1;

    if (buy == 1) {
        return v[i][buy] = Math.max(
            -prices[i] + findMaximumProfit(prices, i + 1, k, nbuy, v),
            findMaximumProfit(prices, i + 1, k, buy, v));
    }

    // Otherwise
    else {

        // Buy now
        if (buy == 1)
            nbuy = 0;
        else
            nbuy = 1;

        return v[i][buy] = Math.max(
            prices[i] + findMaximumProfit(prices, i + 1, k - 1, nbuy, v),
            findMaximumProfit(prices, i + 1, k, buy, v));
    }
}

// Function to find the maximum
// profit in the buy and sell stock
function maxProfit(prices) {
    let n = prices.length;
    let v = new Array(n).fill(0).map(() => new Array(2).fill(-1));

    // buy = 1 because atmost one
    // transaction is allowed
    return findMaximumProfit(prices, 0, 1, 1, v);
}

// Driver Code

// Given prices
let prices = [7, 1, 5, 3, 6, 4];

// Function Call to find the
// maximum profit possible by
// buying and selling a single stock
let ans = maxProfit(prices);

// Print answer
document.write(ans);

// This code is contributed by Saurabh Jaiswal
</script>
5
Time complexity: O(N) where N is the length of the given array. Auxiliary Space: O(N)
Type II: Infinite transactions are allowed
Given an array price[] of length N, representing the prices of the stocks on different days, the task is to find the maximum profit possible for buying and selling the stocks on different days using transactions where any number of transactions are allowed.
Examples:
Input: prices[] = {7, 1, 5, 3, 6, 4}
Output: 7
Explanation:
Purchase on 2nd day. Price = 1.
Sell on 3rd day. Price = 5.
Therefore, profit = 5 - 1 = 4.
Purchase on 4th day. Price = 3.
Sell on 5th day. Price = 6.
Therefore, profit = 4 + (6 - 3) = 7.
Input: prices[] = {1, 2, 3, 4, 5}
Output: 4
Explanation: Purchase on 1st day. Price = 1. Sell on 5th day. Price = 5. Therefore, profit = 5 - 1 = 4.
Approach: The idea is to maintain a boolean value that denotes if there is any current purchase ongoing or not. If yes, then at the current state, the stock can be sold to maximize profit or move to the next price without selling the stock. Otherwise, if no transaction is happening, the current stock can be bought or move to the next price without buying.
Below is the implementation of the above approach:
C++
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to calculate maximum
// profit possible by buying or
// selling stocks any number of times
int find(int ind, vector<int>& v, bool buy, vector<vector<int> >& memo)
{
    // No prices left
    if (ind >= v.size())
        return 0;

    // Already found
    if (memo[ind][buy] != -1)
        return memo[ind][buy];

    // Already bought, now sell
    if (buy) {
        return memo[ind][buy]
            = max(-v[ind] + find(ind + 1, v, !buy, memo),
                  find(ind + 1, v, buy, memo));
    }

    // Otherwise, buy the stock
    else {
        return memo[ind][buy]
            = max(v[ind] + find(ind + 1, v, !buy, memo),
                  find(ind + 1, v, buy, memo));
    }
}

// Function to find the maximum
// profit possible by buying and
// selling stocks any number of times
int maxProfit(vector<int>& prices)
{
    int n = prices.size();
    if (n < 2)
        return 0;

    vector<vector<int> > v(n + 1, vector<int>(2, -1));
    return find(0, prices, 1, v);
}

// Driver Code
int main()
{
    // Given prices
    vector<int> prices = { 7, 1, 5, 3, 6, 4 };

    // Function Call to calculate
    // maximum profit possible
    int ans = maxProfit(prices);

    // Print the total profit
    cout << ans << endl;

    return 0;
}
7
Time complexity: O(N) where N is the length of the given array. Auxiliary Space: O(N)
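For the unlimited-transaction case, the same result can also be obtained greedily by summing every positive day-to-day price increase. The short Python sketch below is not part of the original listing; it is an equivalent formulation shown for comparison.
Python3
# Greedy sketch for unlimited transactions: take every upward price move
def max_profit_unlimited(prices):
    return sum(max(0, prices[i] - prices[i - 1]) for i in range(1, len(prices)))

print(max_profit_unlimited([7, 1, 5, 3, 6, 4]))  # 7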
Type III: At most two transactions are allowed
Problem: Given an array price[] of length N which denotes the prices of the stocks on different days. The task is to find the maximum profit possible for buying and selling the stocks on different days using transactions where at most two transactions are allowed.
Note: Stock must be bought before being sold.
Input: prices[] = {3, 3, 5, 0, 0, 3, 1, 4}
Output: 6
Explanation:
Buy on Day 4 and sell on Day 6 => Profit = 3 - 0 = 3.
Buy on Day 7 and sell on Day 8 => Profit = 4 - 1 = 3.
Therefore, Total Profit = 3 + 3 = 6.
Input: prices[] = {1, 2, 3, 4, 5}
Output: 4
Explanation: Buy on Day 1 and sell on Day 5 => Profit = 5 - 1 = 4. Therefore, Total Profit = 4.
Approach: The problem can be solved by extending the above approach. Now, if the number of completed transactions is equal to 2, then the current profit can be the desired answer. Similarly, try out all the possible answers by memoizing them into the DP table.
Below is the implementation of the above approach:
C++
// C++ program for the above approach

#include <bits/stdc++.h>
#include <iostream>
using namespace std;

// Function to find the maximum
// profit in the buy and sell stock
int find(vector<int>& prices, int ind, bool buy, int c,
         vector<vector<vector<int> > >& memo)
{
    // If buy = 1, a stock can be bought now;
    // otherwise the held stock can be sold
    if (ind >= prices.size() || c >= 2)
        return 0;

    // Memoize on (index, buy/sell state, completed transactions)
    if (memo[ind][buy][c] != -1)
        return memo[ind][buy][c];

    // Can buy a stock now
    if (buy) {
        return memo[ind][buy][c]
            = max(-prices[ind] + find(prices, ind + 1, !buy, c, memo),
                  find(prices, ind + 1, buy, c, memo));
    }

    // Otherwise sell the held stock,
    // completing one transaction
    else {
        return memo[ind][buy][c]
            = max(prices[ind] + find(prices, ind + 1, !buy, c + 1, memo),
                  find(prices, ind + 1, buy, c, memo));
    }
}

// Function to find the maximum
// profit in the buy and sell stock
int maxProfit(vector<int>& prices)
{
    // Here maximum two transactions are allowed.
    // Use a 3-D vector because there are
    // three states: index, buy/sell, transaction count
    vector<vector<vector<int> > > memo(
        prices.size(),
        vector<vector<int> >(2, vector<int>(2, -1)));

    // Answer
    return find(prices, 0, 1, 0, memo);
}

// Driver Code
int main()
{
    // Given prices
    vector<int> prices = { 3, 3, 5, 0, 0, 3, 1, 4 };

    // Function Call
    int ans = maxProfit(prices);

    // Answer
    cout << ans << endl;

    return 0;
}
6
Time complexity: O(N), where N is the length of the given array. Auxiliary Space: O(N)
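A constant-space alternative for the two-transaction case is the classic four-variable scan. The Python sketch below is not the memoized solution above but a well-known equivalent, included only for comparison.
Python3
# Constant-space sketch for at most two transactions
def max_profit_two(prices):
    buy1 = buy2 = float("-inf")
    sell1 = sell2 = 0
    for p in prices:
        buy1 = max(buy1, -p)           # best cash position after the first buy
        sell1 = max(sell1, buy1 + p)   # best profit after the first sell
        buy2 = max(buy2, sell1 - p)    # best cash position after the second buy
        sell2 = max(sell2, buy2 + p)   # best profit after the second sell
    return sell2

print(max_profit_two([3, 3, 5, 0, 0, 3, 1, 4]))  # 6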
Type IV: At most K transactions are allowed
Problem: Given an array price[] of length N which denotes the prices of the stocks on different days. The task is to find the maximum profit possible for buying and selling the stocks on different days using transactions where at most K transactions are allowed.
Note: Stock must be bought before being sold.
Examples:
Input: K = 2, prices[] = {2, 4, 1}
Output: 2
Explanation: Buy on day 1 when price is 2 and sell on day 2 when price is 4. Therefore, profit = 4 - 2 = 2.
Input: K = 2, prices[] = {3, 2, 6, 5, 0, 3}
Output: 7
Explanation: Buy on day 2 when price is 2 and sell on day 3 when price is 6. Therefore, profit = 6 - 2 = 4.
Buy on day 5 when price is 0 and sell on day 6 when price is 3. Therefore, profit = 3 - 0 = 3.
Therefore, the total profit = 4 + 3 = 7.
Approach: The idea is to maintain the count of transactions completed so far and compare it to K. If it is less than K, then the stock can still be bought and sold. Otherwise, the current profit can be the maximum profit.
Below is the implementation of the above approach:
C++
// C++ program for the above approach

#include <bits/stdc++.h>
#include <iostream>
using namespace std;

// Function to find the maximum
// profit with atmost K transactions
int find(vector<int>& prices, int ind, bool buy, int c, int k,
         vector<vector<vector<int> > >& memo)
{
    // If there are no prices left or no more
    // transactions allowed, return the profit as 0
    if (ind >= prices.size() || c >= k)
        return 0;

    // Memoize on (index, buy/sell state, completed transactions)
    else if (memo[ind][buy][c] != -1)
        return memo[ind][buy][c];

    // Can buy a stock now
    if (buy) {
        return memo[ind][buy][c]
            = max(-prices[ind] + find(prices, ind + 1, !buy, c, k, memo),
                  find(prices, ind + 1, buy, c, k, memo));
    }

    // Otherwise sell the held stock,
    // completing one transaction
    else {
        return memo[ind][buy][c]
            = max(prices[ind] + find(prices, ind + 1, !buy, c + 1, k, memo),
                  find(prices, ind + 1, buy, c, k, memo));
    }
}

// Function to find the maximum profit
// in the buy and sell stock
int maxProfit(int k, vector<int>& prices)
{
    // If transactions are greater
    // than number of prices
    if (2 * k > prices.size()) {
        int res = 0;
        for (int i = 1; i < prices.size(); i++) {
            res += max(0, prices[i] - prices[i - 1]);
        }
        return res;
    }

    // Maximum k transactions
    vector<vector<vector<int> > > memo(
        prices.size() + 1,
        vector<vector<int> >(2, vector<int>(k + 1, -1)));

    return find(prices, 0, 1, 0, k, memo);
}

// Driver Code
int main()
{
    // Given prices
    vector<int> prices = { 2, 4, 1 };

    // Given K
    int k = 2;

    // Function Call
    int ans = maxProfit(k, prices);

    // Print answer
    cout << ans << endl;

    return 0;
}
2
Time complexity: O(N*K), where N is the length of the given array and K is the number of transactions allowed. Auxiliary Space: O(N*K)
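For reference, the same at-most-K logic can be written as an iterative table instead of memoized recursion. The Python sketch below (with a hypothetical max_profit_k name) follows the standard O(N*K) tabulation and falls back to the greedy sum when 2*K already covers every day; it is an illustration, not the listing above.
Python3
# Iterative O(N*K) sketch for at most K transactions (hypothetical helper name)
def max_profit_k(prices, k):
    n = len(prices)
    if n < 2 or k == 0:
        return 0
    if 2 * k >= n:
        # Enough transactions to behave like the unlimited case
        return sum(max(0, prices[i] - prices[i - 1]) for i in range(1, n))
    # dp[t][d]: best profit using at most t transactions up to day d
    dp = [[0] * n for _ in range(k + 1)]
    for t in range(1, k + 1):
        best = -prices[0]  # best value of dp[t-1][j] - prices[j] over j < d
        for d in range(1, n):
            dp[t][d] = max(dp[t][d - 1], prices[d] + best)
            best = max(best, dp[t - 1][d] - prices[d])
    return dp[k][-1]

print(max_profit_k([2, 4, 1], 2))           # 2
print(max_profit_k([3, 2, 6, 5, 0, 3], 2))  # 7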
{
"code": null,
"e": 19296,
"s": 19244,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 19300,
"s": 19296,
"text": "C++"
},
{
"code": "// C++ program for the above approach #include <bits/stdc++.h>#include <iostream>using namespace std; // Function to find the maximum// profit with atmost K transactionsint find(vector<int>& prices, int ind, bool buy, int c, int k, vector<vector<vector<int> > >& memo){ // If there are no more transaction // allowed, return the profit as 0 if (ind >= prices.size() || c >= k) return 0; // Memoize else if (memo[ind][buy] != -1) return memo[ind][buy]; // Already bought, now sell if (buy) { return memo[ind][buy] = max( -prices[ind] + find(prices, ind + 1, !buy, c, k, memo), find(prices, ind + 1, buy, c, k, memo)); } // Stocks can be bought else { return memo[ind][buy] = max( prices[ind] + find(prices, ind + 1, !buy, c + 1, k, memo), find(prices, ind + 1, buy, c, k, memo)); }} // Function to find the maximum profit// in the buy and sell stockint maxProfit(int k, vector<int>& prices){ // If transactions are greater // than number of prices if (2 * k > prices.size()) { int res = 0; for (int i = 1; i < prices.size(); i++) { res += max(0, prices[i] - prices[i - 1]); } return res; } // Maximum k transaction vector<vector<vector<int> > > memo( prices.size() + 1, vector<vector<int> >(2, vector<int>(k + 1, -1))); return find(prices, 0, 1, 0, k, memo);} // Driver Codeint main(){ // Given prices vector<int> prices = { 2, 4, 1 }; // Given K int k = 2; // Function Call int ans = maxProfit(k, prices); // Print answer cout << ans << endl; return 0;}",
"e": 21107,
"s": 19300,
"text": null
},
{
"code": null,
"e": 21109,
"s": 21107,
"text": "2"
},
{
"code": null,
"e": 21244,
"s": 21109,
"text": "Time complexity: O(N*K), where N is the length of the given array and K is the number of transactions allowed. Auxiliary Space: O(N*K)"
},
{
"code": null,
"e": 21260,
"s": 21246,
"text": "lokeshpotta20"
},
{
"code": null,
"e": 21270,
"s": 21260,
"text": "samim2000"
},
{
"code": null,
"e": 21287,
"s": 21270,
"text": "_saurabh_jaiswal"
},
{
"code": null,
"e": 21293,
"s": 21287,
"text": "ukasp"
},
{
"code": null,
"e": 21306,
"s": 21293,
"text": "simmytarika5"
},
{
"code": null,
"e": 21316,
"s": 21306,
"text": "nehajmi08"
},
{
"code": null,
"e": 21333,
"s": 21316,
"text": "piyushmulatkar11"
},
{
"code": null,
"e": 21343,
"s": 21333,
"text": "202051178"
},
{
"code": null,
"e": 21359,
"s": 21343,
"text": "subhamgoyal2014"
},
{
"code": null,
"e": 21373,
"s": 21359,
"text": "jainlovely450"
},
{
"code": null,
"e": 21393,
"s": 21373,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 21413,
"s": 21393,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 21511,
"s": 21413,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 21579,
"s": 21511,
"text": "Find if there is a path between two vertices in an undirected graph"
},
{
"code": null,
"e": 21634,
"s": 21579,
"text": "Count number of binary strings without consecutive 1's"
},
{
"code": null,
"e": 21695,
"s": 21634,
"text": "Find if a string is interleaved of two other strings | DP-33"
},
{
"code": null,
"e": 21755,
"s": 21695,
"text": "Optimal Substructure Property in Dynamic Programming | DP-2"
},
{
"code": null,
"e": 21806,
"s": 21755,
"text": "Maximum sum such that no two elements are adjacent"
},
{
"code": null,
"e": 21844,
"s": 21806,
"text": "Unique paths in a Grid with Obstacles"
},
{
"code": null,
"e": 21889,
"s": 21844,
"text": "How to solve a Dynamic Programming Problem ?"
},
{
"code": null,
"e": 21948,
"s": 21889,
"text": "Maximum profit by buying and selling a share at most twice"
},
{
"code": null,
"e": 21975,
"s": 21948,
"text": "Word Break Problem | DP-32"
}
] |
A Beginner’s Guide to Graph Neural Networks Using PyTorch Geometric — Part 2 | by Rohith Teja | Towards Data Science
|
In my previous post, we saw how PyTorch Geometric library was used to construct a GNN model and formulate a Node Classification task on Zachary’s Karate Club dataset.
A graph neural network model requires initial node representations in order to train and previously, I employed the node degrees as these representations. But there are several ways to do it and another interesting way is to use learning-based methods like node embeddings as the numerical representations.
This is a small recap of the dataset and its visualization showing the two factions with two different colours.
Dataset: Zachary’s Karate Club.
Here, the nodes represent 34 students who were involved in the club and the links represent 78 different interactions between pairs of members outside the club. There are two different labels, i.e., the two factions.
Our idea is to capture the network information using an array of numbers which are called low-dimensional embeddings. There exist different algorithms specifically for the purpose of learning numerical representations for graph nodes.
DeepWalk is a node embedding technique that is based on the Random Walk concept which I will be using in this example. In order to implement it, I picked the Graph Embedding python library that provides 5 different types of algorithms to generate the embeddings.
Firstly, install the Graph Embedding library and run the setup:
!git clone https://github.com/shenweichen/GraphEmbedding.git
cd GraphEmbedding/
!python setup.py install
We use the DeepWalk model to learn the embeddings for our graph nodes. The variable embeddings stores the embeddings in form of a dictionary where the keys are the nodes and values are the embeddings themselves.
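The embedding code itself is not reproduced in this extract, so here is a minimal sketch of what that step might look like with the GraphEmbedding library's DeepWalk class. The graph construction, walk settings and training parameters below are illustrative assumptions (they follow the library's own examples and may differ across versions), not the exact code from the original notebook:
# Minimal sketch, not the original notebook code: learn DeepWalk embeddings
# for Zachary's Karate Club with the GraphEmbedding library.
import networkx as nx
from ge import DeepWalk  # module name used by the GraphEmbedding package

# Relabel the integer nodes as strings so they work as Word2Vec tokens
G = nx.relabel_nodes(nx.karate_club_graph(), lambda n: str(n))

# Walk and training settings are illustrative values
model = DeepWalk(G, walk_length=10, num_walks=80, workers=1)
model.train(window_size=5, iter=3)  # embed_size defaults to 128 in this library

# Dictionary mapping each node (as a string) to its 128-dimensional embedding
embeddings = model.get_embeddings()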
As I mentioned before, embeddings are just low-dimensional numerical representations of the network, so we can make a visualization of these embeddings. Here, the size of the embeddings is 128, so we need to employ t-SNE, which is a dimensionality reduction technique. Basically, t-SNE transforms the 128-dimensional array into a 2-dimensional array so that we can visualize it in a 2D space.
Note: The embedding size is a hyperparameter.
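The plotting code is not included here either; a small sketch of the idea, assuming the embeddings dictionary and graph G from the sketch above (the faction attribute 'club' comes from networkx's built-in dataset, and the colour choices are just for illustration):
# Sketch of the t-SNE visualization of the 128-dimensional embeddings
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

nodes = list(embeddings.keys())
X = np.array([embeddings[n] for n in nodes])   # shape (34, 128)
clubs = [G.nodes[n]['club'] for n in nodes]    # faction of each member

# Reduce 128 dimensions to 2 so the points can be drawn on a plane
X_2d = TSNE(n_components=2, random_state=42).fit_transform(X)

colours = ['red' if c == 'Mr. Hi' else 'blue' for c in clubs]
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=colours)
plt.show()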
The visualization made using the above code looks like this:
We can see that the embeddings generated for this graph are of good quality as there is a clear separation between the red and blue points. Now we can build a graph neural network model which trains on these embeddings and finally, we will have a good prediction model.
I will reuse the code from my previous post for building the graph neural network model for the node classification task. The procedure we follow from now is very similar to my previous post. We just change the node features from degree to DeepWalk embeddings.
Here, we are just preparing the data which will be used to create the custom dataset in the next step. Notice how I changed the embeddings variable which holds the node embedding values generated from the DeepWalk algorithm.
In order to compare the results with my previous post, I am using a similar data split and conditions as before.
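The preparation code is not shown in this extract, so the sketch below only illustrates roughly how the DeepWalk embeddings could be packed into a PyTorch Geometric Data object; the train/test masks and num_classes follow the Part 1 split and are omitted here for brevity:
# Sketch only: node features are the DeepWalk embeddings instead of degrees
import numpy as np
import torch
from torch_geometric.data import Data

order = sorted(G.nodes, key=int)  # '0' ... '33'
x = torch.tensor(np.array([embeddings[n] for n in order]), dtype=torch.float)  # [34, 128]
y = torch.tensor([0 if G.nodes[n]['club'] == 'Mr. Hi' else 1 for n in order])

# Store every edge in both directions, which gives the [2, 156] edge_index below
edges = [[int(u), int(v)] for u, v in G.edges]
edges += [[v, u] for u, v in edges]
edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()

data = Data(x=x, edge_index=edge_index, y=y)
# train_mask, test_mask and num_classes are attached exactly as in Part 1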
The data object now contains the following variables:
Data(edge_index=[2, 156], num_classes=[1], test_mask=[34], train_mask=[34], x=[34, 128], y=[34])
We can notice the change in dimensions of the x variable from 1 to 128.
We use the same code for constructing the graph convolutional network.
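For reference, a two-layer graph convolutional network in PyTorch Geometric, in the spirit of the Part 1 model, might be sketched as follows; the hidden layer size is illustrative, and the only real change is the 128-dimensional input:
# Sketch of a two-layer GCN; the input size now matches the embedding size
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(128, 16)  # 128 = DeepWalk embedding size
        self.conv2 = GCNConv(16, 2)    # 2 classes: the two factions

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)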
Now it is time to train the model and predict on the test set.
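A minimal training and evaluation loop, assuming the Net class and the data object sketched above (with the Part 1 masks attached), could look like this; the optimizer settings and epoch count are illustrative:
# Sketch of training and evaluation on the train/test masks
import torch
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model, data = Net().to(device), data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

model.eval()
pred = model(data).argmax(dim=1)
correct = (pred[data.test_mask] == data.y[data.test_mask]).sum().item()
print('Test Accuracy:', correct / int(data.test_mask.sum()))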
Using the same hyperparameters as before, we obtain the results as:
Train Accuracy: 1.0 Test Accuracy: 0.90
This is a huge improvement from before!
As seen from the results, we actually have a good improvement in both train and test accuracies when the GNN model was trained under similar conditions of Part 1.
Note: We can surely improve the results by doing hyperparameter tuning.
This shows that Graph Neural Networks perform better when we use learning-based node embeddings as the input feature. Now the question arises, why is this happening?
Answering that question takes a bit of explanation. So I will write a new post just to explain this behaviour. Stay tuned!
PyTorch Geometric: https://github.com/rusty1s/pytorch_geometric
DeepWalk: https://arxiv.org/abs/1403.6652
Graph Embedding library: https://github.com/shenweichen/GraphEmbedding
GCN code example: https://github.com/rusty1s/pytorch_geometric/blob/master/examples/gcn.py
5. Link to Part 1 of this series. I strongly recommend checking this out:
towardsdatascience.com
I hope you enjoyed reading the post and you can find me on LinkedIn, Twitter or GitHub. Feel free to say hi!
|
[
{
"code": null,
"e": 339,
"s": 172,
"text": "In my previous post, we saw how PyTorch Geometric library was used to construct a GNN model and formulate a Node Classification task on Zachary’s Karate Club dataset."
},
{
"code": null,
"e": 646,
"s": 339,
"text": "A graph neural network model requires initial node representations in order to train and previously, I employed the node degrees as these representations. But there are several ways to do it and another interesting way is to use learning-based methods like node embeddings as the numerical representations."
},
{
"code": null,
"e": 758,
"s": 646,
"text": "This is a small recap of the dataset and its visualization showing the two factions with two different colours."
},
{
"code": null,
"e": 790,
"s": 758,
"text": "Dataset: Zachary’s Karate Club."
},
{
"code": null,
"e": 1014,
"s": 790,
"text": "Here, the nodes represent 34 students who were involved in the club and the links represent 78 different interactions between pairs of members outside the club. There are two different types of labels i.e, the two factions."
},
{
"code": null,
"e": 1249,
"s": 1014,
"text": "Our idea is to capture the network information using an array of numbers which are called low-dimensional embeddings. There exist different algorithms specifically for the purpose of learning numerical representations for graph nodes."
},
{
"code": null,
"e": 1512,
"s": 1249,
"text": "DeepWalk is a node embedding technique that is based on the Random Walk concept which I will be using in this example. In order to implement it, I picked the Graph Embedding python library that provides 5 different types of algorithms to generate the embeddings."
},
{
"code": null,
"e": 1576,
"s": 1512,
"text": "Firstly, install the Graph Embedding library and run the setup:"
},
{
"code": null,
"e": 1679,
"s": 1576,
"text": "!git clone https://github.com/shenweichen/GraphEmbedding.gitcd GraphEmbedding/!python setup.py install"
},
{
"code": null,
"e": 1891,
"s": 1679,
"text": "We use the DeepWalk model to learn the embeddings for our graph nodes. The variable embeddings stores the embeddings in form of a dictionary where the keys are the nodes and values are the embeddings themselves."
},
{
"code": null,
"e": 2288,
"s": 1891,
"text": "As I mentioned before, embeddings are just low-dimensional numerical representations of the network, therefore we can make a visualization of these embeddings. Here, the size of the embeddings is 128, so we need to employ t-SNE which is a dimensionality reduction technique. Basically, t-SNE transforms the 128 dimension array into a 2-dimensional array so that we can visualize it in a 2D space."
},
{
"code": null,
"e": 2334,
"s": 2288,
"text": "Note: The embedding size is a hyperparameter."
},
{
"code": null,
"e": 2395,
"s": 2334,
"text": "The visualization made using the above code looks like this:"
},
{
"code": null,
"e": 2665,
"s": 2395,
"text": "We can see that the embeddings generated for this graph are of good quality as there is a clear separation between the red and blue points. Now we can build a graph neural network model which trains on these embeddings and finally, we will have a good prediction model."
},
{
"code": null,
"e": 2926,
"s": 2665,
"text": "I will reuse the code from my previous post for building the graph neural network model for the node classification task. The procedure we follow from now is very similar to my previous post. We just change the node features from degree to DeepWalk embeddings."
},
{
"code": null,
"e": 3151,
"s": 2926,
"text": "Here, we are just preparing the data which will be used to create the custom dataset in the next step. Notice how I changed the embeddings variable which holds the node embedding values generated from the DeepWalk algorithm."
},
{
"code": null,
"e": 3264,
"s": 3151,
"text": "In order to compare the results with my previous post, I am using a similar data split and conditions as before."
},
{
"code": null,
"e": 3318,
"s": 3264,
"text": "The data object now contains the following variables:"
},
{
"code": null,
"e": 3415,
"s": 3318,
"text": "Data(edge_index=[2, 156], num_classes=[1], test_mask=[34], train_mask=[34], x=[34, 128], y=[34])"
},
{
"code": null,
"e": 3487,
"s": 3415,
"text": "We can notice the change in dimensions of the x variable from 1 to 128."
},
{
"code": null,
"e": 3558,
"s": 3487,
"text": "We use the same code for constructing the graph convolutional network."
},
{
"code": null,
"e": 3621,
"s": 3558,
"text": "Now it is time to train the model and predict on the test set."
},
{
"code": null,
"e": 3689,
"s": 3621,
"text": "Using the same hyperparameters as before, we obtain the results as:"
},
{
"code": null,
"e": 3729,
"s": 3689,
"text": "Train Accuracy: 1.0 Test Accuracy: 0.90"
},
{
"code": null,
"e": 3769,
"s": 3729,
"text": "This is a huge improvement from before!"
},
{
"code": null,
"e": 3932,
"s": 3769,
"text": "As seen from the results, we actually have a good improvement in both train and test accuracies when the GNN model was trained under similar conditions of Part 1."
},
{
"code": null,
"e": 4004,
"s": 3932,
"text": "Note: We can surely improve the results by doing hyperparameter tuning."
},
{
"code": null,
"e": 4170,
"s": 4004,
"text": "This shows that Graph Neural Networks perform better when we use learning-based node embeddings as the input feature. Now the question arises, why is this happening?"
},
{
"code": null,
"e": 4293,
"s": 4170,
"text": "Answering that question takes a bit of explanation. So I will write a new post just to explain this behaviour. Stay tuned!"
},
{
"code": null,
"e": 4558,
"s": 4293,
"text": "PyTorch Geometric: https://github.com/rusty1s/pytorch_geometricDeepWalk: https://arxiv.org/abs/1403.6652Graph Embedding library: https://github.com/shenweichen/GraphEmbeddingGCN code example: https://github.com/rusty1s/pytorch_geometric/blob/master/examples/gcn.py"
},
{
"code": null,
"e": 4622,
"s": 4558,
"text": "PyTorch Geometric: https://github.com/rusty1s/pytorch_geometric"
},
{
"code": null,
"e": 4664,
"s": 4622,
"text": "DeepWalk: https://arxiv.org/abs/1403.6652"
},
{
"code": null,
"e": 4735,
"s": 4664,
"text": "Graph Embedding library: https://github.com/shenweichen/GraphEmbedding"
},
{
"code": null,
"e": 4826,
"s": 4735,
"text": "GCN code example: https://github.com/rusty1s/pytorch_geometric/blob/master/examples/gcn.py"
},
{
"code": null,
"e": 4900,
"s": 4826,
"text": "5. Link to Part 1 of this series. I strongly recommend checking this out:"
},
{
"code": null,
"e": 4923,
"s": 4900,
"text": "towardsdatascience.com"
}
] |
Monitoring your BigQuery costs and reports usage with Data Studio | by 💡Mike Shakhomirov | Towards Data Science
|
If you are a BigQuery user and you visualise your data with Data Studio then you might want to answer the following questions:
What was the cost of each Data Studio report for yesterday?
How many times each query/report was run and who ran it?
What was the cost of queries for tables, datasets (e.g. production/staging/ analytics) with label X?
Can I be notified in case of a sudden surge in billed bytes amount?
Standard Google Billing dashboard won’t answer these questions.
According to official Google docs at the moment you can’t use labels for BigQuery jobs.
Read more here: https://cloud.google.com/bigquery/docs/filtering-labels
But what if we want to know who exactly ran the query and HOW MUCH it cost us?
In this article I will talk you through the complete process of setting up BigQuery cost monitoring dashboard.
I used standard Google Ads template from Google Data Studio. I think it looks nice and I slightly changed it for my needs.
If you want you can just copy the template with all relevant settings and widgets included. Just adjust the data source (create your own) for each widget as shown below in bq_cost_analysis.sql
Prerequisites:
Google Cloud Platform (GCP) account
BigQuery
Data Studio
In this article I will use Cloud Audit Log for BigQuery. We’ll take this event data, export it to BigQuery and analyse it.
Google-recommended best practice is to use custom log based metrics.
Go to Logging and create a sink. Read more about creating sinks here.
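The exact filter is not shown in this article, but a commonly used advanced filter for these events looks roughly like the following; field names can differ depending on the audit log format you export, so treat it as a starting point rather than a definitive setting:
resource.type="bigquery_resource"
protoPayload.methodName="jobservice.jobcompleted"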
This will output all BigQuery `query_job_completed` log events from Cloud Audit Log service into your BigQuery table.
In this way you can export any logs for further analysis which I find very useful.
Note that only new logs will be exported but you can also filter and narrow down the logs you are investigating in Logging console:
So now when I have BigQuery event data flowing into my logs dataset I will create a view based on the data I have:
And that’s the bit we need to calculate BigQuery costs (per query or a job):
We can then simply multiply: cost per TB processed * number of TB processed.
It is a custom query bq_cost_analysis.sql:
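The full query is not reproduced in this extract, so here is a minimal sketch of the kind of calculation bq_cost_analysis.sql performs. It assumes the standard audit-log export schema, a placeholder project/dataset/table name, and the US on-demand price of $5 per TB at the time of writing — adjust all of these for your own project:
-- Illustrative sketch only: cost per user and per query from exported audit logs
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS user_email,
  SUBSTR(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query, 0, 30) AS report,
  SUM(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalBilledBytes) AS totalBilledBytes,
  SUM(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalBilledBytes) / POW(2, 40) * 5.0 AS estimated_cost_usd
FROM
  `my-project.logs_dataset.cloudaudit_googleapis_com_data_access_*`
GROUP BY
  user_email, report
ORDER BY
  estimated_cost_usd DESC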
Now we have processedBytes with associated cost per user and per query nicely grouped:
You probably noticed me adding a query tag in bq_cost_analysis.sql
That’s a little trick that will help with cost groupings and let us use actual labels (which are currently not supported).
Simply adjust that SQL and add a column for your tagged query. It can be anything, a report in Data Studio, table name or a scheduled query:
substr(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query,0,30) as report
Or something like this might work even better:
select REGEXP_EXTRACT(' --tag: some report - tag ends/', r'--tag: (.*) - tag ends/') as report
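For this trick to work, each Data Studio custom query needs to carry a tag of that shape somewhere in its text. An illustrative example of such a tagged query (the table and report names are placeholders):
-- Illustrative tagged custom query for a Data Studio data source
SELECT *
FROM `my-project.analytics.sales` --tag: sales report - tag ends/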
Of course, you can aggregate costs by query too.
If you set the table at the bottom to display only one record (one query) per page, that would make query cost analysis really simple.
Adding sliders for hour and minute can also be very useful.
Another interesting thing: cache. If you want to save money you should enable prefetch caching. In the Data Studio report you must use the owner's credentials and enable the Cache option (data freshness). Read more about cache here: https://cloud.google.com/bigquery/docs/cached-results
In this case if you share an access to your report with any other users they will be using your credentials to access the data and what you will see in your cost report is only your account name.
If you want to enable other users for your cost reports you should use Viewer's credentials and grant them access to underlying datasets too.
I find it really annoying but this is how it works in Data Studio.
Read more in official google docs here: https://developers.google.com/datastudio/solution/blocks/enforce-viewer-creds.
If you are using Viewer’s credential in Data Studio you will need this.
Let’s go to IAM roles and create the role called Data Studio User. For that we will need permissions to view and list tables in the datasets. You can simply add predefined BigQuery Data Viewer role.
To run reports from Data Studio we will also need BigQuery Job User role which adds bigquery.jobs.create permission. Read more about predefined roles here: https://cloud.google.com/bigquery/docs/access-control
Now you can assign this role to any user to let them use Viewer’s credentials in Data Studio. Just click on the dataset name and click Share dataset. This fits into Google best security practice when only minimum scope of permissions is granted.
I also want to be notified in case of a sudden surge in billed bytes amount.
To create this Alert first go to Dashboards, click Add chart and select Scanned Bytes Billed:
Set both Aggregator and Aligner to Sum. An Aggregator describes how to aggregate data points across multiple time series and, in case you have labels, which fields to use in Group By. An Aligner describes how to bring the data points in each individual time series into equal periods of time. Read more about Aligner.
Now let’s go to Google Alerting and create a policy:
Let’s call it ‘Last 15 minutes BilledBytes exceed 1Gb’ so we will be getting alerts every time the total amount of billed bytes exceeds 1Gb over any 15-minute window:
Now if we run any query/queries (including Data Studio reports) where SUM of Total Billed Bytes exceeds the threshold over the last 15 minutes the system will notify us.
Let’s run a query that exceeds this threshold and see if that worked:
You should receive an email alert. It worked:
You can also set alerts for number of slots in use and Query execution time, for example, in a similar way.
Read more here: https://cloud.google.com/monitoring/alerts/using-alerting-ui#create-policy
Now you have a beautiful cost monitoring dashboard where you can see your BigQuery costs associated to each Data Studio report allowing you to go deeper and beyond basic dataset level reporting in Google Cloud Billing.
Thanks for reading and let me know if you have any questions.
|
[
{
"code": null,
"e": 299,
"s": 172,
"text": "If you are a BigQuery user and you visualise your data with Data Studio then you might want to answer the following questions:"
},
{
"code": null,
"e": 359,
"s": 299,
"text": "What was the cost of each Data Studio report for yesterday?"
},
{
"code": null,
"e": 416,
"s": 359,
"text": "How many times each query/report was run and who ran it?"
},
{
"code": null,
"e": 517,
"s": 416,
"text": "What was the cost of queries for tables, datasets (e.g. production/staging/ analytics) with label X?"
},
{
"code": null,
"e": 585,
"s": 517,
"text": "Can I be notified in case of a sudden surge in billed bytes amount?"
},
{
"code": null,
"e": 649,
"s": 585,
"text": "Standard Google Billing dashboard won’t answer these questions."
},
{
"code": null,
"e": 737,
"s": 649,
"text": "According to official Google docs at the moment you can’t use labels for BigQuery jobs."
},
{
"code": null,
"e": 809,
"s": 737,
"text": "Read more here: https://cloud.google.com/bigquery/docs/filtering-labels"
},
{
"code": null,
"e": 888,
"s": 809,
"text": "But what if we want to know who exactly ran the query and HOW MUCH it cost us?"
},
{
"code": null,
"e": 999,
"s": 888,
"text": "In this article I will talk you through the complete process of setting up BigQuery cost monitoring dashboard."
},
{
"code": null,
"e": 1122,
"s": 999,
"text": "I used standard Google Ads template from Google Data Studio. I think it looks nice and I slightly changed it for my needs."
},
{
"code": null,
"e": 1313,
"s": 1122,
"text": "If you want you can just copy the template with all relevant settings and widgets included. Just adjust the datsource (create your own) for each widget as shown below in bq_cost_analysis.sql"
},
{
"code": null,
"e": 1328,
"s": 1313,
"text": "Prerequisites:"
},
{
"code": null,
"e": 1364,
"s": 1328,
"text": "Google Cloud Platform (GCP) account"
},
{
"code": null,
"e": 1373,
"s": 1364,
"text": "BigQuery"
},
{
"code": null,
"e": 1385,
"s": 1373,
"text": "Data Studio"
},
{
"code": null,
"e": 1509,
"s": 1385,
"text": "In this article I will use Cloud Audit Log for BigQuery. We’ll use these events data, export it to BigQuery and analyse it."
},
{
"code": null,
"e": 1578,
"s": 1509,
"text": "Google-recommended best practice is to use custom log based metrics."
},
{
"code": null,
"e": 1648,
"s": 1578,
"text": "Go to Logging and create a sink. Read more about creating sinks here."
},
{
"code": null,
"e": 1766,
"s": 1648,
"text": "This will output all BigQuery `query_job_completed` log events from Cloud Audit Log service into your BigQuery table."
},
{
"code": null,
"e": 1849,
"s": 1766,
"text": "In this way you can export any logs for further analysis which I find very useful."
},
{
"code": null,
"e": 1981,
"s": 1849,
"text": "Note that only new logs will be exported but you can also filter and narrow down the logs you are investigating in Logging console:"
},
{
"code": null,
"e": 2096,
"s": 1981,
"text": "So now when I have BigQuery event data flowing into my logs dataset I will create a view based on the data I have:"
},
{
"code": null,
"e": 2173,
"s": 2096,
"text": "And that’s the bit we need to calculate BigQuery costs (per query or a job):"
},
{
"code": null,
"e": 2251,
"s": 2173,
"text": "We can simply multiply then: cost per TB processed * numbers of TB processed."
},
{
"code": null,
"e": 2294,
"s": 2251,
"text": "It is a custom query bq_cost_analysis.sql:"
},
{
"code": null,
"e": 2381,
"s": 2294,
"text": "Now we have processedBytes with associated cost per user and per query nicely grouped:"
},
{
"code": null,
"e": 2448,
"s": 2381,
"text": "You probably noticed me adding a query tag in bq_cost_analysis.sql"
},
{
"code": null,
"e": 2571,
"s": 2448,
"text": "That’s a little trick that will help with cost groupings and let us use actual labels (which are currently not supported)."
},
{
"code": null,
"e": 2712,
"s": 2571,
"text": "Simply adjust that SQL and add a column for your tagged query. It can be anything, a report in Data Studio, table name or a scheduled query:"
},
{
"code": null,
"e": 2832,
"s": 2712,
"text": "substr(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobConfiguration.query.query,0,30) as report"
},
{
"code": null,
"e": 2879,
"s": 2832,
"text": "Or something like this might work even better:"
},
{
"code": null,
"e": 2974,
"s": 2879,
"text": "select REGEXP_EXTRACT(‘ --tag: some report — tag ends/’, r’--tag: (.*) - tag ends/’) as report"
},
{
"code": null,
"e": 3023,
"s": 2974,
"text": "Of course, you can aggregate costs by query too."
},
{
"code": null,
"e": 3156,
"s": 3023,
"text": "If you set the table at the bottom to display only one record (one query) per page that would make query cost analysis realy simple."
},
{
"code": null,
"e": 3216,
"s": 3156,
"text": "Adding sliders for hour and minute can also be very useful."
},
{
"code": null,
"e": 3490,
"s": 3216,
"text": "Another interesting thing. Cache. If you want to save money you should enable prefetch caching in Data Studio report you must use owner's credentials and enable Cache option (data freshness). Read more about cache here: https://cloud.google.com/bigquery/docs/cached-results"
},
{
"code": null,
"e": 3686,
"s": 3490,
"text": "In this case if you share an access to your report with any other users they will be using your credentials to access the data and what you will see in your cost report is only your account name."
},
{
"code": null,
"e": 3828,
"s": 3686,
"text": "If you want to enable other users for your cost reports you should use Viewer's credentials and grant them access to underlying datasets too."
},
{
"code": null,
"e": 3895,
"s": 3828,
"text": "I find it really annoying but this is how it works in Data Studio."
},
{
"code": null,
"e": 4014,
"s": 3895,
"text": "Read more in official google docs here: https://developers.google.com/datastudio/solution/blocks/enforce-viewer-creds."
},
{
"code": null,
"e": 4086,
"s": 4014,
"text": "If you are using Viewer’s credential in Data Studio you will need this."
},
{
"code": null,
"e": 4285,
"s": 4086,
"text": "Let’s go to IAM roles and create the role called Data Studio User. For that we will need permissions to view and list tables in the datasets. You can simply add predefined BigQuery Data Viewer role."
},
{
"code": null,
"e": 4495,
"s": 4285,
"text": "To run reports from Data Studio we will also need BigQuery Job User role which adds bigquery.jobs.create permission. Read more about predefined roles here: https://cloud.google.com/bigquery/docs/access-control"
},
{
"code": null,
"e": 4741,
"s": 4495,
"text": "Now you can assign this role to any user to let them use Viewer’s credentials in Data Studio. Just click on the dataset name and click Share dataset. This fits into Google best security practice when only minimum scope of permissions is granted."
},
{
"code": null,
"e": 4818,
"s": 4741,
"text": "I also want to be notified in case of a sudden surge in billed bytes amount."
},
{
"code": null,
"e": 4912,
"s": 4818,
"text": "To create this Alert first go to Dashboards, click Add chart and select Scanned Bytes Billed:"
},
{
"code": null,
"e": 5210,
"s": 4912,
"text": "Select Aggregator and Aligner to Sum. An Aggregator describes how to aggregate data points across multiple time series and in case you have labels to use in Group By. Aligner describes how to bring the data points in each individual time series into equal periods of time. Read more about Aligner."
},
{
"code": null,
"e": 5263,
"s": 5210,
"text": "Now let’s go to Google Alerting and create a policy:"
},
{
"code": null,
"e": 5430,
"s": 5263,
"text": "Let’s call it ‘Last 15 minutes BilledBytes exceed 1Gb’ so we will be getting alerts every time total amount of billed bytes exceeds 1Gb over any of 15 minute windows:"
},
{
"code": null,
"e": 5600,
"s": 5430,
"text": "Now if we run any query/queries (including Data Studio reports) where SUM of Total Billed Bytes exceeds the threshold over the last 15 minutes the system will notify us."
},
{
"code": null,
"e": 5671,
"s": 5600,
"text": "Let’s run a query that exceeds this threashold and see if that worked:"
},
{
"code": null,
"e": 5717,
"s": 5671,
"text": "You should receive an email alert. It worked:"
},
{
"code": null,
"e": 5825,
"s": 5717,
"text": "You can also set alerts for number of slots in use and Query execution time, for example, in a similar way."
},
{
"code": null,
"e": 5916,
"s": 5825,
"text": "Read more here: https://cloud.google.com/monitoring/alerts/using-alerting-ui#create-policy"
},
{
"code": null,
"e": 6135,
"s": 5916,
"text": "Now you have a beautiful cost monitoring dashboard where you can see your BigQuery costs associated to each Data Studio report allowing you to go deeper and beyond basic dataset level reporting in Google Cloud Billing."
}
] |
How to download all pdf files with selenium python?
|
Answer − We can download all pdf files using Selenium webdriver in Python. A file is downloaded in the default path set in the Chrome browser. However, we can modify the path of the downloaded file programmatically in Selenium.
This is done with the help of the Options class. We have to create an object of this class and apply add_experimental_option. We have to pass the parameters - prefs and the path where the pdf is to be downloaded to this method. Finally, this information has to be sent to the webdriver object.
op = Options()
p = {"download.default_directory": "../pdf"}
op.add_experimental_option("prefs", p)
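If Chrome opens PDFs in its built-in viewer instead of downloading them, a related preference can be added to the same dictionary. The key below is the one Chrome uses for this at the time of writing; verify it against your browser version:
# Optional sketch: force Chrome to download PDFs instead of previewing them
p = {
   "download.default_directory": "../pdf",
   "plugins.always_open_pdf_externally": True
}
op.add_experimental_option("prefs", p)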
Code Implementation
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
#Options instance
op = Options()
#configure path of downloaded pdf file
p = {"download.default_directory": "../pdf"}
op.add_experimental_option("prefs", p)
#send browser option to webdriver object
driver = webdriver.Chrome(executable_path='../drivers/chromedriver',
options=op)
#implicit wait
driver.implicitly_wait(0.8)
#url launch
driver.get("http://demo.automationtesting.in/FileDownload.html")
#browser maximize
driver.maximize_window()
#identify element
m = driver.find_element_by_id('pdfbox')
m.send_keys("infotest")
t = driver.find_element_by_id('createPdf')
t.click()
e = driver.find_element_by_id('pdf-link-to-download')
e.click()
#driver close
driver.close()
|
[
{
"code": null,
"e": 1290,
"s": 1062,
"text": "Answer − We can download all pdf files using Selenium webdriver in Python. A file is downloaded in the default path set in the Chrome browser. However, we can modify the path of the downloaded file programmatically in Selenium."
},
{
"code": null,
"e": 1584,
"s": 1290,
"text": "This is done with the help of the Options class. We have to create an object of this class and apply add_experimental_option. We have to pass the parameters - prefs and the path where the pdf is to be downloaded to this method. Finally, this information has to be sent to the webdriver object."
},
{
"code": null,
"e": 1683,
"s": 1584,
"text": "op = Options()\np = {\"download.default_directory\": \"../pdf\"}\nop.add_experimental_option(\"prefs\", p)"
},
{
"code": null,
"e": 1703,
"s": 1683,
"text": "Code Implementation"
},
{
"code": null,
"e": 2457,
"s": 1703,
"text": "from selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\n#Options instance\nop = Options()\n#configure path of downloaded pdf file\np = {\"download.default_directory\": \"../pdf\"}\nop.add_experimental_option(\"prefs\", p)\n#send browser option to webdriver object\ndriver = webdriver.Chrome(executable_path='../drivers/chromedriver',\noptions=op)\n#implicit wait\ndriver.implicitly_wait(0.8)\n#url launch\ndriver.get(\"http://demo.automationtesting.in/FileDownload.html\")\n#browser maximize\ndriver.maximize_window()\n#identify element\nm = driver.find_element_by_id('pdfbox')\nm.send_keys(\"infotest\")\nt = driver.find_element_by_id('createPdf')\nt.click()\ne = driver.find_element_by_id('pdf-link-to-download')\ne.click()\n#driver close\ndriver.close()"
}
] |
How to subset a data frame based on a vector values in R?
|
If we have a vector and a data frame, and the data frame has a column that contains values similar to those in the vector, then we can create a subset of the data frame based on that vector. This can be done with the help of single square brackets and the %in% operator. The %in% operator will help us to find the values in the data frame column that match the vector values. Check out the below examples to understand how it works.
Consider the below data frame df1 and vector v1 −
> x1<-rpois(20,2)
> x2<-rnorm(20)
> df1<-data.frame(x1,x2)
> v1<-c(1,2,3)
> df1
x1 x2
1 2 -1.0627997
2 4 -0.2159125
3 1 0.2443734
4 3 -1.3513780
5 3 1.7359994
6 1 1.2563915
7 1 -0.8998470
8 2 0.4187820
9 1 2.6305826
10 4 -0.8040052
11 4 0.4067659
12 3 -1.7879203
13 3 1.7214544
14 2 -0.4699642
15 2 0.3626548
16 4 1.3013632
17 2 -0.2983836
18 1 1.8943313
19 1 1.5637219
20 2 0.8786897
Sub-setting the data frame df1 based on values in vector v1 −
> df1[df1$x1 %in% v1,]
x1 x2
1 2 -1.0627997
3 1 0.2443734
4 3 -1.3513780
5 3 1.7359994
6 1 1.2563915
7 1 -0.8998470
8 2 0.4187820
9 1 2.6305826
12 3 -1.7879203
13 3 1.7214544
14 2 -0.4699642
15 2 0.3626548
17 2 -0.2983836
18 1 1.8943313
19 1 1.5637219
20 2 0.8786897
> y1<-sample(LETTERS[1:5],20,replace=TRUE)
> y2<-rpois(20,2)
> y3<-rpois(20,5)
> df2<-data.frame(y1,y2,y3)
> df2
y1 y2 y3
1 C 0 5
2 A 2 5
3 A 2 1
4 D 1 6
5 B 0 4
6 E 6 9
7 E 0 5
8 C 1 9
9 D 1 6
10 D 2 6
11 A 4 5
12 D 1 6
13 E 1 5
14 E 2 6
15 C 5 4
16 A 0 3
17 D 2 5
18 B 1 10
19 E 3 3
20 A 2 1
Sub-setting the data frame df2 based on values in vector v2 −
> v2<-c("A","B","C","D")
> df2[df2$y1 %in% v2,]
y1 y2 y3
1 C 0 5
2 A 2 5
3 A 2 1
4 D 1 6
5 B 0 4
8 C 1 9
9 D 1 6
10 D 2 6
11 A 4 5
12 D 1 6
15 C 5 4
16 A 0 3
17 D 2 5
18 B 1 10
20 A 2 1
|
[
{
"code": null,
"e": 1495,
"s": 1062,
"text": "If we have a vector and a data frame, and the data frame has a column that contains the values similar as in the vector then we can create a subset of the data frame based on that vector. This can be done with the help of single square brackets and %in% operator. The %in% operator will help us to find the values in the data frame column that matches with the vector values. Check out the below examples to understand how it works."
},
{
"code": null,
"e": 1545,
"s": 1495,
"text": "Consider the below data frame df1 and vector v1 −"
},
{
"code": null,
"e": 1555,
"s": 1545,
"text": "Live Demo"
},
{
"code": null,
"e": 1620,
"s": 1555,
"text": "> x1<-rpois(20,2)\n> x2<-rnorm(20)\n> df1<-data.frame(x1,x2)\n> df1"
},
{
"code": null,
"e": 1925,
"s": 1620,
"text": "x1 x2\n1 2 -1.0627997\n2 4 -0.2159125\n3 1 0.2443734\n4 3 -1.3513780\n5 3 1.7359994\n6 1 1.2563915\n7 1 -0.8998470\n8 2 0.4187820\n9 1 2.6305826\n10 4 -0.8040052\n11 4 0.4067659\n12 3 -1.7879203\n13 3 1.7214544\n14 2 -0.4699642\n15 2 0.3626548\n16 4 1.3013632\n17 2 -0.2983836\n18 1 1.8943313\n19 1 1.5637219\n20 2 0.8786897"
},
{
"code": null,
"e": 1987,
"s": 1925,
"text": "Sub-setting the data frame df1 based on values in vector v1 −"
},
{
"code": null,
"e": 2010,
"s": 1987,
"text": "> df1[df1$x1 %in% v1,]"
},
{
"code": null,
"e": 2254,
"s": 2010,
"text": "x1 x2\n1 2 -1.0627997\n3 1 0.2443734\n4 3 -1.3513780\n5 3 1.7359994\n6 1 1.2563915\n7 1 -0.8998470\n8 2 0.4187820\n9 1 2.6305826\n12 3 -1.7879203\n13 3 1.7214544\n14 2 -0.4699642\n15 2 0.3626548\n17 2 -0.2983836\n18 1 1.8943313\n19 1 1.5637219\n20 2 0.8786897"
},
{
"code": null,
"e": 2264,
"s": 2254,
"text": "Live Demo"
},
{
"code": null,
"e": 2377,
"s": 2264,
"text": "> y1<-sample(LETTERS[1:5],20,replace=TRUE)\n> y2<-rpois(20,2)\n> y3<-rpois(20,5)\n> df2<-data.frame(y1,y2,y3)\n> df2"
},
{
"code": null,
"e": 2558,
"s": 2377,
"text": "y1 y2 y3\n1 C 0 5\n2 A 2 5\n3 A 2 1\n4 D 1 6\n5 B 0 4\n6 E 6 9\n7 E 0 5\n8 C 1 9\n9 D 1 6\n10 D 2 6\n11 A 4 5\n12 D 1 6\n13 E 1 5\n14 E 2 6\n15 C 5 4\n16 A 0 3\n17 D 2 5\n18 B 1 10\n19 E 3 3\n20 A 2 1"
},
{
"code": null,
"e": 2620,
"s": 2558,
"text": "Sub-setting the data frame df2 based on values in vector v2 −"
},
{
"code": null,
"e": 2668,
"s": 2620,
"text": "> v2<-c(\"A\",\"B\",\"C\",\"D\")\n> df2[df2$y1 %in% v2,]"
},
{
"code": null,
"e": 2806,
"s": 2668,
"text": "y1 y2 y3\n1 C 0 5\n2 A 2 5\n3 A 2 1\n4 D 1 6\n5 B 0 4\n8 C 1 9\n9 D 1 6\n10 D 2 6\n11 A 4 5\n12 D 1 6\n15 C 5 4\n16 A 0 3\n17 D 2 5\n18 B 1 10\n20 A 2 1"
}
] |
How can I convert a bytes array into JSON format in Python?
|
You need to decode the bytes object to produce a string. This can be done using the decode function, which will accept the encoding you want to decode with.
my_str = b"Hello" # b means its a byte string
new_str = my_str.decode('utf-8') # Decode using the utf-8 encoding
print(new_str)
This will give the output
Hello
Once you have the bytes as a string, you can use the JSON.dumps method to convert the string object to JSON.
my_str = b'{"foo": 42}' # b means its a byte string
new_str = my_str.decode('utf-8') # Decode using the utf-8 encoding
import json
d = json.dumps(new_str)
print(d)
This will give the output −
"{\"foo\": 42}"
|
[
{
"code": null,
"e": 1237,
"s": 1062,
"text": "You need to decode the bytes object to produce a string. This can be done using the decode function from string class that will accept then encoding you want to decode with. "
},
{
"code": null,
"e": 1365,
"s": 1237,
"text": "my_str = b\"Hello\" # b means its a byte string\nnew_str = my_str.decode('utf-8') # Decode using the utf-8 encoding\nprint(new_str)"
},
{
"code": null,
"e": 1391,
"s": 1365,
"text": "This will give the output"
},
{
"code": null,
"e": 1397,
"s": 1391,
"text": "Hello"
},
{
"code": null,
"e": 1507,
"s": 1397,
"text": "Once you have the bytes as a string, you can use the JSON.dumps method to convert the string object to JSON. "
},
{
"code": null,
"e": 1671,
"s": 1507,
"text": "my_str = b'{\"foo\": 42}' # b means its a byte string\nnew_str = my_str.decode('utf-8') # Decode using the utf-8 encoding\n\nimport json\nd = json.dumps(my_str)\nprint(d)"
},
{
"code": null,
"e": 1699,
"s": 1671,
"text": "This will give the output −"
},
{
"code": null,
"e": 1715,
"s": 1699,
"text": "\"{\\\"foo\\\": 42}\""
}
] |
Why can't my HTML file find the JavaScript function from a sourced module?
|
This may happen if you haven’t used the “export” statement. Use “export” before the function that will be imported into the script file. The JavaScript file, which has the file name demo.js, is as follows.
demo.js
console.log("function will import");
export function test(){
console.log("Imported!!!");
}
Here is the “index.html” file that imports the above function −
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initialscale=1.0">
<title>Document</title>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
</head>
<body>
<script type='module'>
import { test } from "./demo.js"
test();
</script>
</body>
</html>
To run the above program, save the file with the name “anyName.html” (here, index.html) and right-click on the file. Select the option “Open with Live Server” in the VS Code editor.
Following is the output from the file demo.js which has the function name test().
|
[
{
"code": null,
"e": 1263,
"s": 1062,
"text": "This may happen if you haven’t used “export” statement. Use “export” before the function which\nwill be imported into the script file. The JavaScript file is as follows which has the file name\ndemo.js."
},
{
"code": null,
"e": 1271,
"s": 1263,
"text": "demo.js"
},
{
"code": null,
"e": 1365,
"s": 1271,
"text": "console.log(\"function will import\");\nexport function test(){\n console.log(\"Imported!!!\");\n}"
},
{
"code": null,
"e": 1429,
"s": 1365,
"text": "Here is the “index.html” file that imports the above function −"
},
{
"code": null,
"e": 1440,
"s": 1429,
"text": "index.html"
},
{
"code": null,
"e": 1451,
"s": 1440,
"text": " Live Demo"
},
{
"code": null,
"e": 1940,
"s": 1451,
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\">\n<meta name=\"viewport\" content=\"width=device-width, initialscale=1.0\">\n<title>Document</title>\n<link rel=\"stylesheet\" href=\"//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css\">\n<script src=\"https://code.jquery.com/jquery-1.12.4.js\"></script>\n<script src=\"https://code.jquery.com/ui/1.12.1/jquery-ui.js\"></script>\n</head>\n<body>\n<script type='module'>\n import { test } from \"./demo.js\"\n test();\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 2102,
"s": 1940,
"text": "To run the above program, save the file name “anyName.html(index.html)” and right click on the\nfile. Select the option “Open with Live Server” in VS Code editor."
},
{
"code": null,
"e": 2184,
"s": 2102,
"text": "Following is the output from the file demo.js which has the function name test()."
}
] |
My Python Pandas Cheat Sheet. The pandas functions I use everyday as... | by GreekDataGuy | Towards Data Science
|
A mentor once told me that software engineers are like indexes not textbooks; we don’t memorize everything but we know how to look it up quickly.
Being able to look up and use functions fast allows us to achieve a certain flow when writing code. So I’ve created this cheatsheet of functions I use everyday building web apps and machine learning models.
This is not a comprehensive list but contains the functions I use most, an example, and my insights as to when it’s most useful.
Contents:
1) Setup
2) Importing
3) Exporting
4) Viewing and Inspecting
5) Selecting
6) Adding / Dropping
7) Combining
8) Filtering
9) Sorting
10) Aggregating
11) Cleaning
12) Other
13) Conclusion
If you want to run these examples yourself, download the Anime recommendation dataset from Kaggle, unzip and drop it in the same folder as your jupyter notebook.
Next, run these commands and you should be able to replicate my results for any of the below functions.
import pandas as pd
import numpy as np
anime = pd.read_csv('anime-recommendations-database/anime.csv')
rating = pd.read_csv('anime-recommendations-database/rating.csv')
anime_modified = anime.set_index('name')
Convert a CSV directly into a data frame. Sometimes loading data from a CSV also requires specifying an encoding (i.e. encoding='ISO-8859-1'). It’s the first thing you should try if your data frame contains unreadable characters.
Another similar function also exists called pd.read_excel for excel files.
anime = pd.read_csv('anime-recommendations-database/anime.csv')
Useful when you want to manually instantiate simple data so that you can see how it changes as it flows through a pipeline.
df = pd.DataFrame([[1,'Bob', 'Builder'], [2,'Sally', 'Baker'], [3,'Scott', 'Candle Stick Maker']], columns=['id','name', 'occupation'])
Useful when you want to make changes to a data frame while maintaining a copy of the original. It’s good practise to copy all data frames immediately after loading them.
anime_copy = anime.copy(deep=True)
This dumps to the same directory as the notebook. I’m only saving the 1st 10 rows below but you don’t need to do that. Again, df.to_excel() also exists and functions basically the same for excel files.
rating[:10].to_csv('saved_ratings.csv', index=False)
Display the first n records from a data frame. I often print the top record of a data frame somewhere in my notebook so I can refer back to it if I forget what’s inside.
anime.head(3)
rating.tail(1)
This is not a pandas function per se but len() counts rows and can be saved to a variable and used elsewhere.
len(df)#=> 3
Count unique values in a column.
len(ratings['user_id'].unique())
Useful for getting some general information like header, number of values and datatype by column. A similar but less useful function is df.dtypes which just gives column data types.
anime.info()
Really useful if the data frame has a lot of numeric values. Knowing the mean, min and max of the rating column give us a sense of how the data frame looks overall.
anime.describe()
Get counts of values for a particular column.
anime.type.value_counts()
This works if you need to pull the values in columns into X and y variables so you can fit a machine learning model.
anime['genre'].tolist()
anime['genre']
Create a list of values from index. Note I’ve used anime_modified data frame here as the index values are more interesting.
anime_modified.index.tolist()
anime.columns.tolist()
I do this on occasion when I have test and train sets in 2 separate data frames and want to mark which rows are related to what set before combining them.
anime['train set'] = True
Useful when you only want to keep a few columns from a giant data frame and don’t want to specify each that you want to drop.
anime[['name','episodes']]
Useful when you only need to drop a few columns. Otherwise, it can be tedious to write them all out and I prefer the previous option.
anime.drop(['anime_id', 'genre', 'members'], axis=1).head()
We’ll manually create a small data frame here because it’s easier to look at. The interesting part here is df.sum(axis=0) which adds the values across rows. Alternatively df.sum(axis=1) adds values across columns.
The same logic applies when calculating counts or means, ie: df.mean(axis=0).
df = pd.DataFrame([[1,'Bob', 8000], [2,'Sally', 9000], [3,'Scott', 20]], columns=['id','name', 'power level'])
df.append(df.sum(axis=0), ignore_index=True)
Use this if you have 2 data frames with the same columns and want to combine them.
Here we split a data frame in 2 them add them back together.
df1 = anime[0:2]
df2 = anime[2:4]
pd.concat([df1, df2], ignore_index=True)
This functions like a SQL left join, when you have 2 data frames and want to join on a column.
rating.merge(anime, left_on='anime_id', right_on='anime_id', suffixes=('_left', '_right'))
The index values in anime_modified are the names of the anime. Notice how we’ve used those names to grab specific columns.
anime_modified.loc[['Haikyuu!! Second Season','Gintama']]
This differs from the previous function. Using iloc, the 1st row has an index of 0, the 2nd row has an index of 1, and so on... even if you’ve modified the data frame and are now using string values in the index column.
Use this if you want the first 3 rows in a data frame.
anime_modified.iloc[0:3]
Retrieve rows where a column’s value is in a given list. anime[anime['type'] == 'TV'] also works when matching on a single value.
anime[anime['type'].isin(['TV', 'Movie'])]
This is just like slicing a list. Slice a data frame to get all rows before/between/after specified indices.
anime[1:3]
Filter data frame for rows that meet a condition. Note this maintains existing index values.
anime[anime['rating'] > 8]
Sort data frame by values in a column.
anime.sort_values('rating', ascending=False)
Count number of records for each distinct value in a column.
anime.groupby('type').count()
Note I added reset_index() otherwise the type column becomes the index column — I recommend doing the same in most cases.
anime.groupby(["type"]).agg({ "rating": "sum", "episodes": "count", "name": "last"}).reset_index()
Nothing better than a pivot table for pulling a subset of data from a data frame.
Note I’ve heavily filtered the data frame so it’s quicker to build the pivot table.
tmp_df = rating.copy()
tmp_df.sort_values('user_id', ascending=True, inplace=True)
tmp_df = tmp_df[tmp_df.user_id < 10]
tmp_df = tmp_df[tmp_df.anime_id < 30]
tmp_df = tmp_df[tmp_df.rating != -1]
pd.pivot_table(tmp_df, values='rating', index=['user_id'], columns=['anime_id'], aggfunc=np.sum, fill_value=0)
Set cells with NaN value to 0 . In the example we create the same pivot table as before but without fill_value=0 then use fillna(0) to fill them in afterwards.
pivot = pd.pivot_table(tmp_df, values='rating', index=['user_id'], columns=['anime_id'], aggfunc=np.sum)
pivot.fillna(0)
I use this all the time taking a small sample from a larger data frame. It allows randomly rearranging rows while maintaining indices if frac=1.
anime.sample(frac=0.25)
Iterate over index and rows in data frame.
for idx,row in anime[:2].iterrows(): print(idx, row)
Start notebook with a very high data rate limit.
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
I hope this can be a reference guide for you as well. I’ll try to continuously update this as I find more useful pandas functions.
If there are any functions you can’t live without please post them in the comments below!
|
[
{
"code": null,
"e": 193,
"s": 47,
"text": "A mentor once told me that software engineers are like indexes not textbooks; we don’t memorize everything but we know how to look it up quickly."
},
{
"code": null,
"e": 400,
"s": 193,
"text": "Being able to look up and use functions fast allows us to achieve a certain flow when writing code. So I’ve created this cheatsheet of functions I use everyday building web apps and machine learning models."
},
{
"code": null,
"e": 529,
"s": 400,
"text": "This is not a comprehensive list but contains the functions I use most, an example, and my insights as to when it’s most useful."
},
{
"code": null,
"e": 712,
"s": 529,
"text": "Contents:1) Setup2) Importing3) Exporting4) Viewing and Inspecting5) Selecting6) Adding / Dropping7) Combining8) Filtering9) Sorting10) Aggregating11) Cleaning12) Other13) Conclusion"
},
{
"code": null,
"e": 874,
"s": 712,
"text": "If you want to run these examples yourself, download the Anime recommendation dataset from Kaggle, unzip and drop it in the same folder as your jupyter notebook."
},
{
"code": null,
"e": 977,
"s": 874,
"text": "Next Run these commands and you should be able to replicate my results for any of the below functions."
},
{
"code": null,
"e": 1183,
"s": 977,
"text": "import pandas as pdimport numpy as npanime = pd.read_csv('anime-recommendations-database/anime.csv')rating = pd.read_csv('anime-recommendations-database/rating.csv')anime_modified = anime.set_index('name')"
},
{
"code": null,
"e": 1411,
"s": 1183,
"text": "Convert a CSV directly into a data frame. Sometimes loading data from a CSV also requires specifying an encoding (ie:encoding='ISO-8859–1'). It’s the first thing you should try if your data frame contains unreadable characters."
},
{
"code": null,
"e": 1486,
"s": 1411,
"text": "Another similar function also exists called pd.read_excel for excel files."
},
{
"code": null,
"e": 1550,
"s": 1486,
"text": "anime = pd.read_csv('anime-recommendations-database/anime.csv')"
},
{
"code": null,
"e": 1674,
"s": 1550,
"text": "Useful when you want to manually instantiate simple data so that you can see how it changes as it flows through a pipeline."
},
{
"code": null,
"e": 1844,
"s": 1674,
"text": "df = pd.DataFrame([[1,'Bob', 'Builder'], [2,'Sally', 'Baker'], [3,'Scott', 'Candle Stick Maker']], columns=['id','name', 'occupation'])"
},
{
"code": null,
"e": 2014,
"s": 1844,
"text": "Useful when you want to make changes to a data frame while maintaining a copy of the original. It’s good practise to copy all data frames immediately after loading them."
},
{
"code": null,
"e": 2049,
"s": 2014,
"text": "anime_copy = anime.copy(deep=True)"
},
{
"code": null,
"e": 2251,
"s": 2049,
"text": "This dumps to the same directory as the notebook. I’m only saving the 1st 10 rows below but you don’t need to do that. Again, df.to_excel() also exists and functions basically the same for excel files."
},
{
"code": null,
"e": 2304,
"s": 2251,
"text": "rating[:10].to_csv('saved_ratings.csv', index=False)"
},
{
"code": null,
"e": 2474,
"s": 2304,
"text": "Display the first n records from a data frame. I often print the top record of a data frame somewhere in my notebook so I can refer back to it if I forget what’s inside."
},
{
"code": null,
"e": 2502,
"s": 2474,
"text": "anime.head(3)rating.tail(1)"
},
{
"code": null,
"e": 2612,
"s": 2502,
"text": "This is not a pandas function per se but len() counts rows and can be saved to a variable and used elsewhere."
},
{
"code": null,
"e": 2625,
"s": 2612,
"text": "len(df)#=> 3"
},
{
"code": null,
"e": 2658,
"s": 2625,
"text": "Count unique values in a column."
},
{
"code": null,
"e": 2691,
"s": 2658,
"text": "len(ratings['user_id'].unique())"
},
{
"code": null,
"e": 2873,
"s": 2691,
"text": "Useful for getting some general information like header, number of values and datatype by column. A similar but less useful function is df.dtypes which just gives column data types."
},
{
"code": null,
"e": 2886,
"s": 2873,
"text": "anime.info()"
},
{
"code": null,
"e": 3051,
"s": 2886,
"text": "Really useful if the data frame has a lot of numeric values. Knowing the mean, min and max of the rating column give us a sense of how the data frame looks overall."
},
{
"code": null,
"e": 3068,
"s": 3051,
"text": "anime.describe()"
},
{
"code": null,
"e": 3114,
"s": 3068,
"text": "Get counts of values for a particular column."
},
{
"code": null,
"e": 3140,
"s": 3114,
"text": "anime.type.value_counts()"
},
{
"code": null,
"e": 3257,
"s": 3140,
"text": "This works if you need to pull the values in columns into X and y variables so you can fit a machine learning model."
},
{
"code": null,
"e": 3295,
"s": 3257,
"text": "anime['genre'].tolist()anime['genre']"
},
{
"code": null,
"e": 3419,
"s": 3295,
"text": "Create a list of values from index. Note I’ve used anime_modified data frame here as the index values are more interesting."
},
{
"code": null,
"e": 3449,
"s": 3419,
"text": "anime_modified.index.tolist()"
},
{
"code": null,
"e": 3472,
"s": 3449,
"text": "anime.columns.tolist()"
},
{
"code": null,
"e": 3627,
"s": 3472,
"text": "I do this on occasion when I have test and train sets in 2 separate data frames and want to mark which rows are related to what set before combining them."
},
{
"code": null,
"e": 3653,
"s": 3627,
"text": "anime['train set'] = True"
},
{
"code": null,
"e": 3779,
"s": 3653,
"text": "Useful when you only want to keep a few columns from a giant data frame and don’t want to specify each that you want to drop."
},
{
"code": null,
"e": 3806,
"s": 3779,
"text": "anime[['name','episodes']]"
},
{
"code": null,
"e": 3940,
"s": 3806,
"text": "Useful when you only need to drop a few columns. Otherwise, it can be tedious to write them all out and I prefer the previous option."
},
{
"code": null,
"e": 4000,
"s": 3940,
"text": "anime.drop(['anime_id', 'genre', 'members'], axis=1).head()"
},
{
"code": null,
"e": 4214,
"s": 4000,
"text": "We’ll manually create a small data frame here because it’s easier to look at. The interesting part here is df.sum(axis=0) which adds the values across rows. Alternatively df.sum(axis=1) adds values across columns."
},
{
"code": null,
"e": 4292,
"s": 4214,
"text": "The same logic applies when calculating counts or means, ie: df.mean(axis=0)."
},
{
"code": null,
"e": 4481,
"s": 4292,
"text": "df = pd.DataFrame([[1,'Bob', 8000], [2,'Sally', 9000], [3,'Scott', 20]], columns=['id','name', 'power level'])df.append(df.sum(axis=0), ignore_index=True)"
},
{
"code": null,
"e": 4564,
"s": 4481,
"text": "Use this if you have 2 data frames with the same columns and want to combine them."
},
{
"code": null,
"e": 4625,
"s": 4564,
"text": "Here we split a data frame in 2 them add them back together."
},
{
"code": null,
"e": 4698,
"s": 4625,
"text": "df1 = anime[0:2]df2 = anime[2:4]pd.concat([df1, df2], ignore_index=True)"
},
{
"code": null,
"e": 4793,
"s": 4698,
"text": "This functions like a SQL left join, when you have 2 data frames and want to join on a column."
},
{
"code": null,
"e": 4884,
"s": 4793,
"text": "rating.merge(anime, left_on=’anime_id’, right_on=’anime_id’, suffixes=(‘_left’, ‘_right’))"
},
{
"code": null,
"e": 5007,
"s": 4884,
"text": "The index values in anime_modified are the names of the anime. Notice how we’ve used those names to grab specific columns."
},
{
"code": null,
"e": 5065,
"s": 5007,
"text": "anime_modified.loc[['Haikyuu!! Second Season','Gintama']]"
},
{
"code": null,
"e": 5285,
"s": 5065,
"text": "This differs from the previous function. Using iloc, the 1st row has an index of 0, the 2nd row has an index of 1, and so on... even if you’ve modified the data frame and are now using string values in the index column."
},
{
"code": null,
"e": 5340,
"s": 5285,
"text": "Use this is you want the first 3 rows in a data frame."
},
{
"code": null,
"e": 5365,
"s": 5340,
"text": "anime_modified.iloc[0:3]"
},
{
"code": null,
"e": 5495,
"s": 5365,
"text": "Retrieve rows where a column’s value is in a given list. anime[anime[‘type’] == 'TV'] also works when matching on a single value."
},
{
"code": null,
"e": 5538,
"s": 5495,
"text": "anime[anime['type'].isin(['TV', 'Movie'])]"
},
{
"code": null,
"e": 5647,
"s": 5538,
"text": "This is just like slicing a list. Slice a data frame to get all rows before/between/after specified indices."
},
{
"code": null,
"e": 5658,
"s": 5647,
"text": "anime[1:3]"
},
{
"code": null,
"e": 5751,
"s": 5658,
"text": "Filter data frame for rows that meet a condition. Note this maintains existing index values."
},
{
"code": null,
"e": 5778,
"s": 5751,
"text": "anime[anime['rating'] > 8]"
},
{
"code": null,
"e": 5817,
"s": 5778,
"text": "Sort data frame by values in a column."
},
{
"code": null,
"e": 5862,
"s": 5817,
"text": "anime.sort_values('rating', ascending=False)"
},
{
"code": null,
"e": 5923,
"s": 5862,
"text": "Count number of records for each distinct value in a column."
},
{
"code": null,
"e": 5953,
"s": 5923,
"text": "anime.groupby('type').count()"
},
{
"code": null,
"e": 6075,
"s": 5953,
"text": "Note I added reset_index() otherwise the type column becomes the index column — I recommend doing the same in most cases."
},
{
"code": null,
"e": 6177,
"s": 6075,
"text": "anime.groupby([\"type\"]).agg({ \"rating\": \"sum\", \"episodes\": \"count\", \"name\": \"last\"}).reset_index()"
},
{
"code": null,
"e": 6259,
"s": 6177,
"text": "Nothing better than a pivot table for pulling a subset of data from a data frame."
},
{
"code": null,
"e": 6343,
"s": 6259,
"text": "Note I’ve heavily filtered the data frame so it’s quicker to build the pivot table."
},
{
"code": null,
"e": 6645,
"s": 6343,
"text": "tmp_df = rating.copy()tmp_df.sort_values('user_id', ascending=True, inplace=True)tmp_df = tmp_df[tmp_df.user_id < 10] tmp_df = tmp_df[tmp_df.anime_id < 30]tmp_df = tmp_df[tmp_df.rating != -1]pd.pivot_table(tmp_df, values='rating', index=['user_id'], columns=['anime_id'], aggfunc=np.sum, fill_value=0)"
},
{
"code": null,
"e": 6805,
"s": 6645,
"text": "Set cells with NaN value to 0 . In the example we create the same pivot table as before but without fill_value=0 then use fillna(0) to fill them in afterwards."
},
{
"code": null,
"e": 6925,
"s": 6805,
"text": "pivot = pd.pivot_table(tmp_df, values='rating', index=['user_id'], columns=['anime_id'], aggfunc=np.sum)pivot.fillna(0)"
},
{
"code": null,
"e": 7070,
"s": 6925,
"text": "I use this all the time taking a small sample from a larger data frame. It allows randomly rearranging rows while maintaining indices if frac=1."
},
{
"code": null,
"e": 7094,
"s": 7070,
"text": "anime.sample(frac=0.25)"
},
{
"code": null,
"e": 7137,
"s": 7094,
"text": "Iterate over index and rows in data frame."
},
{
"code": null,
"e": 7193,
"s": 7137,
"text": "for idx,row in anime[:2].iterrows(): print(idx, row)"
},
{
"code": null,
"e": 7242,
"s": 7193,
"text": "Start notebook with a very high data rate limit."
},
{
"code": null,
"e": 7302,
"s": 7242,
"text": "jupyter notebook — NotebookApp.iopub_data_rate_limit=1.0e10"
},
{
"code": null,
"e": 7433,
"s": 7302,
"text": "I hope this can be a reference guide for you as well. I’ll try to continuously update this as I find more useful pandas functions."
}
] |
Generate an array of size N according to the given rules - GeeksforGeeks
|
30 Nov, 2021
Given a number N, the task is to create an array arr[] of size N, where the value of the element at every index i is filled according to the following rules:
arr[i] = ((i – 1) – k), where k is the index of arr[i – 1] that has appeared second most recently. This rule is applied when arr[i – 1] is present more than once in the array
arr[i] = 0. This rule is applied when arr[i – 1] is present only once or when i = 1.
Examples:
Input: N = 8
Output: 0 0 1 0 2 0 2 2
Explanation:
For i = 0: There is no element in the array arr[]. So, 0 is placed at the first index. arr[] = {0}.
For i = 1: There is only one element in the array arr[]. The occurrence of arr[i – 1] (= 0) is only one. Therefore, the array is filled according to rule 2. arr[] = {0, 0}.
For i = 2: There are two elements in the array. The second most recent occurrence of arr[i – 1] = arr[0] = 0. So, arr[2] = 1. arr[] = {0, 0, 1}.
For i = 3: There is no second occurrence of arr[i – 1] = 1. Therefore, arr[3] = 0. arr[] = {0, 0, 1, 0}.
For i = 4: The second most recent occurrence of arr[i – 1] = 0 is at index 1. Therefore, (i – 1 – k) = (4 – 1 – 1) = 2. arr[] = {0, 0, 1, 0, 2}.
For i = 5: There is only one occurrence of arr[i – 1] = 2. Therefore, arr[5] = 0. arr[] = {0, 0, 1, 0, 2, 0}.
For i = 6: The second most recent occurrence of arr[i – 1] = 0 is at index 3. Therefore, (i – 1 – k) = (6 – 1 – 3) = 2. arr[] = {0, 0, 1, 0, 2, 0, 2}.
For i = 7: The second most recent occurrence of arr[i – 1] = 2 is at index 4. Therefore, (i – 1 – k) = (7 – 1 – 4) = 2. arr[] = {0, 0, 1, 0, 2, 0, 2, 2}.

Input: N = 5
Output: 0 0 1 0 2
Approach: The idea is to first create an array of size N filled with zeroes. Then, for every index, we check whether the previous element has already appeared earlier in the array: if it has, rule 1 is applied; otherwise, rule 2 is applied. Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation to generate
// an array of size N by following
// the given rules
#include <bits/stdc++.h>
using namespace std;

// Function to search the most recent
// location of element N
// If not present in the array
// it will return -1
int search(int a[], int k, int x)
{
    int j;
    for ( j = k - 1; j > -1 ; j--) {
        if(a[j] == x)
            return j ;
    }
    return -1 ;
}

// Function to generate an array
// of size N by following the given rules
void genArray(int arr[], int N)
{
    // Loop to fill the array
    // as per the given rules
    for(int i = 0; i < N - 1; i++) {

        // Check for the occurrence
        // of arr[i - 1]
        if(search(arr, i, arr[i]) == -1)
            arr[i + 1] = 0 ;
        else
            arr[i + 1] = (i-search(arr, i, arr[i])) ;
    }
}

// Driver code
int main()
{
    int N = 5 ;
    int size = N + 1 ;
    int a[] = {0, 0, 0, 0, 0};

    genArray(a, N) ;

    for (int i = 0; i < N ; i ++)
        cout << a[i] << " " ;

    return 0;
}

// This code is contributed by shivanisinghss2110
// Java implementation to generate// an array of size N by following// the given rulesclass GFG{ static int a[]; // Function to search the most recent// location of element N// If not present in the array// it will return -1static int search(int a[],int k, int x){ int j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesstatic void genArray(int []arr, int N){ // Loop to fill the array // as per the given rules for(int i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codepublic static void main (String[] args){ int N = 5 ; int size = N + 1 ; int a[] = new int [N]; genArray(a, N) ; for (int i = 0; i < N ; i ++) System.out.print(a[i]+" " ); }} // This code is contributed by Yash_R
# Python implementation to generate
# an array of size N by following
# the given rules

# Function to search the most recent
# location of element N
# If not present in the array
# it will return -1
def search(a, k, x):
    for j in range(k-1, -1, -1):
        if(a[j] == x):
            return j
    return -1

# Function to generate an array
# of size N by following the given rules
def genArray(arr, N):

    # Loop to fill the array
    # as per the given rules
    for i in range(0, N-1, 1):

        # Check for the occurrence
        # of arr[i - 1]
        if(search(arr, i, arr[i]) == -1):
            arr[i + 1] = 0
        else:
            arr[i + 1] = (i-search(arr, i, arr[i]))

# Driver code
if __name__ == "__main__":
    N = 5
    size = N + 1
    a = [0]*N
    genArray(a, N)
    print(a)
// C# implementation to generate// an array of size N by following// the given rules using System; public class GFG{ static int []a; // Function to search the most recent// location of element N// If not present in the array// it will return -1static int search(int []a,int k, int x){ int j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesstatic void genArray(int []arr, int N){ // Loop to fill the array // as per the given rules for(int i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codepublic static void Main (string[] args){ int N = 5 ; int size = N + 1 ; int []a = new int [N]; genArray(a, N) ; for (int i = 0; i < N ; i ++) Console.Write(a[i]+" " ); }}// This code is contributed by AnkitRai01
<script> // Javascript implementation to generate// an array of size N by following// the given rules // Function to search the most recent// location of element N// If not present in the array// it will return -1function search( a, k, x){ var j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesfunction genArray(arr, N){ // Loop to fill the array // as per the given rules for(var i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codevar N = 5 ;var size = N + 1 ;var a = [0, 0, 0, 0, 0];genArray(a, N) ;document.write("["+a+"]"); // This code is contributed by rutvik_56.</script>
[0, 0, 1, 0, 2]
Time Complexity: O(N²)
Auxiliary Space: O(1)
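The implementations above rescan the prefix of the array for every index, which is where the O(N²) time comes from. As an added sketch (not part of the original article), the same sequence can be generated in O(N) time by keeping a dictionary of the most recent index at which each value appeared, at the cost of O(N) extra space. The function name genArrayFast and the dictionary last_seen below are illustrative choices, not names from the article.
Python3
# Illustrative O(N) variant: remember the most recent index of each value
# in a dictionary instead of rescanning the prefix of the array each time.
def genArrayFast(N):
    arr = [0] * N
    last_seen = {}                          # value -> most recent index where it appeared
    for i in range(N - 1):
        v = arr[i]
        if v in last_seen:
            arr[i + 1] = i - last_seen[v]   # rule 1: gap to the previous occurrence
        else:
            arr[i + 1] = 0                  # rule 2: arr[i] has appeared only once so far
        last_seen[v] = i                    # record this occurrence after using the old one
    return arr

print(genArrayFast(8))   # [0, 0, 1, 0, 2, 0, 2, 2]
print(genArrayFast(5))   # [0, 0, 1, 0, 2]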
|
[
{
"code": null,
"e": 24796,
"s": 24768,
"text": "\n30 Nov, 2021"
},
{
"code": null,
"e": 24956,
"s": 24796,
"text": "Given a number N, the task is to create an array arr[] of size N, where the value of the element at every index i is filled according to the following rules: "
},
{
"code": null,
"e": 25215,
"s": 24956,
"text": "arr[i] = ((i – 1) – k), where k is the index of arr[i – 1] that has appeared second most recently. This rule is applied when arr[i – 1] is present more than once in the arrayarr[i] = 0. This rule is applied when arr[i – 1] is present only once or when i = 1."
},
{
"code": null,
"e": 25390,
"s": 25215,
"text": "arr[i] = ((i – 1) – k), where k is the index of arr[i – 1] that has appeared second most recently. This rule is applied when arr[i – 1] is present more than once in the array"
},
{
"code": null,
"e": 25475,
"s": 25390,
"text": "arr[i] = 0. This rule is applied when arr[i – 1] is present only once or when i = 1."
},
{
"code": null,
"e": 25487,
"s": 25475,
"text": "Examples: "
},
{
"code": null,
"e": 26603,
"s": 25487,
"text": "Input: N = 8 Output: 0 0 1 0 2 0 2 2 Explanation: For i = 0: There is no element in the array arr[]. So, 0 is placed in the first index. arr[] = {0}. For i = 1: There is only one element in the array arr[]. The occurrence of arr[i – 1] (= 0) is only one. Therefore, the array is filled according to the rule 2. arr = {0, 0}. For i = 2: There are two elements in the array. The second most occurrence of arr[i – 1] = arr[0] = 0. So, arr[2] = 1. arr[] = {0, 0, 1}. For i = 3: There is no second occurrence of arr[i – 1] = 1. Therefore, arr[3] = 0. arr[] = {0, 0, 1, 0} For i = 4: The second recent occurrence of arr[i – 1] = 0 is 1. Therefore, (i – 1 – k) = (4 – 1 – 1) = 2. arr[] = {0, 0, 1, 0, 2}. For i = 5: There is only one occurrence of arr[i – 1] = 2. Therefore, arr[5] = 0. arr[] = {0, 0, 1, 0, 2, 0}. For i = 6: The second recent occurrence of arr[i – 1] = 0 is 3. Therefore, (i – 1 – k) = (6 – 1 – 3) = 2. arr[] = {0, 0, 1, 0, 2, 0, 2}. For i = 7: The second recent occurrence of arr[i – 1] = 2 is 4. Therefore, (i – 1 – k) = (7 – 1 – 4) = 2. arr[] = {0, 0, 1, 0, 2, 0, 2, 2}Input: N = 5 Output: 0 0 1 0 2 "
},
{
"code": null,
"e": 26892,
"s": 26605,
"text": "Approach: The idea is to first create an array filled with zeroes of size N. Then for every iteration, we search if the element has occurred in the near past. If yes, then we follow rule 1. Else, rule 2 is followed to fill the array. Below is the implementation of the above approach: "
},
{
"code": null,
"e": 26896,
"s": 26892,
"text": "C++"
},
{
"code": null,
"e": 26901,
"s": 26896,
"text": "Java"
},
{
"code": null,
"e": 26909,
"s": 26901,
"text": "Python3"
},
{
"code": null,
"e": 26912,
"s": 26909,
"text": "C#"
},
{
"code": null,
"e": 26923,
"s": 26912,
"text": "Javascript"
},
{
"code": "// C++ implementation to generate// an array of size N by following// the given rules#include <bits/stdc++.h>using namespace std; // Function to search the most recent// location of element N// If not present in the array// it will return -1int search(int a[], int k, int x){ int j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesvoid genArray(int arr[], int N){ // Loop to fill the array // as per the given rules for(int i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codeint main(){ int N = 5 ; int size = N + 1 ; int a[] = {0, 0, 0, 0, 0}; genArray(a, N) ; for (int i = 0; i < N ; i ++) cout << a[i] << \" \" ; return 0;} // This code is contributed by shivanisinghss2110",
"e": 27995,
"s": 26923,
"text": null
},
{
"code": "// Java implementation to generate// an array of size N by following// the given rulesclass GFG{ static int a[]; // Function to search the most recent// location of element N// If not present in the array// it will return -1static int search(int a[],int k, int x){ int j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesstatic void genArray(int []arr, int N){ // Loop to fill the array // as per the given rules for(int i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codepublic static void main (String[] args){ int N = 5 ; int size = N + 1 ; int a[] = new int [N]; genArray(a, N) ; for (int i = 0; i < N ; i ++) System.out.print(a[i]+\" \" ); }} // This code is contributed by Yash_R",
"e": 29079,
"s": 27995,
"text": null
},
{
"code": "# Python implementation to generate# an array of size N by following# the given rules # Function to search the most recent# location of element N# If not present in the array# it will return -1def search(a, k, x): for j in range(k-1, -1, -1) : if(a[j]== x): return j return -1 # Function to generate an array# of size N by following the given rulesdef genArray(arr, N): # Loop to fill the array # as per the given rules for i in range(0, N-1, 1): # Check for the occurrence # of arr[i - 1] if(search(arr, i, arr[i])==-1): arr[i + 1]= 0 else: arr[i + 1]=(i-search(arr, i, arr[i])) # Driver code if __name__ == \"__main__\": N = 5 size = N + 1 a =[0]*N genArray(a, N) print(a)",
"e": 29912,
"s": 29079,
"text": null
},
{
"code": "// C# implementation to generate// an array of size N by following// the given rules using System; public class GFG{ static int []a; // Function to search the most recent// location of element N// If not present in the array// it will return -1static int search(int []a,int k, int x){ int j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesstatic void genArray(int []arr, int N){ // Loop to fill the array // as per the given rules for(int i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codepublic static void Main (string[] args){ int N = 5 ; int size = N + 1 ; int []a = new int [N]; genArray(a, N) ; for (int i = 0; i < N ; i ++) Console.Write(a[i]+\" \" ); }}// This code is contributed by AnkitRai01",
"e": 31016,
"s": 29912,
"text": null
},
{
"code": "<script> // Javascript implementation to generate// an array of size N by following// the given rules // Function to search the most recent// location of element N// If not present in the array// it will return -1function search( a, k, x){ var j; for ( j = k - 1; j > -1 ; j--) { if(a[j] == x) return j ; } return -1 ;} // Function to generate an array// of size N by following the given rulesfunction genArray(arr, N){ // Loop to fill the array // as per the given rules for(var i = 0; i < N - 1; i++) { // Check for the occurrence // of arr[i - 1] if(search(arr, i, arr[i]) == -1) arr[i + 1] = 0 ; else arr[i + 1] = (i-search(arr, i, arr[i])) ; }} // Driver codevar N = 5 ;var size = N + 1 ;var a = [0, 0, 0, 0, 0];genArray(a, N) ;document.write(\"[\"+a+\"]\"); // This code is contributed by rutvik_56.</script>",
"e": 31958,
"s": 31016,
"text": null
},
{
"code": null,
"e": 31974,
"s": 31958,
"text": "[0, 0, 1, 0, 2]"
},
{
"code": null,
"e": 31999,
"s": 31976,
"text": "Time Complexity: O(N2)"
},
{
"code": null,
"e": 32021,
"s": 31999,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 32028,
"s": 32021,
"text": "Yash_R"
},
{
"code": null,
"e": 32036,
"s": 32028,
"text": "ankthon"
},
{
"code": null,
"e": 32055,
"s": 32036,
"text": "shivanisinghss2110"
},
{
"code": null,
"e": 32065,
"s": 32055,
"text": "rutvik_56"
},
{
"code": null,
"e": 32081,
"s": 32065,
"text": "souravmahato348"
},
{
"code": null,
"e": 32088,
"s": 32081,
"text": "Arrays"
},
{
"code": null,
"e": 32101,
"s": 32088,
"text": "Mathematical"
},
{
"code": null,
"e": 32120,
"s": 32101,
"text": "School Programming"
},
{
"code": null,
"e": 32127,
"s": 32120,
"text": "Arrays"
},
{
"code": null,
"e": 32140,
"s": 32127,
"text": "Mathematical"
},
{
"code": null,
"e": 32238,
"s": 32140,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32263,
"s": 32238,
"text": "Window Sliding Technique"
},
{
"code": null,
"e": 32283,
"s": 32263,
"text": "Trapping Rain Water"
},
{
"code": null,
"e": 32321,
"s": 32283,
"text": "Reversal algorithm for array rotation"
},
{
"code": null,
"e": 32406,
"s": 32321,
"text": "Move all negative numbers to beginning and positive to end with constant extra space"
},
{
"code": null,
"e": 32455,
"s": 32406,
"text": "Program to find sum of elements in a given array"
},
{
"code": null,
"e": 32485,
"s": 32455,
"text": "Program for Fibonacci numbers"
},
{
"code": null,
"e": 32545,
"s": 32485,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 32560,
"s": 32545,
"text": "C++ Data Types"
},
{
"code": null,
"e": 32603,
"s": 32560,
"text": "Set in C++ Standard Template Library (STL)"
}
] |
Mandelbrot Set in C/C++ Using Graphics - GeeksforGeeks
|
27 Dec, 2019
A Fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They are created by repeating a simple process over and over in an ongoing feedback loop. Mathematically fractals can be explained as follows.
The location of a point on a screen is fed into an equation as its initial solution and the equation is iterated a large number of times.
If that equation tends to zero (i.e. the value at the end of the iterations is smaller than the initial value), the point is coloured black.
If the equation tends to infinity (i.e. the final value is larger than the initial value) then depending on the rate of increase (i.e. the rate at which the value tends to infinity), the pixel is painted with an appropriate colour.
Defining Mandelbrot
The Mandelbrot set is the set of complex numbers c for which the function f_c(z) = z² + c does not diverge when iterated from z = 0, i.e., for which the sequence f_c(0), f_c(f_c(0)), etc., remains bounded in absolute value. In simple words, the Mandelbrot set is a particular set of complex numbers which has a highly convoluted fractal boundary when plotted.
Implementation
#include <complex.h>
#include <tgmath.h>
#include <winbgim.h>

// Defining the size of the screen.
#define Y 1080
#define X 1920

// Recursive function to provide the iterative every 100th
// f^n (0) for every pixel on the screen.
int Mandle(double _Complex c, double _Complex t = 0, int counter = 0)
{
    // To eliminate out of bound values.
    if (cabs(t) > 4) {
        putpixel(creal(c) * Y / 2 + X / 2,
                 cimag(c) * Y / 2 + Y / 2,
                 COLOR(128 - 128 * cabs(t) / cabs(c),
                       128 - 128 * cabs(t) / cabs(c),
                       128 - 128 * cabs(t) / cabs(c)));
        return 0;
    }

    // To put about the end of the fractal,
    // the higher the value of the counter,
    // The more accurate the fractal is generated,
    // however, higher values cause
    // more processing time.
    if (counter == 100) {
        putpixel(creal((c)) * Y / 2 + X / 2,
                 cimag((c)) * Y / 2 + Y / 2,
                 COLOR(255 * (cabs((t * t)) / cabs((t - c) * c)),
                       0, 0));
        return 0;
    }

    // recursively calling Mandle with increased counter
    // and passing the value of the squares of t into it.
    Mandle(c, cpow(t, 2) + c, counter + 1);
    return 0;
}

int MandleSet()
{
    // Calling Mandle function for every
    // point on the screen.
    for (double x = -2; x < 2; x += 0.0015) {
        for (double y = -1; y < 1; y += 0.0015) {
            double _Complex temp = x + y * _Complex_I;
            Mandle(temp);
        }
    }
    return 0;
}

int main()
{
    initwindow(X, Y);
    MandleSet();
    getch();
    closegraph();
    return 0;
}
Output
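For readers without the winbgim graphics library, the same escape-time idea can be sketched with no graphics at all. The following is a minimal, illustrative Python version (an addition, not part of the original article): it iterates z → z² + c over a small grid and prints an asterisk for points that stay bounded, so the familiar shape appears as text.
Python
# Illustrative escape-time sketch of the Mandelbrot set with plain text output.
# It mirrors the idea of the C++ program above: iterate z -> z*z + c and
# check whether |z| stays bounded after a fixed number of iterations.
def stays_bounded(c, max_iter=100, bound=2.0):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False        # diverged -> point is outside the set
    return True                 # still bounded -> treat the point as inside

for row in range(-12, 13):                      # imaginary axis from -1.2 to 1.2
    line = ""
    for col in range(-50, 21):                  # real axis from -2.5 to 1.0
        c = complex(col * 0.05, row * 0.1)
        line += "*" if stays_bounded(c) else " "
    print(line)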
|
[
{
"code": null,
"e": 24622,
"s": 24594,
"text": "\n27 Dec, 2019"
},
{
"code": null,
"e": 24890,
"s": 24622,
"text": "A Fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They are created by repeating a simple process over and over in an ongoing feedback loop. Mathematically fractals can be explained as follows."
},
{
"code": null,
"e": 25028,
"s": 24890,
"text": "The location of a point on a screen is fed into an equation as its initial solution and the equation is iterated a large number of times."
},
{
"code": null,
"e": 25169,
"s": 25028,
"text": "If that equation tends to zero (i.e. the value at the end of the iterations is smaller than the initial value), the point is coloured black."
},
{
"code": null,
"e": 27455,
"s": 25169,
"text": "If the equation tends to infinity (i.e. the final value is larger than the initial value) then depending on the rate of increase (i.e. the rate at which the value tends to infinity), the pixel is painted with an appropriate colour.Defining MandlebrotThe Mandelbrot set is the set of complex numbers c for which the function does not diverge when iterated from z=0, i.e., for which the sequence , etc., remains bounded in absolute value. In simple words, Mandelbrot set is a particular set of complex numbers which has a highly convoluted fractal boundary when plotted.Implementation#include <complex.h>#include <tgmath.h>#include <winbgim.h> // Defining the size of the screen.#define Y 1080#define X 1920 // Recursive function to provide the iterative every 100th// f^n (0) for every pixel on the screen.int Mandle(double _Complex c, double _Complex t = 0, int counter = 0){ // To eliminate out of bound values. if (cabs(t) > 4) { putpixel(creal(c) * Y / 2 + X / 2, cimag(c) * Y / 2 + Y / 2, COLOR(128 - 128 * cabs(t) / cabs(c), 128 - 128 * cabs(t) / cabs(c), 128 - 128 * cabs(t) / cabs(c))); return 0; } // To put about the end of the fractal, // the higher the value of the counter, // The more accurate the fractal is generated, // however, higher values cause // more processing time. if (counter == 100) { putpixel(creal((c)) * Y / 2 + X / 2, cimag((c)) * Y / 2 + Y / 2, COLOR(255 * (cabs((t * t)) / cabs((t - c) * c)), 0, 0)); return 0; } // recursively calling Mandle with increased counter // and passing the value of the squares of t into it. Mandle(c, cpow(t, 2) + c, counter + 1); return 0;} int MandleSet(){ // Calling Mandle function for every // point on the screen. for (double x = -2; x < 2; x += 0.0015) { for (double y = -1; y < 1; y += 0.0015) { double _Complex temp = x + y * _Complex_I; Mandle(temp); } } return 0;} int main(){ initwindow(X, Y); MandleSet(); getch(); closegraph(); return 0;}OutputMy Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 27775,
"s": 27455,
"text": "The Mandelbrot set is the set of complex numbers c for which the function does not diverge when iterated from z=0, i.e., for which the sequence , etc., remains bounded in absolute value. In simple words, Mandelbrot set is a particular set of complex numbers which has a highly convoluted fractal boundary when plotted."
},
{
"code": null,
"e": 27790,
"s": 27775,
"text": "Implementation"
},
{
"code": "#include <complex.h>#include <tgmath.h>#include <winbgim.h> // Defining the size of the screen.#define Y 1080#define X 1920 // Recursive function to provide the iterative every 100th// f^n (0) for every pixel on the screen.int Mandle(double _Complex c, double _Complex t = 0, int counter = 0){ // To eliminate out of bound values. if (cabs(t) > 4) { putpixel(creal(c) * Y / 2 + X / 2, cimag(c) * Y / 2 + Y / 2, COLOR(128 - 128 * cabs(t) / cabs(c), 128 - 128 * cabs(t) / cabs(c), 128 - 128 * cabs(t) / cabs(c))); return 0; } // To put about the end of the fractal, // the higher the value of the counter, // The more accurate the fractal is generated, // however, higher values cause // more processing time. if (counter == 100) { putpixel(creal((c)) * Y / 2 + X / 2, cimag((c)) * Y / 2 + Y / 2, COLOR(255 * (cabs((t * t)) / cabs((t - c) * c)), 0, 0)); return 0; } // recursively calling Mandle with increased counter // and passing the value of the squares of t into it. Mandle(c, cpow(t, 2) + c, counter + 1); return 0;} int MandleSet(){ // Calling Mandle function for every // point on the screen. for (double x = -2; x < 2; x += 0.0015) { for (double y = -1; y < 1; y += 0.0015) { double _Complex temp = x + y * _Complex_I; Mandle(temp); } } return 0;} int main(){ initwindow(X, Y); MandleSet(); getch(); closegraph(); return 0;}",
"e": 29452,
"s": 27790,
"text": null
},
{
"code": null,
"e": 29459,
"s": 29452,
"text": "Output"
},
{
"code": null,
"e": 29477,
"s": 29459,
"text": "computer-graphics"
},
{
"code": null,
"e": 29485,
"s": 29477,
"text": "Fractal"
},
{
"code": null,
"e": 29498,
"s": 29485,
"text": "C++ Programs"
},
{
"code": null,
"e": 29596,
"s": 29498,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29622,
"s": 29596,
"text": "C++ Program for QuickSort"
},
{
"code": null,
"e": 29652,
"s": 29622,
"text": "CSV file management using C++"
},
{
"code": null,
"e": 29674,
"s": 29652,
"text": "delete keyword in C++"
},
{
"code": null,
"e": 29685,
"s": 29674,
"text": "cin in C++"
},
{
"code": null,
"e": 29719,
"s": 29685,
"text": "Shallow Copy and Deep Copy in C++"
},
{
"code": null,
"e": 29760,
"s": 29719,
"text": "Passing a function as a parameter in C++"
},
{
"code": null,
"e": 29790,
"s": 29760,
"text": "C Program to Swap two Numbers"
},
{
"code": null,
"e": 29849,
"s": 29790,
"text": "Program to implement Singly Linked List in C++ using class"
},
{
"code": null,
"e": 29865,
"s": 29849,
"text": "Generics in C++"
}
] |
How to get the last item of JavaScript object ? - GeeksforGeeks
|
23 Oct, 2019
Given a JavaScript object and the task is to get the last element of the JavaScript object.
Approach 1:
Use the Object.keys() method to get all the keys of the object.
Now use indexing to access the last element of the JavaScript object.
Example: This example implements the above approach.
<!DOCTYPE HTML> <html> <head> <title> How to get the last item of JavaScript object ? </title> </head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksforGeeks </h1> <p id = "GFG_UP1" style = "font-size: 15px; font-weight: bold;"> </p> <p id = "GFG_UP2" style = "font-size: 15px; font-weight: bold; color: green;"> </p> <button onclick = "GFG_Fun()"> click here </button> <p id = "GFG_DOWN" style = "color:green; font-size: 20px; font-weight: bold;"> </p> <script> var up1 = document.getElementById('GFG_UP1'); var up2 = document.getElementById('GFG_UP2'); var down = document.getElementById('GFG_DOWN'); var Obj = { "1_prop": "1_Val", "2_prop": "2_Val", "3_prop": "3_Val" }; up1.innerHTML = "Click on the button to get" + "the last element of the Object."; up2.innerHTML = JSON.stringify(Obj); function GFG_Fun() { down.innerHTML = "The last key = '" + Object.keys(Obj)[Object.keys(Obj).length-1] + "' <br> Value = '" + Obj[Object.keys(Obj)[Object.keys(Obj).length-1]] + "'"; } </script> </body> </html>
Output:
Before clicking on the button:
After clicking on the button:
Approach 2:
Use a for...in loop to access all the keys of the object; at the end of the loop, the loop variable will hold the last key of the object.
Now use indexing to access the last element’s value of the JavaScript object.
Example: This example implements the above approach.
<!DOCTYPE HTML> <html> <head> <title> How to get the last item of JavaScript object ? </title> </head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksforGeeks </h1> <p id = "GFG_UP1" style = "font-size: 15px; font-weight: bold;"> </p> <p id = "GFG_UP2" style = "font-size: 15px; font-weight: bold; color: green;"> </p> <button onclick = "GFG_Fun()"> click here </button> <p id = "GFG_DOWN" style = "color:green; font-size: 20px; font-weight: bold;"> </p> <script> var up1 = document.getElementById('GFG_UP1'); var up2 = document.getElementById('GFG_UP2'); var down = document.getElementById('GFG_DOWN'); var Obj = { "1_prop": "1_Val", "2_prop": "2_Val", "3_prop": "3_Val" }; up1.innerHTML = "Click on the button to get" + "the last element of the Object."; up2.innerHTML = JSON.stringify(Obj); function GFG_Fun() { var lastElement; for (lastElement in Obj); lastElement; down.innerHTML = "The last key = '" + lastElement + "' <br> Value = '" + Obj[lastElement] + "'"; } </script> </body> </html>
Output:
Before clicking on the button:
After clicking on the button:
|
[
{
"code": null,
"e": 25043,
"s": 25015,
"text": "\n23 Oct, 2019"
},
{
"code": null,
"e": 25135,
"s": 25043,
"text": "Given a JavaScript object and the task is to get the last element of the JavaScript object."
},
{
"code": null,
"e": 25147,
"s": 25135,
"text": "Approach 1:"
},
{
"code": null,
"e": 25207,
"s": 25147,
"text": "Use Object.keys() method to get the all keys of the object."
},
{
"code": null,
"e": 25277,
"s": 25207,
"text": "Now use indexing to access the last element of the JavaScript object."
},
{
"code": null,
"e": 25330,
"s": 25277,
"text": "Example: This example implements the above approach."
},
{
"code": "<!DOCTYPE HTML> <html> <head> <title> How to get the last item of JavaScript object ? </title> </head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" > GeeksforGeeks </h1> <p id = \"GFG_UP1\" style = \"font-size: 15px; font-weight: bold;\"> </p> <p id = \"GFG_UP2\" style = \"font-size: 15px; font-weight: bold; color: green;\"> </p> <button onclick = \"GFG_Fun()\"> click here </button> <p id = \"GFG_DOWN\" style = \"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var up1 = document.getElementById('GFG_UP1'); var up2 = document.getElementById('GFG_UP2'); var down = document.getElementById('GFG_DOWN'); var Obj = { \"1_prop\": \"1_Val\", \"2_prop\": \"2_Val\", \"3_prop\": \"3_Val\" }; up1.innerHTML = \"Click on the button to get\" + \"the last element of the Object.\"; up2.innerHTML = JSON.stringify(Obj); function GFG_Fun() { down.innerHTML = \"The last key = '\" + Object.keys(Obj)[Object.keys(Obj).length-1] + \"' <br> Value = '\" + Obj[Object.keys(Obj)[Object.keys(Obj).length-1]] + \"'\"; } </script> </body> </html>",
"e": 26751,
"s": 25330,
"text": null
},
{
"code": null,
"e": 26759,
"s": 26751,
"text": "Output:"
},
{
"code": null,
"e": 26790,
"s": 26759,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 26820,
"s": 26790,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 26832,
"s": 26820,
"text": "Approach 2:"
},
{
"code": null,
"e": 26962,
"s": 26832,
"text": "Use for loop to access the all keys of the object and at the end of the loop, loop variable will have the last key of the object."
},
{
"code": null,
"e": 27040,
"s": 26962,
"text": "Now use indexing to access the last element’s value of the JavaScript object."
},
{
"code": null,
"e": 27093,
"s": 27040,
"text": "Example: This example implements the above approach."
},
{
"code": "<!DOCTYPE HTML> <html> <head> <title> How to get the last item of JavaScript object ? </title> </head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" > GeeksforGeeks </h1> <p id = \"GFG_UP1\" style = \"font-size: 15px; font-weight: bold;\"> </p> <p id = \"GFG_UP2\" style = \"font-size: 15px; font-weight: bold; color: green;\"> </p> <button onclick = \"GFG_Fun()\"> click here </button> <p id = \"GFG_DOWN\" style = \"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var up1 = document.getElementById('GFG_UP1'); var up2 = document.getElementById('GFG_UP2'); var down = document.getElementById('GFG_DOWN'); var Obj = { \"1_prop\": \"1_Val\", \"2_prop\": \"2_Val\", \"3_prop\": \"3_Val\" }; up1.innerHTML = \"Click on the button to get\" + \"the last element of the Object.\"; up2.innerHTML = JSON.stringify(Obj); function GFG_Fun() { var lastElement; for (lastElement in Obj); lastElement; down.innerHTML = \"The last key = '\" + lastElement + \"' <br> Value = '\" + Obj[lastElement] + \"'\"; } </script> </body> </html>",
"e": 28535,
"s": 27093,
"text": null
},
{
"code": null,
"e": 28543,
"s": 28535,
"text": "Output:"
},
{
"code": null,
"e": 28574,
"s": 28543,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 28604,
"s": 28574,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 28622,
"s": 28604,
"text": "javascript-object"
},
{
"code": null,
"e": 28633,
"s": 28622,
"text": "JavaScript"
},
{
"code": null,
"e": 28650,
"s": 28633,
"text": "Web Technologies"
},
{
"code": null,
"e": 28677,
"s": 28650,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 28775,
"s": 28677,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28784,
"s": 28775,
"text": "Comments"
},
{
"code": null,
"e": 28797,
"s": 28784,
"text": "Old Comments"
},
{
"code": null,
"e": 28858,
"s": 28797,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28903,
"s": 28858,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 28975,
"s": 28903,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 29024,
"s": 28975,
"text": "How to Use the JavaScript Fetch API to Get Data?"
},
{
"code": null,
"e": 29065,
"s": 29024,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 29121,
"s": 29065,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 29154,
"s": 29121,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 29216,
"s": 29154,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 29259,
"s": 29216,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Principal Component Analysis with Python - GeeksforGeeks
|
10 Mar, 2022
Principal Component Analysis is basically a statistical procedure to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. Each of the principal components is chosen in such a way that it describes most of the still-available variance, and all these principal components are orthogonal to each other. Among all the principal components, the first principal component has the maximum variance.
Uses of PCA:
It is used to find inter-relation between variables in the data.
It is used to interpret and visualize data.
As the number of variables decreases, further analysis becomes simpler.
It’s often used to visualize genetic distance and relatedness between populations.
PCA is basically performed on a square symmetric matrix. It can be a pure sums-of-squares and cross-products matrix, a covariance matrix, or a correlation matrix. A correlation matrix is used if the individual variances differ much.
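To illustrate that last point with a small added sketch (not from the original article): performing PCA on standardized variables is the same as working with the correlation matrix, because the covariance matrix of standardized data equals the correlation matrix of the raw data. The random data below is purely illustrative.
Python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([1.0, 10.0, 100.0])  # columns on very different scales

# Standardize each column (z-scores); ddof=1 to match np.cov's default normalization
X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Covariance of the standardized data equals the correlation matrix of the raw data
print(np.allclose(np.cov(X_std, rowvar=False), np.corrcoef(X, rowvar=False)))  # True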
Objectives of PCA:
It is basically a non-dependent procedure in which it reduces attribute space from a large number of variables to a smaller number of factors.
PCA is basically a dimension reduction process but there is no guarantee that the dimension is interpretable.
The main task in PCA is to select a subset of variables from a larger set, based on which original variables have the highest correlation with the principal components.
Principal Axis Method: PCA basically searches for a linear combination of variables so that we can extract maximum variance from the variables. Once this process completes, it removes that variance and searches for another linear combination that explains the maximum proportion of the remaining variance, which basically leads to orthogonal factors. In this method, we analyze total variance.
Eigenvector: It is a non-zero vector that stays parallel after matrix multiplication. Let’s suppose x is an eigenvector of dimension r of matrix M with dimension r*r if Mx and x are parallel. Then we need to solve Mx=Ax where both x and A are unknown to get eigenvector and eigenvalues. Under Eigen-Vectors we can say that Principal components show both common and unique variance of the variable. Basically, it is variance focused approach seeking to reproduce total variance and correlation with all components. The principal components are basically the linear combinations of the original variables weighted by their contribution to explain the variance in a particular orthogonal dimension.
Eigen Values: It is basically known as characteristic roots. It basically measures the variance in all variables which is accounted for by that factor. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If the factor is low then it is contributing less to the explanation of variables. In simple words, it measures the amount of variance in the total given database accounted by the factor. We can calculate the factor’s eigenvalue as the sum of its squared factor loading for all the variables.
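To make the eigenvalue and eigenvector discussion concrete, here is a small added NumPy sketch (not part of the original article): it eigen-decomposes a covariance matrix and normalizes the eigenvalues to obtain the proportion of variance explained by each principal axis. The data and variable names are illustrative assumptions.
Python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
X[:, 1] += 2 * X[:, 0]                 # introduce correlation so one component dominates

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Eigen-decomposition of the (symmetric) covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort from largest to smallest eigenvalue
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Each eigenvalue measures the variance captured along its eigenvector (principal axis)
explained_ratio = eigenvalues / eigenvalues.sum()
print(explained_ratio)                 # the first entry should be the largest share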
Now, let’s understand Principal Component Analysis with Python. The implementation below uses the Wine dataset (wine.csv).
Step 1: Importing the libraries
Python
# importing required libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Step 2: Importing the data set. Import the dataset and distribute it into X and y components for data analysis.
Python
# importing or loading the dataset
dataset = pd.read_csv('wine.csv')

# distributing the dataset into two components X and Y
X = dataset.iloc[:, 0:13].values
y = dataset.iloc[:, 13].values
Step 3: Splitting the dataset into the Training set and Test set
Python
# Splitting the X and Y into the
# Training set and Testing set
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
Step 4: Feature Scaling. Do the pre-processing on the training and testing sets, such as fitting the standard scaler.
Python
# performing preprocessing part
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()

X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
Step 5: Applying the PCA function. Apply the PCA function to the training and testing sets for analysis.
Python
# Applying PCA function on training
# and testing set of X component
from sklearn.decomposition import PCA

pca = PCA(n_components = 2)

X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)

explained_variance = pca.explained_variance_ratio_
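A common follow-up, added here as an illustrative sketch rather than a step from the original article, is to inspect explained_variance_ratio_ to see how much information the two retained components keep; scikit-learn can also be asked to choose the number of components for a target variance.
Python
# Inspect how much of the total variance the two retained components keep
print(explained_variance)        # proportion of variance explained by PC1 and PC2
print(explained_variance.sum())  # combined share kept after reducing 13 features to 2

# Alternatively (illustrative), PCA can pick the number of components itself,
# e.g. enough components to retain 95% of the variance:
# pca = PCA(n_components = 0.95)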
Step 6: Fitting Logistic Regression To the training set
Python
# Fitting Logistic Regression To the training set
from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
Step 7: Predicting the test set result
Python
# Predicting the test set result using
# predict function under LogisticRegression
y_pred = classifier.predict(X_test)
Step 8: Making the confusion matrix
Python
# making confusion matrix between
# test set of Y and predicted value.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
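As a quick sanity check, added here as a sketch and not part of the original article, the model's accuracy can be read off the confusion matrix by dividing its trace (the correctly classified samples on the diagonal) by the total number of test samples.
Python
# Accuracy = correctly classified samples (diagonal of cm) / all test samples
accuracy = np.trace(cm) / cm.sum()
print(cm)
print('Accuracy:', accuracy)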
Step 9: Predicting the training set result
Python
# Predicting the training set
# result through scatter plot
from matplotlib.colors import ListedColormap

X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1,
                               stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1,
                               stop = X_set[:, 1].max() + 1, step = 0.01))

plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(),
             X2.ravel()]).T).reshape(X1.shape), alpha = 0.75,
             cmap = ListedColormap(('yellow', 'white', 'aquamarine')))

plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())

for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)

plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')  # for Xlabel
plt.ylabel('PC2')  # for Ylabel
plt.legend()  # to show legend

# show scatter plot
plt.show()
Step 10: Visualizing the Test set results
Python
# Visualising the Test set results through scatter plot
from matplotlib.colors import ListedColormap

X_set, y_set = X_test, y_test

X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1,
                               stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1,
                               stop = X_set[:, 1].max() + 1, step = 0.01))

plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(),
             X2.ravel()]).T).reshape(X1.shape), alpha = 0.75,
             cmap = ListedColormap(('yellow', 'white', 'aquamarine')))

plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())

for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)

# title for scatter plot
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')  # for Xlabel
plt.ylabel('PC2')  # for Ylabel
plt.legend()

# show scatter plot
plt.show()
|
[
{
"code": null,
"e": 23971,
"s": 23943,
"text": "\n10 Mar, 2022"
},
{
"code": null,
"e": 24425,
"s": 23971,
"text": "Principal Component Analysis is basically a statistical procedure to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. Each of the principal components is chosen in such a way so that it would describe most of them still available variance and all these principal components are orthogonal to each other. In all principal components first principal component has a maximum variance. "
},
{
"code": null,
"e": 24439,
"s": 24425,
"text": "Uses of PCA: "
},
{
"code": null,
"e": 24504,
"s": 24439,
"text": "It is used to find inter-relation between variables in the data."
},
{
"code": null,
"e": 24548,
"s": 24504,
"text": "It is used to interpret and visualize data."
},
{
"code": null,
"e": 24621,
"s": 24548,
"text": "The number of variables is decreasing it makes further analysis simpler."
},
{
"code": null,
"e": 24704,
"s": 24621,
"text": "It’s often used to visualize genetic distance and relatedness between populations."
},
{
"code": null,
"e": 24938,
"s": 24704,
"text": "These are basically performed on a square symmetric matrix. It can be a pure sums of squares and cross-products matrix or Covariance matrix or Correlation matrix. A correlation matrix is used if the individual variance differs much. "
},
{
"code": null,
"e": 24958,
"s": 24938,
"text": "Objectives of PCA: "
},
{
"code": null,
"e": 25101,
"s": 24958,
"text": "It is basically a non-dependent procedure in which it reduces attribute space from a large number of variables to a smaller number of factors."
},
{
"code": null,
"e": 25211,
"s": 25101,
"text": "PCA is basically a dimension reduction process but there is no guarantee that the dimension is interpretable."
},
{
"code": null,
"e": 25381,
"s": 25211,
"text": "The main task in this PCA is to select a subset of variables from a larger set, based on which original variables have the highest correlation with the principal amount."
},
{
"code": null,
"e": 25772,
"s": 25381,
"text": "Principal Axis Method: PCA basically searches a linear combination of variables so that we can extract maximum variance from the variables. Once this process completes it removes it and searches for another linear combination that gives an explanation about the maximum proportion of remaining variance which basically leads to orthogonal factors. In this method, we analyze total variance."
},
{
"code": null,
"e": 26468,
"s": 25772,
"text": "Eigenvector: It is a non-zero vector that stays parallel after matrix multiplication. Let’s suppose x is an eigenvector of dimension r of matrix M with dimension r*r if Mx and x are parallel. Then we need to solve Mx=Ax where both x and A are unknown to get eigenvector and eigenvalues. Under Eigen-Vectors we can say that Principal components show both common and unique variance of the variable. Basically, it is variance focused approach seeking to reproduce total variance and correlation with all components. The principal components are basically the linear combinations of the original variables weighted by their contribution to explain the variance in a particular orthogonal dimension."
},
{
"code": null,
"e": 27024,
"s": 26468,
"text": "Eigen Values: It is basically known as characteristic roots. It basically measures the variance in all variables which is accounted for by that factor. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If the factor is low then it is contributing less to the explanation of variables. In simple words, it measures the amount of variance in the total given database accounted by the factor. We can calculate the factor’s eigenvalue as the sum of its squared factor loading for all the variables. "
},
{
"code": null,
"e": 27178,
"s": 27024,
"text": "Now, Let’s understand Principal Component Analysis with Python.To get the dataset used in the implementation, click here.Step 1: Importing the libraries "
},
{
"code": null,
"e": 27185,
"s": 27178,
"text": "Python"
},
{
"code": "# importing required librariesimport numpy as npimport matplotlib.pyplot as pltimport pandas as pd",
"e": 27284,
"s": 27185,
"text": null
},
{
"code": null,
"e": 27406,
"s": 27284,
"text": " Step 2: Importing the data setImport the dataset and distributing the dataset into X and y components for data analysis."
},
{
"code": null,
"e": 27413,
"s": 27406,
"text": "Python"
},
{
"code": "# importing or loading the datasetdataset = pd.read_csv('wine.csv') # distributing the dataset into two components X and YX = dataset.iloc[:, 0:13].valuesy = dataset.iloc[:, 13].values",
"e": 27598,
"s": 27413,
"text": null
},
{
"code": null,
"e": 27665,
"s": 27598,
"text": " Step 3: Splitting the dataset into the Training set and Test set "
},
{
"code": null,
"e": 27672,
"s": 27665,
"text": "Python"
},
{
"code": "# Splitting the X and Y into the# Training set and Testing setfrom sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)",
"e": 27880,
"s": 27672,
"text": null
},
{
"code": null,
"e": 27998,
"s": 27880,
"text": " Step 4: Feature ScalingDoing the pre-processing part on training and testing set such as fitting the Standard scale."
},
{
"code": null,
"e": 28005,
"s": 27998,
"text": "Python"
},
{
"code": "# performing preprocessing partfrom sklearn.preprocessing import StandardScalersc = StandardScaler() X_train = sc.fit_transform(X_train)X_test = sc.transform(X_test)",
"e": 28171,
"s": 28005,
"text": null
},
{
"code": null,
"e": 28276,
"s": 28171,
"text": " Step 5: Applying PCA functionApplying the PCA function into the training and testing set for analysis. "
},
{
"code": null,
"e": 28283,
"s": 28276,
"text": "Python"
},
{
"code": "# Applying PCA function on training# and testing set of X componentfrom sklearn.decomposition import PCA pca = PCA(n_components = 2) X_train = pca.fit_transform(X_train)X_test = pca.transform(X_test) explained_variance = pca.explained_variance_ratio_",
"e": 28534,
"s": 28283,
"text": null
},
{
"code": null,
"e": 28591,
"s": 28534,
"text": " Step 6: Fitting Logistic Regression To the training set"
},
{
"code": null,
"e": 28598,
"s": 28591,
"text": "Python"
},
{
"code": "# Fitting Logistic Regression To the training setfrom sklearn.linear_model import LogisticRegression classifier = LogisticRegression(random_state = 0)classifier.fit(X_train, y_train)",
"e": 28782,
"s": 28598,
"text": null
},
{
"code": null,
"e": 28823,
"s": 28782,
"text": " Step 7: Predicting the test set result "
},
{
"code": null,
"e": 28830,
"s": 28823,
"text": "Python"
},
{
"code": "# Predicting the test set result using# predict function under LogisticRegressiony_pred = classifier.predict(X_test)",
"e": 28947,
"s": 28830,
"text": null
},
{
"code": null,
"e": 28985,
"s": 28947,
"text": " Step 8: Making the confusion matrix "
},
{
"code": null,
"e": 28992,
"s": 28985,
"text": "Python"
},
{
"code": "# making confusion matrix between# test set of Y and predicted value.from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred)",
"e": 29145,
"s": 28992,
"text": null
},
{
"code": null,
"e": 29189,
"s": 29145,
"text": " Step 9: Predicting the training set result"
},
{
"code": null,
"e": 29196,
"s": 29189,
"text": "Python"
},
{
"code": "# Predicting the training set# result through scatter plotfrom matplotlib.colors import ListedColormap X_set, y_set = X_train, y_trainX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('yellow', 'white', 'aquamarine'))) plt.xlim(X1.min(), X1.max())plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green', 'blue'))(i), label = j) plt.title('Logistic Regression (Training set)')plt.xlabel('PC1') # for Xlabelplt.ylabel('PC2') # for Ylabelplt.legend() # to show legend # show scatter plotplt.show()",
"e": 30173,
"s": 29196,
"text": null
},
{
"code": null,
"e": 30217,
"s": 30173,
"text": " Step 10: Visualizing the Test set results"
},
{
"code": null,
"e": 30224,
"s": 30217,
"text": "Python"
},
{
"code": "# Visualising the Test set results through scatter plotfrom matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('yellow', 'white', 'aquamarine'))) plt.xlim(X1.min(), X1.max())plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green', 'blue'))(i), label = j) # title for scatter plotplt.title('Logistic Regression (Test set)')plt.xlabel('PC1') # for Xlabelplt.ylabel('PC2') # for Ylabelplt.legend() # show scatter plotplt.show()",
"e": 31200,
"s": 30224,
"text": null
},
{
"code": null,
"e": 31219,
"s": 31200,
"text": "surindertarika1234"
},
{
"code": null,
"e": 31229,
"s": 31219,
"text": "as5853535"
},
{
"code": null,
"e": 31246,
"s": 31229,
"text": "23603vaibhav2021"
},
{
"code": null,
"e": 31261,
"s": 31246,
"text": "shashwatsonu21"
},
{
"code": null,
"e": 31268,
"s": 31261,
"text": "Picked"
},
{
"code": null,
"e": 31294,
"s": 31268,
"text": "Advanced Computer Subject"
},
{
"code": null,
"e": 31311,
"s": 31294,
"text": "Machine Learning"
},
{
"code": null,
"e": 31318,
"s": 31311,
"text": "Python"
},
{
"code": null,
"e": 31335,
"s": 31318,
"text": "Machine Learning"
},
{
"code": null,
"e": 31433,
"s": 31335,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31442,
"s": 31433,
"text": "Comments"
},
{
"code": null,
"e": 31455,
"s": 31442,
"text": "Old Comments"
},
{
"code": null,
"e": 31469,
"s": 31455,
"text": "Decision Tree"
},
{
"code": null,
"e": 31513,
"s": 31469,
"text": "Copying Files to and from Docker Containers"
},
{
"code": null,
"e": 31536,
"s": 31513,
"text": "System Design Tutorial"
},
{
"code": null,
"e": 31574,
"s": 31536,
"text": "Python | Decision tree implementation"
},
{
"code": null,
"e": 31614,
"s": 31574,
"text": "Decision Tree Introduction with example"
},
{
"code": null,
"e": 31648,
"s": 31614,
"text": "Agents in Artificial Intelligence"
},
{
"code": null,
"e": 31662,
"s": 31648,
"text": "Decision Tree"
},
{
"code": null,
"e": 31700,
"s": 31662,
"text": "Python | Decision tree implementation"
},
{
"code": null,
"e": 31724,
"s": 31700,
"text": "Search Algorithms in AI"
}
] |
DAX Text - LEN function
|
Returns the number of characters in a text string.
LEN (<text>)
text
The text whose length you want to find, or a column that contains text.
A whole number indicating the number of characters in the text string.
Spaces count as characters.
DAX uses Unicode and stores all the characters with the same length. Hence, LEN always counts each character as 1, no matter what the default language setting is.
If you use the DAX LEN function with a column that contains non-text values, such as dates or Boolean values, the function implicitly casts the value to text, using the current column format.
= LEN ([Product])
Returns a calculated column with the number of characters in the corresponding Product text values.
|
[
{
"code": null,
"e": 2052,
"s": 2001,
"text": "Returns the number of characters in a text string."
},
{
"code": null,
"e": 2067,
"s": 2052,
"text": "LEN (<text>) \n"
},
{
"code": null,
"e": 2072,
"s": 2067,
"text": "text"
},
{
"code": null,
"e": 2144,
"s": 2072,
"text": "The text whose length you want to find, or a column that contains text."
},
{
"code": null,
"e": 2215,
"s": 2144,
"text": "A whole number indicating the number of characters in the text string."
},
{
"code": null,
"e": 2243,
"s": 2215,
"text": "Spaces count as characters."
},
{
"code": null,
"e": 2406,
"s": 2243,
"text": "DAX uses Unicode and stores all the characters with the same length. Hence, LEN always counts each character as 1, no matter what the default language setting is."
},
{
"code": null,
"e": 2594,
"s": 2406,
"text": "If you use DAX LEN function with a column that contains non-text values, such as dates or Boolean values, the function implicitly casts the value to text, using the current column format."
},
{
"code": null,
"e": 2613,
"s": 2594,
"text": "= LEN ([Product]) "
},
{
"code": null,
"e": 2713,
"s": 2613,
"text": "Returns a calculated column with the number of characters in the corresponding Product text values."
},
{
"code": null,
"e": 2748,
"s": 2713,
"text": "\n 53 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 2762,
"s": 2748,
"text": " Abhay Gadiya"
},
{
"code": null,
"e": 2795,
"s": 2762,
"text": "\n 24 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 2809,
"s": 2795,
"text": " Randy Minder"
},
{
"code": null,
"e": 2844,
"s": 2809,
"text": "\n 26 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 2858,
"s": 2844,
"text": " Randy Minder"
},
{
"code": null,
"e": 2865,
"s": 2858,
"text": " Print"
},
{
"code": null,
"e": 2876,
"s": 2865,
"text": " Add Notes"
}
] |
Spring DI - Setter-Based
|
Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean.
The following example shows a class TextEditor that can only be dependency-injected using pure setter-based injection.
Let's update the project created in Spring DI - Create Project chapter. We're adding the following files −
TextEditor.java − A class containing a SpellChecker as dependency.
SpellChecker.java − A dependency class.
MainApp.java − Main application to run and test.
Here is the content of TextEditor.java file −
package com.tutorialspoint;
public class TextEditor {
   private SpellChecker spellChecker;

   // a setter method to inject the dependency.
   public void setSpellChecker(SpellChecker spellChecker) {
      System.out.println("Inside setSpellChecker." );
      this.spellChecker = spellChecker;
   }
   // a getter method to return spellChecker
   public SpellChecker getSpellChecker() {
      return spellChecker;
   }
   public void spellCheck() {
      spellChecker.checkSpelling();
   }
}
Here you need to check the naming convention of the setter methods. To set a variable spellChecker we are using setSpellChecker() method which is very similar to Java POJO classes. Let us create the content of another dependent class file SpellChecker.java −
package com.tutorialspoint;

public class SpellChecker {
   public SpellChecker(){
      System.out.println("Inside SpellChecker constructor." );
   }
   public void checkSpelling() {
      System.out.println("Inside checkSpelling." );
   }
}
Following is the content of the MainApp.java file
package com.tutorialspoint;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
   public static void main(String[] args) {
      ApplicationContext context = new ClassPathXmlApplicationContext("applicationcontext.xml");

      TextEditor te = (TextEditor) context.getBean("textEditor");
      te.spellCheck();
   }
}
Following is the configuration file applicationcontext.xml which has configuration for the setter-based injection −
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

   <!-- Definition for textEditor bean -->
   <bean id = "textEditor" class = "com.tutorialspoint.TextEditor">
      <property name = "spellChecker" ref = "spellChecker"/>
   </bean>

   <!-- Definition for spellChecker bean -->
   <bean id = "spellChecker" class = "com.tutorialspoint.SpellChecker"></bean>
</beans>
You should note the difference in applicationcontext.xml file defined in the constructor-based injection and the setter-based injection. The only difference is inside the <bean> element where we have used <constructor-arg> tags for constructor-based injection and <property> tags for setter-based injection.
The second important point to note is that in case you are passing a reference to an object, you need to use ref attribute of <property> tag and if you are passing a value directly then you should use value attribute.
Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, this will print the following message −
Inside SpellChecker constructor.
Inside setSpellChecker.
Inside checkSpelling.
|
[
{
"code": null,
"e": 2568,
"s": 2378,
"text": "Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean."
},
{
"code": null,
"e": 2687,
"s": 2568,
"text": "The following example shows a class TextEditor that can only be dependency-injected using pure setter-based injection."
},
{
"code": null,
"e": 2790,
"s": 2687,
"text": "Let's update the project created in Spring DI - Create Project chapter. We're adding following files −"
},
{
"code": null,
"e": 2857,
"s": 2790,
"text": "TextEditor.java − A class containing a SpellChecker as dependency."
},
{
"code": null,
"e": 2924,
"s": 2857,
"text": "TextEditor.java − A class containing a SpellChecker as dependency."
},
{
"code": null,
"e": 2964,
"s": 2924,
"text": "SpellChecker.java − A dependency class."
},
{
"code": null,
"e": 3004,
"s": 2964,
"text": "SpellChecker.java − A dependency class."
},
{
"code": null,
"e": 3053,
"s": 3004,
"text": "MainApp.java − Main application to run and test."
},
{
"code": null,
"e": 3102,
"s": 3053,
"text": "MainApp.java − Main application to run and test."
},
{
"code": null,
"e": 3148,
"s": 3102,
"text": "Here is the content of TextEditor.java file −"
},
{
"code": null,
"e": 3641,
"s": 3148,
"text": "package com.tutorialspoint;\npublic class TextEditor {\n private SpellChecker spellChecker;\n\n // a setter method to inject the dependency.\n public void setSpellChecker(SpellChecker spellChecker) {\n System.out.println(\"Inside setSpellChecker.\" );\n this.spellChecker = spellChecker;\n }\n // a getter method to return spellChecker\n public SpellChecker getSpellChecker() {\n return spellChecker;\n }\n public void spellCheck() {\n spellChecker.checkSpelling();\n }\n}"
},
{
"code": null,
"e": 3900,
"s": 3641,
"text": "Here you need to check the naming convention of the setter methods. To set a variable spellChecker we are using setSpellChecker() method which is very similar to Java POJO classes. Let us create the content of another dependent class file SpellChecker.java −"
},
{
"code": null,
"e": 4143,
"s": 3900,
"text": "package com.tutorialspoint;\n\npublic class SpellChecker {\n public SpellChecker(){\n System.out.println(\"Inside SpellChecker constructor.\" );\n }\n public void checkSpelling() {\n System.out.println(\"Inside checkSpelling.\" );\n }\n}"
},
{
"code": null,
"e": 4193,
"s": 4143,
"text": "Following is the content of the MainApp.java file"
},
{
"code": null,
"e": 4614,
"s": 4193,
"text": "package com.tutorialspoint;\n\nimport org.springframework.context.ApplicationContext;\nimport org.springframework.context.support.ClassPathXmlApplicationContext;\n\npublic class MainApp {\n public static void main(String[] args) {\n ApplicationContext context = new ClassPathXmlApplicationContext(\"applicationcontext.xml\");\n\n TextEditor te = (TextEditor) context.getBean(\"textEditor\");\n te.spellCheck();\n }\n}"
},
{
"code": null,
"e": 4730,
"s": 4614,
"text": "Following is the configuration file applicationcontext.xml which has configuration for the setter-based injection −"
},
{
"code": null,
"e": 5351,
"s": 4730,
"text": "<?xml version = \"1.0\" encoding = \"UTF-8\"?>\n\n<beans xmlns = \"http://www.springframework.org/schema/beans\"\n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation = \"http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\">\n\n <!-- Definition for textEditor bean -->\n <bean id = \"textEditor\" class = \"com.tutorialspoint.TextEditor\">\n <property name = \"spellChecker\" ref = \"spellChecker\"/>\n </bean>\n\n <!-- Definition for spellChecker bean -->\n <bean id = \"spellChecker\" class = \"com.tutorialspoint.SpellChecker\"></bean>\n</beans>"
},
{
"code": null,
"e": 5659,
"s": 5351,
"text": "You should note the difference in applicationcontext.xml file defined in the constructor-based injection and the setter-based injection. The only difference is inside the <bean> element where we have used <constructor-arg> tags for constructor-based injection and <property> tags for setter-based injection."
},
{
"code": null,
"e": 5877,
"s": 5659,
"text": "The second important point to note is that in case you are passing a reference to an object, you need to use ref attribute of <property> tag and if you are passing a value directly then you should use value attribute."
},
{
"code": null,
"e": 6058,
"s": 5877,
"text": "Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, this will print the following message −"
},
{
"code": null,
"e": 6138,
"s": 6058,
"text": "Inside SpellChecker constructor.\nInside setSpellChecker.\nInside checkSpelling.\n"
},
{
"code": null,
"e": 6172,
"s": 6138,
"text": "\n 102 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 6186,
"s": 6172,
"text": " Karthikeya T"
},
{
"code": null,
"e": 6219,
"s": 6186,
"text": "\n 39 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6234,
"s": 6219,
"text": " Chaand Sheikh"
},
{
"code": null,
"e": 6269,
"s": 6234,
"text": "\n 73 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 6281,
"s": 6269,
"text": " Senol Atac"
},
{
"code": null,
"e": 6316,
"s": 6281,
"text": "\n 62 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 6328,
"s": 6316,
"text": " Senol Atac"
},
{
"code": null,
"e": 6363,
"s": 6328,
"text": "\n 67 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 6375,
"s": 6363,
"text": " Senol Atac"
},
{
"code": null,
"e": 6408,
"s": 6375,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6420,
"s": 6408,
"text": " Senol Atac"
},
{
"code": null,
"e": 6427,
"s": 6420,
"text": " Print"
},
{
"code": null,
"e": 6438,
"s": 6427,
"text": " Add Notes"
}
] |
How to compare two lists for equality in C#?
|
Set the two lists −
List < string > list1 = new List < string > ();
list1.Add("A");
list1.Add("B");
list1.Add("C");
list1.Add("D");
List < string > list2 = new List < string > ();
list2.Add("C");
list2.Add("D");
Now if the following returns different elements, then it would mean the lists are not equal −
using System;
using System.Collections.Generic;
using System.Linq;

public class Demo {
   public static void Main() {
      List < string > list1 = new List < string > ();
      list1.Add("P");
      list1.Add("Q");
      list1.Add("R");

      Console.WriteLine("First list...");
      foreach(string value in list1) {
         Console.WriteLine(value);
      }

      Console.WriteLine("Second list...");
      List < string > list2 = new List < string > ();

      list2.Add("P");
      list2.Add("R");
      foreach(string value in list2) {
         Console.WriteLine(value);
      }

      Console.WriteLine("Difference in the two lists...");
      IEnumerable < string > list3;
      list3 = list1.Except(list2);
      foreach(string value in list3) {
         Console.WriteLine(value);
      }
   }
}
|
[
{
"code": null,
"e": 1082,
"s": 1062,
"text": "Set the two lists −"
},
{
"code": null,
"e": 1194,
"s": 1082,
"text": "List < string > list1 = new List < string > ();\nlist1.Add(\"A\");\nlist1.Add(\"B\");\nlist1.Add(\"C\");\nlist1.Add(\"D\");"
},
{
"code": null,
"e": 1274,
"s": 1194,
"text": "List < string > list2 = new List < string > ();\nlist2.Add(\"C\");\nlist2.Add(\"D\");"
},
{
"code": null,
"e": 1368,
"s": 1274,
"text": "Now if the following returns different elements, then it would mean the lists are not equal −"
},
{
"code": null,
"e": 2177,
"s": 1368,
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\n\npublic class Demo {\n public static void Main() {\n List < string > list1 = new List < string > ();\n list1.Add(\"P\");\n list1.Add(\"Q\");\n list1.Add(\"R\");\n\n Console.WriteLine(\"First list...\");\n foreach(string value in list1) {\n Console.WriteLine(value);\n }\n\n Console.WriteLine(\"Second list...\");\n List < string > list2 = new List < string > ();\n\n list2.Add(\"P\");\n list2.Add(\"R\");\n foreach(string value in list2) {\n Console.WriteLine(value);\n }\n\n Console.WriteLine(\"Difference in the two lists...\");\n IEnumerable < string > list3;\n list3 = list1.Except(list2);\n foreach(string value in list3) {\n Console.WriteLine(value);\n }\n }\n}"
}
] |
Anatomy of a dbt project. Some basic concepts and project... | by Tuan Nguyen | Towards Data Science
|
For analysts not from a technical background, working with dbt can be intimidating at first. Writing codes to configure everything takes some time to get used to, especially if you are used to setting everything up using a UI.
If you consider dbt for your team or just started with a team that uses dbt, this post will hopefully help ease you into the project structure and some basic concepts. We will walk through most files and folders in the image below. You can find the repo here.
This file contains the database connection that dbt will use to connect to the data warehouse. You only have to worry about this file if you set up dbt locally.
Since this file can have sensitive information such as project name and database credentials, it is not in your dbt project. By default, the file is created in the folder: ~/.dbt/.
If you start your project from scratch, running dbt init will generate this file for you. If not, you may have to create the .dbt folder and a profiles.yml file locally. Follow dbt documentation or contact the repo owner if you are unsure how to set up this file.
If you work on multiple projects locally, the different project names (configured in the dbt_project.yml file) will allow you to set up various profiles for other projects.
Besides configuring the database connection, you can also configure a target. Target is how you have different configurations for different environments. For instance, when developing locally, you and your team would want separate datasets/databases to work on. But when deploying to production, it is best to have all tables in a single dataset/database. The default target is dev.
This file is the main configuration file for your project. If you are creating your project, change the project name and the profile name (preferably to the same value). Also, replace my_new_project at the models section with the new project name.
All objects in this project will inherit settings configured here unless overridden at the model level. For example, you can configure to create all models as tables in the dbt_project.yml file. But you can go into one of the models and override that setting to create a view instead.
Where to configure what depends on your project setup, but a good principle to follow is DRY (do not repeat yourself). For settings that apply to the whole project or a specific folder, define them here in this file. For options that are only relevant to one model, configure it there.
For a complete list of settings, you can configure in this file, check out the dbt_project.yml documentation.
The models folder contains all data models in your project. Inside this folder, you can create any folder structure that you want. A recommended structure put out in the dbt style guide is as follow:
├── dbt_project.yml
└── models
    ├── marts
    |   └── core
    |       ├── intermediate
    |       |   ├── intermediate.yml
    |       |   ├── customers__unioned.sql
    |       |   ├── customers__grouped.sql
    |       └── core.yml
    |       └── core.docs
    |       └── dim_customers.sql
    |       └── fct_orders.sql
    └── staging
        └── stripe
            ├── base
            |   ├── base__stripe_invoices.sql
            ├── src_stripe.yml
            ├── src_stripe.docs
            ├── stg_stripe.yml
            ├── stg_stripe__customers.sql
            └── stg_stripe__invoices.sql
Here, we have the marts and staging folders underneath models. Different data sources will have separate folders underneath staging (e.g. stripe). Use cases or departments have different folders underneath marts (e.g. core or marketing).
Notice the .yml and .docs files in the example above. They are where you would define metadata and documentation for your models. You can have everything in the dbt_project.yml file, but it is much cleaner to define them here.
This structure allows you to organize objects clearly and easily apply bulk settings.
This folder contains all manual data that will be loaded to the database by dbt. To load the .csv files in this folder to the database, you will have to run the dbt seed command.
Since this is a version-controlled repository, do not put large files or files with sensitive information here. A file that is changed regularly is also not a good candidate for this approach. Some examples of a good fit for dbt seed are yearly budget, status mappings, category mappings, etc.
Macros in dbt are similar to functions in excel. You can define custom functions in the macros folder or override default macros and macros from a package. Macros, together with Jinja templating, gives you many functionalities that are not available with SQL.
For example, you can use a control structure (if statements and for loops) or different behavior for different targets. You can also abstract away complicated SQL logic and make your code much more readable.
-- This is hard to read
(amount / 100)::numeric(16, 2) as amount_usd
-- This is much easier
{{ cents_to_dollars('amount') }} as amount_usd
One of the great things about dbt is that you can easily use packages that others have made. Doing so can save you tons of time, and you will not have to reinvent the wheel. You can find most of the dbt packages here.
To use any of these packages, you need to define them in the packages.yml file. Simply add content as following to the file.
packages:
  - package: dbt-labs/dbt_utils
    version: 0.7.3
Before you can use these packages, you have to run dbt deps to install these dependencies. After that, you can use any macro from the packages in your code.
Snapshot is a dbt feature to capture the state of a table at a particular time. The snapshot folder contains all snapshot models for your project, which must be kept separate from the models folder.
The use case for this is to build a slowly changing dimension (SCD) table for sources that do not support change data capture (CDC). For example, every time the status of an order changes, your system overrides it with the new information. In this case, we cannot know what historical statuses that order had.
Snapshot offers a solution around this. If you capture the orders tables every day, you can keep the history of order status changes over time. For every record, dbt will determine whether it is the latest one and update the old one’s effective period.
Read more about the dbt snapshot function here.
Most of the tests in a dbt project are defined in a .yml file under models. These are tests that use pre-made or custom macros. These tests can be applied to a model or a column.
In the example above, when you run dbt test, dbt will check whether order_id is unique and not_null, status is in the defined values, and every record of customer_id can be linked to a record in the customers table.
Sometimes though, these tests are not enough, and you need to write custom ones. In that case, you would store those custom tests in the tests folder. dbt will evaluate the SQL statement. The test will pass if no row is returned and failed if at least one or more rows are returned.
We have walked through some basic concepts and project structure of a typical dbt project in this post. I hope that this can be the start of a fascinating new journey for you.
|
[
{
"code": null,
"e": 399,
"s": 172,
"text": "For analysts not from a technical background, working with dbt can be intimidating at first. Writing codes to configure everything takes some time to get used to, especially if you are used to setting everything up using a UI."
},
{
"code": null,
"e": 659,
"s": 399,
"text": "If you consider dbt for your team or just started with a team that uses dbt, this post will hopefully help ease you into the project structure and some basic concepts. We will walk through most files and folders in the image below. You can file the repo here."
},
{
"code": null,
"e": 820,
"s": 659,
"text": "This file contains the database connection that dbt will use to connect to the data warehouse. You only have to worry about this file if you set up dbt locally."
},
{
"code": null,
"e": 1001,
"s": 820,
"text": "Since this file can have sensitive information such as project name and database credentials, it is not in your dbt project. By default, the file is created in the folder: ~/.dbt/."
},
{
"code": null,
"e": 1265,
"s": 1001,
"text": "If you start your project from scratch, running dbt init will generate this file for you. If not, you may have to create the .dbt folder and a profiles.yml file locally. Follow dbt documentation or contact the repo owner if you are unsure how to set up this file."
},
{
"code": null,
"e": 1438,
"s": 1265,
"text": "If you work on multiple projects locally, the different project names (configured in the dbt_project.yml file) will allow you to set up various profiles for other projects."
},
{
"code": null,
"e": 1821,
"s": 1438,
"text": "Besides configuring the database connection, you can also configure a target. Target is how you have different configurations for different environments. For instance, when developing locally, you and your team would want separate datasets/databases to work on. But when deploying to production, it is best to have all tables in a single dataset/database. The default target is dev."
},
{
"code": null,
"e": 2069,
"s": 1821,
"text": "This file is the main configuration file for your project. If you are creating your project, change the project name and the profile name (preferably to the same value). Also, replace my_new_project at the models section with the new project name."
},
{
"code": null,
"e": 2354,
"s": 2069,
"text": "All objects in this project will inherit settings configured here unless overridden at the model level. For example, you can configure to create all models as tables in the dbt_project.yml file. But you can go into one of the models and override that setting to create a view instead."
},
{
"code": null,
"e": 2634,
"s": 2354,
"text": "Where to configure what depend on your project setup, but a good principle to follow is DRY (do not repeat yourself). For settings that apply to the whole project or a specific folder, define here in this file. For options that are only relevant to one model, configure it there."
},
{
"code": null,
"e": 2744,
"s": 2634,
"text": "For a complete list of settings, you can configure in this file, check out the dbt_project.yml documentation."
},
{
"code": null,
"e": 2944,
"s": 2744,
"text": "The models folder contains all data models in your project. Inside this folder, you can create any folder structure that you want. A recommended structure put out in the dbt style guide is as follow:"
},
{
"code": null,
"e": 3533,
"s": 2944,
"text": "├── dbt_project.yml└── models ├── marts | └── core | ├── intermediate | | ├── intermediate.yml | | ├── customers__unioned.sql | | ├── customers__grouped.sql | └── core.yml | └── core.docs | └── dim_customers.sql | └── fct_orders.sql └── staging └── stripe ├── base | ├── base__stripe_invoices.sql ├── src_stripe.yml ├── src_stripe.docs ├── stg_stripe.yml ├── stg_stripe__customers.sql └── stg_stripe__invoices.sql"
},
{
"code": null,
"e": 3771,
"s": 3533,
"text": "Here, we have the marts and staging folders underneath models. Different data sources will have separate folders underneath staging (e.g. stripe). Use cases or departments have different folders underneath marts (e.g. core or marketing)."
},
{
"code": null,
"e": 4002,
"s": 3771,
"text": "Notice the .yml and .doc files in the example above. They are where you would define metadata and documentation for your models. You can have everything in the dbt_project.yml file, but it is much more cleaner to define them here."
},
{
"code": null,
"e": 4088,
"s": 4002,
"text": "This structure allows you to organize objects clearly and easily apply bulk settings."
},
{
"code": null,
"e": 4267,
"s": 4088,
"text": "This folder contains all manual data that will be loaded to the database by dbt. To load the .csv files in this folder to the database, you will have to run the dbt seed command."
},
{
"code": null,
"e": 4561,
"s": 4267,
"text": "Since this is a version-controlled repository, do not put large files or files with sensitive information here. A file that is changed regularly is also not a good candidate for this approach. Some examples of a good fit for dbt seed are yearly budget, status mappings, category mappings, etc."
},
{
"code": null,
"e": 4821,
"s": 4561,
"text": "Macros in dbt are similar to functions in excel. You can define custom functions in the macros folder or override default macros and macros from a package. Macros, together with Jinja templating, gives you many functionalities that are not available with SQL."
},
{
"code": null,
"e": 5030,
"s": 4821,
"text": "For example, you can use a control structure (if statements and for loops) or different behavior for different targets. You can also abstract away complicated SQL logic and makes your code much more readable."
},
{
"code": null,
"e": 5166,
"s": 5030,
"text": "-- This is hard to read(amount / 100)::numeric(16, 2) as amount_usd-- This is much easier{{ cents_to_dollars('amount') }} as amount_usd"
},
{
"code": null,
"e": 5384,
"s": 5166,
"text": "One of the great things about dbt is that you can easily use packages that others have made. Doing so can save you tons of time, and you will not have to reinvent the wheel. You can find most of the dbt packages here."
},
{
"code": null,
"e": 5509,
"s": 5384,
"text": "To use any of these packages, you need to define them in the packages.yml file. Simply add content as following to the file."
},
{
"code": null,
"e": 5568,
"s": 5509,
"text": "packages: - package: dbt-labs/dbt_utils version: 0.7.3"
},
{
"code": null,
"e": 5724,
"s": 5568,
"text": "Before you can use these packages, you have to run dbt deps to install these dependencies. After that, you can use any macro from the packages in your cod."
},
{
"code": null,
"e": 5918,
"s": 5724,
"text": "Snapshot is a dbt feature to capture the state of a table at a particular time. The snapshot folder contains all snapshots models for your project, which must be separate from the model folder."
},
{
"code": null,
"e": 6232,
"s": 5918,
"text": "The use case for this is to build a slowly changing dimension (SCD) table for sources that do not support change data capture (CDC). For example, every time the status of an order change, your system overrides it with the new information. In this case, there we cannot know what historical statues that order had."
},
{
"code": null,
"e": 6485,
"s": 6232,
"text": "Snapshot offers a solution around this. If you capture the orders tables every day, you can keep the history of order status changes over time. For every record, dbt will determine whether it is the latest one and update the old one’s effective period."
},
{
"code": null,
"e": 6533,
"s": 6485,
"text": "Read more about the dbt snapshot function here."
},
{
"code": null,
"e": 6712,
"s": 6533,
"text": "Most of the tests in a dbt project are defined in a .yml file under models. These are tests that use pre-made or custom macros. These tests can be applied to a model or a column."
},
{
"code": null,
"e": 6929,
"s": 6712,
"text": "In the example above, when you run dbt test, dbt will check whether orer_id is unique and not_null, status are in the defined values, and every records of customer_id can be linked to a record in the customers table."
},
{
"code": null,
"e": 7212,
"s": 6929,
"text": "Sometimes though, these tests are not enough, and you need to write custom ones. In that case, you would store those custom tests in the tests folder. dbt will evaluate the SQL statement. The test will pass if no row is returned and failed if at least one or more rows are returned."
}
] |
Barrier Objects in Python
|
Barrier provides one of the python synchronization technique with which single or multiple threads wait until a point in a set of activities and make progress together.
To define a barrier object, “threading.Barrier” is used.
threading.Barrier(parties, action = None, timeout = None)
Where,
parties = Number of threads
action = called by one of the threads when they are released.
timeout = Default timeout value. In case no timeout value is specified for the wait(), this timeout value is used.
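As a quick illustration of the action and timeout parameters described above, here is a minimal sketch; it is not part of the original example, and the names all_arrived, finish_line and worker are purely illustrative.
from threading import Barrier, BrokenBarrierError, Thread

def all_arrived():
    # action: called by exactly one of the waiting threads once all of them arrive
    print("Every thread reached the barrier.")

finish_line = Barrier(3, action=all_arrived, timeout=5)

def worker(n):
    try:
        finish_line.wait()  # no timeout given, so the 5-second default applies
        print("Thread %d released" % n)
    except BrokenBarrierError:
        # raised if the barrier times out or is broken via abort()/reset()
        print("Thread %d saw a broken barrier" % n)

threads = [Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()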
The Barrier class mainly provides the wait(), reset() and abort() methods, along with the parties, n_waiting and broken attributes.
barrierThread.py
from random import randrange
from threading import Barrier, Thread
from time import ctime, sleep
num = 4
# 4 threads will need to pass this barrier to get released.
b = Barrier(num)
names = ['India', 'Japan', 'USA', 'China']
def player():
   name = names.pop()
   sleep(randrange(2, 5))
   print('%s reached the barrier at: %s \n' % (name, ctime()))
   b.wait()
threads = []
print("Race starts now...")
for i in range(num):
   threads.append(Thread(target=player))
   threads[-1].start()
"""
Below loop enables waiting for the threads to complete before moving on with the main script.
"""
for thread in threads:
   thread.join()
print("All Reached Barrier Point!")
Race starts now...
India reached the barrier at: Fri Jan 18 14:07:44 2019
China reached the barrier at: Fri Jan 18 14:07:44 2019
Japan reached the barrier at: Fri Jan 18 14:07:46 2019
USA reached the barrier at: Fri Jan 18 14:07:46 2019
All Reached Barrier Point!
|
[
{
"code": null,
"e": 1231,
"s": 1062,
"text": "Barrier provides one of the python synchronization technique with which single or multiple threads wait until a point in a set of activities and make progress together."
},
{
"code": null,
"e": 1289,
"s": 1231,
"text": "To define a barrier object, “threading. Barrier” is used."
},
{
"code": null,
"e": 1347,
"s": 1289,
"text": "threading.Barrier(parties, action = None, timeout = None)"
},
{
"code": null,
"e": 1354,
"s": 1347,
"text": "Where,"
},
{
"code": null,
"e": 1382,
"s": 1354,
"text": "parties = Number of threads"
},
{
"code": null,
"e": 1410,
"s": 1382,
"text": "parties = Number of threads"
},
{
"code": null,
"e": 1472,
"s": 1410,
"text": "action = called by one of the threads when they are released."
},
{
"code": null,
"e": 1534,
"s": 1472,
"text": "action = called by one of the threads when they are released."
},
{
"code": null,
"e": 1649,
"s": 1534,
"text": "timeout = Default timeout value. In case no timeout value is specified for the wait(), this timeout value is used."
},
{
"code": null,
"e": 1764,
"s": 1649,
"text": "timeout = Default timeout value. In case no timeout value is specified for the wait(), this timeout value is used."
},
{
"code": null,
"e": 1815,
"s": 1764,
"text": "Below mentioned methods are used by Barrier class."
},
{
"code": null,
"e": 1832,
"s": 1815,
"text": "barrierThread.py"
},
{
"code": null,
"e": 1843,
"s": 1832,
"text": " Live Demo"
},
{
"code": null,
"e": 2509,
"s": 1843,
"text": "from random import randrange\nfrom threading import Barrier, Thread\nfrom time import ctime, sleep\nnum = 4\n# 4 threads will need to pass this barrier to get released.\nb = Barrier(num)\nnames = ['India', 'Japan', 'USA', 'China']\ndef player():\n name = names.pop()\n sleep(randrange(2, 5))\n print('%s reached the barrier at: %s \\n' % (name, ctime()))\n b.wait()\nthreads = []\nprint(\"Race starts now...\")\nfor i in range(num):\n threads.append(Thread(target=player))\n threads[-1].start()\n\"\"\"\nBelow loop enables waiting for the threads to complete before moving on with the main script.\n\"\"\"\nfor thread in threads:\n thread.join()\nprint(\"All Reached Barrier Point!\")"
},
{
"code": null,
"e": 2773,
"s": 2509,
"text": "Race starts now...\nIndia reached the barrier at: Fri Jan 18 14:07:44 2019\nChina reached the barrier at: Fri Jan 18 14:07:44 2019\nJapan reached the barrier at: Fri Jan 18 14:07:46 2019\nUSA reached the barrier at: Fri Jan 18 14:07:46 2019\nAll Reached Barrier Point!"
}
] |
Build an GUI Application to Get Live Air Quality Information Using Python. - GeeksforGeeks
|
04 Dec, 2021
We are living in an era of modernization and industrialization. Our life becomes more and more convenient, but the problem of air pollution grows with time. This pollution makes us unhealthy; air is a lifeline for our life.
In this article, we are going to write python scripts to get live air quality information and bind it with GUI Application.
Modules Needed
bs4: Beautiful Soup(bs4) is a Python library for pulling data out of HTML and XML files. To install this type the command below in the terminal.
pip install bs4
requests: This allows you to send HTTP/1.1 requests very easily. To install this type the command below in the terminal.
pip install requests
Approach:
Extract data from the given URL. Copy the URL after selecting the desired location.
Scrape the data with the help of requests and Beautiful Soup module.
Convert that data into HTML code.
Find the required details and filter them.
Implementation:
Step 1: Import all the modules required
Python3
# import module
import requests
from bs4 import BeautifulSoup
Step 2: Create a URL get function
Python3
# link to extract html data
def getdata(url):
    r = requests.get(url)
    return r.text
Step 3: Now pass the URL into the getdata function and convert that data into HTML code. The URL used here is “https://weather.com/en-IN/forecast/air-quality/l/3dbed5c769584b3604a70d40a1a0a9f6ebc99c253d955b548f4978ca101eeca1”
Python3
# write the URL (from Step 3) inside getdata()
htmldata = getdata(
    "https://weather.com/en-IN/forecast/air-quality/l/3dbed5c769584b3604a70d40a1a0a9f6ebc99c253d955b548f4978ca101eeca1")
soup = BeautifulSoup(htmldata, 'html.parser')
result = soup.find_all("div", class_="styles__primaryPollutantGraphNumber__2WgP9")
result
Output:
[<div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">67</div>, <div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">22</div>, <div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">13</div>, <div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">30</div>, <div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">45</div>, <div class="styles__primaryPollutantGraphNumber__2WgP9" classname="styles__primaryPollutantGraphNumber__2WgP9">479</div>]
Step 4: Filter the data and check the air quality according to the given data:
Python3
# Traverse the air quality
for item in (soup.find_all("div", class_="styles__aqiGraphNumber__2R6Y9")):
    res_data = item.get_text()

# traverse the content
data = ''
for item in (soup.find_all("div", class_="styles__primaryPollutantGraphNumber__2WgP9")):
    data += item.get_text()
    data += " "
air_data = data.split(" ")
print("Air Quality :", res_data)
print("O3 level :", air_data[0])
print("NO2 level :", air_data[1])
print("SO2 level :", air_data[2])
print("PM2.5 level :", air_data[3])
print("PM10 level :", air_data[4])
print("co level :", air_data[5])
Output:
Air Quality : 85
O3 level : 67
NO2 level : 22
SO2 level : 13
PM2.5 level : 30
PM10 level : 45
co level : 479
Step 5: Now Analyze the Air Quality with the given data:
Python3
res = int(res_data)

if res <= 50:
    remark = "Good"
    impact = "Minimal impact"

elif res <= 100 and res >= 51:
    remark = "Satisfactory"
    impact = "Minor breathing discomfort to sensitive people"

elif res <= 200 and res >= 101:
    remark = "Moderate"
    impact = "Breathing discomfort to the people with lungs, asthma and heart diseases"

elif res <= 400 and res >= 201:
    remark = "Very Poor"
    impact = "Breathing discomfort to most people on prolonged exposure"

elif res <= 500 and res >= 401:
    remark = "Severe"
    impact = "Affects healthy people and seriously impacts those with existing diseases"

print(remark)
print(impact)
Output:
Satisfactory
Minor breathing discomfort to sensitive people
Application for live air quality information with Tkinter: this script wraps the above implementation in a GUI.
Python3
# import modules
from tkinter import *
import requests
from bs4 import BeautifulSoup

# link for extract html data
def getdata(url):
    r = requests.get(url)
    return r.text

def airinfo():
    htmldata = getdata(
        "https://weather.com/en-IN/forecast/air-quality/l/3dbed5c769584b3604a70d40a1a0a9f6ebc99c253d955b548f4978ca101eeca1")
    soup = BeautifulSoup(htmldata, 'html.parser')

    # Traverse the air quality
    for item in (soup.find_all("div", class_="styles__aqiGraphNumber__2R6Y9")):
        res_data = item.get_text()

    # traverse the content
    data = ''
    for item in (soup.find_all("div", class_="styles__primaryPollutantGraphNumber__2WgP9")):
        data += item.get_text()
        data += " "
    air_data = data.split(" ")

    ar.set(res_data)
    o3.set(air_data[0])
    no2.set(air_data[1])
    so2.set(air_data[2])
    pm.set(air_data[3])
    pml.set(air_data[4])
    co.set(air_data[5])

    res = int(res_data)
    if res <= 50:
        remark = "Good"
        impact = "Minimal impact"
    elif res <= 100 and res >= 51:
        remark = "Satisfactory"
        impact = "Minor breathing discomfort to sensitive people"
    elif res <= 200 and res >= 101:
        remark = "Moderate"
        impact = "Breathing discomfort to the people with lungs, asthma and heart diseases"
    elif res <= 400 and res >= 201:
        remark = "Very Poor"
        impact = "Breathing discomfort to most people on prolonged exposure"
    elif res <= 500 and res >= 401:
        remark = "Severe"
        impact = "Affects healthy people and seriously impacts those with existing diseases"

    res_remark.set(remark)
    res_imp.set(impact)

# object of tkinter
# and background set to grey
master = Tk()
master.configure(bg='light grey')

# Variable Classes in tkinter
air_data = StringVar()
ar = StringVar()
o3 = StringVar()
no2 = StringVar()
so2 = StringVar()
pm = StringVar()
pml = StringVar()
co = StringVar()
res_remark = StringVar()
res_imp = StringVar()

# Creating label for each information
# name using widget Label
Label(master, text="Air Quality : ", bg="light grey").grid(row=0, sticky=W)
Label(master, text="O3 (μg/m3) :", bg="light grey").grid(row=1, sticky=W)
Label(master, text="NO2 (μg/m3) :", bg="light grey").grid(row=2, sticky=W)
Label(master, text="SO2 (μg/m3) :", bg="light grey").grid(row=3, sticky=W)
Label(master, text="PM2.5 (μg/m3) :", bg="light grey").grid(row=4, sticky=W)
Label(master, text="PM10 (μg/m3) :", bg="light grey").grid(row=5, sticky=W)
Label(master, text="CO (μg/m3) :", bg="light grey").grid(row=6, sticky=W)

Label(master, text="Remark :", bg="light grey").grid(row=7, sticky=W)
Label(master, text="Possible Health Impacts :", bg="light grey").grid(row=8, sticky=W)

# Creating label for class variable
# name using widget Entry
Label(master, text="", textvariable=ar, bg="light grey").grid(row=0, column=1, sticky=W)
Label(master, text="", textvariable=o3, bg="light grey").grid(row=1, column=1, sticky=W)
Label(master, text="", textvariable=no2, bg="light grey").grid(row=2, column=1, sticky=W)
Label(master, text="", textvariable=so2, bg="light grey").grid(row=3, column=1, sticky=W)
Label(master, text="", textvariable=pm, bg="light grey").grid(row=4, column=1, sticky=W)
Label(master, text="", textvariable=pml, bg="light grey").grid(row=5, column=1, sticky=W)
Label(master, text="", textvariable=co, bg="light grey").grid(row=6, column=1, sticky=W)
Label(master, text="", textvariable=res_remark, bg="light grey").grid(row=7, column=1, sticky=W)
Label(master, text="", textvariable=res_imp, bg="light grey").grid(row=8, column=1, sticky=W)

# creating a button using the widget
b = Button(master, text="Check", command=airinfo, bg="Blue")
b.grid(row=0, column=2, columnspan=2, rowspan=2, padx=5, pady=5,)

mainloop()
Output:
|
[
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n04 Dec, 2021"
},
{
"code": null,
"e": 24121,
"s": 23901,
"text": "We are living in a modernization and industrialization era. Our life becomes more and more convenient. But the problem is Air Pollution arise with time. This Pollution makes us unhealthy, Air is a Lifeline for our life."
},
{
"code": null,
"e": 24245,
"s": 24121,
"text": "In this article, we are going to write python scripts to get live air quality information and bind it with GUI Application."
},
{
"code": null,
"e": 24260,
"s": 24245,
"text": "Modules Needed"
},
{
"code": null,
"e": 24405,
"s": 24260,
"text": "bs4: Beautiful Soup(bs4) is a Python library for pulling data out of HTML and XML files. To install this type the command below in the terminal."
},
{
"code": null,
"e": 24422,
"s": 24405,
"text": "pip install bs4 "
},
{
"code": null,
"e": 24543,
"s": 24422,
"text": "requests: This allows you to send HTTP/1.1 requests very easily. To install this type the command below in the terminal."
},
{
"code": null,
"e": 24565,
"s": 24543,
"text": "pip install requests "
},
{
"code": null,
"e": 24575,
"s": 24565,
"text": "Approach:"
},
{
"code": null,
"e": 24656,
"s": 24575,
"text": "Extract data form given URL. Copy the URL, after selecting the desired location."
},
{
"code": null,
"e": 24725,
"s": 24656,
"text": "Scrape the data with the help of requests and Beautiful Soup module."
},
{
"code": null,
"e": 24759,
"s": 24725,
"text": "Convert that data into HTML code."
},
{
"code": null,
"e": 24802,
"s": 24759,
"text": "Find the required details and filter them."
},
{
"code": null,
"e": 24857,
"s": 24802,
"text": "Implementation:Step 1: Import all the modules required"
},
{
"code": null,
"e": 24865,
"s": 24857,
"text": "Python3"
},
{
"code": "# import moduleimport requestsfrom bs4 import BeautifulSoup",
"e": 24925,
"s": 24865,
"text": null
},
{
"code": null,
"e": 24960,
"s": 24925,
"text": "Step 2: Create a URL get function "
},
{
"code": null,
"e": 24968,
"s": 24960,
"text": "Python3"
},
{
"code": "# link to extract html data def getdata(url): r=requests.get(url) return r.text",
"e": 25058,
"s": 24968,
"text": null
},
{
"code": null,
"e": 25284,
"s": 25058,
"text": "Step 3: Now pass the URL into the getdata function and convert that data into HTML code. The URL used here is “https://weather.com/en-IN/forecast/air-quality/l/3dbed5c769584b3604a70d40a1a0a9f6ebc99c253d955b548f4978ca101eeca1”"
},
{
"code": null,
"e": 25292,
"s": 25284,
"text": "Python3"
},
{
"code": "htmldata = getdata(# write the URL )soup = BeautifulSoup(htmldata, 'html.parser')result = (soup.find_all(\"div\", class_=\"styles__primaryPollutantGraphNumber__2WgP9\"))result",
"e": 25464,
"s": 25292,
"text": null
},
{
"code": null,
"e": 25472,
"s": 25464,
"text": "Output:"
},
{
"code": null,
"e": 26214,
"s": 25472,
"text": "[<div class=”styles__primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>67</div>, <div class=”styles__primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>22</div>, <div class=”styles__primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>13</div>, <div class=”styles_N_primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>30</div>, <div class=”styles__primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>45</div>, <div class=”styles__primaryPollutantGraphNumber__2WgP9′′ classname=”styles__primaryPollutantGraphNumber__2WgP9′′>479</div>] "
},
{
"code": null,
"e": 26296,
"s": 26214,
"text": "Step 4: Filter your data and Check your Air Quality according to the given data :"
},
{
"code": null,
"e": 26304,
"s": 26296,
"text": "Python3"
},
{
"code": "# Traverse the air qualityfor item in (soup.find_all(\"div\", class_=\"styles__aqiGraphNumber__2R6Y9\")): res_data = item.get_text() # traverse the contentdata = ''for item in (soup.find_all(\"div\", class_=\"styles__primaryPollutantGraphNumber__2WgP9\")): data += item.get_text() data += \" \"air_data = data.split(\" \")print(\"Air Quality :\", res_data)print(\"O3 level :\", air_data[0])print(\"NO2 level :\", air_data[1])print(\"SO2 level :\", air_data[2])print(\"PM2.5 level :\", air_data[3])print(\"PM10 level :\", air_data[4])print(\"co level :\", air_data[5])",
"e": 26855,
"s": 26304,
"text": null
},
{
"code": null,
"e": 26863,
"s": 26855,
"text": "Output:"
},
{
"code": null,
"e": 26972,
"s": 26863,
"text": "Air Quality : 85\nO3 level : 67\nNO2 level : 22\nSO2 level : 13\nPM2.5 level : 30\nPM10 level : 45\nco level : 479"
},
{
"code": null,
"e": 27029,
"s": 26972,
"text": "Step 5: Now Analyze the Air Quality with the given data:"
},
{
"code": null,
"e": 27037,
"s": 27029,
"text": "Python3"
},
{
"code": "res = int(res_data) if res <= 50: remark = \"Good\" impact = \"Minimal impact\" elif res <= 100 and res > 51: remark = \"Satisfactory\" impact = \"Minor breathing discomfort to sensitive people\" elif res <= 200 and res >= 101: remark = \"Moderate\" impact = \"Breathing discomfort to the people with lungs, asthma and heart diseases\" elif res <= 400 and res >= 201: remark = \"Very Poor\" impact = \"Breathing discomfort to most people on prolonged exposure\" elif res <= 500 and res >= 401: remark = \"Severe\" impact = \"Affects healthy people and seriously impacts those with existing diseases\" print(remark)print(impact)",
"e": 27695,
"s": 27037,
"text": null
},
{
"code": null,
"e": 27703,
"s": 27695,
"text": "Output:"
},
{
"code": null,
"e": 27763,
"s": 27703,
"text": "Satisfactory\nMinor breathing discomfort to sensitive people"
},
{
"code": null,
"e": 27886,
"s": 27763,
"text": "Application for the live Air Quality information with Tkinter: This Script implements the above Implementation into a GUI."
},
{
"code": null,
"e": 27894,
"s": 27886,
"text": "Python3"
] |
Searching an element in a sorted array | Practice | GeeksforGeeks
|
Given an array arr[] sorted in ascending order of size N and an integer K. Check if K is present in the array or not.
Example 1:
Input:
N = 5, K = 6
arr[] = {1,2,3,4,6}
Output: 1
Explanation: Since 6 is present in
the array at index 4 (0-based indexing),
output is 1.
Example 2:
Input:
N = 5, K = 2
arr[] = {1,3,4,5,6}
Output: -1
Explanation: Since 2 is not present
in the array, output is -1.
Your Task:
You don't need to read input or print anything. Complete the function searchInSorted() which takes the sorted array arr[], its size N and the element K as input parameters and returns 1 if K is present in the array, else it returns -1.
Expected Time Complexity: O(Log N)
Expected Auxiliary Space: O(1)
Constraints:
1 <= N <= 10^6
1 <= K <= 10^6
1 <= arr[i] <= 10^6
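For reference, a minimal binary-search sketch in Python that meets the expected O(Log N) bound (the submission template's Solution class wrapper is omitted here):

def searchInSorted(arr, N, K):
    # Standard binary search over the ascending array
    low, high = 0, N - 1
    while low <= high:
        mid = low + (high - low) // 2
        if arr[mid] == K:
            return 1
        elif arr[mid] < K:
            low = mid + 1
        else:
            high = mid - 1
    return -1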
0
akasksingh080 1 week ago
static int searchInSorted(int arr[], int N, int K)
{
    // Your code here
    int low = 0;
    int high = N - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (arr[mid] == K)
            return 1;
        if (arr[mid] > K)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}
+1
kartikeyashokgautam 2 weeks ago
JAVA Solution:
static int searchInSorted(int arr[], int N, int K) {
    int low = 0;
    int high = N - 1;
    int res = -1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (K > arr[mid]) {
            low = mid + 1;
        } else if (K < arr[mid]) {
            high = mid - 1;
        } else {
            return 1;
        }
    }
    return -1;
}
0
snehalgadge 3 weeks ago
Java Solution
class Solution {
    static int searchInSorted(int arr[], int N, int K) {
        int lb = 0;
        int ub = N - 1;
        while (lb <= ub) {
            int mid = lb + (ub - lb) / 2;
            if (arr[mid] == K) {
                return 1;
            } else if (arr[mid] > K) {
                ub = mid - 1;
            } else {
                lb = mid + 1;
            }
        }
        return -1;
    }
}
+2
mohammadtanveer7540 3 weeks ago
int searchInSorted(int arr[], int N, int K)
{
    int low = 0, high = N - 1, mid;
    while (high - low > 1) {
        mid = (low + high) / 2;
        if (arr[mid] < K)
            low = mid + 1;
        else
            high = mid;
    }
    if (arr[low] == K)
        return 1;
    else if (arr[high] == K)
        return 1;
    return -1;
}
-1
amoghyelasangi 4 weeks ago
ONLY SOME TEST CASES HAVE PASSED
def searchInSorted(self, arr, N, K):
    for i in range(len(arr)):
        if K in arr:
            return arr[i]
        else:
            return -1
0
yoshis 1 month ago
Solution in Java (least time complexity)
class Solution {
    static int searchInSorted(int arr[], int N, int K) {
        int start = 0;
        int end = N - 1;
        // Your code here
        int mid;
        while (start <= end) {
            mid = start + (end - start) / 2;
            if (arr[mid] == K) {
                return 1;
            } else {
                if (arr[mid] > K) {
                    end = mid - 1;
                } else {
                    start = mid + 1;
                }
            }
        }
        return -1;
    }
}
0
kumararchitkumar483 1 month ago
CPP Solution
int searchInSorted(int arr[], int N, int K) {
    int start = 0, end = N - 1;
    int mid = start + (end - start) / 2;
    while (end >= start) {
        if (arr[mid] == K) {
            return 1;
        } else if (arr[mid] < K) {
            start = start + 1;
        } else {
            end = end - 1;
        }
        mid = start + (end - start) / 2;
    }
    return -1; // not found
}
0
amiransarimy 1 month ago
Python Solutions
def searchInSorted(self, arr, N, K):
    low = 0
    high = N - 1
    while low <= high:
        mid = low + (high - low) // 2
        if arr[mid] == K:
            return 1
        elif K < arr[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1
0
vishal824115 1 month ago
212 cases passed, Java
class Solution {
    static int searchInSorted(int arr[], int N, int K) {
        for (int i = 0; i < N; i++) {
            if (arr[i] == K) {
                return 1;
            }
        }
        return -1;
    }
}
0
0niharika2 2 months ago
int searchInSorted(int arr[], int N, int K)
{
    int lo = 0, hi = N - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2; // dynamic
        if (arr[mid] == K) return 1;
        if (K < arr[mid]) hi = mid - 1;
        if (K > arr[mid]) lo = mid + 1;
    }
    return -1;
}
Java Examples - Array Compare
|
How to sort an array and search an element inside it?
The following example shows how to use the sort() and binarySearch() methods to accomplish the task. The user-defined method printArray() is used to display the output −
import java.util.Arrays;

public class MainClass {
   public static void main(String args[]) throws Exception {
      int array[] = { 2, 5, -2, 6, -3, 8, 0, -7, -9, 4 };
      Arrays.sort(array);
      printArray("Sorted array", array);

      int index = Arrays.binarySearch(array, 2);
      System.out.println("Found 2 @ " + index);
   }
   private static void printArray(String message, int array[]) {
      System.out.println(message + ": [length: " + array.length + "]");
      for (int i = 0; i < array.length; i++) {
         if (i != 0) {
            System.out.print(", ");
         }
         System.out.print(array[i]);
      }
      System.out.println();
   }
}
The above code sample will produce the following result.
Sorted array: [length: 10]
-9, -7, -3, -2, 0, 2, 4, 5, 6, 8
Found 2 @ 5
How to compare Two arrays?
public class HelloWorld {
   public static void main (String[] args) {
      int arr1[] = {1, 2, 3};
      int arr2[] = {1, 2, 3};

      if (arr1 == arr2) System.out.println("Same");
      else System.out.println("Not same");
   }
}
The above code sample will produce the following result.
Not same
Another sample example of Array compare
import java.util.Arrays;

public class HelloWorld {
   public static void main (String[] args) {
      int arr1[] = {1, 2, 3};
      int arr2[] = {1, 2, 3};

      if (Arrays.equals(arr1, arr2)) System.out.println("Same");
      else System.out.println("Not same");
   }
}
The above code sample will produce the following result.
Same
How to Remove Hyperlinks in Excel? - GeeksforGeeks
|
16 Dec, 2021
The HYPERLINK function generates a shortcut that opens a document saved on a server or the Internet, or moves to another position in the existing workbook. Excel moves to the specified URL or displays the file when you click a cell that includes a HYPERLINK function. The syntax for a hyperlink function is as follows:
Syntax:
HYPERLINK(link_location, [friendly_name])
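For example, entering =HYPERLINK("https://www.geeksforgeeks.org", "GfG") in a cell displays the friendly name GfG and opens the linked page when the cell is clicked (the URL here is only an illustration).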
Difference between clear hyperlinks and remove hyperlinks:
Clear Hyperlinks: The hyperlinks in the selected cells will be deleted. The formatting will not be cleared.
Remove Hyperlinks: The hyperlinks and formatting in the chosen cells should be removed.
You can eliminate a single hyperlink, many hyperlinks in one go, disable automatic hyperlinks, and disable the need to use Ctrl to follow a hyperlink.
This article will provide you with different methods to remove the hyperlinks in Excel.
To remove a hyperlink but keep the text, simply right-click on the cell which has the hyperlink and click the Remove Hyperlink option.
You can go through the procedure again to remove hyperlinks from additional cells. Furthermore, you may delete hyperlinks from several cells at once by selecting them together.
Follow the below steps to remove hyperlinks using the Edit Hyperlink window:
Step 1: Select the target cell with the right mouse button.
Step 2: Then select “Edit Hyperlink” from the drop-down menu.
Step 3: Press the “Remove Link” button in the “Edit Hyperlink” window. Keep in mind that the hyperlink in the “Address” text box should not be cleared. Otherwise, the “OK” button will be inactive.
The popup will vanish as soon as you click the “Remove Link” button. In addition, the cell’s hyperlink has been erased. Unlike the previous way, you cannot delete the hyperlinks for various cells at the same time. To totally remove the hyperlink, select it and then press the Delete key.
Follow the below steps to remove hyperlinks from the Toolbar:
Step 1: Choose the desired cell. To pick a cell, use the arrow keys on your keyboard or click and hold the mouse button.
Step 2: Next, on the toolbar, click the “Clear” button.
Step 3: Then there are 2 choices related to hyperlinks in the drop-down menu.
Step 4: The hyperlink will be erased from the cell if you select “Clear Hyperlinks” from the menu. The formatting, though, will not change. A little button will also be shown close to the cell. When you click the button, a menu with two alternatives appears, letting you choose whether or not to clear the formatting as well.
Step 5: The hyperlink will be deleted immediately if you select “Remove Hyperlinks” from the menu.
In this case the formatting is removed at the same time. That is the distinction between the two options beneath the “Clear” button: use “Clear Hyperlinks” when you want to keep the cell format. You can even remove hyperlinks for several cells at once with this method, and if you choose to leave the format alone, the formatting of all the cells will remain unchanged.
If you used the HYPERLINK function to establish the hyperlink, you can remove it by following these steps:
Step 1: Choose the cell that contains the hyperlink.
Step 2: To copy the hyperlink, use CTRL+C.
Step 3: By using the Values paste option, right-click and paste.
Follow the below steps to remove all hyperlinks at the same time in Excel 2010 or later versions:
Step 1: Click all cells with hyperlinks or select all cells by using Ctrl+A.
Step 2: Remove Hyperlinks by right-clicking and selecting Remove Hyperlinks.
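If you would rather strip hyperlinks programmatically instead of through the Excel interface, the sketch below uses the Python openpyxl library; the file and sheet chosen here are assumptions for illustration, not part of the steps above.

from openpyxl import load_workbook

# Open the workbook and take the active sheet (the file name is an example)
wb = load_workbook("report.xlsx")
ws = wb.active

# Clear the hyperlink on every cell while keeping the cell text
for row in ws.iter_rows():
    for cell in row:
        if cell.hyperlink is not None:
            cell.hyperlink = None

wb.save("report_without_links.xlsx")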
We will advise you to be patient with Excel as it may glitch sometimes and not give you the desired output right away. It is good if you just close the window and reopen it.
How to randomly select a fixed number of rows without replacement from a data frame in R?
|
This can be done simply by using the sample function.
> df = data.frame(matrix(rnorm(20), nrow=5))
> df
X1 X2 X3 X4
1 -0.3277833 -0.1810403 0.2844406 -2.9676440
2 0.8262923 0.4334449 0.4031084 -1.9278049
3 -0.1769219 -0.1583660 -0.2829540 -0.1962654
4 1.0357773 0.9326049 0.3250011 -1.8835882
5 -1.0682642 -0.6589731 -0.4783144 -0.2945062
Let’s say we want to select 3 rows randomly then it can be done as follows −
> df[sample(nrow(df), 3), ]
X1 X2 X3 X4
2 0.8262923 0.4334449 0.4031084 -1.9278049
1 -0.3277833 -0.1810403 0.2844406 -2.9676440
5 -1.0682642 -0.6589731 -0.4783144 -0.2945062
Count the number of paragraph tag using BeautifulSoup - GeeksforGeeks
|
07 Apr, 2021
Sometimes, while extracting data from an HTML webpage, you may want to know how many paragraph tags are used in a given HTML document. This article discusses how to find out.
print(len(soup.find_all("p")))
Step 1: First, import the libraries, BeautifulSoup, and os.
from bs4 import BeautifulSoup as bs
import os
Step 2: Now, remove the last segment of the path by entering the name of the Python file in which you are currently working.
base=os.path.dirname(os.path.abspath(‘#Name of Python file in which you are currently working’))
Step 3: Then, open the HTML file from which you want to read the value.
html=open(os.path.join(base, ‘#Name of HTML file from which you wish to read value’))
Step 4: Moreover, parse the HTML file in BeautifulSoup.
soup=bs(html, 'html.parser')
Step 5: Next, print a certain line if you want to.
print("Number of paragraph tags:")
Step 6: Finally, calculate and print the number of paragraph tags in the HTML document.
print(len(soup.find_all("p")))
Example 1
Let us consider the simple HTML webpage, which has numerous paragraph tags.
HTML
<!DOCTYPE html>
<html>
<head>
    Geeks For Geeks
</head>
<body>
    <div>
        <p>King</p>
        <p>Prince</p>
        <p>Queen</p>
    </div>
    <p id="vinayak">Princess</p>
</body>
</html>
For finding the number of paragraph tags in the above HTML webpage, implement the following code.
Python
# Python program to get number of paragraph tags
# of a given HTML document in Beautifulsoup

# Import the libraries beautifulsoup
# and os
from bs4 import BeautifulSoup as bs
import os

# Open the HTML file
html = open('gfg.html')

# Parse HTML file in Beautiful Soup
soup = bs(html, 'html.parser')

# Print a certain line
print("Number of paragraph tags:")

# Calculating and printing the
# number of paragraph tags
print(len(soup.find_all("p")))
Output:
Example 2
In the below program, we will find the number of paragraph tags on a particular website.
Python
# Python program to get number of paragraph tags
# of a given Website in Beautifulsoup

# Import the libraries beautifulsoup,
# os and requests
from bs4 import BeautifulSoup as bs
import os
import requests

# Assign URL
URL = 'https://www.geeksforgeeks.org/'

# Page content from Website URL
page = requests.get(URL)

# Parse HTML file in Beautiful Soup
soup = bs(page.content, 'html.parser')

# Print a certain line
print("Number of paragraph tags:")

# Calculating and printing the
# number of paragraph tags
print(len(soup.find_all("p")))
Output:
Compare two char arrays in a single line in Java
|
Two char arrays can be compared in Java using the java.util.Arrays.equals() method. This method returns true if the arrays are equal and false otherwise. The two arrays are equal if they contain the same number of elements in the same order.
A program that compares two char arrays using the Arrays.equals() method is given as follows −
import java.util.Arrays;
public class Demo {
   public static void main(String[] argv) throws Exception {
      boolean flag = Arrays.equals(new char[] { 'a', 'b', 'c' }, new char[] { 'a', 'b', 'c' });
      System.out.println("The two char arrays are equal? " + flag);
   }
}
The two char arrays are equal? true
Now let us understand the above program.
The Arrays.equals() method is used to compare two char arrays. If they are equal then true is stored in flag and if they are not equal then false is stored in flag. The value of flag is displayed. A code snippet which demonstrates this is as follows −
boolean flag = Arrays.equals(new char[] { 'a', 'b', 'c' }, new char[] { 'a', 'b', 'c' });
System.out.println("The two char arrays are equal? " + flag);
Developing Good Twitter Data Visualizations using Matplotlib | by Arief Anbiya | Towards Data Science
|
In this article, we will learn about how to collect Twitter data and create interesting visualizations in Python. We will briefly explore about how to collect tweets using Tweepy and we will mostly explore about the various Data Visualization techniques for the Twitter data using Matplotlib. Before that, Data Visualization and the overall statistical process that enables it will be explained.
First, what is Data Visualization? It is a pictorial or visual representation of data. We have seen it in newspapers, news media, sports analysis, research papers, and sometimes in ads. The common examples of data visualization models are: Line Graph, Scatter Plot, Pie Chart, and Bar Chart (it is easy to make visualizations using these models).
Creating Data Visualization models is important, why?
- A picture can speak a thousand words. A graph that represents data of many rows can give a big picture of the data and can reveal patterns contained in the data.
- Data visualization is part of statistical analysis. Statistics has many applications in various areas.
- Developing new algorithms can train your creativity and problem-solving abilities. Matplotlib has its own tools for plotting data, but we may not restrict ourselves to it in order to have more models and more variety for the visualizations.
- Different models can give different perspectives about the data.
The statistical process (P-C-A-I)
The process follows a simple flow. The flow starts with Posing the question, for example: “What does the @NatGeo account tend to post?”. After we have the question in mind, we perform appropriate data Collection. In the collection phase there are many ways to obtain Twitter data, but here we pick Tweepy. After the collection we enter the Analysis part. This is where we should choose the appropriate mathematical or statistical method for analyzing the data. One way of analyzing the data is through data visualization. Finally, the last phase is to Interpret the result. Notice that we may end up getting another question to answer, which means the flow could be cyclic.
We have briefly discussed data visualization and the statistical process that enables it. For our investigation, we already know that the questions to answer will be related to Twitter data, hence we will see the data collection first before defining the questions specifically.
Collecting Twitter data
We can collect a relatively large amount of Twitter data automatically using the Tweepy library in Python. It is an external Python library for accessing the Twitter API. In order to make the library useful, we must first create a Twitter Application for our Twitter account. After the registration, we will obtain private information: consumer key, consumer secret, access secret, and access token. This information is required to access our Twitter API using Tweepy.
An example of creating our Twitter API object in Python:
import tweepy

consumer_key = 'ecGxfboL66oO2ZwxfKkg7q3QK'
consumer_secret = 'exVRiv517gdwkPLP19PtlQMEIRjxgJr21JZCAAQYIqJCUW5vmh'
access_token = '3151279508-Ywd662Zv97Ie7E7I97dUm0e3s2X8yYBloJQd6Gr'
access_secret = 'BH5REW4V7RdGadMr31NLY9ksFypG12m8BR04S32ZF7jO3'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
our_api = tweepy.API(auth)
Now we can start using the Twitter API through our_api .
We can directly collect tweets from our home timeline by applying the method our_api.home_timeline() , which collects 20 most recent tweets by default (including retweeted tweets). To adjust the desired number of tweets (take 100 tweets for example), use our_api.home_timeline(count = 100). To collect tweets from a particular account (take @NatGeo for example), use the method our_api.user_timeline(screen_name = 'NatGeo', count = 100).
One weakness of the above methods is that it can only collect up to 200 tweets. To overcome this, we may use the tweepy.Cursor class. To collect 2000 @NatGeo tweets and then saving the texts, number of retweets, and number of likes inside a list, we can perform as below.
natgeo_cursor = tweepy.Cursor(our_api.user_timeline, screen_name='NatGeo')
natgeo_tweets = [(tweet.text, tweet.retweet_count, tweet.favorite_count)
                 for tweet in natgeo_cursor.items(2000)]
Each item in natgeo_cursor.items(n) is a tweepy.models.Status object from which we can obtain the tweet data such as text, author information (in the form of a tweepy.models.User object), time of creation, number of retweets, number of likes, and media.
Result:
>>> natgeo_tweets[0]
("As Chernobyl's gray wolf population increases, their influence on the surrounding environment is called into questi... https://t.co/elsb4FJQGZ", 428, 1504)
>>> natgeo_tweets[1]
('1,058 temples in the south Indian state of Kerala that have pledged to eliminate plastic this year https://t.co/ltJ6mFIWpV', 268, 985)
So far, we have seen how we can use Tweepy to collect actual Twitter data. Now let’s be curious and visualize them using Matplotlib.
*Instead of the above result, we will use various sets of Twitter data for the visualization.
Minimum Data Visualization
To start this, let us see an example of plotting the username frequency of tweets in our home timeline. The username frequency is the number of tweets from a specific username that appears in our data. The data will consist of 200 tweets from the home timeline. Below is the code sample followed by the bar plot result.
import matplotlib.pyplot as plt

home_cursor = tweepy.Cursor(our_api.home_timeline)
tweets = [i.author.screen_name for i in home_cursor.items(200)]
unq = set(tweets)
freq = {uname: tweets.count(uname) for uname in unq}
plt.bar(range(len(unq)), list(freq.values()))
plt.show()
As you can see, the plot shows that there were 42 different Twitter accounts tweeting in the timeline, and there is only one account that tweets the most (41 tweets). In case you want to know, the most frequent account is @CNNIndonesia. But we have to admit that we cannot see the usernames that appeared in the data, there is not much information can be collected from the bar chart (although it is still better than no visualization at all). That is why we should learn to develop our own algorithms for modifying data visualizations on top of Matplotlib (which can lead to various models).
What can we learn from the attempt above?
According to David McCandless, the author of www.informationisbeautiful.net, there are 4 key elements that make a good data visualization: Information, Story (concept), Goal, and Visual Form (metaphor). Information is the data; the keywords for this element are accuracy, consistency, and honesty. Story (concept) is what makes the data vis interesting. Goal is the usefulness and efficiency of the data vis. Visual Form (metaphor) is the beauty and appearance.
The first bar chart above surely has the Information element, the data is accurate. It also has a bit of Goal and minimum Visual Form (the bars). But it does not have Story (concept) at all (only the creator of the data vis that knows the story behind the plot), it should at least inform the viewers that it is about Twitter data.
To improve our visualization, we will apply the 4 key elements. But to make it interesting, we will only use Matplotlib in Python.
In the next sections, we will see some questions or objectives with respect to various Twitter data and also perform the data visualizations.
Case 1. Objective: accounts comparison, Model: modified horizontal bar chart
In this case, we will use a dataset of 192 tweets that I collected from my home timeline a few months ago. The data will be transformed such that we can group the tweets by account, and then we will compare the numbers through a data visualization. It is important to note that we will develop a good data visualization that combines various Matplotlib features, not restricting ourselves to the plotting models it already has.
The modules that we will use in Python are:
import numpy
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
import operator
import itertools
The code for data representation is shown below. It will group the tweets by username and then count the tweets count (also can be seen as username frequency) and the number of followers for each username. It will also sort the data by the username frequency.
home_tweets = list(numpy.load('testfile.npy'))
unicity_key = operator.attrgetter('author.screen_name')
tweets = sorted(home_tweets, key=unicity_key)
authors_tweets = {}
followers_count = {}
sample_count = {}

for screen_name, author_tweets in itertools.groupby(tweets, key=unicity_key):
    author_tweets = list(author_tweets)
    author = author_tweets[0].author
    authors_tweets[screen_name] = author
    followers_count[screen_name] = author.followers_count
    sample_count[screen_name] = len(author_tweets)

maxfolls = max(followers_count.values())
sorted_screen_name = sorted(sample_count.keys(), key=lambda x: sample_count[x])
sorted_folls = [followers_count[i] for i in sorted_screen_name]
sorted_count = [sample_count[i] for i in sorted_screen_name]
Next, we could simply produce the horizontal bar plot with plt.barh(range(len(sorted_count)), sorted_count) and then set the titles for the figure and x-y axis, but let us improvise. We will add an additional visual form through a color transparency difference: for example, the transparency of the bars may represent the number of followers of the accounts. We will also change the tick labels on the y axis from numbers to strings of usernames. Here is the code:
colors = [(0.1, 0.1, 1, i/maxfolls) for i in sorted_folls]
fig, ax = plt.subplots(1, 1)
y_pos = range(len(colors))
ax.set_yticks(y_pos)
ax.set_yticklabels(sorted_screen_name, font_properties=helv_8)
ax.barh(y_pos, sorted_count,
        height=0.7, left=0.5,
        color=colors)
for i in y_pos:
    ax.text(sorted_count[i]+1, i, style_the_num(str(sorted_folls[i])),
            font_properties=helv_10, color=colors[i])
ax.set_ylim(-1, y_pos[-1]+1+0.7)
ax.set_xlim(0, max(sorted_count) + 8)
ax.set_xlabel('Tweets count', font_properties=helv_14)
plt.tight_layout(pad=3)
fig.show()
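The style_the_num helper used above is not shown in the article; a minimal sketch of what it could look like (my assumption, not the author's code) is:

def style_the_num(num_string):
    # Insert thousands separators, e.g. '22441917' -> '22,441,917'
    return '{:,}'.format(int(num_string))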
Notice that we also change the font style, and improvise for the strings of followers count (mapping 22441917 to 22,441,917) using the style_the_num function. Below is the plot result:
It is clear that the plot above shows more information and looks better than the first plot. The visual effects also help to explain the data. The plot also has more value on the Story (concept) element, it looks quite alive (we don’t need to put much effort to understand the visualization).
*Sorry, but maybe the plot should have a text explaining the meaning of the blue color transparency (which is the number of followers of the accounts).
Case 2. Objective: the variety of images posted by an account and the audience's preference, Model: Image Plot
In this case, we will use a dataset of 1000 @NatGeo tweets (collected on 19 Sept 2018). Each of the 1000 tweets may or may not have an image. So to achieve the objective of this case, we must first collect all images that exist in the data. After that, an Image plot will be introduced. The Python implementation is done by creating a PlotDecorator class, then using an instance of it to generate the image plot. The PlotDecorator class has the methods gen_img(prev_img, next_img), put_below(left_img, next_img), and generate(imgs, ncol). An explanation of the algorithm:
- gen_img(prev_img, next_img): It plots the next image tightly to the right side of the previous image. The next_img parameter is a matplotlib.image.AxesImage object, while prev_img is a dictionary containing the AxesImage object of the left image. The function sets the next image's properties so that it will be positioned next to the previous image (the one on the left) using the next_img.get_extent(...) and next_img.set_extent(...) methods.
- put_below(left_img, next_img): Similar to the first method explained above, but the next image is positioned below the leftmost image of the row. This is applied because each row can only hold ncol images.
- generate(imgs, ncol): This is where we generate the new positions of the images. imgs is a list of AxesImage objects that we want to plot. For example, if ncol=10, this function will apply gen_img(prev_img, next_img) 10 times, then apply put_below(left_img, next_img), and then start over again for the new row of images. A simplified layout sketch follows below.
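Below is a simplified, self-contained sketch of the same layout idea: placing every image at a computed grid-cell extent on a single Axes. It is my own illustration under assumed square cells and local file paths, not the author's PlotDecorator class.

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def plot_image_grid(image_paths, ncol=10, cell=1.0, pad=0.05):
    # Lay the images out row by row by giving each one its own extent
    fig, ax = plt.subplots(figsize=(10, 10))
    for k, path in enumerate(image_paths):
        row, col = divmod(k, ncol)
        left = col * (cell + pad)
        bottom = -row * (cell + pad)  # rows grow downward
        ax.imshow(mpimg.imread(path),
                  extent=(left, left + cell, bottom, bottom + cell))
    nrow = (len(image_paths) + ncol - 1) // ncol
    ax.set_xlim(-pad, ncol * (cell + pad))
    ax.set_ylim(-nrow * (cell + pad), cell + pad)
    ax.set_aspect('equal')
    ax.axis('off')
    return fig, ax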
After collecting and positioned the images neatly, we count the number of retweets of each image and then only show auxiliary decoration for the top 3 with highest retweet count. The decoration is only a purple circle with no fill surrounding the image (the transparency gives clue about the rank). Here is the result, it starts from full image, then zoomed in, and then zoomed in again for the most retweeted image:
Let us rate the data visualization above. It is clear that it has a decent value (not bad) of Visual form (metaphor) and the concept is quite nice (it is interesting). The information we get are the top 3 images on highest retweet count and the pattern of the images (all images are great photos, there is no images of posters, quotes, etc).
*Again, maybe the plot should have a short paragraph that explains the meaning of the plot.
Case 3. Objective: the relation between two words, Model: Twitter Venn diagram
In this case, we will use a dataset of 3214 @jakpost (The Jakarta Post) tweets. The oldest tweet was created on 8 May 2018, and the newest on 22 June 2018. The objective is to find out whether or not there is a relation between two words in a set of tweets. One way to find out is to check whether the picked words are used in the same tweet.
There is a method called Twitter Venn diagram that can be used to visualize how two words are connected in a set of tweets. We will adopt the Twitter Venn diagram and improvise a little bit using Matplotlib (the code example can be reached in this repo).
*I used TextBlob to collect the words from the tweets.
*The algorithm is written in the most obvious way without any optimization attempt.
The improvisation: Each marker representing each tweet has visibility value which represents the retweet count ratio (among the tweets in the circles, the tweet with the most retweet count will have the highest visibility value, which is 1).
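As a minimal sketch of the set logic behind such a diagram (my own illustration, assuming the tweets are available as (text, retweet_count) pairs; the actual marker plotting lives in the linked repo and is not reproduced here):

def split_by_words(tweets, word_a, word_b):
    # Group (text, retweet_count) pairs by which of the two words they contain
    only_a, only_b, both = [], [], []
    for text, retweets in tweets:
        lowered = text.lower()
        has_a, has_b = word_a in lowered, word_b in lowered
        if has_a and has_b:
            both.append((text, retweets))
        elif has_a:
            only_a.append((text, retweets))
        elif has_b:
            only_b.append((text, retweets))
    return only_a, only_b, both

def visibility(retweets, max_retweets):
    # Marker visibility relative to the most-retweeted tweet among the plotted ones
    return retweets / max_retweets if max_retweets else 0.0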
Now let us pick two interesting words, for example, ‘jakarta’ and ‘administration’. Tweets that include ‘jakarta’ but not ‘administration’ will have red markers, and tweets that include ‘administration’ but not ‘jakarta’ will have blue markers. Tweets that include both words will have green markers. Below is the plot.
The above plot should have speak about the meaning itself. Among the 3214 tweets, 161 tweets include ‘jakarta’, only 14 tweets include ‘administration’, and 9 tweets include both. The retweets are gathered in the ‘jakarta’ circle, notice that there are 3 tweets with very high retweet count.
Now let us take another example, with words ‘jokowi’ and ‘prabowo’. ‘jokowi’ will be red and ‘prabowo’ be blue. Below is the plot.
If you are curious about the one tweet in the intersection, here is what it says:“ Prabowo accuses Jokowi govt of weakening TNI #jakpost https://t.co/BxJ7hum7Um “
In general, the above Twitter data visualization examples are not the best examples of visualization. But we have seen how we can harness Tweepy and Matplotlib while putting the 4 elements for good data visualization in mind. We have seen how we can combine different colors and shapes to add more values for the visual form (metaphor) and the story (concept).
Some great data visualization examples (in general, not limited to Python or Matplotlib) can be seen in the links below.
- https://informationisbeautiful.net/
- https://datavizproject.com/
},
{
"code": null,
"e": 4439,
"s": 4232,
"text": "natgeo_cursor = tweepy.Cursor(our_api.user_timeline, screen_name = 'NatGeo')natgeo_tweets = [(tweet.text, tweet.retweet_count, tweet.favorite_count) \\ for tweet in natgeo_cursor.items(2000)]"
},
{
"code": null,
"e": 4693,
"s": 4439,
"text": "Each item in the natgeo_cursor.items(n)is a tweepy.models.Statusobject from which we can obtain the tweet data such as text, author information (in the form of tweepy.models.User object), time of creation, number of retweets, number of likes, and media."
},
{
"code": null,
"e": 4701,
"s": 4693,
"text": "Result:"
},
{
"code": null,
"e": 5035,
"s": 4701,
"text": ">>> natgeo_tweets[0](\"As Chernobyl's gray wolf population increases, their influence on the surrounding environment is called into questi... https://t.co/elsb4FJQGZ\", 428, 1504)>>> natgeo_tweets[1]('1,058 temples in the south Indian state of Kerala that have pledged to eliminate plastic this year https://t.co/ltJ6mFIWpV', 268, 985)"
},
{
"code": null,
"e": 5168,
"s": 5035,
"text": "So far, we have seen how we can use Tweepy to collect actual Twitter data. Now let’s be curious and visualize them using Matplotlib."
},
{
"code": null,
"e": 5262,
"s": 5168,
"text": "*Instead of the above result, we will use various sets of Twitter data for the visualization."
},
{
"code": null,
"e": 5610,
"s": 5262,
"text": "Minimum Data VisualizationTo start this, let us see an example of plotting the username frequency of tweets in our home timeline. The username frequency is the number of tweets from a specific username that appears in our data. The tweets data will be of 200 tweets from the home timeline. Below is the code sample followed by the bar plot result."
},
{
"code": null,
"e": 5869,
"s": 5610,
"text": "import matplotlib.pyplot as plthome_cursor = tweepy.Cursor(api.home_timeline)tweets = [i.author.screen_name for i in home_cursor.items(200)]unq = set(tweets)freq = {uname: tweets.count(uname) for uname in unq}plt.bar(range(len(unq)), freq.values())plt.show()"
},
{
"code": null,
"e": 6462,
"s": 5869,
"text": "As you can see, the plot shows that there were 42 different Twitter accounts tweeting in the timeline, and there is only one account that tweets the most (41 tweets). In case you want to know, the most frequent account is @CNNIndonesia. But we have to admit that we cannot see the usernames that appeared in the data, there is not much information can be collected from the bar chart (although it is still better than no visualization at all). That is why we should learn to develop our own algorithms for modifying data visualizations on top of Matplotlib (which can lead to various models)."
},
{
"code": null,
"e": 6958,
"s": 6462,
"text": "What can we learn from the attempt above?According to David McCandless, the author of www.informationisbeautiful.net, there are 4 key elements that make a good data visualization: Information, Story (concept), Goal, and Visual Form (metaphor). Information is data, the keywords for this element are accuracy, consistency, and honesty. Story (concept) is to make the data vis interesting. Goal is the usefulness and efficiency of the data vis. Visual Form (metaphor) is the beauty and appearance."
},
{
"code": null,
"e": 7290,
"s": 6958,
"text": "The first bar chart above surely has the Information element, the data is accurate. It also has a bit of Goal and minimum Visual Form (the bars). But it does not have Story (concept) at all (only the creator of the data vis that knows the story behind the plot), it should at least inform the viewers that it is about Twitter data."
},
{
"code": null,
"e": 7421,
"s": 7290,
"text": "To improve our visualization, we will apply the 4 key elements. But to make it interesting, we will only use Matplotlib in Python."
},
{
"code": null,
"e": 7563,
"s": 7421,
"text": "In the next sections, we will see some questions or objectives with respect to various Twitter data and also perform the data visualizations."
},
{
"code": null,
"e": 8062,
"s": 7563,
"text": "Case 1. Objective: accounts comparison, Model: modified horizontal bar chartIn this case, we will use a data of 192 tweets that I have collected from my home timeline a few months ago. The data will be transformed such that we can group the tweets by accounts, and then we will compare the numbers through a data visualization. It is important to note that we will develop a good data visualization that combines various Matplotlib features and not restrict ourselves to the plotting models it has."
},
{
"code": null,
"e": 8106,
"s": 8062,
"text": "The modules that we will use in Python are:"
},
{
"code": null,
"e": 8217,
"s": 8106,
"text": "import numpyimport matplotlib.pyplot as pltimport matplotlib.font_manager as fmimport operatorimport itertools"
},
{
"code": null,
"e": 8477,
"s": 8217,
"text": "The code for data representation is shown below. It will group the tweets by username and then count the tweets count (also can be seen as username frequency) and the number of followers for each username. It will also sort the data by the username frequency."
},
{
"code": null,
"e": 9231,
"s": 8477,
"text": "home_tweets = list(numpy.load('testfile.npy'))unicity_key = operator.attrgetter('author.screen_name')tweets = sorted(home_tweets, key=unicity_key)authors_tweets = {}followers_count = {}sample_count = {} for screen_name, author_tweets in itertools.groupby(tweets, key=unicity_key): author_tweets = list(author_tweets) author = author_tweets[0].author authors_tweets[screen_name] = author followers_count[screen_name] = author.followers_count sample_count[screen_name] = len(author_tweets)maxfolls = max(followers_count.values())sorted_screen_name = sorted(sample_count.keys(), key = lambda x: sample_count[x])sorted_folls = [followers_count[i] for i in sorted_screen_name]sorted_count = [sample_count[i] for i in sorted_screen_name]"
},
{
"code": null,
"e": 9692,
"s": 9231,
"text": "Next, we could easily perform the horizontal bar plot byplt.hbar(range(len(sorted_count)),sorted_count)and then setting the titles for the figure and x-y axis, but let us improvise. We will add additional visual form by adding color transparancy difference, for example: the transparancy of the bars may represent the number of followers of the accounts. We will also change the tick labels on the y axis from numbers to strings of usernames. Here is the code:"
},
{
"code": null,
"e": 10278,
"s": 9692,
"text": "colors = [(0.1, 0.1, 1, i/maxfolls) for i in sorted_folls]fig, ax = plt.subplots(1,1)y_pos = range(len(colors))ax.set_yticks(y_pos)ax.set_yticklabels(sorted_screen_name, font_properties = helv_8)ax.barh(y_pos, sorted_count, \\ height = 0.7, left = 0.5, \\ color = colors)for i in y_pos: ax.text(sorted_count[i]+1, i, style_the_num(str(sorted_folls[i])), \\ font_properties = helv_10, color = colors[i])ax.set_ylim(-1, y_pos[-1]+1+0.7)ax.set_xlim(0, max(sorted_count) + 8)ax.set_xlabel('Tweets count', font_properties = helv_14)plt.tight_layout(pad=3)fig.show()"
},
{
"code": null,
"e": 10463,
"s": 10278,
"text": "Notice that we also change the font style, and improvise for the strings of followers count (mapping 22441917 to 22,441,917) using the style_the_num function. Below is the plot result:"
},
{
"code": null,
"e": 10756,
"s": 10463,
"text": "It is clear that the plot above shows more information and looks better than the first plot. The visual effects also help to explain the data. The plot also has more value on the Story (concept) element, it looks quite alive (we don’t need to put much effort to understand the visualization)."
},
{
"code": null,
"e": 10908,
"s": 10756,
"text": "*Sorry, but maybe the plot should have a text explaining the meaning of the blue color transparency (which is the number of followers of the accounts)."
},
{
"code": null,
"e": 11573,
"s": 10908,
"text": "Case 2. Objective: the variety of images posted by an account and the audience’s preference, Model: Image Plot In this case, we will dataset of 1000 @NatGeo tweets (collected on 19 Sept 2018). Each of the 1000 tweets can either has an image or not. So to achieve the objective of this case, we must first collect all images that exist in the data. After that, an Image plot will be introduced. The Python implementation is done by creating PlotDecoratorclass, then use the instance to generate the image plot. The PlotDecoratorclass has methods gen_img(prev_img, next_img), put_below(left_img, next_img), and generate(imgs, ncol). An explanation for the algorithm:"
},
{
"code": null,
"e": 12011,
"s": 11573,
"text": "- gen_img(prev_img, next_img):It plots the next image to the right side of the previous image tightly. The next_imgparameter is a matplotlib.image.AxesImage object, while the prev_img is a dictionary containing AxesImageobject of the left image. The function sets tje next image properties so that it will be positioned next to the previous image (the left side one) using the next_img.get_extent(...)and next_img.set_extent(...)methods."
},
{
"code": null,
"e": 12225,
"s": 12011,
"text": "- put_below(left_img, next_img):Similar as the 1st method explained above, but the next image is positioned below the leftest image of the row. This is applied because each row can only have ncol number of images."
},
{
"code": null,
"e": 12546,
"s": 12225,
"text": "- generate(imgs, ncol):This is where we generate the new position of the images. imgsis a list of AxesImageobjects that we want to plot. For example, if ncol=10, this function will apply gen_img(prev_img, next_img)10 times, and then apply put_below(left_img, next_img)and then start over again for the new row of images."
},
{
"code": null,
"e": 12963,
"s": 12546,
"text": "After collecting and positioned the images neatly, we count the number of retweets of each image and then only show auxiliary decoration for the top 3 with highest retweet count. The decoration is only a purple circle with no fill surrounding the image (the transparency gives clue about the rank). Here is the result, it starts from full image, then zoomed in, and then zoomed in again for the most retweeted image:"
},
{
"code": null,
"e": 13305,
"s": 12963,
"text": "Let us rate the data visualization above. It is clear that it has a decent value (not bad) of Visual form (metaphor) and the concept is quite nice (it is interesting). The information we get are the top 3 images on highest retweet count and the pattern of the images (all images are great photos, there is no images of posters, quotes, etc)."
},
{
"code": null,
"e": 13397,
"s": 13305,
"text": "*Again, maybe the plot should have a short paragraph that explains the meaning of the plot."
},
{
"code": null,
"e": 13832,
"s": 13397,
"text": "Case 3. Objective: the relation between two words, Model: Twitter Venn diagramIn this case, we will use a dataset of @jakpost (The Jakarta Post) 3214 tweets. The oldest tweet is created on 8 May 2018, and the newest is created on 22 June 2018. The objective is that we want to know whether or not there is a relation between two words in a set of tweets. One way to find out is to check whether the picked words are used in one tweet."
},
{
"code": null,
"e": 14224,
"s": 13832,
"text": "There is a method called Twitter Venn diagram that can be used to visualize how two words are connected in a set of tweets. We will adopt the Twitter Venn diagram and improvise a little bit using Matplotlib (the code example can be reached in this repo).*I used TextBlob to collect the words from the tweets.*The algorithm is written in the most obvious way without any optimization attempt."
},
{
"code": null,
"e": 14466,
"s": 14224,
"text": "The improvisation: Each marker representing each tweet has visibility value which represents the retweet count ratio (among the tweets in the circles, the tweet with the most retweet count will have the highest visibility value, which is 1)."
},
{
"code": null,
"e": 14786,
"s": 14466,
"text": "Now let us pick two interesting words, for example, ‘jakarta’ and ‘administration’. Tweets that include ‘jakarta’ but not ‘administration’ will have red markers, and tweets that include ‘administration’ but not ‘jakarta’ will have blue markers. Tweets that include both words will have green markers. Below is the plot."
},
{
"code": null,
"e": 15078,
"s": 14786,
"text": "The above plot should have speak about the meaning itself. Among the 3214 tweets, 161 tweets include ‘jakarta’, only 14 tweets include ‘administration’, and 9 tweets include both. The retweets are gathered in the ‘jakarta’ circle, notice that there are 3 tweets with very high retweet count."
},
{
"code": null,
"e": 15209,
"s": 15078,
"text": "Now let us take another example, with words ‘jokowi’ and ‘prabowo’. ‘jokowi’ will be red and ‘prabowo’ be blue. Below is the plot."
},
{
"code": null,
"e": 15372,
"s": 15209,
"text": "If you are curious about the one tweet in the intersection, here is what it says:“ Prabowo accuses Jokowi govt of weakening TNI #jakpost https://t.co/BxJ7hum7Um “"
},
{
"code": null,
"e": 15733,
"s": 15372,
"text": "In general, the above Twitter data visualization examples are not the best examples of visualization. But we have seen how we can harness Tweepy and Matplotlib while putting the 4 elements for good data visualization in mind. We have seen how we can combine different colors and shapes to add more values for the visual form (metaphor) and the story (concept)."
}
] |
Excel Undo and Redo
|
The Undo function lets you reverse an action.
Undo is helpful if you regret an action and want to go back to how it was before.
Examples of use
Undo deleting a formula
Undo adding a column
Undo removing a row
Note: You cannot Undo things that you do in the File Menu, such as deleting a sheet, saving a spreadsheet or changing the options. The rule of thumb is that you can Undo things you do in your sheet.
There are two ways to access the Undo command.
1) Pressing the Undo button in the Ribbon:
2) Using the keyboard shortcut CTRL + Z / Command + Z
Let's have a look at an example:
Note: It is recommended to practice using the keyboard shortcut. It saves you time!
The Redo function has the opposite effect of Undo: it reverses the Undo action.
Redo is helpful if you regret using Undo.
Note: The Redo command is only available if you have used Undo.
There are two ways to access the Redo command.
1) Pressing the Redo button in the Ribbon:
2) Using the keyboard shortcut CTRL + Y / Command + Y
Tip: Practice for yourself to get familiar with Undo and Redo.
|
[
{
"code": null,
"e": 46,
"s": 0,
"text": "The Undo function lets you reverse an action."
},
{
"code": null,
"e": 128,
"s": 46,
"text": "Undo is helpful if you regret an action and want to go back to how it was before."
},
{
"code": null,
"e": 144,
"s": 128,
"text": "Examples of use"
},
{
"code": null,
"e": 168,
"s": 144,
"text": "Undo deleting a formula"
},
{
"code": null,
"e": 189,
"s": 168,
"text": "Undo adding a column"
},
{
"code": null,
"e": 209,
"s": 189,
"text": "Undo removing a row"
},
{
"code": null,
"e": 405,
"s": 209,
"text": "Note: You cannot Undo things that you do in the File Menu, such as deleting a sheet, saving a spreadsheet or changing the options. The thumb rule is that you can Undo things you do in your sheet."
},
{
"code": null,
"e": 452,
"s": 405,
"text": "There are two ways to access the Undo command."
},
{
"code": null,
"e": 495,
"s": 452,
"text": "1) Pressing the Undo button in the Ribbon:"
},
{
"code": null,
"e": 549,
"s": 495,
"text": "2) Using the keyboard shortcut CTRL + Z / Command + Z"
},
{
"code": null,
"e": 582,
"s": 549,
"text": "Let's have a look at an example:"
},
{
"code": null,
"e": 666,
"s": 582,
"text": "Note: It is recommended to practice using the keyboard shortcut. It saves you time!"
},
{
"code": null,
"e": 746,
"s": 666,
"text": "The Redo function has the opposite effect as Undo, it reverses the Undo action."
},
{
"code": null,
"e": 788,
"s": 746,
"text": "Redo is helpful if you regret using Undo."
},
{
"code": null,
"e": 852,
"s": 788,
"text": "Note: The Redo command is only available if you have used Undo."
},
{
"code": null,
"e": 899,
"s": 852,
"text": "There are two ways to access the Redo command."
},
{
"code": null,
"e": 942,
"s": 899,
"text": "1) Pressing the Redo button in the Ribbon:"
},
{
"code": null,
"e": 996,
"s": 942,
"text": "2) Using the keyboard shortcut CTRL + Y / Command + Y"
},
{
"code": null,
"e": 1059,
"s": 996,
"text": "Tip: Practice for yourself to get familiar with Undo and Redo."
},
{
"code": null,
"e": 1134,
"s": 1059,
"text": "Complete the keyboard shortcut for the Undo command (for Windows and MAC):"
},
{
"code": null,
"e": 1154,
"s": 1134,
"text": "CTRL + \nCommand + \n"
},
{
"code": null,
"e": 1173,
"s": 1154,
"text": "Start the Exercise"
},
{
"code": null,
"e": 1206,
"s": 1173,
"text": "We just launchedW3Schools videos"
},
{
"code": null,
"e": 1248,
"s": 1206,
"text": "Get certifiedby completinga course today!"
},
{
"code": null,
"e": 1355,
"s": 1248,
"text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"
},
{
"code": null,
"e": 1374,
"s": 1355,
"text": "[email protected]"
}
] |
Center alignment using the margin property in CSS
|
We can horizontally center a block-level element by using the CSS margin property, but the CSS width property of that element must be set.
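The core pattern is simply a set width plus auto left and right margins. Here is a minimal sketch (the class name is just for illustration):
.centered-box {
   width: 50%;      /* a width must be set for auto margins to take effect */
   margin: 0 auto;  /* equal left/right auto margins center the box horizontally */
}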
Let’s see an example to center an element using CSS margin property −
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>Center Alignment using CSS Margin</title>
<style>
#yinyangSymbol {
width: 100px;
height: 50px;
background: #fff;
border-color: #000;
border-style: solid;
border-width: 2px 2px 50px 2px;
border-radius: 100%;
position: relative;
}
#yinyangSymbol::before {
content: "";
position: absolute;
top: 50%;
left: 0;
background: #fff;
border: 18px solid #000;
border-radius: 100%;
width: 14px;
height: 14px;
}
#yinyangSymbol::after {
content: "";
position: absolute;
top: 50%;
left: 50%;
background: #000;
border: 18px solid #fff;
border-radius:100%;
width: 14px;
height: 14px;
}
div{
width: 50%;
margin: 10px auto;
border:4px solid black;
}
#text {
border: 4px solid black;
background-color: grey;
color: white;
text-align: center;
}
</style>
</head>
<body>
<div id="main">
<div>
<div id="yinyangSymbol"></div>
</div>
<div id="text">Be Centered & Balanced</div>
</div>
</body>
</html>
This will produce the following output
Let’s see another example to center an element using CSS margin property −
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>Center Alignment using CSS Margin</title>
<style>
.screen{
padding: 10px;
margin: 10px auto;
text-align: center;
color: white;
border-radius: 0 0 50px 50px;
border: 4px solid #000;
}
.screen1 {
background-color: #f06d06;
width: 70%;
}
.screen2 {
background-color: #48C9B0;
width: 50%;
}
.screen3 {
background-color: #DC3545;
width: 30%;
}
</style>
</head>
<body>
<div class="screen screen1">Screen 70%</div>
<div class="screen screen2">Screen 50%</div>
<div class="screen screen3">Screen 30%</div>
</body>
</html>
This will produce the following output
|
[
{
"code": null,
"e": 1199,
"s": 1062,
"text": "We can center horizontally a block-level element by using the CSS margin property, but CSS width property of that element should be set."
},
{
"code": null,
"e": 1269,
"s": 1199,
"text": "Let’s see an example to center an element using CSS margin property −"
},
{
"code": null,
"e": 1280,
"s": 1269,
"text": " Live Demo"
},
{
"code": null,
"e": 2291,
"s": 1280,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<title>Center Alignment using CSS Margin</title>\n<style>\n#yinyangSymbol {\n width: 100px;\n height: 50px;\n background: #fff;\n border-color: #000;\n border-style: solid;\n border-width: 2px 2px 50px 2px;\n border-radius: 100%;\n position: relative;\n}\n#yinyangSymbol::before {\n content: \"\";\n position: absolute;\n top: 50%;\n left: 0;\n background: #fff;\n border: 18px solid #000;\n border-radius: 100%;\n width: 14px;\n height: 14px;\n}\n#yinyangSymbol::after {\n content: \"\";\n position: absolute;\n top: 50%;\n left: 50%;\n background: #000;\n border: 18px solid #fff;\n border-radius:100%;\n width: 14px;\n height: 14px;\n}\ndiv{\n width: 50%;\n margin: 10px auto;\n border:4px solid black;\n}\n#text {\n border: 4px solid black;\n background-color: grey;\n color: white;\n text-align: center;\n}\n</style>\n</head>\n<body>\n<div id=\"main\">\n<div>\n<div id=\"yinyangSymbol\"></div>\n</div>\n<div id=\"text\">Be Centered & Balanced</div>\n</div>\n</body>\n</html>"
},
{
"code": null,
"e": 2330,
"s": 2291,
"text": "This will produce the following output"
},
{
"code": null,
"e": 2405,
"s": 2330,
"text": "Let’s see another example to center an element using CSS margin property −"
},
{
"code": null,
"e": 2416,
"s": 2405,
"text": " Live Demo"
},
{
"code": null,
"e": 3010,
"s": 2416,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<title>Center Alignment using CSS Margin</title>\n<style>\n.screen{\n padding: 10px;\n margin: 10px auto;\n text-align: center;\n color: white;\n border-radius: 0 0 50px 50px;\n border: 4px solid #000;\n}\n.screen1 {\n background-color: #f06d06;\n width: 70%;\n}\n.screen2 {\n background-color: #48C9B0;\n width: 50%;\n}\n.screen3 {\n background-color: #DC3545;\n width: 30%;\n}\n</style>\n</head>\n<body>\n<div class=\"screen screen1\">Screen 70%</div>\n<div class=\"screen screen2\">Screen 50%</div>\n<div class=\"screen screen3\">Screen 30%</div>\n</div>\n</body>\n</html>"
},
{
"code": null,
"e": 3049,
"s": 3010,
"text": "This will produce the following output"
}
] |
Python | Find dictionary matching value in list - GeeksforGeeks
|
02 Aug, 2019
The problem of getting only the suitable dictionary that has a particular value for the corresponding key is quite common when one starts working with dictionaries. Let’s discuss certain ways in which this task can be performed.
Method #1 : Using loop
This is the brute force method by which this task can be performed. For this, we just use a naive check-and-compare, return the result once we find a suitable match, and break for the rest of the dictionaries.
# Python3 code to demonstrate working of
# Find dictionary matching value in list
# Using loop

# Initialize list
test_list = [{'gfg' : 2, 'is' : 4, 'best' : 6},
             {'it' : 5, 'is' : 7, 'best' : 8},
             {'CS' : 10}]

# Printing original list
print("The original list is : " + str(test_list))

# Using loop
# Find dictionary matching value in list
res = None
for sub in test_list:
    if sub['is'] == 7:
        res = sub
        break

# printing result
print("The filtered dictionary value is : " + str(res))
The original list is : [{'is': 4, 'gfg': 2, 'best': 6}, {'is': 7, 'best': 8, 'it': 5}, {'CS': 10}]
The filtered dictionary value is : {'is': 7, 'best': 8, 'it': 5}
Method #2 : Using next() + dictionary comprehension
The combination of these methods can also be used to perform this task. The difference is that it’s a one-liner and more efficient, as the next() function uses an iterator internally, which is quicker than generic methods.
# Python3 code to demonstrate working of
# Find dictionary matching value in list
# Using next() + dictionary comprehension

# Initialize list
test_list = [{'gfg' : 2, 'is' : 4, 'best' : 6},
             {'it' : 5, 'is' : 7, 'best' : 8},
             {'CS' : 10}]

# Printing original list
print("The original list is : " + str(test_list))

# Using next() + dictionary comprehension
# Find dictionary matching value in list
res = next((sub for sub in test_list if sub['is'] == 7), None)

# printing result
print("The filtered dictionary value is : " + str(res))
The original list is : [{'is': 4, 'gfg': 2, 'best': 6}, {'is': 7, 'best': 8, 'it': 5}, {'CS': 10}]
The filtered dictionary value is : {'is': 7, 'best': 8, 'it': 5}
|
[
{
"code": null,
"e": 23975,
"s": 23947,
"text": "\n02 Aug, 2019"
},
{
"code": null,
"e": 24201,
"s": 23975,
"text": "The problem of getting only the suitable dictionary that has a particular value of the corresponding key is quite common when one starts working with dictionary. Let’s discuss certain ways in which this task can be performed."
},
{
"code": null,
"e": 24427,
"s": 24201,
"text": "Method #1 : Using loopThis is the brute force method by which this task can be performed. For this, we just use naive check and compare and return the result once we find the suitable match and break for rest of dictionaries."
},
{
"code": "# Python3 code to demonstrate working of# Find dictionary matching value in list# Using loop # Initialize listtest_list = [{'gfg' : 2, 'is' : 4, 'best' : 6}, {'it' : 5, 'is' : 7, 'best' : 8}, {'CS' : 10}] # Printing original listprint(\"The original list is : \" + str(test_list)) # Using loop# Find dictionary matching value in listres = Nonefor sub in test_list: if sub['is'] == 7: res = sub break # printing result print(\"The filtered dictionary value is : \" + str(res))",
"e": 24945,
"s": 24427,
"text": null
},
{
"code": null,
"e": 25110,
"s": 24945,
"text": "The original list is : [{'is': 4, 'gfg': 2, 'best': 6}, {'is': 7, 'best': 8, 'it': 5}, {'CS': 10}]\nThe filtered dictionary value is : {'is': 7, 'best': 8, 'it': 5}\n"
},
{
"code": null,
"e": 25393,
"s": 25112,
"text": "Method #2 : Using next() + dictionary comprehensionThe combination of these methods can also be used to perform this task. This difference is that it’s a one liner and more efficient as next function uses iterator as internal implementation which are quicker than generic methods."
},
{
"code": "# Python3 code to demonstrate working of# Find dictionary matching value in list# Using next() + dictionary comprehension # Initialize listtest_list = [{'gfg' : 2, 'is' : 4, 'best' : 6}, {'it' : 5, 'is' : 7, 'best' : 8}, {'CS' : 10}] # Printing original listprint(\"The original list is : \" + str(test_list)) # Using next() + dictionary comprehension# Find dictionary matching value in listres = next((sub for sub in test_list if sub['is'] == 7), None) # printing result print(\"The filtered dictionary value is : \" + str(res))",
"e": 25948,
"s": 25393,
"text": null
},
{
"code": null,
"e": 26113,
"s": 25948,
"text": "The original list is : [{'is': 4, 'gfg': 2, 'best': 6}, {'is': 7, 'best': 8, 'it': 5}, {'CS': 10}]\nThe filtered dictionary value is : {'is': 7, 'best': 8, 'it': 5}\n"
},
{
"code": null,
"e": 26140,
"s": 26113,
"text": "Python dictionary-programs"
},
{
"code": null,
"e": 26147,
"s": 26140,
"text": "Python"
},
{
"code": null,
"e": 26163,
"s": 26147,
"text": "Python Programs"
},
{
"code": null,
"e": 26261,
"s": 26163,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26270,
"s": 26261,
"text": "Comments"
},
{
"code": null,
"e": 26283,
"s": 26270,
"text": "Old Comments"
},
{
"code": null,
"e": 26301,
"s": 26283,
"text": "Python Dictionary"
},
{
"code": null,
"e": 26336,
"s": 26301,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 26358,
"s": 26336,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 26390,
"s": 26358,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26420,
"s": 26390,
"text": "Iterate over a list in Python"
},
{
"code": null,
"e": 26463,
"s": 26420,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 26485,
"s": 26463,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 26531,
"s": 26485,
"text": "Python | Split string into list of characters"
},
{
"code": null,
"e": 26588,
"s": 26531,
"text": "Python program to check whether a number is Prime or not"
}
] |
K-th Symbol in Grammar in C++
|
Suppose on the first row, we have a 0. Now in every subsequent row, we look at the previous row and replace each occurrence of 0 by 01, and each occurrence of 1 by 10. Given N and K, we have to find the K-th indexed symbol in row N (the values of K are 1-indexed). So if N = 4 and K = 5, then the output will be 1. This is because −
Row 1: 0
Row 2: 01
Row 3: 0110
Row 4: 01101001
To solve this, we will follow these steps −
Suppose the name of the method is kthGrammar. This takes N and K.
if N is 1, then return 0
if K is even, return 1 when kthGrammar(N – 1, K/2) is 0, otherwise 0
otherwise return kthGrammar(N – 1, (K + 1)/2)
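For example, tracing N = 4 and K = 5 through these steps: K = 5 is odd, so we call kthGrammar(3, 3); K = 3 is odd, so we call kthGrammar(2, 2); K = 2 is even and kthGrammar(1, 1) returns 0, so kthGrammar(2, 2) returns 1, and this value propagates back up to give the answer 1, matching position 5 of row 4 (01101001).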
Let us see the following implementation to get a better understanding −
Live Demo
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int kthGrammar(int N, int K) {
if(N == 1) return 0;
if(K % 2 == 0){
return kthGrammar(N - 1, K / 2) == 0 ? 1 : 0;
}else{
return kthGrammar(N - 1, (K + 1) / 2);
}
}
};
int main(){
Solution ob;
cout << (ob.kthGrammar(4, 5));
}
Input: 4, 5
Output: 1
|
[
{
"code": null,
"e": 1430,
"s": 1062,
"text": "Suppose on the first row, we have a 0. Now in every subsequent row, we look at the previous row and replace each occurrence of 0 by 01, and each occurrence of 1 by 10. Suppose we have N rows and index K, we have to find the K-th indexed symbol in row N. (The values of K are 1-indexed.) (1 indexed). So if N = 4 and K = 5, then the output will be 1. This is because −"
},
{
"code": null,
"e": 1439,
"s": 1430,
"text": "Row 1: 0"
},
{
"code": null,
"e": 1449,
"s": 1439,
"text": "Row 2: 01"
},
{
"code": null,
"e": 1461,
"s": 1449,
"text": "Row 3: 0110"
},
{
"code": null,
"e": 1477,
"s": 1461,
"text": "Row 4: 01101001"
},
{
"code": null,
"e": 1521,
"s": 1477,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1587,
"s": 1521,
"text": "Suppose the name of the method is kthGrammar. This takes N and K."
},
{
"code": null,
"e": 1612,
"s": 1587,
"text": "if N is 1, then return 0"
},
{
"code": null,
"e": 1686,
"s": 1612,
"text": "if k is even, return 1 when then kthGrammar(N – 1, K/2) is 0, otherwise 0"
},
{
"code": null,
"e": 1732,
"s": 1686,
"text": "otherwise return kthGrammar(N – 1, (K + 1)/2)"
},
{
"code": null,
"e": 1802,
"s": 1732,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 1813,
"s": 1802,
"text": " Live Demo"
},
{
"code": null,
"e": 2159,
"s": 1813,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nclass Solution {\n public:\n int kthGrammar(int N, int K) {\n if(N == 1) return 0;\n if(K % 2 == 0){\n return kthGrammar(N - 1, K / 2) == 0 ? 1 : 0;\n }else{\n return kthGrammar(N - 1, (K + 1) / 2);\n }\n }\n};\nmain(){\n Solution ob;\n cout << (ob.kthGrammar(4, 5));\n}"
},
{
"code": null,
"e": 2163,
"s": 2159,
"text": "4\n5"
},
{
"code": null,
"e": 2165,
"s": 2163,
"text": "1"
}
] |
How to solve an IllegalArgumentException in Java?
|
An IllegalArgumentException is thrown in order to indicate that a method has been passed an illegal argument. This exception extends the RuntimeException class and thus, belongs to those exceptions that can be thrown during the operation of the Java Virtual Machine (JVM). It is an unchecked exception and thus, it does not need to be declared in a method’s or a constructor’s throws clause.
When arguments are out of range. For example, a percentage should lie between 1 and 100. If the user enters 101, then an IllegalArgumentException will be thrown.
When the argument format is invalid. For example, if our method requires a date format like YYYY/MM/DD but the user passes YYYY-MM-DD, our method can’t understand it and an IllegalArgumentException will be thrown.
When a method needs a non-empty string as a parameter but a null string is passed.
public class Student {
int m;
public void setMarks(int marks) {
if(marks < 0 || marks > 100)
throw new IllegalArgumentException(Integer.toString(marks));
else
m = marks;
}
public static void main(String[] args) {
Student s1 = new Student();
s1.setMarks(45);
System.out.println(s1.m);
Student s2 = new Student();
s2.setMarks(101);
System.out.println(s2.m);
}
}
45
Exception in thread "main" java.lang.IllegalArgumentException: 101
at Student.setMarks(Student.java:5)
at Student.main(Student.java:15)
When an IllegalArgumentException is thrown, we must check the call stack in Java’s stack trace and locate the method that produced the wrong argument.
The IllegalArgumentException is very useful and can be used to avoid situations where the application’s code would have to deal with unchecked input data.
The main use of this IllegalArgumentException is for validating the inputs coming from other users.
If we want to catch the IllegalArgumentException, we can use try-catch blocks. By doing so, we can handle some situations gracefully. For example, in the catch block we can put code that gives the user another chance to enter input instead of stopping the execution, especially inside a loop.
import java.util.Scanner;
public class Student {
public static void main(String[] args) {
String cont = "y";
run(cont);
}
static void run(String cont) {
Scanner scan = new Scanner(System.in);
while( cont.equalsIgnoreCase("y")) {
try {
System.out.println("Enter an integer: ");
int marks = scan.nextInt();
if (marks < 0 || marks > 100)
throw new IllegalArgumentException("value must be non-negative and below 100");
System.out.println( marks);
}
catch(IllegalArgumentException i) {
System.out.println("out of range encountered. Want to continue");
cont = scan.next();
if(cont.equalsIgnoreCase("Y"))
run(cont);
}
}
}
}
Enter an integer:
1
1
Enter an integer:
100
100
Enter an integer:
150
out of range encountered. Want to continue
y
Enter an integer:
|
[
{
"code": null,
"e": 1454,
"s": 1062,
"text": "An IllegalArgumentException is thrown in order to indicate that a method has been passed an illegal argument. This exception extends the RuntimeException class and thus, belongs to those exceptions that can be thrown during the operation of the Java Virtual Machine (JVM). It is an unchecked exception and thus, it does not need to be declared in a method’s or a constructor’s throws clause."
},
{
"code": null,
"e": 1613,
"s": 1454,
"text": "When Arguments out of range. For example, the percentage should lie between 1 to 100. If the user entered 101 then an IllegalArugmentExcpetion will be thrown."
},
{
"code": null,
"e": 1828,
"s": 1613,
"text": "When argument format is invalid. For example, if our method requires date format like YYYY/MM/DD but if the user is passing YYYY-MM-DD. Then our method can’t understand then IllegalArugmentExcpetion will be thrown."
},
{
"code": null,
"e": 1911,
"s": 1828,
"text": "When a method needs non-empty string as a parameter but the null string is passed."
},
{
"code": null,
"e": 2352,
"s": 1911,
"text": "public class Student {\n int m;\n public void setMarks(int marks) {\n if(marks < 0 || marks > 100)\n throw new IllegalArgumentException(Integer.toString(marks));\n else\n m = marks;\n }\n public static void main(String[] args) {\n Student s1 = new Student();\n s1.setMarks(45);\n System.out.println(s1.m);\n Student s2 = new Student();\n s2.setMarks(101);\n System.out.println(s2.m);\n }\n}"
},
{
"code": null,
"e": 2491,
"s": 2352,
"text": "45\nException in thread \"main\" java.lang.IllegalArgumentException: 101\nat Student.setMarks(Student.java:5)\nat Student.main(Student.java:15)"
},
{
"code": null,
"e": 2642,
"s": 2491,
"text": "When an IllegalArgumentException is thrown, we must check the call stack in Java’s stack trace and locate the method that produced the wrong argument."
},
{
"code": null,
"e": 2797,
"s": 2642,
"text": "The IllegalArgumentException is very useful and can be used to avoid situations where the application’s code would have to deal with unchecked input data."
},
{
"code": null,
"e": 2897,
"s": 2797,
"text": "The main use of this IllegalArgumentException is for validating the inputs coming from other users."
},
{
"code": null,
"e": 3183,
"s": 2897,
"text": "If we want to catch the IllegalArgumentException then we can use try-catch blocks. By doing like this we can handle some situations. Suppose in catch block if we put code to give another chance to the user to input again instead of stopping the execution especially in case of looping."
},
{
"code": null,
"e": 4053,
"s": 3183,
"text": "import java.util.Scanner;\npublic class Student {\n public static void main(String[] args) {\n String cont = \"y\";\n run(cont);\n }\n static void run(String cont) {\n Scanner scan = new Scanner(System.in);\n while( cont.equalsIgnoreCase(\"y\")) {\n try {\n System.out.println(\"Enter an integer: \");\n int marks = scan.nextInt();\n if (marks < 0 || marks > 100)\n throw new IllegalArgumentException(\"value must be non-negative and below 100\");\n System.out.println( marks);\n }\n catch(IllegalArgumentException i) {\n System.out.println(\"out of range encouneterd. Want to continue\");\n cont = scan.next(); \n if(cont.equalsIgnoreCase(\"Y\"))\n run(cont);\n }\n }\n }\n}"
},
{
"code": null,
"e": 4186,
"s": 4053,
"text": "Enter an integer:\n1\n1\nEnter an integer:\n100\n100\nEnter an integer:\n150\nout of range encouneterd. Want to continue\ny\nEnter an integer:"
}
] |
How to get DNS IP Settings using PowerShell?
|
The ipconfig /all command also retrieves the DNS settings for all the network interfaces. This command can be run in both cmd and PowerShell. For example,
PS C:\Users\Administrator> ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : Test1-Win2k16
Primary Dns Suffix . . . . . . . : labdomain.local
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : labdomain.local
Ethernet adapter Ethernet0:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Intel(R) 82574L Gigabit Network Connection
Physical Address. . . . . . . . . : 00-0C-29-E1-28-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 192.168.0.108(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.0.1
DHCPv6 IAID . . . . . . . . . . . : 33557545
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-26-A9-34-58-00-0C-29-E1-28-E0
DNS Servers . . . . . . . . . . . : 192.168.0.105
NetBIOS over Tcpip. . . . . . . . : Enabled
Tunnel adapter isatap.{5F9A3612-A410-4408-A7A8-368D2E16D6A8}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
But the problem with this command is that you cannot filter the results properly. For example, if you need to retrieve the information for a specific interface, you need to write a lot of string manipulation code in PowerShell.
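Even just pulling the DNS server entries out of that text output requires pattern matching, for example (a rough illustration, not a robust approach):
ipconfig /all | Select-String -Pattern 'DNS Servers'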
There are a few Get verb commands available for DNS client-side settings. Let's check them.
Get-Command -Verb Get -Noun DNS*
Name Version
---- -------
Get-DnsClient 1.0.0.0
Get-DnsClientCache 1.0.0.0
Get-DnsClientGlobalSetting 1.0.0.0
Get-DnsClientNrptGlobal 1.0.0.0
Get-DnsClientNrptPolicy 1.0.0.0
Get-DnsClientNrptRule 1.0.0.0
Get-DnsClientServerAddress 1.0.0.0
To retrieve the data related to DNS client IP settings, including the domain name, the way ipconfig /all does, we mainly need 3 commands.
Get-DnsClient
Get-DnsClientGlobalSetting
Get-DnsClientServerAddress
We will see each command one by one.
This command gets the details of the network interfaces configured on a specific computer. It also helps you set the DNS server address on client computers when it is pipelined into the Set-DnsClientServerAddress command, as sketched below.
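A minimal sketch of that pipeline (the interface index and DNS server IP here are placeholders for your own environment):
Get-DnsClient -InterfaceIndex 3 | Set-DnsClientServerAddress -ServerAddresses ("192.168.0.105")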
When you run this command on the local computer, it gives the details of the local interface.
Get-DnsClient
If you need address information for a specific interface, use the -InterfaceIndex parameter. In the above output, interface index 3 is the primary adapter.
Get-DnsClient -InterfaceIndex 3
To get the same settings on the remote server, we can use the -CimSession parameter.
$sess = New-CimSession -ComputerName Test1-Win2k16
Get-DnsClient -CimSession $sess
This cmdlet retrieves the DNS client settings that are common to all the interfaces, such as the DNS suffix search list. Once you run the command, the output will be shown as below.
PS C:\Users\Administrator> Get-DnsClientGlobalSetting
UseSuffixSearchList : True
SuffixSearchList : {labdomain.local}
UseDevolution : True
DevolutionLevel : 0
To get the settings on the remote server, use the -CimSession parameter.
$sess = New-CimSession -ComputerName Test1-Win2k16
Get-DnsClientGlobalSetting -CimSession $sess
UseSuffixSearchList : True
SuffixSearchList : {labdomain.local}
UseDevolution : True
DevolutionLevel : 0
PSComputerName : Test1-Win2k16
This cmdlet retrieves one or more DNS server addresses associated with the interfaces on the computer. For example,
Get-DnsClientServerAddress
PS C:\Users\Administrator> Get-DnsClientServerAddress
InterfaceAlias Interface Address ServerAddresses
Index Family
-------------- --------- ------- ---------------
Ethernet0 3 IPv4 {192.168.0.106}
Ethernet0 3 IPv6 {}
Loopback Pseudo-Interface 1 1 IPv4 {}
isatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}
isatap.{5F9A3612-A410-440... 4 IPv6 {}
In the above output, the main interface Ethernet0 has the associated DNS server address 192.168.0.106. Likewise, there are different IPv4 and IPv6 interfaces, and their DNS addresses are displayed in the ServerAddresses field.
To retrieve only the DNS server addresses associated with IPv4 interfaces, use the -AddressFamily parameter.
Get-DnsClientServerAddress -AddressFamily IPv4
InterfaceAlias Interface Address ServerAddresses
Index Family
-------------- --------- ------- ---------------
Ethernet0 3 IPv4 {192.168.0.106}
Loopback Pseudo-Interface 1 1 IPv4 {}
isatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}
To get the DNS server IPs of a specific interface, supply its index to the -InterfaceIndex parameter.
Get-DnsClientServerAddress -InterfaceIndex 3
InterfaceAlias Interface Address ServerAddresses
Index Family
-------------- --------- ------- ---------------
Ethernet0 3 IPv4 {192.168.0.106}
Ethernet0 3 IPv6 {}
To get the DNS server list on the remote system, use the -CimSession parameter.
Get-DnsClientServerAddress -AddressFamily IPv4 -CimSession $sess
InterfaceAlias Interface Address ServerAddresses
Index Family
-------------- --------- ------- ---------------
Ethernet0 3 IPv4 {192.168.0.106}
Loopback Pseudo-Interface 1 1 IPv4 {}
isatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}
|
[
{
"code": null,
"e": 1213,
"s": 1062,
"text": "Ipconfig /all command also retrieves the DNS settings for all the network interfaces. This command can be run on both cmd and PowerShell. For example,"
},
{
"code": null,
"e": 2641,
"s": 1213,
"text": "PS C:\\Users\\Administrator> ipconfig /all\n\nWindows IP Configuration\n\n Host Name . . . . . . . . . . . . : Test1-Win2k16\n Primary Dns Suffix . . . . . . . : labdomain.local\n Node Type . . . . . . . . . . . . : Hybrid\n IP Routing Enabled. . . . . . . . : No\n WINS Proxy Enabled. . . . . . . . : No\n DNS Suffix Search List. . . . . . : labdomain.local\n\nEthernet adapter Ethernet0:\n\n Connection-specific DNS Suffix . :\n Description . . . . . . . . . . . : Intel(R) 82574L Gigabit Network Connection\n Physical Address. . . . . . . . . : 00-0C-29-E1-28-E0\n DHCP Enabled. . . . . . . . . . . : No\n Autoconfiguration Enabled . . . . : Yes\n IPv4 Address. . . . . . . . . . . : 192.168.0.108(Preferred)\n Subnet Mask . . . . . . . . . . . : 255.255.255.0\n Default Gateway . . . . . . . . . : 192.168.0.1\n DHCPv6 IAID . . . . . . . . . . . : 33557545\n DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-26-A9-34-58-00-0C-29-E1-28-E0\n DNS Servers . . . . . . . . . . . : 192.168.0.105\n NetBIOS over Tcpip. . . . . . . . : Enabled\n\nTunnel adapter isatap.{5F9A3612-A410-4408-A7A8-368D2E16D6A8}:\n\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\n Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2\n Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0\n DHCP Enabled. . . . . . . . . . . : No\n Autoconfiguration Enabled . . . . : Yes\n"
},
{
"code": null,
"e": 2870,
"s": 2641,
"text": "But the problem with this command is you can not filter out the results properly. For example, if you need to retrieve the information for a specific interface then you need to write many string manipulation codes in PowerShell."
},
{
"code": null,
"e": 2960,
"s": 2870,
"text": "There are few GET verb commands available for DNS client-side settings. Let's check them."
},
{
"code": null,
"e": 3309,
"s": 2960,
"text": "Get-Command -Verb Get -Noun DNS*\n\nName Version\n---- -------\nGet-DnsClient 1.0.0.0\nGet-DnsClientCache 1.0.0.0\nGet-DnsClientGlobalSetting 1.0.0.0\nGet-DnsClientNrptGlobal 1.0.0.0\nGet-DnsClientNrptPolicy 1.0.0.0\nGet-DnsClientNrptRule 1.0.0.0\nGet-DnsClientServerAddress 1.0.0.0"
},
{
"code": null,
"e": 3441,
"s": 3309,
"text": "To retrieve the data related to DNS client IP settings including domain name like Ipconfig /all command, we need mainly 3 commands."
},
{
"code": null,
"e": 3455,
"s": 3441,
"text": "Get-DnsClient"
},
{
"code": null,
"e": 3469,
"s": 3455,
"text": "Get-DnsClient"
},
{
"code": null,
"e": 3496,
"s": 3469,
"text": "Get-DnsClientGlobalSetting"
},
{
"code": null,
"e": 3523,
"s": 3496,
"text": "Get-DnsClientGlobalSetting"
},
{
"code": null,
"e": 3550,
"s": 3523,
"text": "Get-DnsClientServerAddress"
},
{
"code": null,
"e": 3577,
"s": 3550,
"text": "Get-DnsClientServerAddress"
},
{
"code": null,
"e": 3614,
"s": 3577,
"text": "We will see each command one by one."
},
{
"code": null,
"e": 3856,
"s": 3614,
"text": "This command gets the details of the specific network interfaces configured on the specific computer. This command also helps you to set the DNS server address on the client computers when the Set-DnsClientServerAddress command is pipelined."
},
{
"code": null,
"e": 3950,
"s": 3856,
"text": "When you run this command on the local computer, it gives the details of the local interface."
},
{
"code": null,
"e": 3964,
"s": 3950,
"text": "Get-DnsClient"
},
{
"code": null,
"e": 4114,
"s": 3964,
"text": "If you need specific interface address information then use -InterfaceIndex parameter. In the above output Interface Index, 3 is the primary adapter."
},
{
"code": null,
"e": 4146,
"s": 4114,
"text": "Get-DnsClient -InterfaceIndex 3"
},
{
"code": null,
"e": 4231,
"s": 4146,
"text": "To get the same settings on the remote server, we can use the -CimSession parameter."
},
{
"code": null,
"e": 4311,
"s": 4231,
"text": "$sess = New-CimSession -ComputerName Test1-Win2k16\nGet-DnsClient -Session $sess"
},
{
"code": null,
"e": 4490,
"s": 4311,
"text": "This cmdlet retrieves the DNS client settings which are common to all the interfaces like the DNS suffix search list. Once you run the command, the output will be shown as below."
},
{
"code": null,
"e": 4663,
"s": 4490,
"text": "PS C:\\Users\\Administrator> Get-DnsClientGlobalSetting\n\nUseSuffixSearchList : True\nSuffixSearchList : {labdomain.local}\nUseDevolution : True\nDevolutionLevel : 0"
},
{
"code": null,
"e": 4745,
"s": 4663,
"text": "To get the settings on the remote server, use the CIM session parameter -Session."
},
{
"code": null,
"e": 4993,
"s": 4745,
"text": "$sess = New-CimSession -ComputerName Test1-Win2k16\nGet-DnsClientGlobalSetting -Session $sess\n\nUseSuffixSearchList : True\nSuffixSearchList : {labdomain.local}\nUseDevolution : True\nDevolutionLevel : 0\nPSComputerName : Test1-Win2k16"
},
{
"code": null,
"e": 5100,
"s": 4993,
"text": "This cmdlet retrieves one or more DNS address associated with the interfaces on the computer. For example,"
},
{
"code": null,
"e": 5633,
"s": 5100,
"text": "Get-DnsClientServerAddress\n\nPS C:\\Users\\Administrator> Get-DnsClientServerAddress\n\nInterfaceAlias Interface Address ServerAddresses\n Index Family\n-------------- --------- ------- ---------------\nEthernet0 3 IPv4 {192.168.0.106}\nEthernet0 3 IPv6 {}\nLoopback Pseudo-Interface 1 1 IPv4 {}\nisatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}\nisatap.{5F9A3612-A410-440... 4 IPv6 {}\n\n"
},
{
"code": null,
"e": 5851,
"s": 5633,
"text": "In the above output, the main interface Ethernet0 has associated DNS address is 192.168.0.106. Likewise, there are different IPv4 and IPv6 interfaces, and their DNS addresses are displayed in the Server Address field."
},
{
"code": null,
"e": 5944,
"s": 5851,
"text": "To retrieve only IPv4 interface associated DNS server address, use -AddressFamily parameter."
},
{
"code": null,
"e": 5991,
"s": 5944,
"text": "Get-DnsClientServerAddress -AddressFamily IPv4"
},
{
"code": null,
"e": 6339,
"s": 5991,
"text": "InterfaceAlias Interface Address ServerAddresses\n Index Family\n-------------- --------- ------- ---------------\nEthernet0 3 IPv4 {192.168.0.106}\nLoopback Pseudo-Interface 1 1 IPv4 {}\nisatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}"
},
{
"code": null,
"e": 6471,
"s": 6339,
"text": "To get the DNS server IPs of the specific interface, you need to use its index by supplying index to the -InterfaceIndex Parameter."
},
{
"code": null,
"e": 6802,
"s": 6471,
"text": "Get-DnsClientServerAddress -InterfaceIndex 3\n\nInterfaceAlias Interface Address ServerAddresses\n Index Family\n-------------- --------- ------- ---------------\nEthernet0 3 IPv4 {192.168.0.106}\nEthernet0 3 IPv6 {}"
},
{
"code": null,
"e": 6903,
"s": 6802,
"text": "To get the DNS server list on the remote system, you need to use the CIM session parameter -Session."
},
{
"code": null,
"e": 6966,
"s": 6903,
"text": "Get-DnsClientServerAddress -AddressFamily IPv4 -Session $sess\n"
},
{
"code": null,
"e": 7314,
"s": 6966,
"text": "InterfaceAlias Interface Address ServerAddresses\n Index Family\n-------------- --------- ------- ---------------\nEthernet0 3 IPv4 {192.168.0.106}\nLoopback Pseudo-Interface 1 1 IPv4 {}\nisatap.{5F9A3612-A410-440... 4 IPv4 {192.168.0.106}"
}
] |
Machine Learning Basics: Logistic Regression | by Gurucharan M K | Towards Data Science
|
In the previous stories, I explained the programs for implementing various Regression models. As we move on to Classification, isn’t it surprising that the title of this algorithm still has the name Regression? Let us understand the mechanism of Logistic Regression and learn to build a classification model with an example.
Logistic Regression is a classification model that is used when the dependent variable (output) is in a binary format such as 0 (False) or 1 (True). Examples include predicting if there is a tumor (1) or not (0) and if an email is spam (1) or not (0).
The logistic function, also called the sigmoid function, was initially used by statisticians to describe properties of population growth in ecology. The sigmoid function is a mathematical function used to map predicted values to probabilities. Logistic Regression has an S-shaped curve and can take values between 0 and 1 but never exactly at those limits. It has the formula 1 / (1 + e^-value).
Logistic Regression is an extension of the Linear Regression model. Let us understand this with a simple example. If we want to classify if an email is spam or not, and we apply a Linear Regression model, we would get only continuous values such as 0.4, 0.7, etc. On the other hand, Logistic Regression extends this linear model by setting a threshold at 0.5: the data point will be classified as spam if the output value is greater than 0.5 and not spam if the output value is less than 0.5.
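As a quick illustrative sketch of this sigmoid-plus-threshold idea (not part of the tutorial’s own code):
import numpy as np

def sigmoid(value):
    # maps any real-valued input to a probability between 0 and 1
    return 1 / (1 + np.exp(-value))

def classify(value, threshold=0.5):
    # 1 (e.g. spam) if the probability reaches the threshold, else 0 (not spam)
    return int(sigmoid(value) >= threshold)

print(sigmoid(0.0))    # 0.5
print(classify(2.0))   # 1
print(classify(-2.0))  # 0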
In this way, we can use Logistic Regression for classification problems and get accurate predictions.
To apply the Logistic Regression model in practical usage, let us consider a DMV Test dataset which consists of three columns. The first two columns consist of the two DMV written tests (DMV_Test_1 and DMV_Test_2), which are the independent variables, and the last column consists of the dependent variable, Results, which denotes whether the driver got the license (1) or not (0).
We have to build a Logistic Regression model using this data to predict whether a driver who has taken the two DMV written tests will get the license, using the marks obtained in those tests, and classify the results.
As always, the first step will always include importing the libraries, which are NumPy, Pandas and Matplotlib.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
In this step, we get the dataset from my GitHub repository as “DMVWrittenTests.csv”. The variable X stores the two DMV test scores and the variable y stores the final output, “Results”. dataset.head(5) is used to visualize the first 5 rows of the data.
dataset = pd.read_csv('https://raw.githubusercontent.com/mk-gurucharan/Classification/master/DMVWrittenTests.csv')
X = dataset.iloc[:, [0, 1]].values
y = dataset.iloc[:, 2].values
dataset.head(5)

>>
DMV_Test_1   DMV_Test_2   Results
34.623660    78.024693    0
30.286711    43.894998    0
35.847409    72.902198    0
60.182599    86.308552    1
79.032736    75.344376    1
In this step, we split the dataset into the Training set, on which the Logistic Regression model will be trained, and the Test set, on which the trained model will be applied to classify the results. Here, test_size=0.25 denotes that 25% of the data will be kept as the Test set and the remaining 75% will be used for training as the Training set.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
This is an additional step used to standardize the data to a particular range; it also helps speed up the calculations. As the scores vary widely, we use this transformation to bring the data into a small range (roughly -2 to 2). For example, the score 62.0730638 is scaled to -0.21231162 and the score 96.51142588 is scaled to 1.55187648. In this way, the scores of X_train and X_test are rescaled to a smaller range.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
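For reference (this formula is standard and is not spelled out in the article), StandardScaler applies the usual z-score standardization z = (x - mean) / standard_deviation, where the mean and standard deviation of each column are computed on the training set; that is why fit_transform is used on X_train while only transform is used on X_test.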
In this step, the class LogisticRegression is imported and an instance of it is assigned to the variable “classifier”. classifier.fit() is then called with X_train and y_train, on which the model is trained.
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
In this step, the classifier.predict() function is used to predict the values for the Test set and the values are stored to the variable y_pred.
y_pred = classifier.predict(X_test)
y_pred
This is a step that is mostly used in classification techniques. In this, we see the Accuracy of the trained model and plot the confusion matrix.
The confusion matrix is a table used to show the number of correct and incorrect predictions on a classification problem when the real values of the Test set are known. It has the format shown in the sketch below.
The True values are the number of correct predictions made.
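Because the figure showing the matrix layout is not reproduced here, the sketch below spells out how scikit-learn arranges the entries for binary labels 0/1 (this ordering is standard for sklearn's confusion_matrix):
# confusion_matrix(y_test, y_pred) returns:
# [[TN, FP],
#  [FN, TP]]
# So the result obtained below, array([[11, 0], [3, 11]]), reads as
# 11 true negatives, 0 false positives, 3 false negatives and 11 true positives,
# i.e. 22 correct predictions out of 25.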
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

from sklearn.metrics import accuracy_score
print("Accuracy : ", accuracy_score(y_test, y_pred))
cm

>>Accuracy :  0.88
>>array([[11,  0],
        [ 3, 11]])
From the above confusion matrix, we infer that, out of 25 test set data, 22 were correctly classified and 3 were incorrectly classified. Pretty good for a start, isn’t it?
In this step, a Pandas DataFrame is created to compare the classified values of both the original Test set (y_test) and the predicted results (y_pred).
df = pd.DataFrame({'Real Values':y_test, 'Predicted Values':y_pred})
df

>>
Real Values   Predicted Values
1             1
0             0
0             0
0             0
1             1
1             1
1             0
1             1
0             0
1             1
0             0
0             0
0             0
1             1
1             0
1             1
0             0
1             1
1             0
1             1
0             0
0             0
1             1
1             1
0             0
Though this comparison may not be as useful as it was with Regression, it lets us see that the model classifies the test set values with a decent accuracy of 88%, as calculated above.
In this last step, we visualize the results of the Logistic Regression model on a graph that is plotted along with the two regions.
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression')
plt.xlabel('DMV_Test_1')
plt.ylabel('DMV_Test_2')
plt.legend()
plt.show()
In this graph, the value 1 (i.e., Yes) is plotted in green and the value 0 (i.e., No) is plotted in red, following the colormap used in the code above. The Logistic Regression decision boundary separates the two regions. Thus, any new point with its two scores (DMV_Test_1 and DMV_Test_2) can be plotted on the graph and, depending upon which region it falls in, the result (getting the driver’s license) can be classified as Yes or No.
As calculated above, we can see that there are three values in the test set that are wrongly classified as “No” as they are on the other side of the line.
Thus, in this story, we have successfully built a Logistic Regression model that predicts whether a person will get the driving license from their written examination scores, and we have visualized the results.
I am also attaching the link to my GitHub repository where you can download this Google Colab notebook and the data files for your reference.
github.com
You can also find the explanation of the program for other Classification models below:
Logistic Regression
K-Nearest Neighbors (KNN) Classification (Coming Soon)
Support Vector Machine (SVM) Classification (Coming Soon)
Naive Bayes Classification (Coming Soon)
Random Forest Classification (Coming Soon)
We will come across the more complex models of Regression, Classification and Clustering in the upcoming articles. Till then, Happy Machine Learning!
|
[
{
"code": null,
"e": 528,
"s": 171,
"text": "In the previous stories, I had given an explanation of the program for implementation of various Regression models. As we move on to Classification, isn’t it surprising as to why the title of this algorithm still has the name, Regression. Let us understand the mechanism of the Logistic Regression and learn to build a classification model with an example."
},
{
"code": null,
"e": 792,
"s": 528,
"text": "Logistic Regression is a classification model that is used when the dependent variable (output) is in the binary format such as 0 (False) or 1 (True). Examples include such as predicting if there is a tumor (1) or not (0) and if an email is a spam (1) or not (0)."
},
{
"code": null,
"e": 1193,
"s": 792,
"text": "The logistic function, also called as sigmoid function was initially used by statisticians to describe properties of population growth in ecology. The sigmoid function is a mathematical function used to map the predicted values to probabilities. Logistic Regression has an S-shaped curve and can take values between 0 and 1 but never exactly at those limits. It has the formula of 1 / (1 + e^-value)."
},
{
"code": null,
"e": 1725,
"s": 1193,
"text": "Logistic Regression is an extension of the Linear Regression model. Let us understand this with a simple example. If we want to classify if an email is a spam or not, if we apply a Linear Regression model, we would get only continuous values between 0 and 1 such as 0.4, 0.7 etc. On the other hand, the Logistic Regression extends this linear regression model by setting a threshold at 0.5, hence the data point will be classified as spam if the output value is greater than 0.5 and not spam if the output value is lesser than 0.5."
},
{
"code": null,
"e": 1826,
"s": 1725,
"text": "In this way, we can use Logistic Regression to classification problems and get accurate predictions."
},
{
"code": null,
"e": 2205,
"s": 1826,
"text": "To apply the Logistic Regression model in practical usage, let us consider a DMV Test dataset which consists of three columns. The first two columns consist of the two DMV written tests (DMV_Test_1 and DMV_Test_2) which are the independent variables and the last column consists of the dependent variable, Results which denote that the driver has got the license (1) or not (0)."
},
{
"code": null,
"e": 2442,
"s": 2205,
"text": "In this, we have to build a Logistic Regression model using this data to predict if a driver who has taken the two DMV written tests will get the license or not using those marks obtained in their written tests and classify the results."
},
{
"code": null,
"e": 2560,
"s": 2442,
"text": "As always, the first step will always include importing the libraries which are the NumPy, Pandas and the Matplotlib."
},
{
"code": null,
"e": 2629,
"s": 2560,
"text": "import numpy as npimport matplotlib.pyplot as pltimport pandas as pd"
},
{
"code": null,
"e": 2897,
"s": 2629,
"text": "In this step, we shall get the dataset from my GitHub repository as “DMVWrittenTests.csv”. The variable X will store the two “DMV Tests ”and the variable Y will store the final output as “Results”. The dataset.head(5)is used to visualize the first 5 rows of the data."
},
{
"code": null,
"e": 3260,
"s": 2897,
"text": "dataset = pd.read_csv('https://raw.githubusercontent.com/mk-gurucharan/Classification/master/DMVWrittenTests.csv')X = dataset.iloc[:, [0, 1]].valuesy = dataset.iloc[:, 2].valuesdataset.head(5)>>DMV_Test_1 DMV_Test_2 Results34.623660 78.024693 030.286711 43.894998 035.847409 72.902198 060.182599 86.308552 179.032736 75.344376 1"
},
{
"code": null,
"e": 3621,
"s": 3260,
"text": "In this step, we have to split the dataset into the Training set, on which the Logistic Regression model will be trained and the Test set, on which the trained model will be applied to classify the results. In this the test_size=0.25 denotes that 25% of the data will be kept as the Test set and the remaining 75% will be used for training as the Training set."
},
{
"code": null,
"e": 3767,
"s": 3621,
"text": "from sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)"
},
{
"code": null,
"e": 4214,
"s": 3767,
"text": "This is an additional step that is used to normalize the data within a particular range. It also aids in speeding up the calculations. As the data is widely varying, we use this function to limit the range of the data within a small limit ( -2,2). For example, the score 62.0730638 is normalized to -0.21231162 and the score 96.51142588 is normalized to 1.55187648. In this way, the scores of X_train and X_test are normalized to a smaller range."
},
{
"code": null,
"e": 4348,
"s": 4214,
"text": "from sklearn.preprocessing import StandardScalersc = StandardScaler()X_train = sc.fit_transform(X_train)X_test = sc.transform(X_test)"
},
{
"code": null,
"e": 4550,
"s": 4348,
"text": "In this step, the class LogisticRegression is imported and is assigned to the variable “classifier”. The classifier.fit() function is fitted with X_train and Y_train on which the model will be trained."
},
{
"code": null,
"e": 4667,
"s": 4550,
"text": "from sklearn.linear_model import LogisticRegressionclassifier = LogisticRegression()classifier.fit(X_train, y_train)"
},
{
"code": null,
"e": 4812,
"s": 4667,
"text": "In this step, the classifier.predict() function is used to predict the values for the Test set and the values are stored to the variable y_pred."
},
{
"code": null,
"e": 4855,
"s": 4812,
"text": "y_pred = classifier.predict(X_test) y_pred"
},
{
"code": null,
"e": 5001,
"s": 4855,
"text": "This is a step that is mostly used in classification techniques. In this, we see the Accuracy of the trained model and plot the confusion matrix."
},
{
"code": null,
"e": 5198,
"s": 5001,
"text": "The confusion matrix is a table that is used to show the number of correct and incorrect predictions on a classification problem when the real values of the Test Set are known. It is of the format"
},
{
"code": null,
"e": 5258,
"s": 5198,
"text": "The True values are the number of correct predictions made."
},
{
"code": null,
"e": 5491,
"s": 5258,
"text": "from sklearn.metrics import confusion_matrixcm = confusion_matrix(y_test, y_pred)from sklearn.metrics import accuracy_score print (\"Accuracy : \", accuracy_score(y_test, y_pred))cm>>Accuracy : 0.88>>array([[11, 0], [ 3, 11]])"
},
{
"code": null,
"e": 5663,
"s": 5491,
"text": "From the above confusion matrix, we infer that, out of 25 test set data, 22 were correctly classified and 3 were incorrectly classified. Pretty good for a start, isn’t it?"
},
{
"code": null,
"e": 5815,
"s": 5663,
"text": "In this step, a Pandas DataFrame is created to compare the classified values of both the original Test set (y_test) and the predicted results (y_pred)."
},
{
"code": null,
"e": 6294,
"s": 5815,
"text": "df = pd.DataFrame({'Real Values':y_test, 'Predicted Values':y_pred})df>> Real Values Predicted Values1 10 00 00 01 11 11 01 10 01 10 00 00 01 11 01 10 01 11 01 10 00 01 11 10 0"
},
{
"code": null,
"e": 6498,
"s": 6294,
"text": "Though this visualization may not be of much use as it was with Regression, from this, we can see that the model is able to classify the test set values with a decent accuracy of 88% as calculated above."
},
{
"code": null,
"e": 6630,
"s": 6498,
"text": "In this last step, we visualize the results of the Logistic Regression model on a graph that is plotted along with the two regions."
},
{
"code": null,
"e": 7401,
"s": 6630,
"text": "from matplotlib.colors import ListedColormapX_set, y_set = X_test, y_testX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green')))plt.xlim(X1.min(), X1.max())plt.ylim(X2.min(), X2.max())for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)plt.title('Logistic Regression')plt.xlabel('DMV_Test_1')plt.ylabel('DMV_Test_2')plt.legend()plt.show()"
},
{
"code": null,
"e": 7798,
"s": 7401,
"text": "In this graph, the value 1 (i.e, Yes) is plotted in “Red” color and the value 0 (i.e, No) is plotted in “Green” color. The Logistic Regression line separates the two regions. Thus, any data with the two data points (DMV_Test_1 and DMV_Test_2) given, can be plotted on the graph and depending upon which region if falls in, the result (Getting the Driver’s License) can be classified as Yes or No."
},
{
"code": null,
"e": 7953,
"s": 7798,
"text": "As calculated above, we can see that there are three values in the test set that are wrongly classified as “No” as they are on the other side of the line."
},
{
"code": null,
"e": 8171,
"s": 7953,
"text": "Thus in this story, we have successfully been able to build a Logistic Regression model that is able to predict if a person is able to get the driving license from their written examinations and visualize the results."
},
{
"code": null,
"e": 8313,
"s": 8171,
"text": "I am also attaching the link to my GitHub repository where you can download this Google Colab notebook and the data files for your reference."
},
{
"code": null,
"e": 8324,
"s": 8313,
"text": "github.com"
},
{
"code": null,
"e": 8412,
"s": 8324,
"text": "You can also find the explanation of the program for other Classification models below:"
},
{
"code": null,
"e": 8432,
"s": 8412,
"text": "Logistic Regression"
},
{
"code": null,
"e": 8487,
"s": 8432,
"text": "K-Nearest Neighbors (KNN) Classification (Coming Soon)"
},
{
"code": null,
"e": 8545,
"s": 8487,
"text": "Support Vector Machine (SVM) Classification (Coming Soon)"
},
{
"code": null,
"e": 8586,
"s": 8545,
"text": "Naive Bayes Classification (Coming Soon)"
},
{
"code": null,
"e": 8629,
"s": 8586,
"text": "Random Forest Classification (Coming Soon)"
}
] |
JSF - f:attribute
|
The f:attribute tag provides an option to pass an attribute value to a component, or a parameter to a component, via an action listener.
<h:commandButton id = "submit"
actionListener = "#{userData.attributeListener}" action = "result">
<f:attribute name = "value" value = "Show Message" />
<f:attribute name = "username" value = "JSF 2.0 User" />
</h:commandButton>
name: The name of the attribute to set
value: The value of the attribute
Let us create a test JSF application to test the above tag.
package com.tutorialspoint.test;
import java.io.Serializable;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import javax.faces.event.ActionEvent;
@ManagedBean(name = "userData", eager = true)
@SessionScoped
public class UserData implements Serializable {
private static final long serialVersionUID = 1L;
public String data = "1";
public String getData() {
return data;
}
public void setData(String data) {
this.data = data;
}
public void attributeListener(ActionEvent event) {
data = (String)event.getComponent().getAttributes().get("username");
}
}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns = "http://www.w3.org/1999/xhtml"
   xmlns:f = "http://java.sun.com/jsf/core"
   xmlns:h = "http://java.sun.com/jsf/html">
<head>
<title>JSF Tutorial!</title>
</head>
<body>
<h2>f:attribute example</h2>
<hr />
<h:form>
<h:commandButton id = "submit"
actionListener = "#{userData.attributeListener}" action = "result">
<f:attribute name = "value" value = "Show Message" />
<f:attribute name = "username" value = "JSF 2.0 User" />
</h:commandButton>
</h:form>
</body>
</html>
<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns = "http://www.w3.org/1999/xhtml"
xmlns:f = "http://java.sun.com/jsf/core"
xmlns:h = "http://java.sun.com/jsf/html"
xmlns:ui = "http://java.sun.com/jsf/facelets">
<head>
<title>JSF Tutorial!</title>
</head>
<h:body>
<h2>Result</h2>
<hr />
#{userData.data}
</h:body>
</html>
Once you are ready with all the changes, compile and run the application as we did in the JSF - First Application chapter. If everything is fine with your application, this will produce the following result.
Press Show Message button and you'll see the following result.
|
[
{
"code": null,
"e": 2081,
"s": 1952,
"text": "The h:attribute tag provides option to pass a attribute value to a component, or a parameter to a component via action listener."
},
{
"code": null,
"e": 2325,
"s": 2081,
"text": "<h:commandButton id = \"submit\" \n actionListener = \"#{userData.attributeListener}\" action = \"result\"> \n <f:attribute name = \"value\" value = \"Show Message\" />\t\t\t\t\n <f:attribute name = \"username\" value = \"JSF 2.0 User\" />\n</h:commandButton>"
},
{
"code": null,
"e": 2330,
"s": 2325,
"text": "name"
},
{
"code": null,
"e": 2363,
"s": 2330,
"text": "The name of the attribute to set"
},
{
"code": null,
"e": 2369,
"s": 2363,
"text": "value"
},
{
"code": null,
"e": 2396,
"s": 2369,
"text": "The value of the attribute"
},
{
"code": null,
"e": 2456,
"s": 2396,
"text": "Let us create a test JSF application to test the above tag."
},
{
"code": null,
"e": 3047,
"s": 2456,
"text": "package com.tutorialspoint.test;\n\nimport java.io.Serializable;\n\nimport javax.faces.bean.ManagedBean;\nimport javax.faces.bean.SessionScoped;\n\n@ManagedBean(name = \"userData\", eager = true)\n@SessionScoped\npublic class UserData implements Serializable {\n private static final long serialVersionUID = 1L;\n public String data = \"1\";\n\n public String getData() {\n return data;\n }\n\n public void setData(String data) {\n this.data = data;\n }\n\n public void attributeListener(ActionEvent event) {\n data = (String)event.getComponent().getAttributes().get(\"username\");\t\n }\n}"
},
{
"code": null,
"e": 3701,
"s": 3047,
"text": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \n \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"> \n\n<html xmlns = \"http://www.w3.org/1999/xhtml\"> \n <head> \n <title>JSF Tutorial!</title> \n </head> \n \n <body> \n <h2>f:attribute example</h2> \n <hr /> \n \n <h:form> \n <h:commandButton id = \"submit\" \n actionListener = \"#{userData.attributeListener}\" action = \"result\"> \n <f:attribute name = \"value\" value = \"Show Message\" /> \n <f:attribute name = \"username\" value = \"JSF 2.0 User\" /> \n </h:commandButton> \n </h:form> \n \n </body> \n</html> "
},
{
"code": null,
"e": 4212,
"s": 3701,
"text": "<?xml version = \"1.0\" encoding = \"UTF-8\"?>\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \n\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n<html xmlns = \"http://www.w3.org/1999/xhtml\"\n xmlns:f = \"http://java.sun.com/jsf/core\" \n xmlns:h = \"http://java.sun.com/jsf/html\"\n xmlns:ui = \"http://java.sun.com/jsf/facelets\">\n \n <head>\n <title>JSF Tutorial!</title>\n </head>\n \n <h:body>\n <h2>Result</h2>\n <hr />\n #{userData.data}\n </h:body>\n</html> "
},
{
"code": null,
"e": 4428,
"s": 4212,
"text": "Once you are ready with all the changes done, let us compile and run the application as we did in JSF - First Application chapter. If everything is fine with your application, this will produce the following result."
},
{
"code": null,
"e": 4491,
"s": 4428,
"text": "Press Show Message button and you'll see the following result."
},
{
"code": null,
"e": 4526,
"s": 4491,
"text": "\n 37 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 4541,
"s": 4526,
"text": " Chaand Sheikh"
},
{
"code": null,
"e": 4548,
"s": 4541,
"text": " Print"
},
{
"code": null,
"e": 4559,
"s": 4548,
"text": " Add Notes"
}
] |
AI Generates Taylor Swift’s Song Lyrics | by Mohammed AL-Ma'amari | Towards Data Science
|
A few days ago, I started to learn about LSTM RNNs (Long Short Term Memory Recurrent Neural Networks), and I thought it would be a good idea to build a project using them.
There is a multitude of applications of LSTM RNNs. I decided to go with natural language generation because I always wanted to learn how to process text data, and it would be entertaining to see text generated by a neural network, so I got the idea of generating Taylor Swift lyrics.
If you don’t know them, LSTM recurrent neural networks are networks with loops in them, allowing information to persist, and they use a special type of node called an LSTM (Long Short Term Memory) unit.
An LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals, and the three gates regulate the flow of information into and out of the cell. If you want to know more about LSTM recurrent neural networks, visit: Understanding LSTM Networks or Long short-term memory.
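For reference (these equations are standard and are not written out in the original article), the gates described above compute the following at each time step t, where sigma is the sigmoid function and (*) denotes element-wise multiplication:
f_t = sigma(W_f x_t + U_f h_{t-1} + b_f)          (forget gate)
i_t = sigma(W_i x_t + U_i h_{t-1} + b_i)          (input gate)
o_t = sigma(W_o x_t + U_o h_{t-1} + b_o)          (output gate)
c~_t = tanh(W_c x_t + U_c h_{t-1} + b_c)          (candidate cell state)
c_t = f_t (*) c_{t-1} + i_t (*) c~_t              (cell state update)
h_t = o_t (*) tanh(c_t)                           (hidden state / output)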
LSTM recurrent neural networks are used in many applications; the following are the most popular ones:
Language modeling
Text classification
Dialog systems
Natural language generation
More applications
Now that we have learned some essential information about LSTMs and RNNs, we will start implementing the idea (a Taylor Swift lyrics generator).
I will use two ways to build the model :
From scratch
Using a Python module called textgenrnn
You can try and run the code in [This Notebook]; I highly recommend you at least take a look at the Colab notebook.
To train the LSTM model, we need a dataset of Taylor Swift song lyrics. After searching for one, I found this great dataset on Kaggle.
Let’s take a look at it :
first, import all the needed libraries for our project:
# Import the dependencies
import numpy as np
import pandas as pd
import sys

from keras.models import Sequential
from keras.layers import LSTM, Activation, Flatten, Dropout, Dense, Embedding, TimeDistributed, CuDNNLSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
Load the dataset :
# Load the dataset
dataset = pd.read_csv('taylor_swift_lyrics.csv', encoding = "latin1")
dataset.head()
Concatenate the lines of each song to get each song by its own in one string:
def processFirstLine(lyrics, songID, songName, row):
    lyrics.append(row['lyric'] + '\n')
    songID.append(row['year']*100 + row['track_n'])
    songName.append(row['track_title'])
    return lyrics, songID, songName

# define empty lists for the lyrics, songID, songName
lyrics = []
songID = []
songName = []

# songNumber indicates the song number in the dataset
songNumber = 1
# i indicates the song number
i = 0
isFirstLine = True

# Iterate through every lyrics line and join them together for each song independently
for index, row in dataset.iterrows():
    if songNumber == row['track_n']:
        if isFirstLine:
            lyrics, songID, songName = processFirstLine(lyrics, songID, songName, row)
            isFirstLine = False
        else:
            # if we are still in the same song, keep joining the lyrics lines
            lyrics[i] += row['lyric'] + '\n'
    # When it's done joining a song's lyrics lines, go to the next song:
    else:
        lyrics, songID, songName = processFirstLine(lyrics, songID, songName, row)
        songNumber = row['track_n']
        i += 1
Define a new pandas DataFrame to save songID , songName , Lyric
lyrics_data = pd.DataFrame({'songID':songID, 'songName':songName, 'lyrics':lyrics })
Now save the lyrics in a text file to use it in the LSTM RNN :
# Save Lyrics in .txt file
with open('lyricsText.txt', 'w', encoding="utf-8") as filehandle:
    for listitem in lyrics:
        filehandle.write('%s\n' % listitem)
After getting the wanted data from the dataset, we need to preprocess it.
# Load the dataset and convert it to lowercase :
textFileName = 'lyricsText.txt'
raw_text = open(textFileName, encoding = 'UTF-8').read()
raw_text = raw_text.lower()
Make two dictionaries , one to convert chars to ints, the other to convert ints back to chars :
# Mapping chars to ints :
chars = sorted(list(set(raw_text)))
int_chars = dict((i, c) for i, c in enumerate(chars))
chars_int = dict((i, c) for c, i in enumerate(chars))
Get number of chars and vocab in our text :
n_chars = len(raw_text)
n_vocab = len(chars)
print('Total Characters : ', n_chars)  # number of all the characters in lyricsText.txt
print('Total Vocab : ', n_vocab)       # number of unique characters
Make samples and labels to feed the LSTM RNN
# process the dataset:
seq_len = 100
data_X = []
data_y = []

for i in range(0, n_chars - seq_len, 1):
    # Input sequence (will be used as samples)
    seq_in = raw_text[i:i+seq_len]
    # Output sequence (will be used as target)
    seq_out = raw_text[i + seq_len]
    # Store samples in data_X
    data_X.append([chars_int[char] for char in seq_in])
    # Store targets in data_y
    data_y.append(chars_int[seq_out])

n_patterns = len(data_X)
print('Total Patterns : ', n_patterns)
prepare the samples and labels to be ready to go into our model.
Reshape the samples
Normalize them
One hot encode the output targets
# Reshape X to be suitable to go into LSTM RNN :
X = np.reshape(data_X, (n_patterns, seq_len, 1))
# Normalizing input data :
X = X / float(n_vocab)
# One hot encode the output targets :
y = np_utils.to_categorical(data_y)
After processing the dataset, we will start building our LSTM RNN model.
We will start by determining how many layers our model will have, and how many nodes each layer will have:
LSTM_layer_num = 4              # number of LSTM layers
layer_size = [256,256,256,256]  # number of nodes in each layer
Define a sequential model :
model = Sequential()
The main difference is that the plain LSTM layer uses a generic implementation, while CuDNNLSTM uses NVIDIA's cuDNN-optimized GPU kernels; that is why CuDNNLSTM is much faster than LSTM, roughly 15x faster.
This is the reason I used CuDNNLSTM instead of LSTM.
Note: make sure to change the runtime setting of Colab to use its GPU.
Add an input layer :
model.add(CuDNNLSTM(layer_size[0], input_shape =(X.shape[1], X.shape[2]), return_sequences = True))
Add some hidden layers :
for i in range(1, LSTM_layer_num):
    model.add(CuDNNLSTM(layer_size[i], return_sequences=True))
Flatten the data that is coming from the last hidden layer to input it to the output layer :
model.add(Flatten())
Add an output layer and define its activation function to be ‘softmax’
and then compile the model with the next params :
loss = ‘categorical_crossentropy’
optimizer = ‘adam’
model.add(Dense(y.shape[1]))
model.add(Activation('softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
Print a summary of the model to see some details :
model.summary()
After we defined the model , we will define the needed callbacks.
A callback is a function that is called after every epoch
In our case, we will use the checkpoint callback; what a checkpoint callback does is save the weights of the model every time the model gets better.
# Configure the checkpoint :
checkpoint_name = 'Weights-LSTM-improvement-{epoch:03d}-{loss:.5f}-bigger.hdf5'
checkpoint = ModelCheckpoint(checkpoint_name, monitor='loss', verbose = 1, save_best_only = True, mode ='min')
callbacks_list = [checkpoint]
A model can’t do a thing if it did not train.
As they say “No train no gain “
Feel free to tweak model_params to get a better model
# Fit the model :
model_params = {'epochs':30,
                'batch_size':128,
                'callbacks':callbacks_list,
                'verbose':1,
                'validation_split':0.2,
                'validation_data':None,
                'shuffle': True,
                'initial_epoch':0,
                'steps_per_epoch':None,
                'validation_steps':None}

model.fit(X,
          y,
          epochs = model_params['epochs'],
          batch_size = model_params['batch_size'],
          callbacks = model_params['callbacks'],
          verbose = model_params['verbose'],
          validation_split = model_params['validation_split'],
          validation_data = model_params['validation_data'],
          shuffle = model_params['shuffle'],
          initial_epoch = model_params['initial_epoch'],
          steps_per_epoch = model_params['steps_per_epoch'],
          validation_steps = model_params['validation_steps'])
We can see that some weight files have been saved; we can use such files to load the trained weights into an untrained model (i.e., we don't have to train a model every time we want to use it).
# Load weights file :
wights_file = './models/Weights-LSTM-improvement-004-2.49538-bigger.hdf5' # weights file path
model.load_weights(wights_file)
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
Now, after training the model, we can use it to generate fake Taylor Swift lyrics.
We first pick a random seed, then use it to generate lyrics character by character.
# set a random seed :
start = np.random.randint(0, len(data_X)-1)
pattern = data_X[start]
print('Seed : ')
print("\"", ''.join([int_chars[value] for value in pattern]), "\"\n")

# How many characters you want to generate
generated_characters = 300

# Generate Characters :
for i in range(generated_characters):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose = 0)
    index = np.argmax(prediction)
    result = int_chars[index]
    #seq_in = [int_chars[value] for value in pattern]
    sys.stdout.write(result)
    pattern.append(index)
    pattern = pattern[1:len(pattern)]

print('\nDone')
Output :
Seed : " once, i've been waiting, waiting ooh whoa, ooh whoa and all at once, you are the one, i have been w " eu h mool shoea a eir, bo ly lean on the sast is tigm's the noen uo doy, fo shey stant tas you fot you srart aoo't you tein so my liost i spaye somethppel' cua iy yas tn mu, io' me ohehip in the uorlirs tiines ho a ban't teit dven aester, tee tame mnweiny you'd be pe k bet thing oe eowt the light i Done
You might have noticed that the generated lyrics are not realistic, and there are many spelling mistakes.
You can tweak some parameters and add a Dropout layer to avoid overfitting; then the model could get better at generating tolerable lyrics.
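As a rough illustration of where such a layer could go (this sketch is my own, not from the original article; the dropout rate of 0.2 is an arbitrary assumption), the model stack defined earlier could be rebuilt like this, reusing the Dropout layer already imported from keras.layers:
model = Sequential()
model.add(CuDNNLSTM(layer_size[0], input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.2))  # randomly drop 20% of activations to reduce overfitting
for i in range(1, LSTM_layer_num):
    model.add(CuDNNLSTM(layer_size[i], return_sequences=True))
    model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(y.shape[1]))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')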
But if you don't want to bother yourself with these steps, try using textgenrnn.
!pip install -q textgenrnn

from google.colab import files
from textgenrnn import textgenrnn
import os
model_cfg = {
    'rnn_size': 500,
    'rnn_layers': 12,
    'rnn_bidirectional': True,
    'max_length': 15,
    'max_words': 10000,
    'dim_embeddings': 100,
    'word_level': False,
}

train_cfg = {
    'line_delimited': True,
    'num_epochs': 100,
    'gen_epochs': 25,
    'batch_size': 750,
    'train_size': 0.8,
    'dropout': 0.0,
    'max_gen_length': 300,
    'validation': True,
    'is_csv': False
}
uploaded = files.upload()
all_files = [(name, os.path.getmtime(name)) for name in os.listdir()]
latest_file = sorted(all_files, key=lambda x: -x[1])[0][0]
model_name = '500nds_12Lrs_100epchs_Model'
textgen = textgenrnn(name=model_name)

train_function = textgen.train_from_file if train_cfg['line_delimited'] else textgen.train_from_largetext_file

train_function(
    file_path=latest_file,
    new_model=True,
    num_epochs=train_cfg['num_epochs'],
    gen_epochs=train_cfg['gen_epochs'],
    batch_size=train_cfg['batch_size'],
    train_size=train_cfg['train_size'],
    dropout=train_cfg['dropout'],
    max_gen_length=train_cfg['max_gen_length'],
    validation=train_cfg['validation'],
    is_csv=train_cfg['is_csv'],
    rnn_layers=model_cfg['rnn_layers'],
    rnn_size=model_cfg['rnn_size'],
    rnn_bidirectional=model_cfg['rnn_bidirectional'],
    max_length=model_cfg['max_length'],
    dim_embeddings=model_cfg['dim_embeddings'],
    word_level=model_cfg['word_level'])
print a summary of the model :
print(textgen.model.summary())
files.download('{}_weights.hdf5'.format(model_name))
files.download('{}_vocab.json'.format(model_name))
files.download('{}_config.json'.format(model_name))
textgen = textgenrnn(weights_path='6layers30EpochsModel_weights.hdf5',
                     vocab_path='6layers30EpochsModel_vocab.json',
                     config_path='6layers30EpochsModel_config.json')

generated_characters = 300
textgen.generate_samples(300)
textgen.generate_to_file('lyrics.txt', 300)
Some lyrics generated by a model created using textgenrnn :
i ' m not your friendsand it rains when you ' re not speakingbut you think tim mcgrawand i ' m pacing downi ' m comfortablei ' m not a storm in mindyou ' re not speakingand i ' m not a saint and i ' m standin ' t know you ' rei ' m wonderstruckand you ' re gayi ' ve been giving outbut i ' m just another picture to payyou ' re not asking myself , oh , i ' d go back to december , don ' t know youit ' s killing me like a chalkboardit ' s the one youcan ' t you ' re jumping into of you ' re not a last kissand i ' m just a girl , baby , i ' m alone for mei ' m not a little troublingwon ' t you think about a . steps , you roll the stars mindyou ' s killing me ? )and i ' m say i won ' t stay beautiful at onto the first pageyou ' s 2 : prettyand you said real?change makes and oh , who baby , oh , and you talk awayand you ' s all a minute , ghosts your arms pagethese senior making me tough , so hello growing up , we were liar , no one someone perfect day when i came' re not sorryyou ' re an innocenton the outskirtsight , don ' t say a house and he ' roundshe ' re thinking to december all that baby , all everything nowand let me when you oh , what to come back my dressalwaysi close both young beforeat ?yeah
We saw how easy and convenient it is to use textgenrnn. Yes, the lyrics are still not realistic, but there are far fewer spelling mistakes than with the model we built from scratch.
Another good thing about textgenrnn is that you don't have to deal with any dataset processing: just upload the text dataset and sit down with a cup of coffee while you watch your model train and get better.
Now that you have learned how to build an LSTM RNN from scratch to generate text, and also how to use Python modules such as textgenrnn, you can do many things with this knowledge:
Try to use other datasets (Wikipedia articles, William Shakespeare's works, etc.) to generate novels or articles.
Use LSTM RNNs in applications other than text generation.
Read more about LSTM RNN
Text Generation With LSTM Recurrent Neural Networks in Python with Keras
Applied Introduction to LSTMs with GPU for text generation
Generating Text Using LSTM RNN
textgenrnn
Train a Text-Generating Neural Network for Free with textgenrnn
Understanding LSTM Networks
Long short-term memory
You can follow me on Twitter @ModMaamari
Deep Neural Networks for Regression Problems
Introduction to Random Forest Algorithm with Python
Machine Learning Crash Course with TensorFlow APIs Summary
How To Make A CNN Using Tensorflow and Keras ?
How to Choose the Best Machine Learning Model ?
|
[
{
"code": null,
"e": 341,
"s": 172,
"text": "A few days ago, I started to learn LSTM RNN (Long Short Term Memory Recurrent Neural Networks), and I thought that it would be a good idea if I make a project using it."
},
{
"code": null,
"e": 622,
"s": 341,
"text": "There is a multitude of applications of LSTM RNN, I decided to go with natural language generation as I always wanted to learn how to process text data, and it will be entertaining to see texts generated by neural networks, so I got this idea about generating Taylor Swift lyrics."
},
{
"code": null,
"e": 813,
"s": 622,
"text": "If you don’t know, LSTM recurrent neural networks are networks with loops in them, allowing information to persist, and they have a special type of nodes called LSTM(Long Short Term Memory)."
},
{
"code": null,
"e": 1154,
"s": 813,
"text": "LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell. If you want to know more about LSTM Recurrent Neural Networks visit : Understanding LSTM Networks or Long short-term memory"
},
{
"code": null,
"e": 1259,
"s": 1154,
"text": "LSTM Recurrent Neural Networks are used in many applications , the following are the most popular ones :"
},
{
"code": null,
"e": 1277,
"s": 1259,
"text": "Language modeling"
},
{
"code": null,
"e": 1297,
"s": 1277,
"text": "Text classification"
},
{
"code": null,
"e": 1312,
"s": 1297,
"text": "Dialog systems"
},
{
"code": null,
"e": 1340,
"s": 1312,
"text": "Natural language generation"
},
{
"code": null,
"e": 1358,
"s": 1340,
"text": "More applications"
},
{
"code": null,
"e": 1496,
"s": 1358,
"text": "Now, after we learned some essential information about LSTM and RNN , we will start implementing the idea (Taylor Swift Lyrics Generator)"
},
{
"code": null,
"e": 1537,
"s": 1496,
"text": "I will use two ways to build the model :"
},
{
"code": null,
"e": 1550,
"s": 1537,
"text": "From scratch"
},
{
"code": null,
"e": 1590,
"s": 1550,
"text": "Using a Python module called textgenrnn"
},
{
"code": null,
"e": 1708,
"s": 1590,
"text": "You can try and run the code in [This Notebook], I highly recommend you to at least take a look at the Colab Notebook"
},
{
"code": null,
"e": 1838,
"s": 1708,
"text": "To train the LSTM model we need a dataset of Taylor songs’ lyrics. After searching for it, I found this great dataset in Kaggle ."
},
{
"code": null,
"e": 1864,
"s": 1838,
"text": "Let’s take a look at it :"
},
{
"code": null,
"e": 1920,
"s": 1864,
"text": "first, import all the needed libraries for our project:"
},
{
"code": null,
"e": 2209,
"s": 1920,
"text": "# Import the dependenciesimport numpy as npimport pandas as pdimport sys from keras.models import Sequentialfrom keras.layers import LSTM, Activation, Flatten, Dropout, Dense, Embedding, TimeDistributed, CuDNNLSTMfrom keras.callbacks import ModelCheckpointfrom keras.utils import np_utils"
},
{
"code": null,
"e": 2228,
"s": 2209,
"text": "Load the dataset :"
},
{
"code": null,
"e": 2329,
"s": 2228,
"text": "#Load the datasetdataset = pd.read_csv('taylor_swift_lyrics.csv', encoding = \"latin1\")dataset.head()"
},
{
"code": null,
"e": 2407,
"s": 2329,
"text": "Concatenate the lines of each song to get each song by its own in one string:"
},
{
"code": null,
"e": 3475,
"s": 2407,
"text": "def processFirstLine(lyrics, songID, songName, row): lyrics.append(row['lyric'] + '\\n') songID.append( row['year']*100+ row['track_n']) songName.append(row['track_title']) return lyrics,songID,songName# define empty lists for the lyrics , songID , songName lyrics = []songID = []songName = []# songNumber indicates the song number in the datasetsongNumber = 1# i indicates the song numberi = 0isFirstLine = True# Iterate through every lyrics line and join them together for each song independently for index,row in dataset.iterrows(): if(songNumber == row['track_n']): if (isFirstLine): lyrics,songID,songName = processFirstLine(lyrics,songID,songName,row) isFirstLine = False else : #if we still in the same song , keep joining the lyrics lines lyrics[i] += row['lyric'] + '\\n' #When it's done joining a song's lyrics lines , go to the next song : else : lyrics,songID,songName = processFirstLine(lyrics,songID,songName,row) songNumber = row['track_n'] i+=1"
},
{
"code": null,
"e": 3539,
"s": 3475,
"text": "Define a new pandas DataFrame to save songID , songName , Lyric"
},
{
"code": null,
"e": 3624,
"s": 3539,
"text": "lyrics_data = pd.DataFrame({'songID':songID, 'songName':songName, 'lyrics':lyrics })"
},
{
"code": null,
"e": 3687,
"s": 3624,
"text": "Now save the lyrics in a text file to use it in the LSTM RNN :"
},
{
"code": null,
"e": 3850,
"s": 3687,
"text": "# Save Lyrics in .txt filewith open('lyricsText.txt', 'w',encoding=\"utf-8\") as filehandle: for listitem in lyrics: filehandle.write('%s\\n' % listitem)"
},
{
"code": null,
"e": 3924,
"s": 3850,
"text": "After getting the wanted data from the dataset, we need to preprocess it."
},
{
"code": null,
"e": 4087,
"s": 3924,
"text": "# Load the dataset and convert it to lowercase :textFileName = 'lyricsText.txt'raw_text = open(textFileName, encoding = 'UTF-8').read()raw_text = raw_text.lower()"
},
{
"code": null,
"e": 4183,
"s": 4087,
"text": "Make two dictionaries , one to convert chars to ints, the other to convert ints back to chars :"
},
{
"code": null,
"e": 4350,
"s": 4183,
"text": "# Mapping chars to ints :chars = sorted(list(set(raw_text)))int_chars = dict((i, c) for i, c in enumerate(chars))chars_int = dict((i, c) for c, i in enumerate(chars))"
},
{
"code": null,
"e": 4394,
"s": 4350,
"text": "Get number of chars and vocab in our text :"
},
{
"code": null,
"e": 4587,
"s": 4394,
"text": "n_chars = len(raw_text)n_vocab = len(chars)print(‘Total Characters : ‘ , n_chars) # number of all the characters in lyricsText.txtprint(‘Total Vocab : ‘, n_vocab) # number of unique characters"
},
{
"code": null,
"e": 4632,
"s": 4587,
"text": "Make samples and labels to feed the LSTM RNN"
},
{
"code": null,
"e": 5105,
"s": 4632,
"text": "# process the dataset:seq_len = 100data_X = []data_y = []for i in range(0, n_chars - seq_len, 1): # Input Sequeance(will be used as samples) seq_in = raw_text[i:i+seq_len] # Output sequence (will be used as target) seq_out = raw_text[i + seq_len] # Store samples in data_X data_X.append([chars_int[char] for char in seq_in]) # Store targets in data_y data_y.append(chars_int[seq_out])n_patterns = len(data_X)print( 'Total Patterns : ', n_patterns)"
},
{
"code": null,
"e": 5170,
"s": 5105,
"text": "prepare the samples and labels to be ready to go into our model."
},
{
"code": null,
"e": 5190,
"s": 5170,
"text": "Reshape the samples"
},
{
"code": null,
"e": 5205,
"s": 5190,
"text": "Normalize them"
},
{
"code": null,
"e": 5239,
"s": 5205,
"text": "One hot encode the output targets"
},
{
"code": null,
"e": 5456,
"s": 5239,
"text": "# Reshape X to be suitable to go into LSTM RNN :X = np.reshape(data_X , (n_patterns, seq_len, 1))# Normalizing input data :X = X/ float(n_vocab)# One hot encode the output targets :y = np_utils.to_categorical(data_y)"
},
{
"code": null,
"e": 5543,
"s": 5456,
"text": "After we finished processing the dataset , we will start building our LSTM RNN model ."
},
{
"code": null,
"e": 5650,
"s": 5543,
"text": "We will start by determining how many layers our model will has , and how many nodes each layer will has :"
},
{
"code": null,
"e": 5755,
"s": 5650,
"text": "LSTM_layer_num = 4 # number of LSTM layerslayer_size = [256,256,256,256] # number of nodes in each layer"
},
{
"code": null,
"e": 5783,
"s": 5755,
"text": "Define a sequential model :"
},
{
"code": null,
"e": 5804,
"s": 5783,
"text": "model = Sequential()"
},
{
"code": null,
"e": 5945,
"s": 5804,
"text": "The main difference is that LSTM uses the CPU and CuDNNLSTM uses the GPU , that’s why CuDNNLSTM is much faster than LSTM , it is x15 faster."
},
{
"code": null,
"e": 6009,
"s": 5945,
"text": "This is the reason that made me use CuDNNLTSM instead of LSTM ."
},
{
"code": null,
"e": 6083,
"s": 6009,
"text": "Note : make sure to change the run time setting of colab to use its GPU ."
},
{
"code": null,
"e": 6104,
"s": 6083,
"text": "Add an input layer :"
},
{
"code": null,
"e": 6204,
"s": 6104,
"text": "model.add(CuDNNLSTM(layer_size[0], input_shape =(X.shape[1], X.shape[2]), return_sequences = True))"
},
{
"code": null,
"e": 6229,
"s": 6204,
"text": "Add some hidden layers :"
},
{
"code": null,
"e": 6326,
"s": 6229,
"text": "for i in range(1,LSTM_layer_num) : model.add(CuDNNLSTM(layer_size[i], return_sequences=True))"
},
{
"code": null,
"e": 6419,
"s": 6326,
"text": "Flatten the data that is coming from the last hidden layer to input it to the output layer :"
},
{
"code": null,
"e": 6440,
"s": 6419,
"text": "model.add(Flatten())"
},
{
"code": null,
"e": 6511,
"s": 6440,
"text": "Add an output layer and define its activation function to be ‘softmax’"
},
{
"code": null,
"e": 6561,
"s": 6511,
"text": "and then compile the model with the next params :"
},
{
"code": null,
"e": 6595,
"s": 6561,
"text": "loss = ‘categorical_crossentropy’"
},
{
"code": null,
"e": 6614,
"s": 6595,
"text": "optimizer = ‘adam’"
},
{
"code": null,
"e": 6743,
"s": 6614,
"text": "model.add(Dense(y.shape[1]))model.add(Activation('softmax'))model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')"
},
{
"code": null,
"e": 6794,
"s": 6743,
"text": "Print a summary of the model to see some details :"
},
{
"code": null,
"e": 6810,
"s": 6794,
"text": "model.summary()"
},
{
"code": null,
"e": 6876,
"s": 6810,
"text": "After we defined the model , we will define the needed callbacks."
},
{
"code": null,
"e": 6934,
"s": 6876,
"text": "A callback is a function that is called after every epoch"
},
{
"code": null,
"e": 7086,
"s": 6934,
"text": "in our case we will call the checkpoint callback , what a checkpoint callback does is saving the weights of the model every time the model gets better."
},
{
"code": null,
"e": 7333,
"s": 7086,
"text": "# Configure the checkpoint :checkpoint_name = 'Weights-LSTM-improvement-{epoch:03d}-{loss:.5f}-bigger.hdf5'checkpoint = ModelCheckpoint(checkpoint_name, monitor='loss', verbose = 1, save_best_only = True, mode ='min')callbacks_list = [checkpoint]"
},
{
"code": null,
"e": 7379,
"s": 7333,
"text": "A model can’t do a thing if it did not train."
},
{
"code": null,
"e": 7411,
"s": 7379,
"text": "As they say “No train no gain “"
},
{
"code": null,
"e": 7465,
"s": 7411,
"text": "Feel free to tweak model_params to get a better model"
},
{
"code": null,
"e": 8398,
"s": 7465,
"text": "# Fit the model :model_params = {'epochs':30, 'batch_size':128, 'callbacks':callbacks_list, 'verbose':1, 'validation_split':0.2, 'validation_data':None, 'shuffle': True, 'initial_epoch':0, 'steps_per_epoch':None, 'validation_steps':None}model.fit(X, y, epochs = model_params['epochs'], batch_size = model_params['batch_size'], callbacks= model_params['callbacks'], verbose = model_params['verbose'], validation_split = model_params['validation_split'], validation_data = model_params['validation_data'], shuffle = model_params['shuffle'], initial_epoch = model_params['initial_epoch'], steps_per_epoch = model_params['steps_per_epoch'], validation_steps = model_params['validation_steps'])"
},
{
"code": null,
"e": 8594,
"s": 8398,
"text": "We can see that some files have been downloaded, we can use such files to load the trained weights to be used in untrained models (i.e we don’t have to train a model every time we want to use it)"
},
{
"code": null,
"e": 8807,
"s": 8594,
"text": "# Load wights file :wights_file = './models/Weights-LSTM-improvement-004-2.49538-bigger.hdf5' # weights file pathmodel.load_weights(wights_file)model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')"
},
{
"code": null,
"e": 8892,
"s": 8807,
"text": "Now , after we trained the model ,we can use it to generate fake Taylor Swift lyrics"
},
{
"code": null,
"e": 8986,
"s": 8892,
"text": "We first pick a random seed , then we will use it to generate lyrics character by character ."
},
{
"code": null,
"e": 9628,
"s": 8986,
"text": "# set a random seed :start = np.random.randint(0, len(data_X)-1)pattern = data_X[start]print('Seed : ')print(\"\\\"\",''.join([int_chars[value] for value in pattern]), \"\\\"\\n\")# How many characters you want to generategenerated_characters = 300# Generate Charachters :for i in range(generated_characters): x = np.reshape(pattern, ( 1, len(pattern), 1)) x = x / float(n_vocab) prediction = model.predict(x,verbose = 0) index = np.argmax(prediction) result = int_chars[index] #seq_in = [int_chars[value] for value in pattern] sys.stdout.write(result) pattern.append(index) pattern = pattern[1:len(pattern)]print('\\nDone')"
},
{
"code": null,
"e": 9637,
"s": 9628,
"text": "Output :"
},
{
"code": null,
"e": 10057,
"s": 9637,
"text": "Seed : \" once, i've been waiting, waiting ooh whoa, ooh whoa and all at once, you are the one, i have been w \" eu h mool shoea a eir, bo ly lean on the sast is tigm's the noen uo doy, fo shey stant tas you fot you srart aoo't you tein so my liost i spaye somethppel' cua iy yas tn mu, io' me ohehip in the uorlirs tiines ho a ban't teit dven aester, tee tame mnweiny you'd be pe k bet thing oe eowt the light i Done"
},
{
"code": null,
"e": 10154,
"s": 10057,
"text": "You might noticed that the generated lyrics are not real , and there are many spelling mistakes."
},
{
"code": null,
"e": 10293,
"s": 10154,
"text": "You can tweak some parameters and add a Dropout layer to avoid overfitting ,then the model could be better at generating tolerable lyrics."
},
{
"code": null,
"e": 10389,
"s": 10293,
"text": "but if you are lazy and don’t want to bother yourself trying these steps , try using textgenrnn"
},
{
"code": null,
"e": 10488,
"s": 10389,
"text": "!pip install -q textgenrnnfrom google.colab import filesfrom textgenrnn import textgenrnnimport os"
},
{
"code": null,
"e": 10883,
"s": 10488,
"text": "model_cfg = { 'rnn_size': 500, 'rnn_layers': 12, 'rnn_bidirectional': True, 'max_length': 15, 'max_words': 10000, 'dim_embeddings': 100, 'word_level': False,}train_cfg = { 'line_delimited': True, 'num_epochs': 100, 'gen_epochs': 25, 'batch_size': 750, 'train_size': 0.8, 'dropout': 0.0, 'max_gen_length': 300, 'validation': True, 'is_csv': False}"
},
{
"code": null,
"e": 11036,
"s": 10883,
"text": "uploaded = files.upload()all_files = [(name, os.path.getmtime(name)) for name in os.listdir()]latest_file = sorted(all_files, key=lambda x: -x[1])[0][0]"
},
{
"code": null,
"e": 11844,
"s": 11036,
"text": "model_name = '500nds_12Lrs_100epchs_Model'textgen = textgenrnn(name=model_name)train_function = textgen.train_from_file if train_cfg['line_delimited'] else textgen.train_from_largetext_filetrain_function( file_path=latest_file, new_model=True, num_epochs=train_cfg['num_epochs'], gen_epochs=train_cfg['gen_epochs'], batch_size=train_cfg['batch_size'], train_size=train_cfg['train_size'], dropout=train_cfg['dropout'], max_gen_length=train_cfg['max_gen_length'], validation=train_cfg['validation'], is_csv=train_cfg['is_csv'], rnn_layers=model_cfg['rnn_layers'], rnn_size=model_cfg['rnn_size'], rnn_bidirectional=model_cfg['rnn_bidirectional'], max_length=model_cfg['max_length'], dim_embeddings=model_cfg['dim_embeddings'], word_level=model_cfg['word_level'])"
},
{
"code": null,
"e": 11875,
"s": 11844,
"text": "print a summary of the model :"
},
{
"code": null,
"e": 11906,
"s": 11875,
"text": "print(textgen.model.summary())"
},
{
"code": null,
"e": 12060,
"s": 11906,
"text": "files.download('{}_weights.hdf5'.format(model_name))files.download('{}_vocab.json'.format(model_name))files.download('{}_config.json'.format(model_name))"
},
{
"code": null,
"e": 12367,
"s": 12060,
"text": "textgen = textgenrnn(weights_path='6layers30EpochsModel_weights.hdf5', vocab_path='6layers30EpochsModel_vocab.json', config_path='6layers30EpochsModel_config.json')generated_characters = 300textgen.generate_samples(300)textgen.generate_to_file('lyrics.txt', 300)"
},
{
"code": null,
"e": 12427,
"s": 12367,
"text": "Some lyrics generated by a model created using textgenrnn :"
},
{
"code": null,
"e": 13644,
"s": 12427,
"text": "i ' m not your friendsand it rains when you ' re not speakingbut you think tim mcgrawand i ' m pacing downi ' m comfortablei ' m not a storm in mindyou ' re not speakingand i ' m not a saint and i ' m standin ' t know you ' rei ' m wonderstruckand you ' re gayi ' ve been giving outbut i ' m just another picture to payyou ' re not asking myself , oh , i ' d go back to december , don ' t know youit ' s killing me like a chalkboardit ' s the one youcan ' t you ' re jumping into of you ' re not a last kissand i ' m just a girl , baby , i ' m alone for mei ' m not a little troublingwon ' t you think about a . steps , you roll the stars mindyou ' s killing me ? )and i ' m say i won ' t stay beautiful at onto the first pageyou ' s 2 : prettyand you said real?change makes and oh , who baby , oh , and you talk awayand you ' s all a minute , ghosts your arms pagethese senior making me tough , so hello growing up , we were liar , no one someone perfect day when i came' re not sorryyou ' re an innocenton the outskirtsight , don ' t say a house and he ' roundshe ' re thinking to december all that baby , all everything nowand let me when you oh , what to come back my dressalwaysi close both young beforeat ?yeah"
},
{
"code": null,
"e": 13822,
"s": 13644,
"text": "We saw how easy and convenient it was using textgenrnn , yes the lyrics still not realistic, but there are much less spelling mistakes than the model that we built from scratch."
},
{
"code": null,
"e": 14029,
"s": 13822,
"text": "another good thing about textgenrnn is that one don’t have to deal with any dataset processing, just upload the text dataset and set down with a cup of coffee watching your model training and getting better"
},
{
"code": null,
"e": 14208,
"s": 14029,
"text": "Now, after you learned how to make a LSTM RNN from scratch to generate texts , and also how to use Pyhton modules such as textgenrnn you can do many things using this knowledge :"
},
{
"code": null,
"e": 14322,
"s": 14208,
"text": "Try to use other datasets (wikipedia articles , William Shakespeare novevls, etc) to generate novels or articles."
},
{
"code": null,
"e": 14380,
"s": 14322,
"text": "Use LSTM RNN in other applications than text generating ."
},
{
"code": null,
"e": 14405,
"s": 14380,
"text": "Read more about LSTM RNN"
},
{
"code": null,
"e": 14478,
"s": 14405,
"text": "Text Generation With LSTM Recurrent Neural Networks in Python with Keras"
},
{
"code": null,
"e": 14537,
"s": 14478,
"text": "Applied Introduction to LSTMs with GPU for text generation"
},
{
"code": null,
"e": 14568,
"s": 14537,
"text": "Generating Text Using LSTM RNN"
},
{
"code": null,
"e": 14579,
"s": 14568,
"text": "textgenrnn"
},
{
"code": null,
"e": 14643,
"s": 14579,
"text": "Train a Text-Generating Neural Network for Free with textgenrnn"
},
{
"code": null,
"e": 14671,
"s": 14643,
"text": "Understanding LSTM Networks"
},
{
"code": null,
"e": 14694,
"s": 14671,
"text": "Long short-term memory"
},
{
"code": null,
"e": 14735,
"s": 14694,
"text": "You can follow me on Twitter @ModMaamari"
},
{
"code": null,
"e": 14780,
"s": 14735,
"text": "Deep Neural Networks for Regression Problems"
},
{
"code": null,
"e": 14832,
"s": 14780,
"text": "Introduction to Random Forest Algorithm with Python"
},
{
"code": null,
"e": 14891,
"s": 14832,
"text": "Machine Learning Crash Course with TensorFlow APIs Summary"
},
{
"code": null,
"e": 14938,
"s": 14891,
"text": "How To Make A CNN Using Tensorflow and Keras ?"
}
] |
Data Science and Parallel Computing With Dask | Towards Data Science
|
In this post we discuss the basics of leveraging Dask in python, use it to execute some simple tasks that are trivial to parallelize (Embarrassingly Parallel), understand some of the most common possible use cases (data munging, data exploration, machine learning) and then touch upon some of the more complex workflows that can be built by combining different ML libraries with Dask. In this post, we’ll focus on running an embarrassingly parallel task with Dask.
What’s parallel computing?
You might have heard the phrase, but what it implies at its core, beyond all the complex algorithmic machinery, is this: you execute your work so that multiple steps happen at the same time, instead of always waiting for the previous task to finish. The level of difficulty rises with the nature of what you actually want to accomplish.
Operations or tasks which are independent of each other (i.e., there is no data dependency between them except at each task's start and end) are usually the easiest to parallelize and are termed embarrassingly parallel, while tasks involving a lot of data transfer or communication between start and end are difficult to parallelize.
As a data scientist/analyst you don't always have either the luxury or the skillset required for thinking about how to parallelize your task and implement a solution in some other language from the ground up — e.g. spending weeks, maybe months, to find out the optimal way to do a group by isn't what's expected from you!
What’s Dask?
Dask : A framework for parallelizing your computational tasks in Python
Dask is a library designed to help facilitate :
Data munging with very large datasets, and
Distribution of computation across lots of cores ( local cluster with multiple workers )or physical computers ( cluster with multiple nodes )
It is similar to Apache Spark in the functionality it provides, but it is tightly integrated into numpy and pandas, making it much easier to learn and utilize than Spark for users of those libraries.
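To give a flavour of that pandas-style integration, here is a minimal sketch (the file path and column names are made up for illustration) of how dask.dataframe mirrors the pandas API while building a lazy task graph:
import dask.dataframe as dd

# Lazily read a potentially larger-than-memory CSV (placeholder path).
df = dd.read_csv('data/large_file.csv')

# Familiar pandas-style operations only build a task graph...
result = df.groupby('category')['value'].mean()

# ...nothing is actually computed until we explicitly ask for it.
print(result.compute())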
Parallelizing a task, ANY task, is hard if you are the one building the framework from scratch for it. There are Python modules (e.g. multiprocessing) which help you skip this part and execute and manage (to some extent) different threads. Though, even this takes a lot of effort and a specific skillset from the user's perspective. Monitoring, logging, etc. also usually take a back seat in this scenario.
Dask abstracts away all the complex (though important) internals and gives you a clean API to parallelize your existing code within Python. Depending on how you have written your code, you may not even need to modify your existing functions for them to work with Dask!
Let’s get started on some basics of using Dask below and over the next few posts.
Getting Dask :
I would recommend one of the following choices:
Install Anaconda for your Python environment (my preferred choice; Dask comes installed with the base setup. You can also create a new environment to keep your Dask environment separate while you are learning, scheduling your jobs etc.)
conda create --name dask python=3.7 anaconda dask
Create a new python virtual environment and install Dask within it
Install Dask within your current environment
# Anaconda environment
conda install dask

# pip based installation
pip install "dask[complete]"
Create your Dask client :
import dask
from dask.distributed import Client, progress
from dask import delayed  # delayed is used in the examples further below

try:
    client.close()
    client = Client(threads_per_worker=1, n_workers=5, memory_limit='2GB')
except:
    client = Client(threads_per_worker=1, n_workers=5, memory_limit='2GB')

client
Here, we have created a local cluster, defined how many local workers to run and the resources to allocate to each worker. This client can run at most 5 tasks at a time (n_workers), and Dask handles all the details: defining individual threads, coordinating between different threads as one task finishes and the next one begins, transferring data from our environment to each worker and back, and so on.
Task: Calculate Mean and Standard Deviation for each category in a given dataset
Step 01: Create a sample simulation. In our case, I have simulated a dataset of k samples spread across g categories. The output of this simulation is the mean and standard deviation for each category.
import numpy as np
import pandas as pd

def simulate_dataset(i):
    g = 10
    k = 100000
    categories = np.random.choice(a=np.arange(g), size=k)
    values = [(j + 1) * np.random.random(k // g) for j in range(g)]
    values = np.concatenate(values)
    data = pd.DataFrame({'category': categories, 'values': values})
    data_out = data.groupby('category').apply(
        lambda df: [df.loc[:, 'values'].mean(), df.loc[:, 'values'].std()])
    return data_out
THIS function could be anything, any complex analysis/simulation task that you want — as long as it is self-contained. Each iteration can be run independently; it doesn’t require any communication with other threads and only needs its inputs at the beginning of the simulation.
Step 02: Run simulation in a for loop n times
%%timeit
results = {}
for i in range(n):
    results[i] = simulate_dataset(i)
Step 03: Run simulation with Dask n times
%%timeit
results_interim = {}
for i in range(n):
    results_interim[i] = delayed(simulate_dataset)(i)
results_scheduled = delayed(list)(results_interim)
results = results_scheduled.compute()
In this stand-alone example, we see a speedup of ~3 times. The speedup is much more apparent when you make the computation/simulation heavier (increase either the number of unique categories or the total sample size in our case).
The increase isn’t linear because of the overhead involved in launching and managing different workers. The heavier the computation on each worker is, the less impact this overhead has.
Visualizations for progress on Dask dashboard:
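If you want to find the dashboard for your own local cluster, the client object exposes the address it is being served on (a small sketch; the exact host and port will differ on your machine):
# Prints something like http://127.0.0.1:8787/status;
# open it in a browser to watch workers and task progress in real time.
print(client.dashboard_link)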
|
[
{
"code": null,
"e": 637,
"s": 172,
"text": "In this post we discuss the basics of leveraging Dask in python, use it to execute some simple tasks that are trivial to parallelize (Embarrassingly Parallel), understand some of the most common possible use cases (data munging, data exploration, machine learning) and then touch upon some of the more complex workflows that can be built by combining different ML libraries with Dask. In this post, we’ll focus on running an embarrassingly parallel task with Dask."
},
{
"code": null,
"e": 664,
"s": 637,
"text": "What’s parallel computing?"
},
{
"code": null,
"e": 1030,
"s": 664,
"text": "You might have heard the phrase, but what it actually implies at its core other than all the actual complex algorithmic stuff is this: You execute the work you want to do such that multiple steps happen at the same time instead of always waiting for the previous task to finish. The level of difficulty rises with the nature of what you actually want to accomplish."
},
{
"code": null,
"e": 1357,
"s": 1030,
"text": "Operations/tasks which are independent of each other i.e. except for the start and end for each task, there isn’t any data dependency — are usually easiest to parallelize and are termed as embarrassingly parallel, while tasks involving a lot of data transfer/communication between start and end — are difficult to parallelize."
},
{
"code": null,
"e": 1679,
"s": 1357,
"text": "As a data scientist/analyst you don’t always have either the luxury or the skillset required for thinking about how to parallelize your task and implement a solution in some other language from the ground up — e.g. spending weeks, may be months to find out the optimal way to do a group by isn’t what’s expected from you!"
},
{
"code": null,
"e": 1692,
"s": 1679,
"text": "What’s Dask?"
},
{
"code": null,
"e": 1764,
"s": 1692,
"text": "Dask : A framework for parallelizing your computational tasks in Python"
},
{
"code": null,
"e": 1812,
"s": 1764,
"text": "Dask is a library designed to help facilitate :"
},
{
"code": null,
"e": 1855,
"s": 1812,
"text": "Data munging with very large datasets, and"
},
{
"code": null,
"e": 1997,
"s": 1855,
"text": "Distribution of computation across lots of cores ( local cluster with multiple workers )or physical computers ( cluster with multiple nodes )"
},
{
"code": null,
"e": 2197,
"s": 1997,
"text": "It is similar to Apache Spark in the functionality it provides, but it is tightly integrated into numpy and pandas, making it much easier to learn and utilize than spark for users of those libraries."
},
{
"code": null,
"e": 2609,
"s": 2197,
"text": "Parallelizing a task, ANY task is hard if you are the one building the framework from scratch for such a task. There are python modules (e.g. multiprocessing )which help you skip this part and execute and manage ( to some extent ) different threads. Though, even this takes a lot of effort and specific skillset from the user’s perspective. Monitoring, logging etc also usually take a backseat in this scenario."
},
{
"code": null,
"e": 2887,
"s": 2609,
"text": "Dask takes abstracts away all the complex ( though important ) internals and gives you a clean api to parallelize your existing code within python. depending on how you have written your code — you may not even need to modify your existing functions for them to work with Dask!"
},
{
"code": null,
"e": 2969,
"s": 2887,
"text": "Let’s get started on some basics of using Dask below and over the next few posts."
},
{
"code": null,
"e": 2984,
"s": 2969,
"text": "Getting Dask :"
},
{
"code": null,
"e": 3031,
"s": 2984,
"text": "I would recommend 1 of the following choices :"
},
{
"code": null,
"e": 3269,
"s": 3031,
"text": "Install Anaconda for your python environment ( My preferred choice, Dask comes installed with the base setup. You can also create a new environment to keep your Dask environment separate while you are learning, scheduling your jobs etc )"
},
{
"code": null,
"e": 3319,
"s": 3269,
"text": "conda create --name dask python=3.7 anaconda dask"
},
{
"code": null,
"e": 3386,
"s": 3319,
"text": "Create a new python virtual environment and install Dask within it"
},
{
"code": null,
"e": 3431,
"s": 3386,
"text": "Install Dask within your current environment"
},
{
"code": null,
"e": 3524,
"s": 3431,
"text": "# Anaconda environmentconda install dask# pip based installationpip install \"dask[complete]\""
},
{
"code": null,
"e": 3550,
"s": 3524,
"text": "Create your Dask client :"
},
{
"code": null,
"e": 3870,
"s": 3550,
"text": "import daskfrom dask.distributed import Client, progresstry: client.close() client = Client(threads_per_worker=1, n_workers=5, memory_limit='2GB')except: client = Client(threads_per_worker=1, n_workers=5, memory_limit='2GB')client"
},
{
"code": null,
"e": 4263,
"s": 3870,
"text": "Here, we have created a Local cluster, defined how many local workers to run and the resources to allocate for each worker. This client can at most runs 5 tasks at a time (n_workers) and Dask handles all the details of defining individual threads, coordinating between different threads as one task finishes, the next one begins, data transfer from our environment to each worker and back etc"
},
{
"code": null,
"e": 4344,
"s": 4263,
"text": "Task: Calculate Mean and Standard Deviation for each category in a given dataset"
},
{
"code": null,
"e": 4544,
"s": 4344,
"text": "Step 01: Create a sample simulationIn our case, I have simulated a dataset of k operations and g categories. The output of this simulation is mean and standard deviation for each from the simulation."
},
{
"code": null,
"e": 5108,
"s": 4544,
"text": "def simulate_dataset(i): g = 10 k = 100000 categories = np.random.choice(a = np.arange(g), size= k) values = [(j+1)*np.random.random(k//g) for j in range(g) ] values = np.concatenate(values) data = pd.DataFrame({'category':categories, 'values':values}) data_out = data.groupby('category').apply(lambda df: [ df.loc[:,'values'].mean(), df.loc[:,'values'].std() ])return(data_out)"
},
{
"code": null,
"e": 5377,
"s": 5108,
"text": "THIS function could be anything, any complex analysis/simulation task that you want — as long as it is self-contained. Each iteration can be run independently and doesn’t require any communication with other threads and only inputs at the beginning of your simulation."
},
{
"code": null,
"e": 5423,
"s": 5377,
"text": "Step 02: Run simulation in a for loop n times"
},
{
"code": null,
"e": 5502,
"s": 5423,
"text": "%%timeitresults = {}for i in range(n): results[i] = simulate_dataset(i)"
},
{
"code": null,
"e": 5544,
"s": 5502,
"text": "Step 03: Run simulation with Dask n times"
},
{
"code": null,
"e": 5731,
"s": 5544,
"text": "%%timeitresults_interim = {}for i in range(n): results_interim[i] = delayed(simulate_dataset)(i)results_scheduled = delayed(list)(results_interim)results = results_scheduled.compute()"
},
{
"code": null,
"e": 5942,
"s": 5731,
"text": "In this stand alone example, we see a speedup of ~3 times. The speedup is much more apparent when you make the computation/simulation heavier (increase either unique categories or total sample size in our case)"
},
{
"code": null,
"e": 6127,
"s": 5942,
"text": "The increase isn’t linear because of the overheads involved in launching and managing different workers. The heavier computation with each worker is, the less impact this overhead has."
}
] |
Difference between pointer and array in C
|
The details about pointers and arrays that showcase their differences are given as follows.
A pointer is a variable that stores the address of another variable. When memory is allocated to a variable, the pointer points to the memory address of that variable. The unary operator (*) is used to declare a pointer variable.
The following is the syntax of pointer declaration.
datatype *variable_name;
Here, the datatype is the data type of the variable, like int, char, float etc., and variable_name is the name of the variable given by the user.
A program that demonstrates pointers is given as follows.
#include <stdio.h>
int main () {
int a = 8;
int *ptr;
ptr = &a;
printf("Value of variable a: %d\n", a);
printf("Address of variable a: %d\n", ptr);
return 0;
}
The output of the above program is as follows.
Value of variable a: 8
Address of variable a: -2018153420
An array is a collection of elements of the same type at contiguous memory locations. The lowest address in an array corresponds to the first element, while the highest address corresponds to the last element. The array index starts with zero (0) and ends with the size of the array minus one (array size - 1).
The following is the syntax of array.
type array_name[array_size ];
Here, array_name is the name given to an array and array_size is the size of the array.
A program that demonstrates arrays is given as follows.
#include <stdio.h>
int main () {
int a[5];
int i,j;
for (i = 0;i<5;i++) {
a[i] = i+100;
}
for (j = 0;j<5;j++) {
printf("Element[%d] = %d\n", j, a[j] );
}
return 0;
}
The output of the above program is as follows.
Element[0] = 100
Element[1] = 101
Element[2] = 102
Element[3] = 103
Element[4] = 104
|
[
{
"code": null,
"e": 1153,
"s": 1062,
"text": "The details about a pointer and array that showcase their difference are given as follows."
},
{
"code": null,
"e": 1376,
"s": 1153,
"text": "A pointer is a variable that stores the address of another variable. When memory is allocated to a variable, pointer points to the memory address of the variable. Unary operator ( * ) is used to declare a pointer variable."
},
{
"code": null,
"e": 1428,
"s": 1376,
"text": "The following is the syntax of pointer declaration."
},
{
"code": null,
"e": 1453,
"s": 1428,
"text": "datatype *variable_name;"
},
{
"code": null,
"e": 1589,
"s": 1453,
"text": "Here, the datatype is the data type of the variable like int, char, float etc. and variable_name is the name of variable given by user."
},
{
"code": null,
"e": 1647,
"s": 1589,
"text": "A program that demonstrates pointers is given as follows."
},
{
"code": null,
"e": 1658,
"s": 1647,
"text": " Live Demo"
},
{
"code": null,
"e": 1836,
"s": 1658,
"text": "#include <stdio.h>\nint main () {\n int a = 8;\n int *ptr;\n ptr = &a;\n printf(\"Value of variable a: %d\\n\", a);\n printf(\"Address of variable a: %d\\n\", ptr);\n return 0;\n}"
},
{
"code": null,
"e": 1883,
"s": 1836,
"text": "The output of the above program is as follows."
},
{
"code": null,
"e": 1941,
"s": 1883,
"text": "Value of variable a: 8\nAddress of variable a: -2018153420"
},
{
"code": null,
"e": 2237,
"s": 1941,
"text": "An array is a collection of the same type of elements at contiguous memory locations. The lowest address in an array corresponds to the first element while highest address corresponds to the last element. Array index starts with zero(0) and ends with the size of array minus one(array size - 1)."
},
{
"code": null,
"e": 2275,
"s": 2237,
"text": "The following is the syntax of array."
},
{
"code": null,
"e": 2305,
"s": 2275,
"text": "type array_name[array_size ];"
},
{
"code": null,
"e": 2393,
"s": 2305,
"text": "Here, array_name is the name given to an array and array_size is the size of the array."
},
{
"code": null,
"e": 2449,
"s": 2393,
"text": "A program that demonstrates arrays is given as follows."
},
{
"code": null,
"e": 2460,
"s": 2449,
"text": " Live Demo"
},
{
"code": null,
"e": 2659,
"s": 2460,
"text": "#include <stdio.h>\nint main () {\n int a[5];\n int i,j;\n for (i = 0;i<5;i++) {\n a[i] = i+100;\n }\n for (j = 0;j<5;j++) {\n printf(\"Element[%d] = %d\\n\", j, a[j] );\n }\n return 0;\n}"
},
{
"code": null,
"e": 2706,
"s": 2659,
"text": "The output of the above program is as follows."
},
{
"code": null,
"e": 2791,
"s": 2706,
"text": "Element[0] = 100\nElement[1] = 101\nElement[2] = 102\nElement[3] = 103\nElement[4] = 104"
}
] |
Program to check Strength of Password in C++
|
Given a string input containing password characters, the task is to check the strength of the password.
The strength of a password tells you how easily the password can be guessed or cracked. The strength can vary from weak to average to strong. To check the strength, we have to check the following points −
Password must be at least 8 characters long.
It must contain at least 1 lowercase letter.
It must contain at least 1 uppercase letter.
It must contain at least 1 digit.
It must contain a special character like : !@#$%^&*()><,.+=-
For example, the password “tutorialspoint” is easily guessed, so we can say that the given password is “weak” as it contains only lowercase characters, whereas the password “Tutorialspoint@863!” is strong as it has both uppercase and lowercase letters, a digit and a special character and is longer than 8 characters, hence meeting all the conditions that make a password stronger.
If a password meets more than half of the characteristics of a strong password, then we will consider the password as moderate. For example, the password “tutorialspoint12” will be considered moderate as it contains lowercase letters, a digit and its length is greater than 8 characters.
Input: tutoriAlspOint!@12
Output: Strength of password:-Strong
Explanation: Password has 1 lowercase, 1 uppercase, 1 special character, more than 8 characters long and a digit, hence the password is strong.
Input: tutorialspoint
Output: Strength of password:-Weak
Approach we will be using to solve the given problem −
Take a string input for the password.
Check the password for all the factors which are responsible for judging the password strength.
According to the factors print the password’s strength.
Start
Step 1 ⇒ In function void printStrongNess(string& input)
Declare and initialize n = input.length()
Declare bool hasLower = false, hasUpper = false
Declare bool hasDigit = false, specialChar = false
Declare string normalChars = "abcdefghijklmnopqrstu"
"vwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890 "
Loop For i = 0 and i < n and i++
If (islower(input[i]))
Set hasLower = true
If (isupper(input[i]))
Set hasUpper = true
If (isdigit(input[i]))
Set hasDigit = true
Set size_t special = input.find_first_not_of(normalChars)
If (special != string::npos)
Set specialChar = true
End Loop
Print "Strength of password:-"
If (hasLower && hasUpper && hasDigit &&
specialChar && (n >= 8))
Print "Strong"
else if ((hasLower || hasUpper) &&
specialChar && (n >= 6))
Print "Moderate"
else
print "Weak"
Step 2 ⇒ In function int main()
Declare and initialize input = "tutorialspoint!@12"
printStrongNess(input)
Stop
#include <iostream>
#include <string>   // for std::string
#include <cctype>   // for islower, isupper, isdigit
using namespace std;
void printStrongNess(string& input) {
int n = input.length();
// Checking lower alphabet in string
bool hasLower = false, hasUpper = false;
bool hasDigit = false, specialChar = false;
string normalChars = "abcdefghijklmnopqrstu" "vwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890 ";
for (int i = 0; i < n; i++) {
if (islower(input[i]))
hasLower = true;
if (isupper(input[i]))
hasUpper = true;
if (isdigit(input[i]))
hasDigit = true;
size_t special = input.find_first_not_of(normalChars);
if (special != string::npos)
specialChar = true;
}
// Strength of password
cout << "Strength of password:-";
if (hasLower && hasUpper && hasDigit &&
specialChar && (n >= 8))
cout << "Strong" << endl;
else if ((hasLower || hasUpper) &&
specialChar && (n >= 6))
cout << "Moderate" << endl;
else
cout << "Weak" << endl;
}
int main() {
string input = "tutorialspoint!@12";
printStrongNess(input);
return 0;
}
Strength of password:-Moderate
|
[
{
"code": null,
"e": 1166,
"s": 1062,
"text": "Given a string input containing password characters, the task is to check the strength of the password."
},
{
"code": null,
"e": 1386,
"s": 1166,
"text": "The strength of the password is when you tell that the password is whether easily guessed or cracked. The strength should vary from weak, average and strong. To check the strength we have to check the following points −"
},
{
"code": null,
"e": 1431,
"s": 1386,
"text": "Password must be at least 8 characters long."
},
{
"code": null,
"e": 1470,
"s": 1431,
"text": "It must contain 1 lower case alphabet."
},
{
"code": null,
"e": 1507,
"s": 1470,
"text": "It must contain 1 uppercase alphabet"
},
{
"code": null,
"e": 1531,
"s": 1507,
"text": "It must contain a digit"
},
{
"code": null,
"e": 1592,
"s": 1531,
"text": "It must contain a special character like : !@#$%^&*()><,.+=-"
},
{
"code": null,
"e": 1957,
"s": 1592,
"text": "Like there is a password “tutorialspoint” which is easily guessed so we can cay that he given password is “weak” as it contains only lowercase characters, whereas password “Tutorialspoint@863!” is strong as have both upper and lowercase, a digit and a special character and is longer than 8 characters, hence meeting all the conditions to make a password stronger."
},
{
"code": null,
"e": 2250,
"s": 1957,
"text": "If there is some password meeting the more than half of the characteristics of the strong password, then we will consider the password as moderate. Like the password “tutorialspoint12” it will be considered as moderate as contains lowercase, a digit and its length is greater than 8 chacters."
},
{
"code": null,
"e": 2514,
"s": 2250,
"text": "Input: tutoriAlspOint!@12\nOutput: Strength of password:-Strong\nExplanation: Password has 1 lowercase, 1 uppercase, 1 special character, more than 8 characters long and a digit, hence the password is strong.\nInput: tutorialspoint\nOutput: Strength of password:-Weak"
},
{
"code": null,
"e": 2569,
"s": 2514,
"text": "Approach we will be using to solve the given problem −"
},
{
"code": null,
"e": 2604,
"s": 2569,
"text": "Take a string output for password."
},
{
"code": null,
"e": 2700,
"s": 2604,
"text": "Check the password for all the factors which are responsible for judging the password strength."
},
{
"code": null,
"e": 2756,
"s": 2700,
"text": "According to the factors print the password’s strength."
},
{
"code": null,
"e": 3885,
"s": 2756,
"text": "Start\n Step 1 ⇒ In function void printStrongNess(string& input)\n Declare and initialize n = input.length()\n Declare bool hasLower = false, hasUpper = false\n Declare bool hasDigit = false, specialChar = false\n Declare string normalChars = \"abcdefghijklmnopqrstu\"\n \"vwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890 \"\n Loop For i = 0 and i < n and i++\n If (islower(input[i]))\n Set hasLower = true\n If (isupper(input[i]))\n Set hasUpper = true\n If (isdigit(input[i]))\n Set hasDigit = true\n Set size_t special = input.find_first_not_of(normalChars)\n If (special != string::npos)\n Set specialChar = true\n End Loop\n Print \"Strength of password:-\"\n If (hasLower && hasUpper && hasDigit &&\n specialChar && (n >= 8))\n Print \"Strong\"\n else if ((hasLower || hasUpper) &&\n specialChar && (n >= 6))\n Print \"Moderate\"\n else\n print \"Weak\"\n Step 2 ⇒ In function int main()\n Declare and initialize input = \"tutorialspoint!@12\"\n printStrongNess(input)\nStop"
},
{
"code": null,
"e": 4952,
"s": 3885,
"text": "#include <iostream>\nusing namespace std;\nvoid printStrongNess(string& input) {\n int n = input.length();\n // Checking lower alphabet in string\n bool hasLower = false, hasUpper = false;\n bool hasDigit = false, specialChar = false;\n string normalChars = \"abcdefghijklmnopqrstu\" \"vwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890 \";\n for (int i = 0; i < n; i++) {\n if (islower(input[i]))\n hasLower = true;\n if (isupper(input[i]))\n hasUpper = true;\n if (isdigit(input[i]))\n hasDigit = true;\n size_t special = input.find_first_not_of(normalChars);\n if (special != string::npos)\n specialChar = true;\n }\n // Strength of password\n cout << \"Strength of password:-\";\n if (hasLower && hasUpper && hasDigit &&\n specialChar && (n >= 8))\n cout << \"Strong\" << endl;\n else if ((hasLower || hasUpper) &&\n specialChar && (n >= 6))\n cout << \"Moderate\" << endl;\n else\n cout << \"Weak\" << endl;\n}\nint main() {\n string input = \"tutorialspoint!@12\";\n printStrongNess(input);\n return 0;\n}"
},
{
"code": null,
"e": 4983,
"s": 4952,
"text": "Strength of password:-Moderate"
}
] |
Building a Python UI for Comparing Data | by Costas Andreou | Towards Data Science
|
Spend enough time in an analytical or IT function and it will become immediately obvious that working with data is a must. Collecting data, working with data and of course comparing data. The problem with all of this data nowadays is generally the sheer amount of it.
Unless people are technical enough to know how to use Python, R, or the like, they will struggle when they are required to work with larger data sets. Unfortunately (or fortunately?), Excel won’t cut it anymore. This creates a number of hurdles for people.
One of the most common complaints I hear when it comes to data operations, including looking at the data or comparing data, is that Excel will simply not support it. It’s too big to load. To that effect, I have written a number of articles explaining how you can work around it — but most of my articles have been somewhat technical in nature. They required you to write code and use the command line.
Unless you were a technical person you wouldn’t necessarily find that easy. Likewise, if you wanted to share that piece of functionality with a non-technical person.
In my immediately previous article, however, we covered how we can quickly spin up UIs in Python and then share them with our wider team or community. The response was fantastic; many of you found it very interesting.
In this blog then, we will cover a larger number of features available from PySimpleGUI while we also build something that will allow our non-technical friends to quickly compare data.
The first thing we need to do is define a simple UI which allows the user to pick two files.
Once the two files have been defined, we should carry out some basic validation to ensure the two files are comparable. Looking for the same column headers could be one way of doing that. We could then offer these headers as a potential key the user could select for the data comparison.
Using this example file:
The following screen would be constructed:
Here’s the full thing with the output included:
To build the UIs we will be using the PySimpleGUI library, which is based on Tkinter, wxPython and PyQt. This code couldn’t be simpler. Here is a quick sneak peek:
import PySimpleGUI as sg

supportedextensions = ['csv', 'xlsx', 'xlsm', 'json']

layoutprefile = [
    [sg.Text('Select two files to proceed')],
    [sg.Text('File 1'), sg.InputText(), sg.FileBrowse()],
    [sg.Text('File 2'), sg.InputText(), sg.FileBrowse()],
    # *list1,
    [sg.Output(size=(61, 5))],
    [sg.Submit('Proceed'), sg.Cancel('Exit')]
]

window = sg.Window('File Compare', layoutprefile)

while True:  # The Event Loop
    event, values = window.read()
    if event in (None, 'Exit', 'Cancel'):
        secondwindow = 0
        break
    elif event == 'Proceed':
        print('yay')
We will be using the famous Pandas library to be reading the files in. This will allow us to quickly support CSVs, JSON and Excel files. Once the files are in a data frame, then we can do the necessary operations we need.
import re
import pandas as pd

file1 = r"C:/temp/file.csv"
file2 = r"C:/temp/file1.csv"

if re.findall('/.+?/.+\.(.+)', file1)[0] == 'csv':
    df1, df2 = pd.read_csv(file1), pd.read_csv(file2)
elif re.findall('/.+?/.+\.(.+)', file1)[0] == 'json':
    df1, df2 = pd.read_json(file1), pd.read_json(file2)
elif re.findall('/.+?/.+\.(.+)', file1)[0] in ['xlsx', 'xlsm']:
    df1, df2 = pd.read_excel(file1), pd.read_excel(file2)
To pull out the extension from the filepath, I use the re library, which is Python’s regular expression library. Regular expressions give us a way to pattern match and extract information.
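As a small, hypothetical illustration of what that pattern returns (the path below is invented), the capture group picks up everything after the last dot in the filename:
import re

path = "C:/temp/file.csv"
# findall returns a list of captured groups; we take the first match.
extension = re.findall(r'/.+?/.+\.(.+)', path)[0]
print(extension)  # 'csv'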
For the comparison, we will be using the DataComPy library, which gives us a nice summary of the comparison.
The code is once again very simple:
compare = datacompy.Compare(
    df1,
    df2,
    join_columns=definedkey,
    abs_tol=0,            # Optional, defaults to 0
    rel_tol=0,            # Optional, defaults to 0
    df1_name='Original',  # Optional, defaults to 'df1'
    df2_name='New'        # Optional, defaults to 'df2'
)
print(compare.report())
To share the UI we can use PyInstaller. Simply find your file (ComPyUI.py in this example) and run the following command:
pyinstaller --onefile ComPyUI.py
Without further ado, simply copy the below code locally and run it for a comparison tool:
import PySimpleGUI as sgimport re, timeimport datacompyimport pandas as pdsupportedextensions = ['csv','xlsx', 'xlsm' ,'json']layoutprefile = [ [sg.Text('Select two files to proceed')], [sg.Text('File 1'), sg.InputText(), sg.FileBrowse()], [sg.Text('File 2'), sg.InputText(), sg.FileBrowse()], # *list1, [sg.Output(size=(61, 5))], [sg.Submit('Proceed'), sg.Cancel('Exit')]]window = sg.Window('File Compare', layoutprefile)while True: # The Event Loop event, values = window.read() # print(event, values) # debug if event in (None, 'Exit', 'Cancel'): secondwindow = 0 break elif event == 'Proceed': #do some checks if valid directories have been provided file1test = file2test = isitago = proceedwithfindcommonkeys = None file1, file2 = values[0], values[1] if file1 and file2: file1test = re.findall('.+:\/.+\.+.', file1) file2test = re.findall('.+:\/.+\.+.', file2) isitago = 1 if not file1test and file1test is not None: print('Error: File 1 path not valid.') isitago = 0 elif not file2test and file2test is not None: print('Error: File 2 path not valid.') isitago = 0 #both files to have the same extension elif re.findall('/.+?/.+\.(.+)',file1) != re.findall('/.+?/.+\.(.+)',file2): print('Error: The two files have different file extensions. Please correct') isitago = 0 #they need to be in a list of supported extensions elif re.findall('/.+?/.+\.(.+)',file1)[0] not in supportedextensions or re.findall('/.+?/.+\.(.+)',file2)[0] not in supportedextensions: print('Error: File format currently not supported. At the moment only csv, xlsx, xlsm and json files are supported.') isitago = 0 elif file1 == file2: print('Error: The files need to be different') isitago = 0 elif isitago == 1: print('Info: Filepaths correctly defined.') # check if files exist try: print('Info: Attempting to access files.') if re.findall('/.+?/.+\.(.+)',file1)[0] == 'csv': df1, df2 = pd.read_csv(file1), pd.read_csv(file2) elif re.findall('/.+?/.+\.(.+)',file1)[0] == 'json': df1, df2 = pd.read_json(file1), pd.read_json(file2) elif re.findall('/.+?/.+\.(.+)',file1)[0] in ['xlsx', 'xlsm']: df1, df2 = pd.read_excel(file1), pd.read_excel(file2) else: print('How did we get here?') proceedwithfindcommonkeys = 1 except IOError: print("Error: File not accessible.") proceedwithfindcommonkeys = 0 except UnicodeDecodeError: print("Error: File includes a unicode character that cannot be decoded with the default UTF decryption.") proceedwithfindcommonkeys = 0 except Exception as e: print('Error: ', e) proceedwithfindcommonkeys = 0 else: print('Error: Please choose 2 files.') if proceedwithfindcommonkeys == 1: keyslist1 = [] #This will be the list of headers from first file keyslist2 = [] #This will be the list of headers from second file keyslist = [] #This will be the list of headers that are the intersection between the two files formlists = [] #This will be the list to be displayed on the UI for header in df1.columns: if header not in keyslist1: keyslist1.append(header) for header in df2.columns: if header not in keyslist2: keyslist2.append(header) for item in keyslist1: if item in keyslist2: keyslist.append(item) if len(keyslist) == 0: print('Error: Files have no common headers.') secondwindow = 0 else: window.close() secondwindow = 1 break################################################## First screen completed, moving on to second oneif secondwindow != 1: exit()#To align the three columns on the UI, we need the max len#Note: This could be made better by having the max len of each columnmaxlen = 0for header in keyslist: if len(str(header)) > 
maxlen: maxlen = len(str(header))if maxlen > 25: maxlen = 25elif maxlen < 10: maxlen = 15#we need to split the keys to four columnsfor index,item in enumerate(keyslist): if index == 0: i =0 if len(keyslist) >= 4 and i == 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None)),sg.Checkbox(keyslist[i+3], size=(maxlen,None))]) i += 4 elif len(keyslist) > i: if len(keyslist) - i - 4>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None)),sg.Checkbox(keyslist[i+3], size=(maxlen,None))]) i += 4 elif len(keyslist) - i - 3>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None))]) i += 3 elif len(keyslist)- i - 2>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None))]) i += 2 elif len(keyslist) - i - 1>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None))]) i += 1 else: sg.Popup('Error: Uh-oh, something\'s gone wrong!') #The second UIlayoutpostfile = [ [sg.Text('File 1'), sg.InputText(file1,disabled = True, size = (75,2))], [sg.Text('File 2'), sg.InputText(file2,disabled = True, size = (75,2))], #[sg.Text('Select the data key for the comparison:')], [sg.Frame(layout=[ *formlists],title = 'Select the Data Key for Comparison',relief=sg.RELIEF_RIDGE )], [sg.Output(size=(maxlen*6, 20))], [sg.Submit('Compare'), sg.Cancel('Exit')]]window2 = sg.Window('File Compare', layoutpostfile)datakeydefined = 0definedkey = []while True: # The Event Loop event, values = window2.read() # print(event, values) # debug if event in (None, 'Exit', 'Cancel'): break elif event == 'Compare': definedkey.clear() file1test = file2test = isitago = None #print('Event', event, '\n', 'Values', values) for index, value in enumerate(values): if index not in [0,1]: if values[index] == True: datakeydefined = 1 definedkey.append(keyslist[index-2]) #print(index, values[index], keyslist[index-2]) if len(definedkey) > 0: compare = datacompy.Compare( df1, df2, join_columns=definedkey, #You can also specify a list of columns eg ['policyID','statecode'] abs_tol=0, #Optional, defaults to 0 rel_tol=0, #Optional, defaults to 0 df1_name='Original', #Optional, defaults to 'df1' df2_name='New' #Optional, defaults to 'df2' ) print('########################################################################################################') print(compare.report()) else: print('Error: You need to select at least one attribute as a data key')
There are quite a few limitations to this solution, but it’s one that can be enhanced quite easily and quickly in the future. The following items would be nice to haves:
It would be good if the CSV delimiter was to be defined rather than assume a ‘,’ at the moment (see the sketch after this list)
Sizing of the window is too precarious, and using very long headers could create problems
Enable the system to remember key choices based on file names
Full directory reconciliations
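For the first item in this list, one possible direction (just a sketch, not part of the original tool; the 'delimiter' key name and helper function are invented) would be to add an input field for the separator and pass it straight through to pandas:
import PySimpleGUI as sg
import pandas as pd

# Hypothetical extra row for the first window's layout, defaulting to a comma.
delimiter_row = [sg.Text('CSV delimiter'), sg.InputText(',', key='delimiter', size=(5, 1))]

# When reading the files, the user's choice would then be forwarded to pandas.
def read_csv_pair(file1, file2, delimiter=','):
    return pd.read_csv(file1, sep=delimiter), pd.read_csv(file2, sep=delimiter)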
I am sure there are a lot more, but do let me know what you think!
|
[
{
"code": null,
"e": 440,
"s": 172,
"text": "Spend enough time in an analytical or IT function and it will become immediately obvious that working with data is a must. Collecting data, working with data and of course comparing data. The problem with all of this data nowadays is generally the sheer amount of it."
},
{
"code": null,
"e": 697,
"s": 440,
"text": "Unless people are technical enough to know how to use Python, R, or the like, they will struggle when they are required to work with larger data sets. Unfortunately (or fortunately?), Excel won’t cut it anymore. This creates a number of hurdles for people."
},
{
"code": null,
"e": 1097,
"s": 697,
"text": "One of the most common complaints I hear when it comes to data operations, including looking at the data or comparing data is that Excel will simply not support it. It’s too big to load. To that effect, I have written a number of articles explaining how you can work around it — but most of my articles have been somewhat technical in nature. It required you writing code and using the command line."
},
{
"code": null,
"e": 1108,
"s": 1097,
"text": "medium.com"
},
{
"code": null,
"e": 1274,
"s": 1108,
"text": "Unless you were a technical person you wouldn’t necessarily find that easy. Likewise, if you wanted to share that piece of functionality with a non-technical person."
},
{
"code": null,
"e": 1489,
"s": 1274,
"text": "In my immediately previous article, however, we covered how we can quickly spin UIs in Python and then share them with our wider team or community. The response was fantastic; many of you found it very interesting."
},
{
"code": null,
"e": 1512,
"s": 1489,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1697,
"s": 1512,
"text": "In this blog then, we will cover a larger number of features available from PySimpleGUI while we also build something that will allow our non-technical friends to quickly compare data."
},
{
"code": null,
"e": 1790,
"s": 1697,
"text": "The first thing we need to do is define a simple UI which allows the user to pick two files."
},
{
"code": null,
"e": 2078,
"s": 1790,
"text": "Once the two files have been defined, we should carry out some basic validation to ensure the two files are comparable. Looking for the same column headers could be one way of doing that. We could then offer these headers as a potential key the user could select for the data comparison."
},
{
"code": null,
"e": 2103,
"s": 2078,
"text": "Using this example file:"
},
{
"code": null,
"e": 2146,
"s": 2103,
"text": "The following screen would be constructed:"
},
{
"code": null,
"e": 2194,
"s": 2146,
"text": "Here’s the full thing with the output included:"
},
{
"code": null,
"e": 2356,
"s": 2194,
"text": "To build the UIs we will be using the PySimpleGUI library which is based on Tkinter, wxPython and PyQT. This code couldn’t be simpler. Taking a quick sneak peak:"
},
{
"code": null,
"e": 2931,
"s": 2356,
"text": "import PySimpleGUI as sgsupportedextensions = ['csv','xlsx', 'xlsm' ,'json']layoutprefile = [ [sg.Text('Select two files to proceed')], [sg.Text('File 1'), sg.InputText(), sg.FileBrowse()], [sg.Text('File 2'), sg.InputText(), sg.FileBrowse()], # *list1, [sg.Output(size=(61, 5))], [sg.Submit('Proceed'), sg.Cancel('Exit')]]window = sg.Window('File Compare', layoutprefile)while True: # The Event Loop event, values = window.read() if event in (None, 'Exit', 'Cancel'): secondwindow = 0 break elif event == 'Proceed': print('yay')"
},
{
"code": null,
"e": 3153,
"s": 2931,
"text": "We will be using the famous Pandas library to be reading the files in. This will allow us to quickly support CSVs, JSON and Excel files. Once the files are in a data frame, then we can do the necessary operations we need."
},
{
"code": null,
"e": 3557,
"s": 3153,
"text": "import reimport pandas as pdfile1 = r\"C:/temp/file.csv\"file2 = r\"C:/temp/file1.csv\"if re.findall('/.+?/.+\\.(.+)',file1)[0] == 'csv': df1, df2 = pd.read_csv(file1), pd.read_csv(file2)elif re.findall('/.+?/.+\\.(.+)',file1)[0] == 'json': df1, df2 = pd.read_json(file1), pd.read_json(file2)elif re.findall('/.+?/.+\\.(.+)',file1)[0] in ['xlsx', 'xlsm']: df1, df2 = pd.read_excel(file1), pd.read_excel(file2)"
},
{
"code": null,
"e": 3745,
"s": 3557,
"text": "To pull out the extension from the filepath, I use the re library which is Python’s regular expression library. Regular expression gives us a way to pattern match and extract information."
},
{
"code": null,
"e": 3756,
"s": 3745,
"text": "medium.com"
},
{
"code": null,
"e": 3865,
"s": 3756,
"text": "For the comparison, we will be using the DataComPy library, which gives us a nice summary of the comparison."
},
{
"code": null,
"e": 3888,
"s": 3865,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 3924,
"s": 3888,
"text": "The code is once again very simple:"
},
{
"code": null,
"e": 4199,
"s": 3924,
"text": "compare = datacompy.Compare( df1, df2, join_columns=definedkey, abs_tol=0, #Optional, defaults to 0 rel_tol=0, #Optional, defaults to 0 df1_name='Original', #Optional, defaults to 'df1' df2_name='New' #Optional, defaults to 'df2')print(compare.report())"
},
{
"code": null,
"e": 4321,
"s": 4199,
"text": "To share the UI we can use PyInstaller. Simply find your file (ComPyUI.py in this example) and run the following command:"
},
{
"code": null,
"e": 4354,
"s": 4321,
"text": "pyinstaller --onefile ComPyUI.py"
},
{
"code": null,
"e": 4444,
"s": 4354,
"text": "Without further ado, simply copy the below code locally and run it for a comparison tool:"
},
{
"code": null,
"e": 12296,
"s": 4444,
"text": "import PySimpleGUI as sgimport re, timeimport datacompyimport pandas as pdsupportedextensions = ['csv','xlsx', 'xlsm' ,'json']layoutprefile = [ [sg.Text('Select two files to proceed')], [sg.Text('File 1'), sg.InputText(), sg.FileBrowse()], [sg.Text('File 2'), sg.InputText(), sg.FileBrowse()], # *list1, [sg.Output(size=(61, 5))], [sg.Submit('Proceed'), sg.Cancel('Exit')]]window = sg.Window('File Compare', layoutprefile)while True: # The Event Loop event, values = window.read() # print(event, values) # debug if event in (None, 'Exit', 'Cancel'): secondwindow = 0 break elif event == 'Proceed': #do some checks if valid directories have been provided file1test = file2test = isitago = proceedwithfindcommonkeys = None file1, file2 = values[0], values[1] if file1 and file2: file1test = re.findall('.+:\\/.+\\.+.', file1) file2test = re.findall('.+:\\/.+\\.+.', file2) isitago = 1 if not file1test and file1test is not None: print('Error: File 1 path not valid.') isitago = 0 elif not file2test and file2test is not None: print('Error: File 2 path not valid.') isitago = 0 #both files to have the same extension elif re.findall('/.+?/.+\\.(.+)',file1) != re.findall('/.+?/.+\\.(.+)',file2): print('Error: The two files have different file extensions. Please correct') isitago = 0 #they need to be in a list of supported extensions elif re.findall('/.+?/.+\\.(.+)',file1)[0] not in supportedextensions or re.findall('/.+?/.+\\.(.+)',file2)[0] not in supportedextensions: print('Error: File format currently not supported. At the moment only csv, xlsx, xlsm and json files are supported.') isitago = 0 elif file1 == file2: print('Error: The files need to be different') isitago = 0 elif isitago == 1: print('Info: Filepaths correctly defined.') # check if files exist try: print('Info: Attempting to access files.') if re.findall('/.+?/.+\\.(.+)',file1)[0] == 'csv': df1, df2 = pd.read_csv(file1), pd.read_csv(file2) elif re.findall('/.+?/.+\\.(.+)',file1)[0] == 'json': df1, df2 = pd.read_json(file1), pd.read_json(file2) elif re.findall('/.+?/.+\\.(.+)',file1)[0] in ['xlsx', 'xlsm']: df1, df2 = pd.read_excel(file1), pd.read_excel(file2) else: print('How did we get here?') proceedwithfindcommonkeys = 1 except IOError: print(\"Error: File not accessible.\") proceedwithfindcommonkeys = 0 except UnicodeDecodeError: print(\"Error: File includes a unicode character that cannot be decoded with the default UTF decryption.\") proceedwithfindcommonkeys = 0 except Exception as e: print('Error: ', e) proceedwithfindcommonkeys = 0 else: print('Error: Please choose 2 files.') if proceedwithfindcommonkeys == 1: keyslist1 = [] #This will be the list of headers from first file keyslist2 = [] #This will be the list of headers from second file keyslist = [] #This will be the list of headers that are the intersection between the two files formlists = [] #This will be the list to be displayed on the UI for header in df1.columns: if header not in keyslist1: keyslist1.append(header) for header in df2.columns: if header not in keyslist2: keyslist2.append(header) for item in keyslist1: if item in keyslist2: keyslist.append(item) if len(keyslist) == 0: print('Error: Files have no common headers.') secondwindow = 0 else: window.close() secondwindow = 1 break################################################## First screen completed, moving on to second oneif secondwindow != 1: exit()#To align the three columns on the UI, we need the max len#Note: This could be made better by having the max len of each columnmaxlen = 0for header in keyslist: 
if len(str(header)) > maxlen: maxlen = len(str(header))if maxlen > 25: maxlen = 25elif maxlen < 10: maxlen = 15#we need to split the keys to four columnsfor index,item in enumerate(keyslist): if index == 0: i =0 if len(keyslist) >= 4 and i == 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None)),sg.Checkbox(keyslist[i+3], size=(maxlen,None))]) i += 4 elif len(keyslist) > i: if len(keyslist) - i - 4>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None)),sg.Checkbox(keyslist[i+3], size=(maxlen,None))]) i += 4 elif len(keyslist) - i - 3>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None)),sg.Checkbox(keyslist[i+2], size=(maxlen,None))]) i += 3 elif len(keyslist)- i - 2>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None)),sg.Checkbox(keyslist[i+1], size=(maxlen,None))]) i += 2 elif len(keyslist) - i - 1>= 0: formlists.append([sg.Checkbox(keyslist[i], size=(maxlen,None))]) i += 1 else: sg.Popup('Error: Uh-oh, something\\'s gone wrong!') #The second UIlayoutpostfile = [ [sg.Text('File 1'), sg.InputText(file1,disabled = True, size = (75,2))], [sg.Text('File 2'), sg.InputText(file2,disabled = True, size = (75,2))], #[sg.Text('Select the data key for the comparison:')], [sg.Frame(layout=[ *formlists],title = 'Select the Data Key for Comparison',relief=sg.RELIEF_RIDGE )], [sg.Output(size=(maxlen*6, 20))], [sg.Submit('Compare'), sg.Cancel('Exit')]]window2 = sg.Window('File Compare', layoutpostfile)datakeydefined = 0definedkey = []while True: # The Event Loop event, values = window2.read() # print(event, values) # debug if event in (None, 'Exit', 'Cancel'): break elif event == 'Compare': definedkey.clear() file1test = file2test = isitago = None #print('Event', event, '\\n', 'Values', values) for index, value in enumerate(values): if index not in [0,1]: if values[index] == True: datakeydefined = 1 definedkey.append(keyslist[index-2]) #print(index, values[index], keyslist[index-2]) if len(definedkey) > 0: compare = datacompy.Compare( df1, df2, join_columns=definedkey, #You can also specify a list of columns eg ['policyID','statecode'] abs_tol=0, #Optional, defaults to 0 rel_tol=0, #Optional, defaults to 0 df1_name='Original', #Optional, defaults to 'df1' df2_name='New' #Optional, defaults to 'df2' ) print('########################################################################################################') print(compare.report()) else: print('Error: You need to select at least one attribute as a data key')"
},
{
"code": null,
"e": 12466,
"s": 12296,
"text": "There are quite a few limitations to this solution, but it’s one that can be enhanced quite easily and quickly in the future. The following items would be nice to haves:"
},
{
"code": null,
"e": 12741,
"s": 12466,
"text": "It would be good if the CSV delimiter was to be defined rather than assume a ‘,’ at the momentSizing of the window is too precarious, and using very long headers could create problemsEnable the system to remember key choices based on file namesFull directory reconciliations"
},
{
"code": null,
"e": 12836,
"s": 12741,
"text": "It would be good if the CSV delimiter was to be defined rather than assume a ‘,’ at the moment"
},
{
"code": null,
"e": 12926,
"s": 12836,
"text": "Sizing of the window is too precarious, and using very long headers could create problems"
},
{
"code": null,
"e": 12988,
"s": 12926,
"text": "Enable the system to remember key choices based on file names"
},
{
"code": null,
"e": 13019,
"s": 12988,
"text": "Full directory reconciliations"
}
] |
HTML | <iframe> src Attribute - GeeksforGeeks
|
16 Oct, 2019
The HTML <iframe> src attribute is used to specify the URL of the document that is embedded in the <iframe> element.
Syntax:
<iframe src="URL">
Attribute Values: It contains a single value, URL, which specifies the URL of the document that is embedded in the iframe. There are two types of URL links, which are listed below:
Absolute URL: It points to another webpage.
Relative URL: It points to other files of the same web page.
Below example illustrates the <iframe> src attribute in HTML:
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        HTML iframe src Attribute
    </title>
</head>

<body style="text-align:center;">
    <h1>GeeksforGeeks</h1>
    <h2>HTML iframe src Attribute</h2>
    <iframe src="https://ide.geeksforgeeks.org/index.php"
            height="200" width="400"></iframe>
</body>

</html>
Output:
Supported Browsers: The browsers supported by the HTML <iframe> src attribute are listed below:
Google Chrome
Internet Explorer
Firefox
Safari
Opera
|
[
{
"code": null,
"e": 24416,
"s": 24388,
"text": "\n16 Oct, 2019"
},
{
"code": null,
"e": 24534,
"s": 24416,
"text": "The HTML <iframe> src attribute is used to specify the URL of the document that are embedded to the <iframe> element."
},
{
"code": null,
"e": 24542,
"s": 24534,
"text": "Syntax:"
},
{
"code": null,
"e": 24561,
"s": 24542,
"text": "<iframe src=\"URL\">"
},
{
"code": null,
"e": 24736,
"s": 24561,
"text": "Attribute Values: It contains single value URL which specifies the URL of the document that is embedded to the iframe. There are two types of URL link which are listed below:"
},
{
"code": null,
"e": 24780,
"s": 24736,
"text": "Absolute URL: It points to another webpage."
},
{
"code": null,
"e": 24841,
"s": 24780,
"text": "Relative URL: It points to other files of the same web page."
},
{
"code": null,
"e": 24903,
"s": 24841,
"text": "Below example illustrates the <iframe> src attribute in HTML:"
},
{
"code": null,
"e": 24912,
"s": 24903,
"text": "Example:"
},
{
"code": "<!DOCTYPE html> <html> <head> <title> HTML iframe src Attribute </title></head> <body style=\"text-align:center;\"> <h1>GeeksforGeeks</h1> <h2>HTML iframe src Attribute</h2> <iframe src=\"https://ide.geeksforgeeks.org/index.php\" height=\"200\" width=\"400\"></iframe> </body> </html> ",
"e": 25266,
"s": 24912,
"text": null
},
{
"code": null,
"e": 25274,
"s": 25266,
"text": "Output:"
},
{
"code": null,
"e": 25365,
"s": 25274,
"text": "Supported Browsers: The browser supported by HTML <iframe> src attribute are listed below:"
},
{
"code": null,
"e": 25379,
"s": 25365,
"text": "Google Chrome"
},
{
"code": null,
"e": 25397,
"s": 25379,
"text": "Internet Explorer"
},
{
"code": null,
"e": 25405,
"s": 25397,
"text": "Firefox"
},
{
"code": null,
"e": 25412,
"s": 25405,
"text": "Safari"
},
{
"code": null,
"e": 25418,
"s": 25412,
"text": "Opera"
},
{
"code": null,
"e": 25555,
"s": 25418,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 25569,
"s": 25555,
"text": "shubham_singh"
},
{
"code": null,
"e": 25585,
"s": 25569,
"text": "HTML-Attributes"
},
{
"code": null,
"e": 25590,
"s": 25585,
"text": "HTML"
},
{
"code": null,
"e": 25607,
"s": 25590,
"text": "Web Technologies"
},
{
"code": null,
"e": 25612,
"s": 25607,
"text": "HTML"
},
{
"code": null,
"e": 25710,
"s": 25612,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25719,
"s": 25710,
"text": "Comments"
},
{
"code": null,
"e": 25732,
"s": 25719,
"text": "Old Comments"
},
{
"code": null,
"e": 25769,
"s": 25732,
"text": "Types of CSS (Cascading Style Sheet)"
},
{
"code": null,
"e": 25798,
"s": 25769,
"text": "HTML | <img> align Attribute"
},
{
"code": null,
"e": 25840,
"s": 25798,
"text": "Form validation using HTML and JavaScript"
},
{
"code": null,
"e": 25858,
"s": 25840,
"text": "HTML Introduction"
},
{
"code": null,
"e": 25919,
"s": 25858,
"text": "How to Upload Image into Database and Display it using PHP ?"
},
{
"code": null,
"e": 25975,
"s": 25919,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 26008,
"s": 25975,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 26051,
"s": 26008,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 26112,
"s": 26051,
"text": "Difference between var, let and const keywords in JavaScript"
}
] |
Descriptor in Python - GeeksforGeeks
|
08 Jan, 2019
Definition of descriptor : Python descriptors are created to manage the attributes of different classes which use the object as reference. In descriptors we use three different methods: __get__(), __set__(), and __delete__(). If any of those methods are defined for an object, it can be termed a descriptor. Normally, Python uses methods like getters and setters to adjust the values of attributes without any special processing; it is just a basic storage system. Sometimes, you might need to validate the values that are being assigned to an attribute. A descriptor is the mechanism behind properties, methods, static methods, class methods, and super().
Descriptor protocol : In other programming languages, descriptors are referred to as setters and getters, where public functions are used to get and set a private variable. Python doesn’t have a private-variable concept, and the descriptor protocol can be considered a Pythonic way to achieve something similar. Descriptors are a new way to implement classes in Python, and they do not need to inherit anything from a particular object. To implement descriptors easily in Python we have to use at least one of the methods defined above. Note that instance below refers to the object whose attribute was accessed, and owner is the class where the descriptor was assigned as an attribute. There are three methods in the descriptor protocol, covering the setter, getter and delete operations.
gfg.__get__(self, obj, type=None) : This method is called when you want to retrieve the value of an attribute (value = obj.attr), and whatever it returns is what will be given to the code that requested the attribute’s value.
gfg.__set__(self, obj, value) : This method is called to set the values of an attribute (obj.attr = 'value'), and it will not return anything to you.
gfg.__delete__(self, obj) : This method is called when the attribute is deleted from an object (del obj.attr)
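Putting the three methods together, a rough sketch of a class implementing the full protocol might look like this (the class and attribute names are invented for illustration):
class Verbose:
    """A toy data descriptor implementing all three protocol methods."""

    def __get__(self, obj, objtype=None):
        print('Getting the value')
        return obj._value

    def __set__(self, obj, value):
        print('Setting the value to', value)
        obj._value = value

    def __delete__(self, obj):
        print('Deleting the value')
        del obj._value


class Demo:
    attr = Verbose()


d = Demo()
d.attr = 10       # prints "Setting the value to 10"
print(d.attr)     # prints "Getting the value", then 10
del d.attr        # prints "Deleting the value"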
Invoking descriptor : Descriptors are invoked automatically whenever an attribute access triggers their __get__() or __set__() method. For example, obj.gfg looks up gfg in the dictionary of obj. If gfg defines the method __get__(), then gfg.__get__(obj) is invoked. It can also be invoked directly by method name, i.e. gfg.__get__(obj).
# Python program showing
# how to invoke descriptor

def __getattribute__(self, key):
    v = object.__getattribute__(self, key)
    if hasattr(v, '__get__'):
        return v.__get__(None, self)
    return v
The important points to remember are:
Descriptors are invoked by the __getattribute__() method.
Overriding __getattribute__() prevents automatic descriptor calls.
object.__getattribute__() and type.__getattribute__() make different calls to __get__().
Data descriptors always override instance dictionaries.
Non-data descriptors may be overridden by instance dictionaries.
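To make the last two points concrete, here is a small sketch (class names invented for illustration) contrasting a data descriptor, which always wins over the instance dictionary, with a non-data descriptor, which an instance attribute can shadow:
class DataDesc:
    # Defines both __get__ and __set__, so it is a *data* descriptor.
    def __get__(self, obj, objtype=None):
        return 'from data descriptor'

    def __set__(self, obj, value):
        pass  # ignore writes for the sake of the demo


class NonDataDesc:
    # Defines only __get__, so it is a *non-data* descriptor.
    def __get__(self, obj, objtype=None):
        return 'from non-data descriptor'


class Demo:
    a = DataDesc()
    b = NonDataDesc()


d = Demo()
d.__dict__['a'] = 'instance value'   # cannot shadow the data descriptor
d.__dict__['b'] = 'instance value'   # shadows the non-data descriptor

print(d.a)   # from data descriptor
print(d.b)   # instance value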
Descriptor example: In this example a data descriptor validates and stores the value assigned to the attribute and returns a formatted string when it is accessed. Code 1:
class Descriptor(object):

    def __init__(self, name =''):
        self.name = name

    def __get__(self, obj, objtype):
        return "{}for{}".format(self.name, self.name)

    def __set__(self, obj, name):
        if isinstance(name, str):
            self.name = name
        else:
            raise TypeError("Name should be string")

class GFG(object):
    name = Descriptor()

g = GFG()
g.name = "Geeks"
print(g.name)
Output:
GeeksforGeeks
Code 2:
class Descriptor(object):

    def __init__(self, name =''):
        self.name = name

    def __get__(self, obj, objtype):
        return "{}for{}".format(self.name, self.name)

    def __set__(self, obj, name):
        if isinstance(name, str):
            self.name = name
        else:
            raise TypeError("Name should be string")

class GFG(object):
    name = Descriptor()

g = GFG()
g.name = "Computer"
print(g.name)
Output:
ComputerforComputer
Creating a descriptor using property(): With property(), it is easy to create a usable descriptor for any attribute. Syntax for creating a property:
property(fget=None, fset=None, fdel=None, doc=None)
# Python program to explain property() function

# Alphabet class
class Alphabet:
    def __init__(self, value):
        self._value = value

    # getting the values
    def getValue(self):
        print('Getting value')
        return self._value

    # setting the values
    def setValue(self, value):
        print('Setting value to ' + value)
        self._value = value

    # deleting the values
    def delValue(self):
        print('Deleting value')
        del self._value

    value = property(getValue, setValue, delValue, )

# passing the value
x = Alphabet('GeeksforGeeks')
print(x.value)

x.value = 'GfG'

del x.value
Output :
Getting value
GeeksforGeeks
Setting value to GfG
Deleting value
Creating a descriptor using class methods: Here we create a class and override any of the descriptor methods __set__, __get__, and __delete__. This approach is used when the same descriptor is needed across many different classes and attributes, for example, for type validation.
class Descriptor(object):

    def __init__(self, name =''):
        self.name = name

    def __get__(self, obj, objtype):
        return "{}for{}".format(self.name, self.name)

    def __set__(self, obj, name):
        if isinstance(name, str):
            self.name = name
        else:
            raise TypeError("Name should be string")

class GFG(object):
    name = Descriptor()

g = GFG()
g.name = "Geeks"
print(g.name)
Output :
GeeksforGeeks
Creating a descriptor using the @property decorator: Here we use property decorators, which combine the property() built-in with Python's decorator syntax.
class Alphabet:
    def __init__(self, value):
        self._value = value

    # getting the values
    @property
    def value(self):
        print('Getting value')
        return self._value

    # setting the values
    @value.setter
    def value(self, value):
        print('Setting value to ' + value)
        self._value = value

    # deleting the values
    @value.deleter
    def value(self):
        print('Deleting value')
        del self._value

# passing the value
x = Alphabet('Peter')
print(x.value)

x.value = 'Diesel'

del x.value
Output :
Getting value
Peter
Setting value to Diesel
Deleting value
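A small note of my own to tie this back to the protocol above: the property object created by the decorator is itself a descriptor, which you can verify directly (continuing with the Alphabet class defined above):

print(type(Alphabet.value))             # <class 'property'>
print(hasattr(property, '__get__'))     # True
print(hasattr(property, '__set__'))     # True
print(hasattr(property, '__delete__'))  # True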
How to detect if a user is using an Android tablet or a phone using Kotlin?
This example demonstrates how to detect if a user is using an Android tablet or a phone using Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="50dp"
android:text="Tutorials Point"
android:textAlignment="center"
android:textColor="@android:color/holo_green_dark"
android:textSize="32sp"
android:textStyle="bold" />
<TextView
android:id="@+id/textView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="150dp"
android:text="Detect device is Android phone or Android tablet"
android:textAlignment="center"
android:textColor="@android:color/black"
android:textSize="24sp"
android:textStyle="bold" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.content.Context
import android.os.Bundle
import android.telephony.TelephonyManager
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import java.util.*
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
val manager =
applicationContext.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
if (Objects.requireNonNull(manager).phoneType == TelephonyManager.PHONE_TYPE_NONE) {
Toast.makeText(
this@MainActivity,
"Detected... You're using a Tablet",
Toast.LENGTH_SHORT
).show()
} else {
Toast.makeText(
this@MainActivity,
"Detected... You're using a Mobile Phone",
Toast.LENGTH_SHORT
).show()
}
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
Python | Ways to add row/columns in numpy array - GeeksforGeeks
10 Sep, 2020
Given a numpy array, the task is to add rows/columns to it as required. Let's see a few examples of this problem.

Method #1: Using np.hstack() method
Python3
# Python code to demonstrate
# adding columns in numpy array

import numpy as np

ini_array = np.array([[1, 2, 3], [45, 4, 7], [9, 6, 10]])

# printing initial array
print("initial_array : ", str(ini_array))

# Array to be added as column
column_to_be_added = np.array([1, 2, 3])

# Adding column to numpy array
result = np.hstack((ini_array, np.atleast_2d(column_to_be_added).T))

# printing result
print("resultant array", str(result))
Output:
initial_array : [[ 1 2 3]
[45 4 7]
[ 9 6 10]]
resultant array [[ 1 2 3 1]
[45 4 7 2]
[ 9 6 10 3]]
Method #2: Using column_stack() method
Python3
# Python code to demonstrate
# adding columns in numpy array

import numpy as np

ini_array = np.array([[1, 2, 3], [45, 4, 7], [9, 6, 10]])

# printing initial array
print("initial_array : ", str(ini_array))

# Array to be added as column
column_to_be_added = np.array([1, 2, 3])

# Adding column to numpy array
result = np.column_stack((ini_array, column_to_be_added))

# printing result
print("resultant array", str(result))
Output:
initial_array : [[ 1 2 3]
[45 4 7]
[ 9 6 10]]
resultant array [[ 1 2 3 1]
[45 4 7 2]
[ 9 6 10 3]]
Method #3: Using np.vstack() method
Python3
# Python code to demonstrate
# adding rows in numpy array

import numpy as np

ini_array = np.array([[1, 2, 3], [45, 4, 7], [9, 6, 10]])

# printing initial array
print("initial_array : ", str(ini_array))

# Array to be added as row
row_to_be_added = np.array([1, 2, 3])

# Adding row to numpy array
result = np.vstack((ini_array, row_to_be_added))

# printing result
print("resultant array", str(result))
Output:
initial_array : [[ 1 2 3]
[45 4 7]
[ 9 6 10]]
resultant array [[ 1 2 3]
[45 4 7]
[ 9 6 10]
[ 1 2 3]]
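A side note of my own, not part of the original article: numpy also provides np.insert(), which can place a row or column at an arbitrary index rather than only at the end. A minimal sketch using the same array:

import numpy as np

ini_array = np.array([[1, 2, 3], [45, 4, 7], [9, 6, 10]])

# insert [1, 2, 3] as a new row at index 1
row_result = np.insert(ini_array, 1, [1, 2, 3], axis=0)

# insert [1, 2, 3] as a new column at index 0
col_result = np.insert(ini_array, 0, [1, 2, 3], axis=1)

print(row_result)
print(col_result)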
Sometimes we have an empty array and we need to append rows to it. Numpy provides the numpy.append() function to append a row to an empty Numpy array.
Syntax : numpy.append(arr, values, axis=None)
Case 1: Adding new rows to an empty 2-D array
Python3
# importing Numpy package
import numpy as np

# creating an empty 2d array of int type
empt_array = np.empty((0, 2), int)
print("Empty array:")
print(empt_array)

# adding two new rows to empt_array
# using np.append()
empt_array = np.append(empt_array, np.array([[10, 20]]), axis=0)
empt_array = np.append(empt_array, np.array([[40, 50]]), axis=0)

print("\nNow array is:")
print(empt_array)
Empty array:
[]
Now array is:
[[10 20]
[40 50]]
Case 2: Adding new rows to an empty 2-D array with three columns
Python3
# importing Numpy package
import numpy as np

# creating an empty 3-column array of int type
empt_array = np.empty((0, 3), int)
print("Empty array:")
print(empt_array)

# adding three new rows to empt_array
# using np.append()
empt_array = np.append(empt_array, np.array([[10, 20, 40]]), axis=0)
empt_array = np.append(empt_array, np.array([[40, 50, 55]]), axis=0)
empt_array = np.append(empt_array, np.array([[40, 50, 55]]), axis=0)

print("\nNow array is:")
print(empt_array)
Empty array:
[]
Now array is:
[[10 20 40]
[40 50 55]
[40 50 55]]
Case 3: Adding new rows to an empty 2-D array with four columns
Python3
# importing Numpy package
import numpy as np

# creating an empty 4-column array of int type
empt_array = np.empty((0, 4), int)
print("Empty array:")
print(empt_array)

# adding four new rows to empt_array
# using np.append()
empt_array = np.append(empt_array, np.array([[100, 200, 400, 888]]), axis=0)
empt_array = np.append(empt_array, np.array([[405, 500, 550, 558]]), axis=0)
empt_array = np.append(empt_array, np.array([[404, 505, 555, 145]]), axis=0)
empt_array = np.append(empt_array, np.array([[44, 55, 550, 150]]), axis=0)

print("\nNow array is:")
print(empt_array)
Empty array:
[]
Now array is:
[[100 200 400 888]
[405 500 550 558]
[404 505 555 145]
[ 44 55 550 150]]
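One caveat worth adding (my note, not from the original article): numpy.append() copies the whole array on every call, so appending many rows in a loop this way gets slow. For a large number of rows it is usually better to collect them in a Python list and convert once at the end:

import numpy as np

rows = []
for i in range(3):
    rows.append([i, i + 1, i + 2])   # cheap Python list append inside the loop

result = np.array(rows)              # single conversion to a numpy array at the end
print(result)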
F# - if/elif/else statement
An if/then/elif/else construct can chain multiple elif branches before the final else.
The syntax of an if/then/elif/else statement in F# programming language is −
if expr then
expr
elif expr then
expr
elif expr then
expr
...
else
expr
let a : int32 = 100
(* check the boolean condition using if statement *)
if (a = 10) then
printfn "Value of a is 10\n"
elif (a = 20) then
printfn " Value of a is 20\n"
elif (a = 30) then
printfn " Value of a is 30\n"
else
printfn " None of the values are matching\n"
printfn "Value of a is: %d" a
When you compile and execute the program, it yields the following output −
None of the values are matching
Value of a is: 100
GATE | GATE CS 2020 | Question 31 - GeeksforGeeks
13 Aug, 2021
A direct mapped cache memory of 1 MB has a block size of 256 bytes. The cache has an access time of 3 ns and a hit rate of 94%. During a cache miss, it takes 20 ns to bring the first word of a block from the main memory, while each subsequent word takes 5 ns. The word size is 64 bits. The average memory access time in ns (round off to 1 decimal place) is ________ .
Note – This question was Numerical Type.
(A) 13.5
(B) 15.5
(C) 23.5
(D) 15.3
Answer: (A)
Explanation: Given,
Word size = 64 bit = 8 byte
And,
Block size = 256 byte
Therefore, Number of words per block will be,
= 256 / 8 = 32
According to the question, the first word takes 20 ns and each of the remaining 31 words takes 5 ns to fetch from main memory into the cache.
Hence,
Tavg = (0.94 × 3) + (1 – 0.94) [3 + (20 + (31 × 5))]
= 13.5 (in ns)
Option (A) is correct.
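As a quick sanity check (my addition, not part of the original solution), the same calculation can be reproduced in a few lines of Python:

hit_rate = 0.94
cache_time = 3            # ns, cache access time
first_word = 20           # ns, first word of the block from main memory
other_words = 31 * 5      # ns, remaining 31 words at 5 ns each

t_avg = hit_rate * cache_time + (1 - hit_rate) * (cache_time + first_word + other_words)
print(round(t_avg, 1))    # 13.5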
\textit - Tex Command
\textit - Used to produce text-mode material in italics within a mathematical expression.
{ \textit #1}
The \textit command is used to produce text-mode material in italics within a mathematical expression.

Example:

\textit{\alpha in textit mode }\alpha

Output:

\alpha in textit mode α
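For context (my addition, not from the original page), the same command works in an ordinary LaTeX document; a minimal, compilable example might look like this:

\documentclass{article}
\begin{document}
% \textit keeps the annotation in italic text mode inside the displayed formula
\[
  E = mc^2 \quad \textit{(mass--energy equivalence)}
\]
\end{document}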
How to skip a particular test method from execution in Cucumber?
We can skip a particular test method from execution in Cucumber with the help of tagging of scenarios in the feature file.
@Regression
Feature: Invoice Testing
@Smoke
Scenario: Login Verification
Given User is in Home Page
@Payment
Scenario: Payment Testing
Given User is in Payment Page
Feature file with scenarios having tags Smoke and Payment.
import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import cucumber.api.testng.AbstractTestNGCucumberTests;
@RunWith(Cucumber.class)
@CucumberOptions(
   features = "src/test/java/features",
   glue = "stepDefiniations",
   tags = { "~@Payment" }
)
To skip the scenarios tagged with @Payment, ~ is placed before the @Payment tag in the Test Runner file. In more recent Cucumber versions, tag expressions such as "not @Payment" are used instead of the ~ prefix.
Hosting your ML model on Azure Functions — Part 1 | by Evan Miller | Towards Data Science
It’s been weeks of feature engineering, model selecting and hyper parameter tuning and you’re done.
Your colleagues are so confused by your constant use of the word “pickle” that they stop asking you what you’re on about and just assume you’ve really got into preserving veges.
Now let’s put that model somewhere that you can get some value out of it.
If you’re looking to deploy your models via AWS lambdas check out my articles in the link above the title.
Here’s the link to my Git repo if you want to follow along
These are very similar to AWS lambdas and from a model deployment option they have some great benefits over AWS lambdas.
Specifying your Azure Function runtime is easy as pi
Remember how painful it was to get AWS Lambdas to use pandas? I sure do. Change one small thing (your internal ML package or wanna use a new fancy sklearn model) and AWS lambdas kicks off.
Next you’re using Docker to recreate layers, and heaven forbid you forget to change your layer version...
With Azure Functions you just specify a requirements.txt and it’ll install all those packages for you when you deploy your Azure Function to your account.
It also lets you upload models from local directly to your Azure Function.
That is pretty bloody cool.
The Azure CLI allows you to debug your Azure functions locally
If you followed my AWS lambda tutorial, you probably spent a lot of time writing simple tests to run over your AWS lambdas to make sure you could import pandas or something really small.
You’re then able to test out sending data to the Azure Function via the requests package in Python: just run the function locally, send it POST requests, and you can be sure that it’s working before you deploy.
Creating a HTTP Azure Function deploys and exposes the endpoint in one simple step
Previously, when deploying a model to AWS Lambdas and API Gateway, you’d have to do the two steps separately. With Azure Functions it’s a single step.
In addition to this your Azure Functions come presecured (ie: have an API key) so you don’t have to worry about shutting down your AWS Lambda while you configure things on the API Gateway side.
Now we get into the fun part. If you want to follow along here’s the GitHub repo.
First you’ll need to install Azure CLI tools. This will let you test your Azure function locally and then push it to your Azure account.
At this point it’s probably worthwhile getting your Azure account. If you’ve never signed up you can probably get $200 free credits. But if you can’t don’t worry — these deployments won’t break the bank.
Follow this to get the Azure CLI installed so we can then start building our function.
To test this you’ll want to try:
func new
If the installation has worked then you’ll be able to create a Azure function locally. But it can be a bit tough to get working on Windows.
Getting Azure CLI working on Windows
If you get an error saying func was not found, you’ll want to do a few more steps:
Install chocolatey via admin shell
Use chocolatey to install the azure cli tools and get them working (answer #2).
Run func, check that your results match the Stackoverflow answer, and you’ll be ready to go
Apparently this is because func is not on your path. So if you’re smarter than me and sort that stuff out, let me know and I’ll add it here.
This will be where you initialize your Azure Function. The easy thing is that if you get your Azure Function to respond to HTTP requests then you’ve deployed your model and made it available via API in one easy step.
#1 Initialize the folder you’re working in
func init titanic_model --python
#2 Set up the bare bones Azure Function
cd titanic_model
func new
Select HTTP trigger
Give your function a name, I called mine model_deployment
And if you’ve done it right you should see this:
Eagle eyed readers would realize that I’m running those functions off my base Python installation.
When we run the Azure Function locally it uses the Python environment we specify to execute. This lets us use the requirements.txt to specify a local environment that will mirror what will run on the Azure Function.
I create my environment using conda because it’s easy, but you can use venv or any other solution.
I’d also add to your requirements.txt to cover all the dependencies. Here’s what mine looks like:
azure-functions==1.0.4
pandas==0.25.3
scikit-learn==0.21.3
requests==2.22.0
Now run the following so you can install the packages as well as build the model for deployment.
conda create --name model_deploy python=3.7
activate model_deploy
pip install -r requirements.txt
I’ve put a simple model in the Git repo to guide you. To create the model .pkl you’ll need to run:
python model_build.py
This model is built from the titanic dataset and predicts survival off the following features (a rough sketch of such a build script follows this list):
Age
Pclass (Passenger class)
Embarked (where a passenger embarked)
Sex
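If you don’t want to open the repo, a model_build.py along these lines might look roughly like the sketch below. This is my own minimal reconstruction rather than the repo’s exact script: the titanic.csv file name, the column names and the choice of LogisticRegression are assumptions.

# model_build.py -- minimal sketch; file, path and column names are assumptions
import pickle

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv('titanic.csv')                     # assumed training data location
X = df[['Age', 'Pclass', 'Embarked', 'Sex']]
y = df['Survived']

# impute the numeric columns, one-hot encode the categorical ones
preprocess = ColumnTransformer([
    ('num', SimpleImputer(strategy='median'), ['Age', 'Pclass']),
    ('cat', Pipeline([
        ('impute', SimpleImputer(strategy='most_frequent')),
        ('onehot', OneHotEncoder(handle_unknown='ignore')),
    ]), ['Embarked', 'Sex']),
])

clf = Pipeline([
    ('prep', preprocess),
    ('model', LogisticRegression()),
])
clf.fit(X, y)

# pickle the fitted pipeline next to the function code (path assumed)
with open('model_deployment/model.pkl', 'wb') as f:
    pickle.dump(clf, f)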
__init__.py is what your Azure Function is run off.
Using JSON inputs/outputs can be a bit of a faff and it took awhile for me to get right.
I’ll paste the main sections of the code here so I can highlight the main confusions I had so you can learn from my mistakes.
data = req.get_json()
data = json.loads(data)
You’ll be using a POST request for this model. But when you read in the JSON request it will still be in a string format, so we need to convert it to a proper dict/JSON object using json.loads before we can use the data to predict on.
response = []
for i in range(len(data)):
    data_row = data[i]
    pred_df = pd.DataFrame([data_row])
    pred_label = clf.predict(pred_df)[0]
    pred_probs = clf.predict_proba(pred_df)[0]
    results_dict = {
        'pred_label': int(pred_label),
        'pred_prob_0': pred_probs[0],
        'pred_prob_1': pred_probs[1]
    }
    response.append(results_dict)
return json.dumps(response)
There are a few things I’ll quickly mention (a fuller sketch of the handler follows this list):
pd.DataFrame([data_row]): lets you create a one row dataframe in Pandas. Otherwise you’ll get an index error
int(pred_label): used because the class outputted is a numpy datatype (int64) and that isn’t usable when returning JSON objects so I convert it
json.dumps(response): even though we work with JSON you need to convert it to a string before you send it back
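Putting those pieces together, a trimmed-down __init__.py might look like the sketch below. Again, this is a hedged reconstruction rather than the repo’s exact file: the model path and the use of func.HttpResponse for the return value are my assumptions.

# __init__.py -- minimal sketch of the HTTP-triggered function; paths and names are assumptions
import json
import pathlib
import pickle

import pandas as pd
import azure.functions as func

# load the pickled model once, when the function app starts
MODEL_PATH = pathlib.Path(__file__).parent / 'model.pkl'   # assumed location
with open(MODEL_PATH, 'rb') as f:
    clf = pickle.load(f)

def main(req: func.HttpRequest) -> func.HttpResponse:
    # the client POSTs a JSON-encoded string, so it gets decoded twice (see above)
    data = json.loads(req.get_json())

    response = []
    for data_row in data:
        pred_df = pd.DataFrame([data_row])
        pred_probs = clf.predict_proba(pred_df)[0]
        response.append({
            'pred_label': int(clf.predict(pred_df)[0]),
            'pred_prob_0': pred_probs[0],
            'pred_prob_1': pred_probs[1],
        })

    return func.HttpResponse(json.dumps(response), mimetype='application/json')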
Now let’s deploy that bad boi — locally
func host start
That should give you the below result once it’s up and running
http://localhost:7071/api/model_deployment is what we want to send our requests to. After the local Azure Function is running use test_api.py to ping data to our API. You should see the below results:
Booo yah. Now it’s working!!!!! So now we’ve got an Azure Function working locally, we need to push it to Azure so we can deploy it once and for all.
Now we’ve got the Azure Function working locally, we can push it to Azure and deploy this bad boi.
If you haven’t created an Azure account then this is your time to do it. And once you’ve done that go to your Portal and spin up an Azure Function App.
Here’s how I configured mine
Now you’ve created your Azure App, you can now deploy your model to Azure and try it out.
From your local Azure Function directory you’ll want to run the following command
az login
This will either execute seamlessly or or ask you to log in to your Azure account. Once that’s done you’re ready to go. Now let’s push your local code to Azure.
func azure functionapp publish <APP_NAME> --build remote
Where APP_NAME is the name you gave your Function App (duh). Mine is titanicmodel but yours will be different.
Once that’s built you need to find the URL of your Azure App. You can find that here
This URL is what you’re going to use to access your model from Azure. So replace azure_url in test_api.py with your Azure Function URL and give it a try.
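If you’re not following along with the repo, test_api.py is essentially a short script like the one below (the field names and the double JSON encoding are assumptions that match the handler sketched earlier):

# test_api.py -- minimal sketch; URL and field names are assumptions
import json

import requests

azure_url = 'http://localhost:7071/api/model_deployment'   # swap in your Azure Function URL once deployed

passengers = [
    {'Age': 22, 'Pclass': 3, 'Embarked': 'S', 'Sex': 'male'},
    {'Age': 38, 'Pclass': 1, 'Embarked': 'C', 'Sex': 'female'},
]

# the handler calls json.loads(req.get_json()), so the body is a JSON-encoded string
resp = requests.post(azure_url, json=json.dumps(passengers))
print(resp.json())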
If everything has gone to plan you’ll get:
[
  {
    "pred_label": 1,
    "pred_prob_0": 0.13974358581161161,
    "pred_prob_1": 0.86025641418838839
  },
  {
    "pred_label": 0,
    "pred_prob_0": 0.65911568636955931,
    "pred_prob_1": 0.34088431363044069
  },
  {
    "pred_label": 1,
    "pred_prob_0": 0.13974358581161161,
    "pred_prob_1": 0.86025641418838839
  },
  {
    "pred_label": 0,
    "pred_prob_0": 0.65911568636955931,
    "pred_prob_1": 0.34088431363044069
  }
]
And now you’ll also have that warm and fuzzy feeling of deploying your first ML model to an Azure Function!!
Since the API side is natively handled by Azure Functions I’ll make my second part on recreating this process with using Docker which might make things even easier.
},
{
"code": null,
"e": 6911,
"s": 6813,
"text": "Now we’ve got the Azure Function working locally we can push it to Azure and deploy this bad boi."
},
{
"code": null,
"e": 7063,
"s": 6911,
"text": "If you haven’t created an Azure account then this is your time to do it. And once you’ve done that go to your Portal and spin up an Azure Function App."
},
{
"code": null,
"e": 7092,
"s": 7063,
"text": "Here’s how I configured mine"
},
{
"code": null,
"e": 7182,
"s": 7092,
"text": "Now you’ve created your Azure App, you can now deploy your model to Azure and try it out."
},
{
"code": null,
"e": 7264,
"s": 7182,
"text": "From your local Azure Function directory you’ll want to run the following command"
},
{
"code": null,
"e": 7273,
"s": 7264,
"text": "az login"
},
{
"code": null,
"e": 7434,
"s": 7273,
"text": "This will either execute seamlessly or or ask you to log in to your Azure account. Once that’s done you’re ready to go. Now let’s push your local code to Azure."
},
{
"code": null,
"e": 7491,
"s": 7434,
"text": "func azure functionapp publish <APP_NAME> --build remote"
},
{
"code": null,
"e": 7602,
"s": 7491,
"text": "Where APP_NAME is the name you gave your Function App (duh). Mine is titanicmodel but yours will be different."
},
{
"code": null,
"e": 7687,
"s": 7602,
"text": "Once that’s built you need to find the URL of your Azure App. You can find that here"
},
{
"code": null,
"e": 7841,
"s": 7687,
"text": "This URL is what you’re going to use to access your model from Azure. So replace azure_url in test_api.py with your Azure Function URL and give it a try."
},
{
"code": null,
"e": 7884,
"s": 7841,
"text": "If everything has gone to plan you’ll get:"
},
{
"code": null,
"e": 8302,
"s": 7884,
"text": "[ { \"pred_label\": 1, \"pred_prob_0\": 0.13974358581161161, \"pred_prob_1\": 0.86025641418838839 }, { \"pred_label\": 0, \"pred_prob_0\": 0.65911568636955931, \"pred_prob_1\": 0.34088431363044069 }, { \"pred_label\": 1, \"pred_prob_0\": 0.13974358581161161, \"pred_prob_1\": 0.86025641418838839 }, { \"pred_label\": 0, \"pred_prob_0\": 0.65911568636955931, \"pred_prob_1\": 0.34088431363044069 }]"
},
{
"code": null,
"e": 8411,
"s": 8302,
"text": "And now you’ll also have that warm and fuzzy feeling of deploying your first ML model to an Azure Function!!"
}
] |
10 tricks for converting Data to a Numeric Type in Pandas | by B. Chen | Towards Data Science
|
When doing data analysis, it is important to ensure correct data types. Otherwise, you may get unexpected results or errors. In the case of Pandas, it will correctly infer data types in many cases and you can move on with your analysis without any further thought on the topic.
Despite how well pandas works, at some point in your data analysis process you will likely need to explicitly convert data from one type to another. This article will discuss how to change data to a numeric type. More specifically, you will learn how to use the Pandas built-in methods astype() and to_numeric() to deal with the following common problems:
Converting string/int to int/float
Converting float to int
Converting a column of mixed data types
Handling missing values
Converting a money column to float
Converting boolean to 0/1
Converting multiple data columns at once
Defining data types when reading a CSV file
Creating a custom function to convert data type
astype() vs. to_numeric()
For demonstration, we create a dataset and will load it with a function:
import pandas as pd
import numpy as np

def load_df():
    return pd.DataFrame({
        'string_col': ['1','2','3','4'],
        'int_col': [1,2,3,4],
        'float_col': [1.1,1.2,1.3,4.7],
        'mix_col': ['a', 2, 3, 4],
        'missing_col': [1.0, 2, 3, np.nan],
        'money_col': ['£1,000.00','£2,400.00','£2,400.00','£2,400.00'],
        'boolean_col': [True, False, True, True],
        'custom': ['Y', 'Y', 'N', 'N']
    })

df = load_df()
Please check out the Github repo for the source code.
Before diving into changing data types, let’s take a quick look at how to check data types. If we want to see all the data types in a DataFrame, we can use the dtypes attribute:
>>> df.dtypes
string_col      object
int_col          int64
float_col      float64
mix_col         object
missing_col    float64
money_col       object
boolean_col       bool
custom          object
dtype: object
This attribute is also available in Series and we can use it to check data type on a specific column. For instance, check the data type of int_col:
>>> df.int_col.dtypes
dtype('int64')
If we would like to explore data, the info() method may be more useful as it provides RangeIndex, total columns, non-null count, dtypes, and memory usage. This is a lot of valuable information that can help us to grasp more of an overall picture of the data.
>>> df.info()
RangeIndex: 4 entries, 0 to 3
Data columns (total 8 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   string_col   4 non-null      object
 1   int_col      4 non-null      int64
 2   float_col    4 non-null      float64
 3   mix_col      4 non-null      object
 4   missing_col  3 non-null      float64
 5   money_col    4 non-null      object
 6   boolean_col  4 non-null      bool
 7   custom       4 non-null      object
dtypes: bool(1), float64(2), int64(1), object(4)
memory usage: 356.0+ bytes
The simplest way to convert a Pandas column to a different type is to use the Series’ method astype(). For instance, to convert strings to integers we can call it like:
# string to int
>>> df['string_col'] = df['string_col'].astype('int')
>>> df.dtypes
string_col       int64
int_col        float64
float_col      float64
mix_col         object
missing_col    float64
money_col       object
boolean_col       bool
custom          object
dtype: object
We can see that it is using 64-bit integer numbers by default. In some situations, it can be more memory efficient to use shorter integer numbers when handling a large dataset. To do that, you can simply call astype('int8') , astype('int16') or astype('int32')
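For example, a quick way to see the memory difference between the default and a smaller integer type might look like this (a small sketch, not part of the original article):

# compare bytes used by int64 vs int8 for the same values
print(df['int_col'].astype('int64').memory_usage(deep=True))
print(df['int_col'].astype('int8').memory_usage(deep=True))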
Similarly, if we want to convert the data type to float, we can call astype('float'). By default, it is using 64-bit floating-point numbers. We can use 'float128' for more precision or 'float16' for better memory efficiency.
# string to float
>>> df['string_col'] = df['string_col'].astype('float')
>>> df.dtypes
string_col     float64
int_col          int64
float_col      float64
mix_col         object
missing_col    float64
money_col       object
boolean_col       bool
custom          object
dtype: object
# For more precision
>>> df['string_col'] = df['string_col'].astype('float128')
# For more memory efficiency
>>> df['string_col'] = df['string_col'].astype('float16')
>>> df['string_col'] = df['string_col'].astype('float32')
If we want to convert a float column to integers, we can try using the astype() we used above.
df['float_col'] = df['float_col'].astype('int')
However, there is a bit of a gotcha. By displaying the DataFrame, we can see that the column gets converted to integers, but all the values are rounded down. That might be okay, but in most cases, I would imagine it is not. If we want to convert to integers and round the way we would expect, we can call round() first.
df['float_col'] = df['float_col'].round(0).astype('int')
Now, the number 4.7 gets rounded up to 5.
Let’s move on to a column of mixed strings and numbers. When running astype('int'), we get a ValueError.
# Getting ValueError
df['mix_col'] = df['mix_col'].astype('int')
The error shows it’s the problem with the value 'a' as it cannot be converted to an integer. In order to get around this problem, we can use Pandas to_numeric() function with argument errors='coerce'.
df['mix_col'] = pd.to_numeric(df['mix_col'], errors='coerce')
But when checking the dtypes, you will find it gets converted to float64.
>>> df['mix_col'].dtypes
dtype('float64')
In some cases, you don’t want the output to be float values; you want it to be integers, for instance when converting an ID column. We can call astype('Int64'). Note it has a capital I and is different from Numpy’s 'int64'. What this does is change Numpy’s NaN to Pandas’ NA, and this allows the column to be an integer.
>>> df['mix_col'] = pd.to_numeric(df['mix_col'], errors='coerce').astype('Int64')
>>> df['mix_col'].dtypes
Int64Dtype()
Alternatively, we can replace Numpy nan with another value (for example replacing NaN with 0) and call astype('int')
df['mix_col'] = pd.to_numeric( df['mix_col'], errors='coerce').fillna(0).astype('int')
Now we should be fully equipped to deal with missing values. In Pandas, missing values are given the value NaN, short for “Not a Number”. For technical reasons, these NaN values are always of the float64 dtype.
df.missing_col.dtypes
dtype('float64')
When converting a column with missing values to integers, we will also get a ValueError because NaN cannot be converted to an integer.
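For illustration, this is the call that fails (the exact error message varies between Pandas versions):

# raises ValueError because NaN cannot be cast to an integer type
df['missing_col'].astype('int')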
To get around the error, we can call astype('Int64') as we did above (note it is a capital I, same as mentioned in the last section). What this does is change Numpy’s NaN to Pandas’ NA and this allows it to be an integer.
df['missing_col'] = df['missing_col'].astype('Int64')
Alternatively, we can replace Numpy NaN with another value (for example replacing NaN with 0) and call astype('int')
df['missing_col'] = df['missing_col'].fillna(0).astype('int')
If you want to learn more about handling missing values, you can check out:
towardsdatascience.com
Let’s move on to the money column. The problem is that if we use the method above, we’re going to get all NaN or NA values, because the entries are all strings with the symbols £ and , and they can’t be converted to numbers. So the first thing we have to do is remove all the invalid symbols.
>>> df['money_replace'] = df['money_col'].str.replace('£', '').str.replace(',','')
>>> df['money_replace'] = pd.to_numeric(df['money_replace'])
>>> df['money_replace']
0    1000.0
1    2400.0
2    2400.0
3    2400.0
Name: money_replace, dtype: float64
We chain 2 replace() calls, one for £ and the other for ,, to replace them with an empty string.
If you are familiar with regular expression, we can also replace those symbols with a regular expression.
>>> df['money_regex'] = df['money_col'].str.replace('[\£\,]', '', regex=True)
>>> df['money_regex'] = pd.to_numeric(df['money_regex'])
>>> df['money_regex']
replace('[\£\,]', '', regex=True) says we would like to replace £ and , with an empty string. The argument regex=True assumes the passed-in pattern is a regular expression (Note it defaults to True).
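As a small aside, the backslashes are not strictly required inside a character class, so a raw-string pattern works just as well (a minor variation on the article’s pattern, not a different technique):

df['money_regex'] = df['money_col'].str.replace(r'[£,]', '', regex=True)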
We have True/False, but you can imagine a case in which you need these as 0 and 1, for instance, if you are building a machine learning model and this is one of your input features: you’d need it to be numeric and you would use 0 and 1 to represent False and True. This is actually very simple; you can just call astype('int'):
df['boolean_col'] = df['boolean_col'].astype('int')
So far, we have been converting data type one column at a time. For instance
# Converting column string_col and int_col one at a time
df['string_col'] = df['string_col'].astype('float16')
df['int_col'] = df['int_col'].astype('float16')
There is a DataFrame method, also called astype(), that allows us to convert multiple column data types at once. It is time-saving when you have a bunch of columns you want to change.
df = df.astype({ 'string_col': 'float16', 'int_col': 'float16'})
If you want to set the data type for each column when reading a CSV file, you can use the argument dtype when loading data with read_csv():
df = pd.read_csv( 'dataset.csv', dtype={ 'string_col': 'float16', 'int_col': 'float16' })
The dtype argument takes a dictionary with the key representing the column and the value representing the data type. The difference between this and above is that this method does the converting during the reading process and can be time-saving and more memory efficient.
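If you want to verify the saving, memory_usage() reports the bytes used per column, so you can compare the down-typed load against a default load (a quick check using the same hypothetical dataset.csv as above):

df_default = pd.read_csv('dataset.csv')
df_small = pd.read_csv('dataset.csv', dtype={'string_col': 'float16', 'int_col': 'float16'})
# compare per-column memory in bytes
print(df_default.memory_usage(deep=True))
print(df_small.memory_usage(deep=True))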
When data is a bit complex to convert, we can create a custom function and apply it to each value to convert to the appropriate data type.
For instance, the money_col column, here is a simple function we can use:
>>> def convert_money(value):
        value = value.replace('£','').replace(',', '')
        return float(value)
>>> df['money_col'].apply(convert_money)
0    1000.0
1    2400.0
2    2400.0
3    2400.0
Name: money_col, dtype: float64
We can also use a lambda function:
df['money_col']\
    .apply(lambda v: v.replace('£','').replace(',',''))\
    .astype('float')
The simplest way to convert data type from one to the other is to use astype() method. The method is supported by both Pandas DataFrame and Series. If you already have a numeric data type (int8, int16, int32, int64, float16, float32, float64, float128, and boolean) you can also use astype() to:
convert it to another numeric data type (int to float, float to int, etc.)
use it to downcast to a smaller or upcast to a larger byte size
However, astype() won’t work for a column of mixed types. For instance, mix_col has 'a' and missing_col has NaN. If we try to use astype() we would get a ValueError. As of Pandas 0.20.0, this error can be suppressed by setting the argument errors='ignore', but your original data will be returned untouched.
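For completeness, the suppressed call looks like the line below; the column simply comes back unchanged instead of raising (note that errors='ignore' has since been deprecated in newer Pandas releases, so treat this as version-dependent):

df['mix_col'].astype('int', errors='ignore')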
The Pandas to_numeric() function can handle these values more gracefully. Rather than fail, we can set the argument errors='coerce' to coerce invalid values to NaN:
pd.to_numeric(df['mix_col'], errors='coerce')
We have seen how we can convert a Pandas data column to a numeric type with astype() and to_numeric(). astype() is the simplest way and offers more possibility in the way of conversion, while to_numeric() has more powerful functions for error handling.
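One more convenience worth mentioning: to_numeric() also accepts a downcast argument that picks the smallest numeric subtype that can hold the values, which pairs nicely with the memory-saving tips above (a brief sketch using the same DataFrame):

# downcast to the smallest integer/float subtype that fits
pd.to_numeric(df['int_col'], downcast='integer')   # e.g. int8 for values 1-4
pd.to_numeric(df['float_col'], downcast='float')   # float32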
I hope this article will help you to save time in learning Pandas. I recommend you check out the documentation for the astype() and to_numeric() APIs to learn about other things you can do.
Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning.
Pandas json_normalize() you should know for flattening JSON
All Pandas cut() you should know for transforming numerical data into categorical data
Using Pandas method chaining to improve code readability
How to do a Custom Sort on Pandas DataFrame
All the Pandas shift() you should know for data analysis
When to use Pandas transform() function
Pandas concat() tricks you should know
All the Pandas merge() you should know
Working with datetime in Pandas DataFrame
Pandas read_csv() tricks you should know
4 tricks you should know to parse date columns with Pandas read_csv()
More tutorials can be found on my Github
|
[
{
"code": null,
"e": 450,
"s": 172,
"text": "When doing data analysis, it is important to ensure correct data types. Otherwise, you may get unexpected results or errors. In the case of Pandas, it will correctly infer data types in many cases and you can move on with your analysis without any further thought on the topic."
},
{
"code": null,
"e": 806,
"s": 450,
"text": "Despite how well pandas works, at some point in your data analysis process you will likely need to explicitly convert data from one type to another. This article will discuss how to change data to a numeric type. More specifically, you will learn how to use the Pandas built-in methods astype() and to_numeric() to deal with the following common problems:"
},
{
"code": null,
"e": 1140,
"s": 806,
"text": "Converting string/int to int/floatConverting float to intConverting a column of mixed data typesHandling missing valuesConverting a money column to floatConverting boolean to 0/1Converting multiple data columns at onceDefining data types when reading a CSV fileCreating a custom function to convert data typeastype() vs. to_numeric()"
},
{
"code": null,
"e": 1175,
"s": 1140,
"text": "Converting string/int to int/float"
},
{
"code": null,
"e": 1199,
"s": 1175,
"text": "Converting float to int"
},
{
"code": null,
"e": 1239,
"s": 1199,
"text": "Converting a column of mixed data types"
},
{
"code": null,
"e": 1263,
"s": 1239,
"text": "Handling missing values"
},
{
"code": null,
"e": 1298,
"s": 1263,
"text": "Converting a money column to float"
},
{
"code": null,
"e": 1324,
"s": 1298,
"text": "Converting boolean to 0/1"
},
{
"code": null,
"e": 1365,
"s": 1324,
"text": "Converting multiple data columns at once"
},
{
"code": null,
"e": 1409,
"s": 1365,
"text": "Defining data types when reading a CSV file"
},
{
"code": null,
"e": 1457,
"s": 1409,
"text": "Creating a custom function to convert data type"
},
{
"code": null,
"e": 1483,
"s": 1457,
"text": "astype() vs. to_numeric()"
},
{
"code": null,
"e": 1556,
"s": 1483,
"text": "For demonstration, we create a dataset and will load it with a function:"
},
{
"code": null,
"e": 1961,
"s": 1556,
"text": "import pandas as pdimport numpy as npdef load_df(): return pd.DataFrame({ 'string_col': ['1','2','3','4'], 'int_col': [1,2,3,4], 'float_col': [1.1,1.2,1.3,4.7], 'mix_col': ['a', 2, 3, 4], 'missing_col': [1.0, 2, 3, np.nan], 'money_col': ['£1,000.00','£2,400.00','£2,400.00','£2,400.00'], 'boolean_col': [True, False, True, True], 'custom': ['Y', 'Y', 'N', 'N'] })df = load_df()"
},
{
"code": null,
"e": 2015,
"s": 1961,
"text": "Please check out the Github repo for the source code."
},
{
"code": null,
"e": 2190,
"s": 2015,
"text": "Before we diving into change data types, let’s take a quick look at how to check data types. If we want to see all the data types in a DataFrame, we can use dtypes attribute:"
},
{
"code": null,
"e": 2393,
"s": 2190,
"text": ">>> df.dtypesstring_col objectint_col int64float_col float64mix_col objectmissing_col float64money_col objectboolean_col boolcustom objectdtype: object"
},
{
"code": null,
"e": 2541,
"s": 2393,
"text": "This attribute is also available in Series and we can use it to check data type on a specific column. For instance, check the data type of int_col:"
},
{
"code": null,
"e": 2577,
"s": 2541,
"text": ">>> df.int_col.dtypesdtype('int64')"
},
{
"code": null,
"e": 2836,
"s": 2577,
"text": "If we would like to explore data, the info() method may be more useful as it provides RangeIndex, total columns, non-null count, dtypes, and memory usage. This is a lot of valuable information that can help us to grasp more of an overall picture of the data."
},
{
"code": null,
"e": 3394,
"s": 2836,
"text": ">>> df.info()RangeIndex: 4 entries, 0 to 3Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 string_col 4 non-null object 1 int_col 4 non-null int64 2 float_col 4 non-null float64 3 mix_col 4 non-null object 4 missing_col 3 non-null float64 5 money_col 4 non-null object 6 boolean_col 4 non-null bool 7 custom 4 non-null object dtypes: bool(1), float64(2), int64(1), object(4)memory usage: 356.0+ bytes"
},
{
"code": null,
"e": 3563,
"s": 3394,
"text": "The simplest way to convert a Pandas column to a different type is to use the Series’ method astype(). For instance, to convert strings to integers we can call it like:"
},
{
"code": null,
"e": 3834,
"s": 3563,
"text": "# string to int>>> df['string_col'] = df['string_col'].astype('int')>>> df.dtypesstring_col int64int_col float64float_col float64mix_col objectmissing_col float64money_col objectboolean_col boolcustom objectdtype: object"
},
{
"code": null,
"e": 4095,
"s": 3834,
"text": "We can see that it is using 64-bit integer numbers by default. In some situations, it can be more memory efficient to use shorter integer numbers when handling a large dataset. To do that, you can simply call astype('int8') , astype('int16') or astype('int32')"
},
{
"code": null,
"e": 4320,
"s": 4095,
"text": "Similarly, if we want to convert the data type to float, we can call astype('float'). By default, it is using 64-bit floating-point numbers. We can use 'float128' for more precision or 'float16' for better memory efficiency."
},
{
"code": null,
"e": 4815,
"s": 4320,
"text": "# string to float>>> df['string_col'] = df['string_col'].astype('float')>>> df.dtypesstring_col float64int_col int64float_col float64mix_col objectmissing_col float64money_col objectboolean_col boolcustom objectdtype: object# For more precision>>> df['string_col'] = df['string_col'].astype('float128')# For more memory efficiency>>> df['string_col'] = df['string_col'].astype('float16')>>> df['string_col'] = df['string_col'].astype('float32')"
},
{
"code": null,
"e": 4910,
"s": 4815,
"text": "If we want to convert a float column to integers, we can try using the astype() we used above."
},
{
"code": null,
"e": 4958,
"s": 4910,
"text": "df['float_col'] = df['float_col'].astype('int')"
},
{
"code": null,
"e": 5275,
"s": 4958,
"text": "However, there is a bit of a gotcha. By displaying the DataFrame, we can see that the column gets converted to integers but rounded all the values down. It might be okay, but in most cases, I would imagine that is not. If we want to convert to integers and round the way that we would expect we can do round() first."
},
{
"code": null,
"e": 5332,
"s": 5275,
"text": "df['float_col'] = df['float_col'].round(0).astype('int')"
},
{
"code": null,
"e": 5374,
"s": 5332,
"text": "Now, the number 4.7 gets rounded up to 5."
},
{
"code": null,
"e": 5480,
"s": 5374,
"text": "Let’s move on to a column of mixed strings and numbers. When running astype('int'), we get an ValueError."
},
{
"code": null,
"e": 5544,
"s": 5480,
"text": "# Getting ValueErrordf['mix_col'] = df['mix_col'].astype('int')"
},
{
"code": null,
"e": 5745,
"s": 5544,
"text": "The error shows it’s the problem with the value 'a' as it cannot be converted to an integer. In order to get around this problem, we can use Pandas to_numeric() function with argument errors='coerce'."
},
{
"code": null,
"e": 5807,
"s": 5745,
"text": "df['mix_col'] = pd.to_numeric(df['mix_col'], errors='coerce')"
},
{
"code": null,
"e": 5880,
"s": 5807,
"text": "But when checking the dtypes, you will find it get converted to float64."
},
{
"code": null,
"e": 5921,
"s": 5880,
"text": ">>> df['mix_col'].dtypesdtype('float64')"
},
{
"code": null,
"e": 6224,
"s": 5921,
"text": "In some cases, you don’t want to output to be float values you want it to be integers, for instance converting an ID column. We can call astype('Int64'). Note it has a capital I and is different than Numpy 'int64'. What this does is change Numpy’s NaN to Pandas’ NA and this allows it to be an integer."
},
{
"code": null,
"e": 6349,
"s": 6224,
"text": ">>> df['mix_col'] = pd.to_numeric(df['mix_col'], errors='coerce').astype('Int64')>>> df['mix_col'].dtypesInt64Dtype()"
},
{
"code": null,
"e": 6466,
"s": 6349,
"text": "Alternatively, we can replace Numpy nan with another value (for example replacing NaN with 0) and call astype('int')"
},
{
"code": null,
"e": 6560,
"s": 6466,
"text": "df['mix_col'] = pd.to_numeric( df['mix_col'], errors='coerce').fillna(0).astype('int')"
},
{
"code": null,
"e": 6774,
"s": 6560,
"text": "Now we should be fully equipped with dealing with the missing values. In Pandas, missing values are given the value NaN, short for “Not a Number”. For technical reasons, these NaN values are always of the float64."
},
{
"code": null,
"e": 6812,
"s": 6774,
"text": "df.missing_col.dtypesdtype('float64')"
},
{
"code": null,
"e": 6947,
"s": 6812,
"text": "When converting a column with missing values to integers, we will also get a ValueError because NaN cannot be converted to an integer."
},
{
"code": null,
"e": 7167,
"s": 6947,
"text": "To get around the error, we can call astype('Int64') as we did above (Note it is captial I, same as mentioned in the last section). What this does is change Numpy’s NaN to Pandas’ NA and this allows it to be an integer."
},
{
"code": null,
"e": 7221,
"s": 7167,
"text": "df['missing_col'] = df['missing_col'].astype('Int64')"
},
{
"code": null,
"e": 7338,
"s": 7221,
"text": "Alternatively, we can replace Numpy NaN with another value (for example replacing NaN with 0) and call astype('int')"
},
{
"code": null,
"e": 7396,
"s": 7338,
"text": "df['mix_col'] = df['missing_col'].fillna(0).astype('int')"
},
{
"code": null,
"e": 7472,
"s": 7396,
"text": "If you want to learn more about handling missing values, you can check out:"
},
{
"code": null,
"e": 7495,
"s": 7472,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7779,
"s": 7495,
"text": "Let’s move on to the money column. The problem is that if we are using the method above we’re going to get all NaN or NA values because they are all strings with symbols £ and ,, and they can’t be converted to numbers. So the first what we have to do is removing all invalid symbols."
},
{
"code": null,
"e": 8031,
"s": 7779,
"text": ">>> df['money_replace'] = df['money_col'].str.replace('£', '').str.replace(',','')>>> df['money_replace'] = pd.to_numeric(df['money_replace'])>>> df['money_replace']0 1000.01 2400.02 2400.03 2400.0Name: money_replace, dtype: float64"
},
{
"code": null,
"e": 8128,
"s": 8031,
"text": "We chain 2 replace() calls, one for £ and the other for ,, to replace them with an empty string."
},
{
"code": null,
"e": 8234,
"s": 8128,
"text": "If you are familiar with regular expression, we can also replace those symbols with a regular expression."
},
{
"code": null,
"e": 8398,
"s": 8234,
"text": ">>> df['money_regex'] = df['money_col'].str.replace('[\\£\\,]', '', regex=True)>>> df['money_regex'] = pd.to_numeric(df['money_replace'])>>> df['money_regex']"
},
{
"code": null,
"e": 8598,
"s": 8398,
"text": "replace('[\\£\\,]', '', regex=True) says we would like to replace £ and , with an empty string. The argument regex=True assumes the passed-in pattern is a regular expression (Note it defaults to True)."
},
{
"code": null,
"e": 8923,
"s": 8598,
"text": "We have True/False, but you can imagine a case in which need these as 0 and 1 , for instance, if you are building a machine learning model and this is one of your input features, you’d need it to be numeric and you would use 0 and 1 to represent False and True. This is actually very simple, you can just call astype('int'):"
},
{
"code": null,
"e": 8975,
"s": 8923,
"text": "df['boolean_col'] = df['boolean_col'].astype('int')"
},
{
"code": null,
"e": 9052,
"s": 8975,
"text": "So far, we have been converting data type one column at a time. For instance"
},
{
"code": null,
"e": 9209,
"s": 9052,
"text": "# Converting column string_col and int_col one at a timedf['string_col'] = df['string_col'].astype('float16')df['int_col'] = df['int_col'].astype('float16')"
},
{
"code": null,
"e": 9386,
"s": 9209,
"text": "There is a DataFrame method also called astype() allows us to convert multiple column data types at once. It is time-saving when you have a bunch of columns you want to change."
},
{
"code": null,
"e": 9457,
"s": 9386,
"text": "df = df.astype({ 'string_col': 'float16', 'int_col': 'float16'})"
},
{
"code": null,
"e": 9597,
"s": 9457,
"text": "If you want to set the data type for each column when reading a CSV file, you can use the argument dtype when loading data with read_csv():"
},
{
"code": null,
"e": 9711,
"s": 9597,
"text": "df = pd.read_csv( 'dataset.csv', dtype={ 'string_col': 'float16', 'int_col': 'float16' })"
},
{
"code": null,
"e": 9983,
"s": 9711,
"text": "The dtype argument takes a dictionary with the key representing the column and the value representing the data type. The difference between this and above is that this method does the converting during the reading process and can be time-saving and more memory efficient."
},
{
"code": null,
"e": 10122,
"s": 9983,
"text": "When data is a bit complex to convert, we can create a custom function and apply it to each value to convert to the appropriate data type."
},
{
"code": null,
"e": 10196,
"s": 10122,
"text": "For instance, the money_col column, here is a simple function we can use:"
},
{
"code": null,
"e": 10420,
"s": 10196,
"text": ">>> def convert_money(value): value = value.replace('£','').replace(',', '') return float(value)>>> df['money_col'].apply(convert_money)0 1000.01 2400.02 2400.03 2400.0Name: money_col, dtype: float64"
},
{
"code": null,
"e": 10455,
"s": 10420,
"text": "We can also use a lambda function:"
},
{
"code": null,
"e": 10546,
"s": 10455,
"text": "df['money_col'] .apply(lambda v: v.replace('£','').replace(',','')) .astype('float')"
},
{
"code": null,
"e": 10842,
"s": 10546,
"text": "The simplest way to convert data type from one to the other is to use astype() method. The method is supported by both Pandas DataFrame and Series. If you already have a numeric data type (int8, int16, int32, int64, float16, float32, float64, float128, and boolean) you can also use astype() to:"
},
{
"code": null,
"e": 10917,
"s": 10842,
"text": "convert it to another numeric data type (int to float, float to int, etc.)"
},
{
"code": null,
"e": 10981,
"s": 10917,
"text": "use it to downcast to a smaller or upcast to a larger byte size"
},
{
"code": null,
"e": 11293,
"s": 10981,
"text": "However, astype() won’t work for a column of mixed types. For instance, the mixed_col has a and missing_col has NaN. If we try to use astype() we would get a ValueError. As of Pandas 0.20.0, this error can be suppressed by setting the argument errors='ignore', but your original data will be returned untouched."
},
{
"code": null,
"e": 11458,
"s": 11293,
"text": "The Pandas to_numeric() function can handle these values more gracefully. Rather than fail, we can set the argument errors='coerce' to coerce invalid values to NaN:"
},
{
"code": null,
"e": 11506,
"s": 11458,
"text": "pd.to_numeric(df['mixed_col'], errors='coerce')"
},
{
"code": null,
"e": 11759,
"s": 11506,
"text": "We have seen how we can convert a Pandas data column to a numeric type with astype() and to_numeric(). astype() is the simplest way and offers more possibility in the way of conversion, while to_numeric() has more powerful functions for error handling."
},
{
"code": null,
"e": 11955,
"s": 11759,
"text": "I hope this article will help you to save time in learning Pandas. I recommend you to check out the documentation for the astypes() and to_numeric() API and to know about other things you can do."
},
{
"code": null,
"e": 12107,
"s": 11955,
"text": "Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning."
},
{
"code": null,
"e": 12167,
"s": 12107,
"text": "Pandas json_normalize() you should know for flattening JSON"
},
{
"code": null,
"e": 12254,
"s": 12167,
"text": "All Pandas cut() you should know for transforming numerical data into categorical data"
},
{
"code": null,
"e": 12311,
"s": 12254,
"text": "Using Pandas method chaining to improve code readability"
},
{
"code": null,
"e": 12355,
"s": 12311,
"text": "How to do a Custom Sort on Pandas DataFrame"
},
{
"code": null,
"e": 12412,
"s": 12355,
"text": "All the Pandas shift() you should know for data analysis"
},
{
"code": null,
"e": 12452,
"s": 12412,
"text": "When to use Pandas transform() function"
},
{
"code": null,
"e": 12491,
"s": 12452,
"text": "Pandas concat() tricks you should know"
},
{
"code": null,
"e": 12530,
"s": 12491,
"text": "All the Pandas merge() you should know"
},
{
"code": null,
"e": 12572,
"s": 12530,
"text": "Working with datetime in Pandas DataFrame"
},
{
"code": null,
"e": 12613,
"s": 12572,
"text": "Pandas read_csv() tricks you should know"
},
{
"code": null,
"e": 12683,
"s": 12613,
"text": "4 tricks you should know to parse date columns with Pandas read_csv()"
}
] |
trunc() in Python
|
In this tutorial, we are going to learn about the math.trunc() method.
The method math.trunc() is used to truncate float values. It acts like the math.floor() method for positive values and like the math.ceil() method for negative values.
# importing math module
import math
# floor value
print(math.floor(3.5))
# trunc for positive number
print(math.trunc(3.5))
If you run the above code, then you will get a result similar to the following.
3
3
# importing math module
import math
# ceil value
print(math.ceil(-3.5))
# trunc for negative number
print(math.trunc(-3.5))
If you run the above code, then you will get a result similar to the following.
-3
-3
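Putting the two behaviours together: math.trunc() always drops the fractional part, i.e. it rounds toward zero, which is also what the built-in int() does for floats. A short side-by-side comparison (an extra illustration, not part of the original examples):

import math
for x in (3.5, -3.5):
    # value, trunc, floor, ceil, int
    print(x, math.trunc(x), math.floor(x), math.ceil(x), int(x))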
If you have any doubts in the tutorial, mention them in the comment section.
|
[
{
"code": null,
"e": 1133,
"s": 1062,
"text": "In this tutorial, we are going to learn about the math.trunc() method."
},
{
"code": null,
"e": 1294,
"s": 1133,
"text": "The method math.trunc() is used to truncate the float values. It will act as math.floor() method for positive values and math.ceil() method for negative values."
},
{
"code": null,
"e": 1305,
"s": 1294,
"text": " Live Demo"
},
{
"code": null,
"e": 1429,
"s": 1305,
"text": "# importing math module\nimport math\n# floor value\nprint(math.floor(3.5))\n# trunc for positive number\nprint(math.trunc(3.5))"
},
{
"code": null,
"e": 1505,
"s": 1429,
"text": "If you run the above code, then you will get the similar result as follows."
},
{
"code": null,
"e": 1509,
"s": 1505,
"text": "3\n3"
},
{
"code": null,
"e": 1520,
"s": 1509,
"text": " Live Demo"
},
{
"code": null,
"e": 1644,
"s": 1520,
"text": "# importing math module\nimport math\n# ceil value\nprint(math.ceil(-3.5))\n# trunc for negative number\nprint(math.trunc(-3.5))"
},
{
"code": null,
"e": 1720,
"s": 1644,
"text": "If you run the above code, then you will get the similar result as follows."
},
{
"code": null,
"e": 1726,
"s": 1720,
"text": "-3\n-3"
},
{
"code": null,
"e": 1803,
"s": 1726,
"text": "If you have any doubts in the tutorial, mention them in the comment section."
}
] |
How to Clean JSON Data at the Command Line | by Ezz El Din Abdullah | Towards Data Science
|
jq is a lightweight command-line JSON processor written in C. It follows the Unix philosophy that it’s focused on one thing and it can do it very well. In this tutorial, we see how jq can be used to clean JSONs and retrieve some information or get rid of undesired ones.
Some data is more suitable to a JSON format than CSV or any other format. Most modern APIs and NoSQL databases support JSON, and it is also useful when your data is hierarchical: JSON can represent trees that go to any depth, essentially any dimension, unlike CSV, which is just 2D and can only hold tabular data, not hierarchical data.
Today, we’re investigating a JSON file (from Kaggle) that contains intent recognition data. Please download it as this is the file we’re working on in this tutorial.
If you’re using macOS, try this:
$ brew install jq
or this if you want the latest version:
$ brew install --HEAD jq
$ mkdir github
$ cd github
$ git clone https://github.com/stedolan/jq.git
$ cd jq
$ autoreconf -i
$ ./configure --disable-maintainer-mode
$ make
$ choco install jq
If you need more information on how to install jq, please check out the installation page in the jq wiki
For this chatbot data, we have a set of possible conversation intents for a chatbot to use, and for each intent, we have multiple keys like the intent type, the text that the user can type, the responses that the chatbot should reply with, and so on.
Let’s now experiment with jq by using the identity filter '.':
$ < intent.json jq '.' | head -n 20
{
  "intents": [
    {
      "intent": "Greeting",
      "text": [
        "Hi",
        "Hi there",
        "Hola",
        "Hello",
        "Hello there",
        "Hya",
        "Hya there"
      ],
      "responses": [
        "Hi human, please tell me your GeniSys user",
        "Hello human, please tell me your GeniSys user",
        "Hola human, please tell me your GeniSys user"
      ],
      "extension": {
        "function": "",
And let’s see the content of the first object:
$ < intent.json jq '.intents[0]'{ "intent": "Greeting", "text": [ "Hi", "Hi there", "Hola", "Hello", "Hello there", "Hya", "Hya there" ], "responses": [ "Hi human, please tell me your GeniSys user", "Hello human, please tell me your GeniSys user", "Hola human, please tell me your GeniSys user" ], "extension": { "function": "", "entities": false, "responses": [] }, "context": { "in": "", "out": "GreetingUserRequest", "clear": false }, "entityType": "NA", "entities": []}
We can also use indexing, let’s get the first intent type:
$ < intent.json jq '.intents[0].intent'
"Greeting"
What if we want to get all intent types that this chatbot can understand:
$ < intent.json jq '.intents[].intent'
"Greeting"
"GreetingResponse"
"CourtesyGreeting"
"CourtesyGreetingResponse"
"CurrentHumanQuery"
"NameQuery"
"RealNameQuery"
"TimeQuery"
"Thanks"
"NotTalking2U"
"UnderstandQuery"
"Shutup"
"Swearing"
"GoodBye"
"CourtesyGoodBye"
"WhoAmI"
"Clever"
"Gossip"
"Jokes"
"PodBayDoor"
"PodBayDoorResponse"
"SelfAware"
One of the useful functions of jq is the select function
We can use it to filter some useful information. For example, let’s get the object of Thanks intent:
$ < intent.json jq '.intents[] | select(.intent=="Thanks")'{ "intent": "Thanks", "text": [ "OK thank you", "OK thanks", "OK", "Thanks", "Thank you", "That's helpful" ], "responses": [ "No problem!", "Happy to help!", "Any time!", "My pleasure" ], "extension": { "function": "", "entities": false, "responses": [] }, "context": { "in": "", "out": "", "clear": false }, "entityType": "NA", "entities": []}
Let’s just get the response out of it:
$ < intent.json jq '.intents[] | select(.intent=="Thanks") | .responses'
[
  "No problem!",
  "Happy to help!",
  "Any time!",
  "My pleasure"
]
The intent object in the last example has just one value. What if an object has multiple values, like text? We then need to use the object value iterator .[] . For example, let’s see if a text in any object has the literal “Can you see me?”:
$ < intent.json jq '.intents[] | select(.text[]=="Can you see me?")'{ "intent": "WhoAmI", "text": [ "Can you see me?", "Do you see me?", "Can you see anyone in the camera?", "Do you see anyone in the camera?", "Identify me", "Who am I please" ], "responses": [ "Let me see", "Please look at the camera" ], "extension": { "function": "extensions.gHumans.getHumanByFace", "entities": false, "responses": [ "Hi %%HUMAN%%, how are you?", "I believe you are %%HUMAN%%, how are you?", "You are %%HUMAN%%, how are you doing?" ] }, "context": { "in": "", "out": "", "clear": false }, "entityType": "NA", "entities": []}
jq can get you the nested objects with the ’.’ identity operator before the name of the key:
$ < intent.json jq '.intents[] | select(.text[]=="Can you see me?").extension.responses'
[
  "Hi %%HUMAN%%, how are you?",
  "I believe you are %%HUMAN%%, how are you?",
  "You are %%HUMAN%%, how are you doing?"
]
so .extension.responses is equivalent to | .extension.responses (the stdout of the last filter is piped into the nested objects) which is also equivalent to .extension|.responses
Let’s delete the context, extension, entityType, and entities keys:
$ < intent.json jq '.intents[] | select(.text[]=="Can you see me?") | del(.context,.extension,.entityType,.entities)'
{
  "intent": "WhoAmI",
  "text": [
    "Can you see me?",
    "Do you see me?",
    "Can you see anyone in the camera?",
    "Do you see anyone in the camera?",
    "Identify me",
    "Who am I please"
  ],
  "responses": [
    "Let me see",
    "Please look at the camera"
  ]
}
Note here that multiple keys can be separated by commas:
del(.context,.extension,.entityType,.entities)
From our experiment with the JSON data of the chatbot intent, we learned how to clean JSON data by the following:
filtering out specific information from a JSON by indexing with the identity operator, array indexing, object identifier-index, and array/object value iterator
filtering out specific values inside an object using select function and we can also filter nested objects by piping the stdout to the desired object(s)
deleting specific keys from JSON using del function
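If you ever need the same clean-up inside a Python script instead of a shell pipeline, the standard json module can express it too. A rough equivalent of the select and del steps above, assuming the same intent.json file, might look like:

import json

with open('intent.json') as f:
    data = json.load(f)

# keep the object whose text contains "Can you see me?" and drop the unwanted keys
for obj in data['intents']:
    if 'Can you see me?' in obj['text']:
        for key in ('context', 'extension', 'entityType', 'entities'):
            obj.pop(key, None)
        print(json.dumps(obj, indent=2))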
I first saw jq at Data Science at the Command Line, I love this book!
Disclosure: The Amazon links for the book (in this section) are paid links so if you buy the book, I will have a small commission
This book tries to catch your attention on the ability of the command line when you do data science tasks — meaning you can obtain your data, manipulate it, explore it, and make your prediction on it using the command line. If you are a data scientist, aspiring to be, or want to know more about it, I highly recommend this book. You can read it online for free from its website or order an ebook or paperback.
You might be interested in the series of cleaning data at the command line:
Part 1 of cleaning CSV data
towardsdatascience.com
Part 2 of cleaning CSV data
towardsdatascience.com
How to clean text data at the command line
towardsdatascience.com
or why we use docker tutorial
medium.com
Take care, will see you in the next tutorials :)
Peace!
Click here to get fresh content to your inbox
Chatbots: Intent Recognition Dataset
4 Reasons You Should Use JSON Instead of CSV
How to install jq
jq repo by the author Stephen Dolan
Guide to Linux jq Command for JSON Processing
jq cookbook
JSON on the command line with jq
|
[
{
"code": null,
"e": 443,
"s": 172,
"text": "jq is a lightweight command-line JSON processor written in C. It follows the Unix philosophy that it’s focused on one thing and it can do it very well. In this tutorial, we see how jq can be used to clean JSONs and retrieve some information or get rid of undesired ones."
},
{
"code": null,
"e": 803,
"s": 443,
"text": "There are some data that are more suitable to be in a JSON format than a CSV or any other format. Most modern APIs and NoSQL databases support JSONs and also useful if your data is hierarchical that can be considered trees that can go to any depth, essentially any dimension, unlike CSV which is just 2D and can only form tabular data, not a hierarchical one."
},
{
"code": null,
"e": 969,
"s": 803,
"text": "Today, we’re investigating a JSON file (from Kaggle) that contains intent recognition data. Please download it as this is the file we’re working on in this tutorial."
},
{
"code": null,
"e": 1002,
"s": 969,
"text": "If you’re using macOS, try this:"
},
{
"code": null,
"e": 1020,
"s": 1002,
"text": "$ brew install jq"
},
{
"code": null,
"e": 1060,
"s": 1020,
"text": "or this if you want the latest version:"
},
{
"code": null,
"e": 1085,
"s": 1060,
"text": "$ brew install --HEAD jq"
},
{
"code": null,
"e": 1230,
"s": 1085,
"text": "$ mkdir github\t\t$ cd github\t\t$ git clone https://github.com/stedolan/jq.git\t\t$ cd jq$ autoreconf -i$ ./configure --disable-maintainer-mode$ make"
},
{
"code": null,
"e": 1249,
"s": 1230,
"text": "$ choco install jq"
},
{
"code": null,
"e": 1354,
"s": 1249,
"text": "If you need more information on how to install jq, please check out the installation page in the jq wiki"
},
{
"code": null,
"e": 1624,
"s": 1354,
"text": "For this chatbot data, we have some probabilities for the conversation’s intents for a chatbot to use, and for each intent, we have multiple keys like the intent type and the text that the user can type and the responses that should be replied by the chatbot and so on."
},
{
"code": null,
"e": 1679,
"s": 1624,
"text": "Let’s now experiment jq by using the identity filter ."
},
{
"code": null,
"e": 2137,
"s": 1679,
"text": "$ < intent.json jq '.' | head -n 20{ \"intents\": [ { \"intent\": \"Greeting\", \"text\": [ \"Hi\", \"Hi there\", \"Hola\", \"Hello\", \"Hello there\", \"Hya\", \"Hya there\" ], \"responses\": [ \"Hi human, please tell me your GeniSys user\", \"Hello human, please tell me your GeniSys user\", \"Hola human, please tell me your GeniSys user\" ], \"extension\": { \"function\": \"\","
},
{
"code": null,
"e": 2184,
"s": 2137,
"text": "And let’s see the content of the first object:"
},
{
"code": null,
"e": 2717,
"s": 2184,
"text": "$ < intent.json jq '.intents[0]'{ \"intent\": \"Greeting\", \"text\": [ \"Hi\", \"Hi there\", \"Hola\", \"Hello\", \"Hello there\", \"Hya\", \"Hya there\" ], \"responses\": [ \"Hi human, please tell me your GeniSys user\", \"Hello human, please tell me your GeniSys user\", \"Hola human, please tell me your GeniSys user\" ], \"extension\": { \"function\": \"\", \"entities\": false, \"responses\": [] }, \"context\": { \"in\": \"\", \"out\": \"GreetingUserRequest\", \"clear\": false }, \"entityType\": \"NA\", \"entities\": []}"
},
{
"code": null,
"e": 2776,
"s": 2717,
"text": "We can also use indexing, let’s get the first intent type:"
},
{
"code": null,
"e": 2826,
"s": 2776,
"text": "$ < intent.json jq '.intents[0].intent'\"Greeting\""
},
{
"code": null,
"e": 2900,
"s": 2826,
"text": "What if we want to get all intent types that this chatbot can understand:"
},
{
"code": null,
"e": 3224,
"s": 2900,
"text": "$ < intent.json jq '.intents[].intent'\"Greeting\"\"GreetingResponse\"\"CourtesyGreeting\"\"CourtesyGreetingResponse\"\"CurrentHumanQuery\"\"NameQuery\"\"RealNameQuery\"\"TimeQuery\"\"Thanks\"\"NotTalking2U\"\"UnderstandQuery\"\"Shutup\"\"Swearing\"\"GoodBye\"\"CourtesyGoodBye\"\"WhoAmI\"\"Clever\"\"Gossip\"\"Jokes\"\"PodBayDoor\"\"PodBayDoorResponse\"\"SelfAware\""
},
{
"code": null,
"e": 3281,
"s": 3224,
"text": "One of the useful functions of jq is the select function"
},
{
"code": null,
"e": 3382,
"s": 3281,
"text": "We can use it to filter some useful information. For example, let’s get the object of Thanks intent:"
},
{
"code": null,
"e": 3845,
"s": 3382,
"text": "$ < intent.json jq '.intents[] | select(.intent==\"Thanks\")'{ \"intent\": \"Thanks\", \"text\": [ \"OK thank you\", \"OK thanks\", \"OK\", \"Thanks\", \"Thank you\", \"That's helpful\" ], \"responses\": [ \"No problem!\", \"Happy to help!\", \"Any time!\", \"My pleasure\" ], \"extension\": { \"function\": \"\", \"entities\": false, \"responses\": [] }, \"context\": { \"in\": \"\", \"out\": \"\", \"clear\": false }, \"entityType\": \"NA\", \"entities\": []}"
},
{
"code": null,
"e": 3884,
"s": 3845,
"text": "Let’s just get the response out of it:"
},
{
"code": null,
"e": 4023,
"s": 3884,
"text": "$ < intent.json jq '.intents[] | select(.intent==\"Thanks\") | .responses'[ \"No problem!\", \"Happy to help!\", \"Any time!\", \"My pleasure\"]"
},
{
"code": null,
"e": 4261,
"s": 4023,
"text": "The intent object in the last example has just one value. What if an object has multiple values like text, we then need to use the object value iterator .[]For example, let’s see if a text in any object has the literal “Can you see me?”:"
},
{
"code": null,
"e": 4944,
"s": 4261,
"text": "$ < intent.json jq '.intents[] | select(.text[]==\"Can you see me?\")'{ \"intent\": \"WhoAmI\", \"text\": [ \"Can you see me?\", \"Do you see me?\", \"Can you see anyone in the camera?\", \"Do you see anyone in the camera?\", \"Identify me\", \"Who am I please\" ], \"responses\": [ \"Let me see\", \"Please look at the camera\" ], \"extension\": { \"function\": \"extensions.gHumans.getHumanByFace\", \"entities\": false, \"responses\": [ \"Hi %%HUMAN%%, how are you?\", \"I believe you are %%HUMAN%%, how are you?\", \"You are %%HUMAN%%, how are you doing?\" ] }, \"context\": { \"in\": \"\", \"out\": \"\", \"clear\": false }, \"entityType\": \"NA\", \"entities\": []}"
},
{
"code": null,
"e": 5037,
"s": 4944,
"text": "jq can get you the nested objects with the ’.’ identity operator before the name of the key:"
},
{
"code": null,
"e": 5246,
"s": 5037,
"text": "$ < intent.json jq '.intents[] | select(.text[]==\"Can you see me?\").extension.responses'[ \"Hi %%HUMAN%%, how are you?\", \"I believe you are %%HUMAN%%, how are you?\", \"You are %%HUMAN%%, how are you doing?\"]"
},
{
"code": null,
"e": 5425,
"s": 5246,
"text": "so .extension.responses is equivalent to | .extension.responses (the stdout of the last filter is piped into the nested objects) which is also equivalent to .extension|.responses"
},
{
"code": null,
"e": 5493,
"s": 5425,
"text": "Let’s delete the context, extension, entityType, and entities keys:"
},
{
"code": null,
"e": 5877,
"s": 5493,
"text": "$ < intent.json jq '.intents[] | select(.text[]==\"Can you see me?\") | del(.context,.extension,.entityType,.entities)'{ \"intent\": \"WhoAmI\", \"text\": [ \"Can you see me?\", \"Do you see me?\", \"Can you see anyone in the camera?\", \"Do you see anyone in the camera?\", \"Identify me\", \"Who am I please\" ], \"responses\": [ \"Let me see\", \"Please look at the camera\" ]}"
},
{
"code": null,
"e": 5934,
"s": 5877,
"text": "Note here that multiple keys can be separated by commas:"
},
{
"code": null,
"e": 5981,
"s": 5934,
"text": "del(.context,.extension,.entityType,.entities)"
},
{
"code": null,
"e": 6095,
"s": 5981,
"text": "From our experiment with the JSON data of the chatbot intent, we learned how to clean JSON data by the following:"
},
{
"code": null,
"e": 6255,
"s": 6095,
"text": "filtering out specific information from a JSON by indexing with the identity operator, array indexing, object identifier-index, and array/object value iterator"
},
{
"code": null,
"e": 6408,
"s": 6255,
"text": "filtering out specific values inside an object using select function and we can also filter nested objects by piping the stdout to the desired object(s)"
},
{
"code": null,
"e": 6460,
"s": 6408,
"text": "deleting specific keys from JSON using del function"
},
{
"code": null,
"e": 6530,
"s": 6460,
"text": "I first saw jq at Data Science at the Command Line, I love this book!"
},
{
"code": null,
"e": 6660,
"s": 6530,
"text": "Disclosure: The Amazon links for the book (in this section) are paid links so if you buy the book, I will have a small commission"
},
{
"code": null,
"e": 7071,
"s": 6660,
"text": "This book tries to catch your attention on the ability of the command line when you do data science tasks — meaning you can obtain your data, manipulate it, explore it, and make your prediction on it using the command line. If you are a data scientist, aspiring to be, or want to know more about it, I highly recommend this book. You can read it online for free from its website or order an ebook or paperback."
},
{
"code": null,
"e": 7147,
"s": 7071,
"text": "You might be interested in the series of cleaning data at the command line:"
},
{
"code": null,
"e": 7175,
"s": 7147,
"text": "Part 1 of cleaning CSV data"
},
{
"code": null,
"e": 7198,
"s": 7175,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7226,
"s": 7198,
"text": "Part 2 of cleaning CSV data"
},
{
"code": null,
"e": 7249,
"s": 7226,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7293,
"s": 7249,
"text": "How to clean text data at the command line"
},
{
"code": null,
"e": 7316,
"s": 7293,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7346,
"s": 7316,
"text": "or why we use docker tutorial"
},
{
"code": null,
"e": 7357,
"s": 7346,
"text": "medium.com"
},
{
"code": null,
"e": 7406,
"s": 7357,
"text": "Take care, will see you in the next tutorials :)"
},
{
"code": null,
"e": 7413,
"s": 7406,
"text": "Peace!"
},
{
"code": null,
"e": 7459,
"s": 7413,
"text": "Click here to get fresh content to your inbox"
},
{
"code": null,
"e": 7497,
"s": 7459,
"text": "Chatbots: Intent Recognition Dataset"
},
{
"code": null,
"e": 7542,
"s": 7497,
"text": "4 Reasons You Should Use JSON Instead of CSV"
},
{
"code": null,
"e": 7560,
"s": 7542,
"text": "How to install jq"
},
{
"code": null,
"e": 7596,
"s": 7560,
"text": "jq repo by the author Stephen Dolan"
},
{
"code": null,
"e": 7643,
"s": 7596,
"text": "Guide to Linux jq Command for JSON Processing"
},
{
"code": null,
"e": 7655,
"s": 7643,
"text": "jq cookbook"
}
] |
Types of 2-D discrete data plots in MATLAB
|
22 Sep, 2021
Any data or variable that is limited to having certain values is known as discrete data. Many examples of discrete data can be observed in real life such as:
The output of a dice roll can take any whole number from 1 to 6.
The marks obtained by any student in a test can range from 0 to 100.
The number of children in a house.
When dealing with such data, we may require to plot graphs, histograms, or any other form of visual representation to analyze the data and achieve desired results.
MATLAB offers a wide variety of ways to plot discrete data. These include:
Vertical or Horizontal Bar-graphs
Pareto Charts
Stem charts
Scatter plots
Stairs
Let us first take some sample 2-D data to work with while demonstrating these different types of plots.
The above data shows the yearly revenue of a company for the duration of 5 years. This data can be shown in any of the above-mentioned plots:
This plot draws bars at positions specified by the array “Year” with the heights as specified in the array “Revenue”
Example:
Matlab
% MATLAB code for Bar graph
% creating array for years
year = 2014:1:2018;
% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];
% bar plot
bar(year,revenue)
% label for X-axis
xlabel('Year');
% label for Y-axis
ylabel('Revenue');
% title for plot
title('Yearly Revenue')
Output:
This plot draws horizontal bars at positions specified by the array “Year” with the lengths as specified in the array “Revenue”.
Example:
Matlab
% MATLAB code for horizontal bar graph
% creating array for years
year = 2014:1:2018;
% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];
% horizontal bar plot
barh(year,revenue)
% label for X-axis
xlabel('Revenue (in Cr.)');
% label for Y-axis
ylabel('Year');
% title for plot
title('Yearly Revenue')
Output:
This plot shows vertical bars corresponding to the values of the data in descending order of value. This also shows a curve made with the cumulative values above each bar. In addition to this, the right side of the graph has a percentage scale that shows how much percentage each bar contributes to the sum of all values.
Example:
Matlab
% MATLAB code for Pareto Charts example
% creating array for years
year = 2014:1:2018;
% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];
% pareto chart plot
pareto(revenue,year)
% label for X-axis
xlabel('Year');
% label for Y-axis
ylabel('Revenue (in Cr.)');
% title for plot
title('Yearly Revenue')
Output:
Bar Graphs (both vertical and horizontal) and Pareto charts can be used to represent data such as marks of a student in different subjects, rainfall received in different months, and many other data sets.
This plot shows a straight line with a bulb at the top (or bottom for negative values) corresponding to the values given in the data. The X-axis is scaled from the least to the highest value given, which may result in the first and last values being situated right at the border of the graph.
Example:
Matlab
% MATLAB code for Stem Charts
% creating array for years
year = 2014:1:2018;

% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];

% stem chart plot
stem(year,revenue)

% label for X-axis
xlabel('Year');

% label for Y-axis
ylabel('Revenue (in Cr.)');

% title for plot
title('Yearly Revenue')
Output:
This plot shows dots placed at the values given in the data. The Y-axis is scaled from the lowest to the highest value in the data. The X-axis is scaled similarly as in stem charts, from least to highest value.
Example:
Matlab
% MATLAB code for Scatter Plot example
% creating array for years
year = 2014:1:2018;

% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];

% scatter plot
scatter(year,revenue)

% label for X-axis
xlabel('Year');

% label for Y-axis
ylabel('Revenue (in Cr.)');

% title for plot
title('Yearly Revenue')
Output:
This plot shows a staircase-like structure with each step beginning at the next value given in the data. Similar to the scatter plot, X and Y axes scale from the lowest to the highest values given.
Example:
Matlab
% MATLAB code for Stairstep Plot
% creating array for years
year = 2014:1:2018;

% creating array for revenue
revenue = [1.72 2.00 2.08 2.67 2.03];

% stairstep plot
stairs(year,revenue)

% label for X-axis
xlabel('Year');

% label for Y-axis
ylabel('Revenue (in Cr.)');

% title for plot
title('Yearly Revenue')
Output:
Stem, Scatter, and Stairstep plots are ideally used when working with digital signals.
Python – List Files in a Directory
22 Jun, 2022
In this article, we will cover how to list all the files in a directory in Python.
A directory, also sometimes known as a folder, is an organizational unit in a computer’s file system for storing and locating files or more folders. Python now supports a number of APIs to list the directory contents. For instance, we can use the Path.iterdir, os.scandir, os.walk, Path.rglob, or os.listdir functions.
Directory in use: gfg
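The pathlib functions mentioned above (Path.iterdir and Path.rglob) are an object-oriented alternative to the os-based calls covered below. The snippet here is a minimal sketch that assumes the same gfg directory path used in the examples that follow.

Python3

# import the Path class
from pathlib import Path

# This is my path (the same example directory as below)
path = Path("C://Users//Vanshi//Desktop//gfg")

# Path.iterdir() yields the entries in the first level of the directory
for entry in path.iterdir():
    print(entry.name)

# Path.rglob('*.txt') recursively yields every .txt file under the directory
for txt_file in path.rglob("*.txt"):
    print(txt_file)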
os.listdir() method gets the list of all files and directories in a specified directory. By default, it is the current directory. Beyond the first level of folders, os.listdir() does not return any files or folders.
Syntax: os.listdir(path)
Parameters:
Path of the directory
Return Type: returns a list of all files and directories in the specified path
Example 1: Get all the list files in a Directory
Python
# import OS module
import os

# Get the list of all files and directories
path = "C://Users//Vanshi//Desktop//gfg"
dir_list = os.listdir(path)

print("Files and directories in '", path, "' :")

# prints all files
print(dir_list)
Output:
Example 2: To get only .txt files.
Python3
# import OS
import os

for x in os.listdir():
    if x.endswith(".txt"):
        # Prints only text file present in My Folder
        print(x)
Output:
os.walk() generates the file names in a directory tree. The function walks the whole tree, looping through every directory in it and yielding the files each one contains.
Syntax: os.walk(top, topdown, onerror, followlinks)
top: It is the top directory from which you want to retrieve the names of the component files and folders.
topdown: Specifies that directories should be scanned from the top down when set to True. If this parameter is False, directories will be examined from the bottom up.
onerror: It provides an error handler if an error is encountered
followlinks: if set to True, visits folders referenced by system links
Return: returns the name of every file and folder within a directory and any of its subdirectories.
Python3
# import OS module
import os

# This is my path
path = "C://Users//Vanshi//Desktop//gfg"

# to store files in a list
list = []

# dirs=directories
for (root, dirs, file) in os.walk(path):
    for f in file:
        if '.txt' in f:
            print(f)
Output:
os.scandir() is supported for Python 3.5 and greater.
Syntax: os.scandir(path = ‘.’)
Return Type: returns an iterator of os.DirEntry object.
Python3
# import OS module
import os

# This is my path
path = "C://Users//Vanshi//Desktop//gfg"

# Scan the directory and get
# an iterator of os.DirEntry objects
# corresponding to entries in it
# using os.scandir() method
obj = os.scandir(path)

# List all files and directories in the specified path
print("Files and Directories in '% s':" % path)
for entry in obj:
    if entry.is_dir() or entry.is_file():
        print(entry.name)
Output:
The glob module is used to retrieve files/path names matching a specified pattern.
glob() method: With glob, we can use wildcards (*, ?, [ranges]) to make path retrieval more simple and convenient.
Example:
Python3
import glob

# This is my path
path = "C:\\Users\\Vanshi\\Desktop\\gfg"

# Using '*' pattern
print('\nNamed with wildcard *:')
for files in glob.glob(path + '*'):
    print(files)

# Using '?' pattern
print('\nNamed with wildcard ?:')
for files in glob.glob(path + '?.txt'):
    print(files)

# Using [0-9] pattern
print('\nNamed with wildcard ranges:')
for files in glob.glob(path + '/*[0-9].*'):
    print(files)
Output:
iglob() method can be used to print filenames recursively if the recursive parameter is set to True.
Syntax: glob.iglob(pathname, *, recursive=False)
Example:
Python3
import glob

# This is my path
path = "C:\\Users\\Vanshi\\Desktop\\gfg**\\*.txt"

# It returns an iterator which will
# be printed simultaneously.
print("\nUsing glob.iglob()")

# Prints all txt files present in the path
for file in glob.iglob(path, recursive=True):
    print(file)
Output:
Markov Chain
03 Dec, 2021
Markov chains, named after Andrey Markov, are stochastic models that depict a sequence of possible events in which the prediction or probability for the next state depends solely on the previous event state, not on the states before it. In simple words, the probability that the (n+1)th step will be x depends only on the nth step, not on the complete sequence of steps that came before n. This property is known as the Markov Property or Memorylessness. Let us explore our Markov chain with the help of a diagram:
[Diagram: a two-state Markov process with states A and E]
A diagram representing a two-state (here, E and A) Markov process. Here the arrows originate from the current state and point to the future state, and the number associated with each arrow indicates the probability of the Markov process changing from one state to another. For instance, if the Markov process is in state E, then the probability it changes to state A is 0.7, while the probability it remains in the same state is 0.3. Similarly, for any process in state A, the probability to change to state E is 0.4 and the probability to remain in the same state is 0.6.
From the diagram of the two-state Markov process, we can understand that the Markov chain is a directed graph. So we can represent it with the help of an adjacency matrix.
      |  A  |  E  |
  A   | 0.6 | 0.4 |
  E   | 0.7 | 0.3 |

Each element denotes the probability weight of the edge connecting the two corresponding states: 0.4 is the probability for state A to go to state E and 0.6 is the probability to remain in state A, while 0.7 is the probability for state E to go to state A and 0.3 is the probability to remain in state E.
This matrix is also called Transition Matrix. If the Markov chain has N possible states, the matrix will be an NxN matrix. Each row of this matrix should sum to 1. In addition to this, a Markov chain also has an Initial State Vector of order Nx1. These two entities are a must to represent a Markov chain.
N-step Transition Matrix: Now let us learn higher-order transition matrices. It helps us to find the chance of that transition occurring over multiple steps. To put it in simple words, what will be the probability of moving from state A to state E over N steps? There is actually a very simple way to calculate it: it can be determined by calculating the value of entry (A, E) of the matrix obtained by raising the transition matrix to the power of N.
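As a minimal sketch of this idea (assuming NumPy and the two-state transition matrix from the diagram above), the N-step transition matrix is simply the matrix power of the one-step transition matrix:

Python3

import numpy as np

# one-step transition matrix of the two-state (A, E) chain above
P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

# N-step transition matrix: entry (A, E) gives the probability
# of going from state A to state E in exactly N steps
N = 3
P_N = np.linalg.matrix_power(P, N)
print(P_N)

# an initial state vector (start in state A) propagated over N steps
pi0 = np.array([1.0, 0.0])
print(pi0 @ P_N)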
discrete-time Markov chains : This implies the index set T( state of the process at time t ) is a countable set here or we can say that changes occur at specific states. Generally, the term “Markov chain” is used for DTMC.
continuous-time Markov chains: Here the index set T( state of the process at time t ) is a continuum, which means changes are continuous in CTMC.
A Markov chain is said to be Irreducible if we can go from one state to another in a single or more than one step.
A state in a Markov chain is said to be Periodic if returning to it requires a multiple of some integer larger than 1, the greatest common divisor of all the possible return path lengths will be the period of that state.
A state in a Markov chain is said to be Transient if there is a non-zero probability that the chain will never return to the same state, otherwise, it is Recurrent.
A state in a Markov chain is called Absorbing if there is no possible way to leave that state. Absorbing states do not have any outgoing transitions from it.
Python3
# let's import our library
import scipy.linalg
import numpy as np

# Encoding these states to numbers as it
# is easier to deal with numbers instead
# of words.
state = ["A", "E"]

# Assigning the transition matrix to a variable
# i.e a numpy 2d matrix.
MyMatrix = np.array([[0.6, 0.4], [0.7, 0.3]])

# Simulating a random walk on our Markov chain
# with 20 steps. Random walk simply means that
# we start with an arbitrary state and then we
# move along our markov chain.
n = 20

# decide which state to start with
StartingState = 0
CurrentState = StartingState

# printing the starting state using the state list
print(state[CurrentState], "--->", end=" ")

while n - 1:
    # Deciding the next state using the np.random.choice()
    # function, which takes the list of states and the probabilities
    # to go to the next states from our current state
    CurrentState = np.random.choice([0, 1], p=MyMatrix[CurrentState])

    # printing the path of the random walk
    print(state[CurrentState], "--->", end=" ")
    n -= 1
print("stop")

# Let us find the stationary distribution of our
# Markov chain by finding the left eigenvectors.
# We only need the left eigenvectors
MyValues, left = scipy.linalg.eig(MyMatrix, right=False, left=True)

print("left eigen vectors = \n", left, "\n")
print("eigen values = \n", MyValues)

# Pi is a probability distribution, so the sum of
# the probabilities should be 1. To get that from
# the above values we just have to normalize
pi = left[:, 0]
pi_normalized = [(x / np.sum(pi)).real for x in pi]
pi_normalized
Markov chains make the study of many real-world processes much more simple and easy to understand. Using the Markov chain we can derive some useful results such as Stationary Distribution and many more.
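As a quick sanity check (a sketch assuming NumPy and the same two-state matrix), a distribution pi is stationary exactly when one step of the chain leaves it unchanged, i.e. pi @ P equals pi; for this matrix the normalized left eigenvector works out to [7/11, 4/11]:

Python3

import numpy as np

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

# candidate stationary distribution (the normalized left eigenvector of P)
pi = np.array([7/11, 4/11])

# pi is stationary if it is unchanged by one step of the chain
print(pi @ P)                   # approximately [0.6364, 0.3636]
print(np.allclose(pi @ P, pi))  # True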
MCMC(Markov Chain Monte Carlo), which gives a solution to the problems that come from the normalization factor, is based on Markov Chain.
Markov Chains are used in information theory, search engines, speech recognition etc.
Markov chain has huge possibilities, future and importance in the field of Data Science and the interested readers are requested to learn this stuff properly for being a competent person in the field of Data Science.
The statistical system contains a finite number of states.
The states are mutually exclusive and collectively exhaustive.
The transition probability from one state to another state is constant over time.
Markov processes are fairly common in real-life problems and Markov chains can be easily implemented because of their memorylessness property. Using Markov chain can simplify the problem without affecting its accuracy.
Let us take an example to understand the advantage of this tool. Suppose my friend is suggesting we have a meal. I may say that I do not want a pizza, as I had one an hour ago. But is it appropriate to say that I do not want a pizza because I had one two months ago? That means that, in this case, my probability of picking a meal depends entirely on my immediately preceding meal. Here lies the effectiveness of the Markov Chain.
JavaScript Array forEach() Method
29 Oct, 2021
Below is the example of the Array forEach() method.
Example:

<script>
    // JavaScript to illustrate forEach() method
    function func() {
        // Original array
        const items = [12, 24, 36];
        const copy = [];

        items.forEach(function (item) {
            copy.push(item + item + 2);
        });
        document.write(copy);
    }
    func();
</script>

Output:
26,50,74
The arr.forEach() method calls the provided function once for each element of the array. The provided function may perform any kind of operation on the elements of the given array.
Syntax:
array.forEach(callback(element, index, arr), thisValue)
Parameters: This method accepts the parameters mentioned above and described below (element, index and arr are arguments passed to the callback itself):
callback: This parameter holds the function to be called for each element of the array.
element: The parameter holds the value of the elements being processed currently.
index: This parameter is optional, it holds the index of the current value element in the array starting from 0.
array: This parameter is optional, it holds the complete array on which forEach() is called.
thisArg: This parameter is optional, it holds the context to be passed as this to be used while executing the callback function. If the context is passed, it will be used like this for each invocation of the callback function, otherwise undefined is used as default.
Return value: The return value of this method is always undefined. This method may or may not change the original array provided as it depends upon the functionality of the argument function.
The example below illustrates the Array forEach() method in JavaScript:

Example: In this example, the method forEach() calculates the square of every element of the array.

const items = [1, 29, 47];
const copy = [];

items.forEach(function(item){
    copy.push(item*item);
});
print(copy);

Output:

1,841,2209
Code for the above method is provided below:
Program:
<script>
    // JavaScript to illustrate forEach() method
    function func() {
        // Original array
        const items = [1, 29, 47];
        const copy = [];

        items.forEach(function (item) {
            copy.push(item * item);
        });
        document.write(copy);
    }
    func();
</script>
Output:
1,841,2209
Supported Browsers: The browsers supported by JavaScript Array forEach() method are listed below:
Google Chrome 1 and above
Edge 12 and above
Firefox 1.5 and above
Internet Explorer 9 and above
Opera 9.5 and above
Safari 3 and above
How to Count Occurrences of Specific Value in Pandas Column?
23 Dec, 2021
In this article, we will discuss how to count occurrences of a specific column value in the pandas column.
We can count by using the value_counts() method. This function is used to count the values present in the entire dataframe and also count values in a particular column.
Syntax:
data['column_name'].value_counts()[value]
where
data is the input dataframe
value is the string/integer value present in the column to be counted
column_name is the column in the dataframe
Example: To count occurrences of a specific value
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# count values in name column
print(data['name'].value_counts()['sravan'])

# count values in subjects column
print(data['subjects'].value_counts()['php'])

# count values in marks column
print(data['marks'].value_counts()[89])
Output:
3
2
1
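An equivalent way to count a single value (a minimal sketch, assuming the same dataframe as above) is to sum a boolean mask; unlike indexing into value_counts(), this returns 0 instead of raising a KeyError when the value is absent.

Python3

# count rows where the name column equals 'sravan'
print((data['name'] == 'sravan').sum())

# count rows where the subjects column equals 'php'
print((data['subjects'] == 'php').sum())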
If we want to count all values in a particular column, then we do not need to mention the value.
Syntax:
data['column_name'].value_counts()
Example: To count the occurrence of a value in a particular column
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# count all values in name column
print(data['name'].value_counts())

# count all values in subjects column
print(data['subjects'].value_counts())

# count all values in marks column
print(data['marks'].value_counts())

# count all values in age column
print(data['age'].value_counts())
Output:
If we want to get the results in order (like ascending or descending order), we have to specify the ascending parameter.
Syntax:
Ascending order:
data[‘column_name’].value_counts(ascending=True)
Descending Order:
data[‘column_name’].value_counts(ascending=False)
Example: To get results in an ordered fashion
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# count all values in name column in ascending order
print(data['name'].value_counts(ascending=True))

# count all values in subjects column in ascending order
print(data['subjects'].value_counts(ascending=True))

# count all values in marks column in descending order
print(data['marks'].value_counts(ascending=False))

# count all values in age column in descending order
print(data['age'].value_counts(ascending=False))
Output:
Here we can count the occurrences with or without NA values. The dropna parameter controls this: NA values are counted when it is set to False and ignored when it is set to True (the default).

Syntax:

Include NA values:

data['column_name'].value_counts(dropna=False)

Exclude NA values:

data['column_name'].value_counts(dropna=True)
Example: Dealing with missing values
Python3
# import pandas module
import pandas as pd

# import numpy
import numpy

# create a dataframe
# with 9 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh',
             'sravan', 'sravan', 'ojaswi', numpy.nan],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R', numpy.nan],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90, numpy.nan],
    'age': [11, 23, 23, 21, 21, 21, 23, 21, numpy.nan]
})

# count all values in name column including NA
print(data['name'].value_counts(dropna=False))

# count all values in subjects column including NA
print(data['subjects'].value_counts(dropna=False))

# count all values in marks column including NA
print(data['marks'].value_counts(dropna=False))

# count all values in age column excluding NA
print(data['age'].value_counts(dropna=True))
Output:
We are going to add the normalize parameter to get the relative frequencies of the repeated data; this is done by setting it to True.
Syntax:
data[‘column_name’].value_counts(normalize=True)
Example: Count values with relative frequencies
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# count all values in name with relative frequencies
print(data['name'].value_counts(normalize=True))
Output:
sravan 0.375
ojaswi 0.125
ojsawi 0.125
bobby 0.125
rohith 0.125
gnanesh 0.125
Name: name, dtype: float64
If we want to get the details like count, mean, std, min, 25%, 50%,75%, max, then we have to use describe() method.
Syntax:
data['column_name'].describe()
Example: Get details
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# get summary statistics of the age column
print(data['age'].describe())
Output:
count 8.000000
mean 20.500000
std 3.964125
min 11.000000
25% 21.000000
50% 21.000000
75% 23.000000
max 23.000000
Name: age, dtype: float64
Here this will return the count of all occurrences in a particular column.
Syntax:
data.groupby('column_name').size()
Example: Count of all occurrences in a particular column
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# get the size (number of occurrences) of each name
print(data.groupby('name').size())
Output:
name
bobby 1
gnanesh 1
ojaswi 1
ojsawi 1
rohith 1
sravan 3
dtype: int64
Here this will return the count of all occurrences in a particular column across all columns.
Syntax:
data.groupby('column_name').count()
Example: Count of all occurrences in a particular column
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# get the count of each name across all columns
print(data.groupby('name').count())
Output:
If we want to get the count in a particular range of values, then the bins parameter is applied. We can specify the number of ranges(bins).
Syntax:
data['column_name'].value_counts(bins=n)
where,
data is the input dataframe
column_name is the column to get bins
bins is the total number of bins to be specified
Example: Get count in particular range of values
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 8 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'ojsawi', 'bobby', 'rohith',
             'gnanesh', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'php', 'java',
                 'html/css', 'python', 'R'],
    'marks': [98, 90, 78, 91, 87, 78, 89, 90],
    'age': [11, 23, 23, 21, 21, 21, 23, 21]
})

# get count of age column with 6 bins
print(data['age'].value_counts(bins=6))

# get count of age column with 4 bins
print(data['age'].value_counts(bins=4))
Output:
(19.0, 21.0] 4
(21.0, 23.0] 3
(10.987, 13.0] 1
(17.0, 19.0] 0
(15.0, 17.0] 0
(13.0, 15.0] 0
Name: age, dtype: int64
(20.0, 23.0] 7
(10.987, 14.0] 1
(17.0, 20.0] 0
(14.0, 17.0] 0
Name: age, dtype: int64
If we want to get a count of all the values across all columns at once, then we have to use the apply() function together with the value_counts() method.

Syntax:

data.apply(pd.value_counts)

Example: Get the count of all values across all columns
Python3
# import pandas module
import pandas as pd

# create a dataframe
# with 5 rows and 4 columns
data = pd.DataFrame({
    'name': ['sravan', 'bobby', 'sravan', 'sravan', 'ojaswi'],
    'subjects': ['java', 'php', 'java', 'html/css', 'python'],
    'marks': [98, 90, 78, 91, 87],
    'age': [11, 23, 23, 21, 21]
})

# get all count
data.apply(pd.value_counts)
Output:
"e": 4579,
"s": 4571,
"text": "Syntax:"
},
{
"code": null,
"e": 4628,
"s": 4579,
"text": "data[‘column_name’].value_counts(normalize=True)"
},
{
"code": null,
"e": 4677,
"s": 4628,
"text": "Example: Count values with relative frequencies "
},
{
"code": null,
"e": 4685,
"s": 4677,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'php', 'java', 'html/css', 'python', 'R'], 'marks': [98, 90, 78, 91, 87, 78, 89, 90], 'age': [11, 23, 23, 21, 21, 21, 23, 21]}) # count all values in name with relative frequenciesprint(data['name'].value_counts(normalize=True))",
"e": 5193,
"s": 4685,
"text": null
},
{
"code": null,
"e": 5201,
"s": 5193,
"text": "Output:"
},
{
"code": null,
"e": 5330,
"s": 5201,
"text": "sravan 0.375\nojaswi 0.125\nojsawi 0.125\nbobby 0.125\nrohith 0.125\ngnanesh 0.125\nName: name, dtype: float64"
},
{
"code": null,
"e": 5446,
"s": 5330,
"text": "If we want to get the details like count, mean, std, min, 25%, 50%,75%, max, then we have to use describe() method."
},
{
"code": null,
"e": 5454,
"s": 5446,
"text": "Syntax:"
},
{
"code": null,
"e": 5485,
"s": 5454,
"text": "data['column_name'].describe()"
},
{
"code": null,
"e": 5506,
"s": 5485,
"text": "Example: Get details"
},
{
"code": null,
"e": 5514,
"s": 5506,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'php', 'java', 'html/css', 'python', 'R'], 'marks': [98, 90, 78, 91, 87, 78, 89, 90], 'age': [11, 23, 23, 21, 21, 21, 23, 21]}) # get about ageprint(data['age'].describe())",
"e": 5965,
"s": 5514,
"text": null
},
{
"code": null,
"e": 5973,
"s": 5965,
"text": "Output:"
},
{
"code": null,
"e": 6151,
"s": 5973,
"text": "count 8.000000\nmean 20.500000\nstd 3.964125\nmin 11.000000\n25% 21.000000\n50% 21.000000\n75% 23.000000\nmax 23.000000\nName: age, dtype: float64"
},
{
"code": null,
"e": 6226,
"s": 6151,
"text": "Here this will return the count of all occurrences in a particular column."
},
{
"code": null,
"e": 6234,
"s": 6226,
"text": "Syntax:"
},
{
"code": null,
"e": 6269,
"s": 6234,
"text": "data.groupby('column_name').size()"
},
{
"code": null,
"e": 6326,
"s": 6269,
"text": "Example: Count of all occurrences in a particular column"
},
{
"code": null,
"e": 6334,
"s": 6326,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'php', 'java', 'html/css', 'python', 'R'], 'marks': [98, 90, 78, 91, 87, 78, 89, 90], 'age': [11, 23, 23, 21, 21, 21, 23, 21]}) # get the size of nameprint(data.groupby('name').size())",
"e": 6797,
"s": 6334,
"text": null
},
{
"code": null,
"e": 6805,
"s": 6797,
"text": "Output:"
},
{
"code": null,
"e": 6901,
"s": 6805,
"text": "name\nbobby 1\ngnanesh 1\nojaswi 1\nojsawi 1\nrohith 1\nsravan 3\ndtype: int64"
},
{
"code": null,
"e": 6995,
"s": 6901,
"text": "Here this will return the count of all occurrences in a particular column across all columns."
},
{
"code": null,
"e": 7003,
"s": 6995,
"text": "Syntax:"
},
{
"code": null,
"e": 7039,
"s": 7003,
"text": "data.groupby('column_name').count()"
},
{
"code": null,
"e": 7097,
"s": 7039,
"text": "Example: Count of all occurrences in a particular column "
},
{
"code": null,
"e": 7105,
"s": 7097,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'php', 'java', 'html/css', 'python', 'R'], 'marks': [98, 90, 78, 91, 87, 78, 89, 90], 'age': [11, 23, 23, 21, 21, 21, 23, 21]}) # get the count of name across all columnsprint(data.groupby('name').count())",
"e": 7589,
"s": 7105,
"text": null
},
{
"code": null,
"e": 7597,
"s": 7589,
"text": "Output:"
},
{
"code": null,
"e": 7737,
"s": 7597,
"text": "If we want to get the count in a particular range of values, then the bins parameter is applied. We can specify the number of ranges(bins)."
},
{
"code": null,
"e": 7745,
"s": 7737,
"text": "Syntax:"
},
{
"code": null,
"e": 7785,
"s": 7745,
"text": "(data['column_name'].value_counts(bins)"
},
{
"code": null,
"e": 7792,
"s": 7785,
"text": "where,"
},
{
"code": null,
"e": 7820,
"s": 7792,
"text": "data is the input dataframe"
},
{
"code": null,
"e": 7858,
"s": 7820,
"text": "column_name is the column to get bins"
},
{
"code": null,
"e": 7907,
"s": 7858,
"text": "bins is the total number of bins to be specified"
},
{
"code": null,
"e": 7956,
"s": 7907,
"text": "Example: Get count in particular range of values"
},
{
"code": null,
"e": 7964,
"s": 7956,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'ojsawi', 'bobby', 'rohith', 'gnanesh', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'php', 'java', 'html/css', 'python', 'R'], 'marks': [98, 90, 78, 91, 87, 78, 89, 90], 'age': [11, 23, 23, 21, 21, 21, 23, 21]}) # get count of age column with 6 binsprint(data['age'].value_counts(bins=6)) # get count of age column with 4 binsprint(data['age'].value_counts(bins=4))",
"e": 8528,
"s": 7964,
"text": null
},
{
"code": null,
"e": 8536,
"s": 8528,
"text": "Output:"
},
{
"code": null,
"e": 8784,
"s": 8536,
"text": "(19.0, 21.0] 4\n(21.0, 23.0] 3\n(10.987, 13.0] 1\n(17.0, 19.0] 0\n(15.0, 17.0] 0\n(13.0, 15.0] 0\nName: age, dtype: int64\n(20.0, 23.0] 7\n(10.987, 14.0] 1\n(17.0, 20.0] 0\n(14.0, 17.0] 0\nName: age, dtype: int64"
},
{
"code": null,
"e": 8926,
"s": 8784,
"text": "If we want to get a count of all columns across all columns, then we have to use apply() function. In that we will use value_counts() method."
},
{
"code": null,
"e": 8934,
"s": 8926,
"text": "Syntax:"
},
{
"code": null,
"e": 8962,
"s": 8934,
"text": "data.apply(pd.value_counts)"
},
{
"code": null,
"e": 9015,
"s": 8962,
"text": "Example: Get count of all columns across all columns"
},
{
"code": null,
"e": 9023,
"s": 9015,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create a dataframe# with 5 rows and 4 columnsdata = pd.DataFrame({ 'name': ['sravan', 'bobby', 'sravan', 'sravan', 'ojaswi'], 'subjects': ['java', 'php', 'java', 'html/css', 'python'], 'marks': [98, 90, 78, 91, 87], 'age': [11, 23, 23, 21, 21]}) # get all countdata.apply(pd.value_counts)",
"e": 9368,
"s": 9023,
"text": null
},
{
"code": null,
"e": 9376,
"s": 9368,
"text": "Output:"
},
{
"code": null,
"e": 9388,
"s": 9376,
"text": "anikakapoor"
},
{
"code": null,
"e": 9413,
"s": 9388,
"text": "pandas-dataframe-program"
},
{
"code": null,
"e": 9420,
"s": 9413,
"text": "Picked"
},
{
"code": null,
"e": 9444,
"s": 9420,
"text": "Python pandas-dataFrame"
},
{
"code": null,
"e": 9458,
"s": 9444,
"text": "Python-pandas"
},
{
"code": null,
"e": 9465,
"s": 9458,
"text": "Python"
},
{
"code": null,
"e": 9563,
"s": 9465,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 9595,
"s": 9563,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 9622,
"s": 9595,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 9645,
"s": 9622,
"text": "Introduction To PYTHON"
},
{
"code": null,
"e": 9666,
"s": 9645,
"text": "Python OOPs Concepts"
},
{
"code": null,
"e": 9697,
"s": 9666,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 9753,
"s": 9697,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 9795,
"s": 9753,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 9837,
"s": 9795,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 9876,
"s": 9837,
"text": "Python | Get unique values from a list"
}
] |
chroot command in Linux with examples
|
15 May, 2019
The chroot command in Linux/Unix systems is used to change the root directory. Every process/command in Linux/Unix-like systems has a root directory, the top of the directory tree from which path lookups start. chroot changes the root directory for the currently running process as well as its child processes. A process/command that runs in such a modified environment cannot access files outside the new root directory. This modified environment is known as a “chroot jail” or “jailed directory”. Only the root user and privileged processes are allowed to use the chroot command.
The chroot command can be very useful:
To create a test environment.
To recover the system or password.
To reinstall the bootloader.
Syntax:
chroot /path/to/new/root command
OR
chroot /path/to/new/root /path/to/server
OR
chroot [options] /path/to/new/root /path/to/server
Options:
--userspec=USER:GROUP : Describes the user and group to use. Either a name or a numeric ID can be used to specify the user and group.
--groups=G_LIST : Specifies the supplementary groups as g1,g2,..,gN.
--help : Shows the help message and exits.
--version : Gives version information and exits.
Example:
Step 1: We will create a mini-jail with bash and basic commands only. Let’s create a “jail” directory inside the “home” directory, which will be our new root.
$ mkdir $HOME/jail
Step 2: Create the bin and lib64 directories inside “$HOME/jail” (note that there must be no space inside the braces, or brace expansion will not happen):
$ mkdir -p $HOME/jail/{bin,lib64}
$ cd $HOME/jail
Step 3: Copy /bin/bash and /bin/ls into the $HOME/jail/bin/ location using the cp command:
$ cp -v /bin/{bash,ls} $HOME/jail/bin
Step 4: Use the ldd command to print the shared libraries required by bash:
$ ldd /bin/bash
Step 5: Copy the required libraries into the $HOME/jail/lib64/ location using the cp command:
$ cp -v libraries/displayed/by/above/command $HOME/jail/lib64
Similarly, copy the libraries of the ls command into the $HOME/jail/lib64 location.
Step 6: Finally, chroot into your mini-jail:
$ sudo chroot $HOME/jail /bin/bash
Now the user sees the $HOME/jail directory as the root directory. This is a great boost in security.
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n15 May, 2019"
},
{
"code": null,
"e": 580,
"s": 54,
"text": "chroot command in Linux/Unix system is used to change the root directory. Every process/command in Linux/Unix like systems has a current working directory called root directory. It changes the root directory for currently running processes as well as its child processes.A process/command that runs in such a modified environment cannot access files outside the root directory. This modified environment is known as “chroot jail” or “jailed directory”. Some root user and privileged process are allowed to use chroot command."
},
{
"code": null,
"e": 617,
"s": 580,
"text": "“chroot” command can be very useful:"
},
{
"code": null,
"e": 647,
"s": 617,
"text": "To create a test environment."
},
{
"code": null,
"e": 682,
"s": 647,
"text": "To recover the system or password."
},
{
"code": null,
"e": 711,
"s": 682,
"text": "To reinstall the bootloader."
},
{
"code": null,
"e": 719,
"s": 711,
"text": "Syntax:"
},
{
"code": null,
"e": 752,
"s": 719,
"text": "chroot /path/to/new/root command"
},
{
"code": null,
"e": 755,
"s": 752,
"text": "OR"
},
{
"code": null,
"e": 796,
"s": 755,
"text": "chroot /path/to/new/root /path/to/server"
},
{
"code": null,
"e": 799,
"s": 796,
"text": "OR"
},
{
"code": null,
"e": 850,
"s": 799,
"text": "chroot [options] /path/to/new/root /path/to/server"
},
{
"code": null,
"e": 859,
"s": 850,
"text": "Options:"
},
{
"code": null,
"e": 1012,
"s": 859,
"text": "–userspec=USER:GROUP : This option describe the user and group which is to be used. Either name or numeric ID can be used to specify the user and group."
},
{
"code": null,
"e": 1082,
"s": 1012,
"text": "–groups=G_LIST : It describe the supplementary groups as g1,g2,..,gN."
},
{
"code": null,
"e": 1124,
"s": 1082,
"text": "–help : Shows the help message, and exit."
},
{
"code": null,
"e": 1172,
"s": 1124,
"text": "–version : Gives version information, and exit."
},
{
"code": null,
"e": 1181,
"s": 1172,
"text": "Example:"
},
{
"code": null,
"e": 1358,
"s": 1181,
"text": "Step 1: We will create a mini-jail with bash and basic commands only. Let’s create a “jail” directory inside the “home” directory, which will be our new root.$ mkdir $HOME/jail"
},
{
"code": null,
"e": 1377,
"s": 1358,
"text": "$ mkdir $HOME/jail"
},
{
"code": null,
"e": 1476,
"s": 1377,
"text": "Step 2: Create directories inside “$HOME/jail”:$ mkdir -p $HOME/jail/{bin, lib64}\n$ cd $HOME/jail\n"
},
{
"code": null,
"e": 1528,
"s": 1476,
"text": "$ mkdir -p $HOME/jail/{bin, lib64}\n$ cd $HOME/jail\n"
},
{
"code": null,
"e": 1649,
"s": 1528,
"text": "Step 3: Copy /bin/bash and /bin/ls into $HOME/jail/bin/ location using cp command:$ cp -v /bin/{bash, ls} $HOME/jail/bin"
},
{
"code": null,
"e": 1688,
"s": 1649,
"text": "$ cp -v /bin/{bash, ls} $HOME/jail/bin"
},
{
"code": null,
"e": 1754,
"s": 1688,
"text": "Step 4: Use ldd command to print shared libraries:$ ldd /bin/bash"
},
{
"code": null,
"e": 1770,
"s": 1754,
"text": "$ ldd /bin/bash"
},
{
"code": null,
"e": 1986,
"s": 1770,
"text": "Step 5: Copy required libraries into $HOME/jail/lib64/ location using cp command:cp -v libraries/displayed/by/above/command $HOME/jail/lib64Similarly, copy the libraries of ls command into $HOME/jail/lib64 location."
},
{
"code": null,
"e": 2046,
"s": 1986,
"text": "cp -v libraries/displayed/by/above/command $HOME/jail/lib64"
},
{
"code": null,
"e": 2122,
"s": 2046,
"text": "Similarly, copy the libraries of ls command into $HOME/jail/lib64 location."
},
{
"code": null,
"e": 2297,
"s": 2122,
"text": "Step 6: Finally, chroot into your mini-jail:$ sudo chroot $HOME/jail /bin/bashNow user sees $HOME/jail directory as its root directory. This is a great boost in the security."
},
{
"code": null,
"e": 2332,
"s": 2297,
"text": "$ sudo chroot $HOME/jail /bin/bash"
},
{
"code": null,
"e": 2429,
"s": 2332,
"text": "Now user sees $HOME/jail directory as its root directory. This is a great boost in the security."
},
{
"code": null,
"e": 2443,
"s": 2429,
"text": "linux-command"
},
{
"code": null,
"e": 2468,
"s": 2443,
"text": "Linux-directory-commands"
},
{
"code": null,
"e": 2475,
"s": 2468,
"text": "Picked"
},
{
"code": null,
"e": 2486,
"s": 2475,
"text": "Linux-Unix"
}
] |
Weekday() and WeekdayName() Function in MS Access
|
23 Sep, 2020
1. Weekday() Function : In MS Access, the Weekday() function returns the weekday number for a given date. A date is passed as a parameter and the function returns the weekday of that date. By default, 1 denotes Sunday and 7 denotes Saturday. The second parameter is optional; it specifies the first day of the week.
Syntax :
Weekday(date, firstdayofweek)
Parameters Value :
Example-1 :
SELECT Weekday(#06/17/2020#);
Output :
4
Example-2 :
SELECT Weekday(Date());
Output :
5
2. WeekdayName() Function : In MS Access, the WeekdayName() function returns the weekday name. The first parameter is the weekday number and the second parameter is abbreviate, which is optional: pass TRUE if you want the abbreviated name, otherwise pass FALSE. The third parameter is the first day of the week; it is also optional.
Syntax :
WeekdayName(number, abbreviate, firstdayofweek)
Parameters :
Example-1 :
SELECT WeekdayName(1);
Output :
Sunday
Example-2 :
SELECT WeekdayName (3, TRUE, 2)
Output :
Wed
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n23 Sep, 2020"
},
{
"code": null,
"e": 364,
"s": 28,
"text": "1. Weekday() Function :In MS Access, The weekday() function returns the weekday number for a given date. In this function, a date will be passed as a parameter and it returns the weekday of that date. By default the 1 is denoted for Sunday and 7 for Saturday. The second parameter will be optional it will be the first day of the week."
},
{
"code": null,
"e": 373,
"s": 364,
"text": "Syntax :"
},
{
"code": null,
"e": 404,
"s": 373,
"text": "Weekday(date, firstdayofweek)\n"
},
{
"code": null,
"e": 423,
"s": 404,
"text": "Parameters Value :"
},
{
"code": null,
"e": 435,
"s": 423,
"text": "Example-1 :"
},
{
"code": null,
"e": 466,
"s": 435,
"text": "SELECT Weekday(#06/17/2020#);\n"
},
{
"code": null,
"e": 475,
"s": 466,
"text": "Output :"
},
{
"code": null,
"e": 478,
"s": 475,
"text": "4\n"
},
{
"code": null,
"e": 490,
"s": 478,
"text": "Example-2 :"
},
{
"code": null,
"e": 515,
"s": 490,
"text": "SELECT Weekday(Date());\n"
},
{
"code": null,
"e": 524,
"s": 515,
"text": "Output :"
},
{
"code": null,
"e": 527,
"s": 524,
"text": "5\n"
},
{
"code": null,
"e": 884,
"s": 527,
"text": "2. WeekdayName() Function :In MS Access, the WeekdayName() function returns the weekday name. In this function, the first function will be the week number and the second parameter will be abbreviate.it is optional. If you want abbreviation then pass true otherwise pass false. and the third parameter will be the first day of the week. It is also optional."
},
{
"code": null,
"e": 893,
"s": 884,
"text": "Syntax :"
},
{
"code": null,
"e": 942,
"s": 893,
"text": "WeekdayName(number, abbreviate, firstdayofweek)\n"
},
{
"code": null,
"e": 955,
"s": 942,
"text": "Parameters :"
},
{
"code": null,
"e": 967,
"s": 955,
"text": "Example-1 :"
},
{
"code": null,
"e": 991,
"s": 967,
"text": "SELECT WeekdayName(1);\n"
},
{
"code": null,
"e": 1000,
"s": 991,
"text": "Output :"
},
{
"code": null,
"e": 1008,
"s": 1000,
"text": "Sunday\n"
},
{
"code": null,
"e": 1020,
"s": 1008,
"text": "Example-2 :"
},
{
"code": null,
"e": 1053,
"s": 1020,
"text": "SELECT WeekdayName (3, TRUE, 2)\n"
},
{
"code": null,
"e": 1062,
"s": 1053,
"text": "Output :"
},
{
"code": null,
"e": 1067,
"s": 1062,
"text": "Wed\n"
},
{
"code": null,
"e": 1076,
"s": 1067,
"text": "DBMS-SQL"
},
{
"code": null,
"e": 1080,
"s": 1076,
"text": "SQL"
},
{
"code": null,
"e": 1084,
"s": 1080,
"text": "SQL"
}
] |
C# | Math.Abs() Method | Set - 2 - GeeksforGeeks
|
01 Feb, 2019
C# | Math.Abs() Method | Set – 1
In C#, Abs() is a Math class method which is used to return the absolute value of a specified number. This method can be overloaded by passing different types of parameters to it. There are a total of 7 methods in its overload list.
Math.Abs(Decimal)
Math.Abs(Double)
Math.Abs(Int16)
Math.Abs(Int32)
Math.Abs(Int64)
Math.Abs(SByte)
Math.Abs(Single)
Math.Abs(Int64)
This method is used to return the absolute value of a 64-bit signed integer.
Syntax:
public static long Abs (long val);
Parameter:
val: It is the required number which is greater than Int64.MinValue, but less than or equal to Int64.MaxValue of type System.Int64.
Return Type: It returns a 64-bit signed integer say r, such that 0 ≤ r ≤ Int64.MaxValue.
Exception: This method will give an OverflowException if the value of val is equal to Int64.MinValue.
Example:
// C# Program to illustrate the
// Math.Abs(Int64) Method
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // Taking long values
        long[] val = {Int64.MaxValue, 78345482, -4687985, 0};

        // using foreach loop
        foreach(long value in val)

            // Displaying the result
            Console.WriteLine("Absolute value of {0} = {1}",
                                          value, Math.Abs(value));
    }
}
Output:
Absolute value of 9223372036854775807 = 9223372036854775807
Absolute value of 78345482 = 78345482
Absolute value of -4687985 = 4687985
Absolute value of 0 = 0
Math.Abs(SByte)
This method is used to return the absolute value of an 8-bit signed integer.
Syntax:
public static sbyte Abs (sbyte val);
Parameter:
val: It is the required number which is greater than SByte.MinValue, but less than or equal to SByte.MaxValue of type System.SByte.
Return Type: It returns an 8-bit signed integer say r, such that 0 ≤ r ≤ SByte.MaxValue.
Exception: This method will give an OverflowException if the value of val is equal to SByte.MinValue.
Example:
// C# Program to illustrate the
// Math.Abs(SByte) Method
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // Taking SByte values
        sbyte[] sb = {SByte.MaxValue, 45, -41, 0};

        // using foreach loop
        foreach(sbyte value in sb)

            // Displaying the result
            Console.WriteLine("Absolute value of {0} = {1}",
                                          value, Math.Abs(value));
    }
}
Output:
Absolute value of 127 = 127
Absolute value of 45 = 45
Absolute value of -41 = 41
Absolute value of 0 = 0
Math.Abs(Single)
This method is used to return the absolute value of a single-precision floating-point number.
Syntax:
public static float Abs (float val);
Parameter:
val: It is the required number which is greater than or equal to Single.MinValue, but less than or equal to MaxValue of type System.Single.
Return Type: It returns a single-precision floating-point number say r, such that 0 ≤ r ≤ Single.MaxValue.
Note:
If val is equal to NegativeInfinity or PositiveInfinity, the return value will be PositiveInfinity.
If the val is equal to NaN then return value will be NaN.
Example:
// C# Program to illustrate the
// Math.Abs(Single) Method
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        float nan = float.NaN;

        // Taking float values
        float[] fl = {float.MinValue, 127.58f, 0.000f,
                        7556.48e10f, nan, float.MaxValue};

        // using foreach loop
        foreach(float value in fl)

            // Displaying the result
            Console.WriteLine("Absolute value of {0} = {1}",
                                          value, Math.Abs(value));
    }
}
Output:
Absolute value of -3.402823E+38 = 3.402823E+38
Absolute value of 127.58 = 127.58
Absolute value of 0 = 0
Absolute value of 7.55648E+13 = 7.55648E+13
Absolute value of NaN = NaN
Absolute value of 3.402823E+38 = 3.402823E+38
Reference: https://docs.microsoft.com/en-us/dotnet/api/system.math.abs?view=netframework-4.7.2
|
[
{
"code": null,
"e": 23911,
"s": 23883,
"text": "\n01 Feb, 2019"
},
{
"code": null,
"e": 23944,
"s": 23911,
"text": "C# | Math.Abs() Method | Set – 1"
},
{
"code": null,
"e": 24173,
"s": 23944,
"text": "In C#, Abs() is a Math class method which is used to return the absolute value of a specified number. This method can be overload by passing the different type of parameters to it. There are total 7 methods in its overload list."
},
{
"code": null,
"e": 27883,
"s": 24173,
"text": "Math.Abs(Decimal)Math.Abs(Double)Math.Abs(Int16)Math.Abs(Int32)Math.Abs(Int64)Math.Abs(SByte)Math.Abs(Single)Math.Abs(Int64)This method is used to return the absolute value of a 64-bit signed integer.Syntax:public static long Abs (long val);Parameter:val: It is the required number which is greater than Int64.MinValue, but less than or equal to Int64.MaxValue of type System.Int64.Return Type: It returns a 64-bit signed integer say r, such that 0 ≤ r ≤ Int64.MaxValue.Exception: This method will give OverflowException if the value of val is equals to Int64.MinValue.Example:// C# Program to illustrate the// Math.Abs(Int64) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking long values long[] val = {Int64.MaxValue, 78345482, -4687985, 0}; // using foreach loop foreach(long value in val) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of 9223372036854775807 = 9223372036854775807\nAbsolute value of 78345482 = 78345482\nAbsolute value of -4687985 = 4687985\nAbsolute value of 0 = 0\nMath.Abs(SByte)This method is used to return the absolute value of an 8-bit signed integer.Syntax:public static sbyte Abs (sbyte val);Parameter:val: It is the required number which is greater than SByte.MinValue, but less than or equal to SByte.MaxValue of type System.SByte.Return Type: It returns a 8-bit signed integer say r, such that 0 ≤ r ≤ SByte.MaxValue.Exception: This method will give OverflowException if the value of val is equals to SByte.MinValue.Example:// C# Program to illlustrate the// Math.Abs(SByte) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking SByte values sbyte[] sb = {SByte.MaxValue, 45, -41, 0}; // using foreach loop foreach(sbyte value in sb) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of 127 = 127\nAbsolute value of 45 = 45\nAbsolute value of -41 = 41\nAbsolute value of 0 = 0\nMath.Abs(Single)This method is used to return the absolute value of a single-precision floating-point number.Syntax:public static float Abs (float val);Parameter:val: It is the required number which is greater than or equal to Single.MinValue, but less than or equal to MaxValue of type System.Single.Return Type: It returns a single-precision floating-point number say r, such that 0 ≤ r ≤ Single.MaxValue.Note:If val is equal to NegativeInfinity or PositiveInfinity, the return value will be PositiveInfinity.If the val is equal to NaN then return value will be NaN.Example:// C# Program to illlustrate the// Math.Abs(Single) Methodusing System; class Geeks { // Main Method public static void Main() { float nan = float.NaN; // Taking float values float[] fl = {float.MinValue, 127.58f, 0.000f, 7556.48e10f, nan, float.MaxValue}; // using foreach loop foreach(float value in fl) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of -3.402823E+38 = 3.402823E+38\nAbsolute value of 127.58 = 127.58\nAbsolute value of 0 = 0\nAbsolute value of 7.55648E+13 = 7.55648E+13\nAbsolute value of NaN = NaN\nAbsolute value of 3.402823E+38 = 3.402823E+38\nReference: https://docs.microsoft.com/en-us/dotnet/api/system.math.abs?view=netframework-4.7.2My Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 27901,
"s": 27883,
"text": "Math.Abs(Decimal)"
},
{
"code": null,
"e": 27918,
"s": 27901,
"text": "Math.Abs(Double)"
},
{
"code": null,
"e": 27934,
"s": 27918,
"text": "Math.Abs(Int16)"
},
{
"code": null,
"e": 27950,
"s": 27934,
"text": "Math.Abs(Int32)"
},
{
"code": null,
"e": 27966,
"s": 27950,
"text": "Math.Abs(Int64)"
},
{
"code": null,
"e": 27982,
"s": 27966,
"text": "Math.Abs(SByte)"
},
{
"code": null,
"e": 31599,
"s": 27982,
"text": "Math.Abs(Single)Math.Abs(Int64)This method is used to return the absolute value of a 64-bit signed integer.Syntax:public static long Abs (long val);Parameter:val: It is the required number which is greater than Int64.MinValue, but less than or equal to Int64.MaxValue of type System.Int64.Return Type: It returns a 64-bit signed integer say r, such that 0 ≤ r ≤ Int64.MaxValue.Exception: This method will give OverflowException if the value of val is equals to Int64.MinValue.Example:// C# Program to illustrate the// Math.Abs(Int64) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking long values long[] val = {Int64.MaxValue, 78345482, -4687985, 0}; // using foreach loop foreach(long value in val) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of 9223372036854775807 = 9223372036854775807\nAbsolute value of 78345482 = 78345482\nAbsolute value of -4687985 = 4687985\nAbsolute value of 0 = 0\nMath.Abs(SByte)This method is used to return the absolute value of an 8-bit signed integer.Syntax:public static sbyte Abs (sbyte val);Parameter:val: It is the required number which is greater than SByte.MinValue, but less than or equal to SByte.MaxValue of type System.SByte.Return Type: It returns a 8-bit signed integer say r, such that 0 ≤ r ≤ SByte.MaxValue.Exception: This method will give OverflowException if the value of val is equals to SByte.MinValue.Example:// C# Program to illlustrate the// Math.Abs(SByte) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking SByte values sbyte[] sb = {SByte.MaxValue, 45, -41, 0}; // using foreach loop foreach(sbyte value in sb) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of 127 = 127\nAbsolute value of 45 = 45\nAbsolute value of -41 = 41\nAbsolute value of 0 = 0\nMath.Abs(Single)This method is used to return the absolute value of a single-precision floating-point number.Syntax:public static float Abs (float val);Parameter:val: It is the required number which is greater than or equal to Single.MinValue, but less than or equal to MaxValue of type System.Single.Return Type: It returns a single-precision floating-point number say r, such that 0 ≤ r ≤ Single.MaxValue.Note:If val is equal to NegativeInfinity or PositiveInfinity, the return value will be PositiveInfinity.If the val is equal to NaN then return value will be NaN.Example:// C# Program to illlustrate the// Math.Abs(Single) Methodusing System; class Geeks { // Main Method public static void Main() { float nan = float.NaN; // Taking float values float[] fl = {float.MinValue, 127.58f, 0.000f, 7556.48e10f, nan, float.MaxValue}; // using foreach loop foreach(float value in fl) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}Output:Absolute value of -3.402823E+38 = 3.402823E+38\nAbsolute value of 127.58 = 127.58\nAbsolute value of 0 = 0\nAbsolute value of 7.55648E+13 = 7.55648E+13\nAbsolute value of NaN = NaN\nAbsolute value of 3.402823E+38 = 3.402823E+38\nReference: https://docs.microsoft.com/en-us/dotnet/api/system.math.abs?view=netframework-4.7.2My Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 31676,
"s": 31599,
"text": "This method is used to return the absolute value of a 64-bit signed integer."
},
{
"code": null,
"e": 31684,
"s": 31676,
"text": "Syntax:"
},
{
"code": null,
"e": 31719,
"s": 31684,
"text": "public static long Abs (long val);"
},
{
"code": null,
"e": 31730,
"s": 31719,
"text": "Parameter:"
},
{
"code": null,
"e": 31862,
"s": 31730,
"text": "val: It is the required number which is greater than Int64.MinValue, but less than or equal to Int64.MaxValue of type System.Int64."
},
{
"code": null,
"e": 31951,
"s": 31862,
"text": "Return Type: It returns a 64-bit signed integer say r, such that 0 ≤ r ≤ Int64.MaxValue."
},
{
"code": null,
"e": 32051,
"s": 31951,
"text": "Exception: This method will give OverflowException if the value of val is equals to Int64.MinValue."
},
{
"code": null,
"e": 32060,
"s": 32051,
"text": "Example:"
},
{
"code": "// C# Program to illustrate the// Math.Abs(Int64) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking long values long[] val = {Int64.MaxValue, 78345482, -4687985, 0}; // using foreach loop foreach(long value in val) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}",
"e": 32520,
"s": 32060,
"text": null
},
{
"code": null,
"e": 32528,
"s": 32520,
"text": "Output:"
},
{
"code": null,
"e": 32688,
"s": 32528,
"text": "Absolute value of 9223372036854775807 = 9223372036854775807\nAbsolute value of 78345482 = 78345482\nAbsolute value of -4687985 = 4687985\nAbsolute value of 0 = 0\n"
},
{
"code": null,
"e": 32765,
"s": 32688,
"text": "This method is used to return the absolute value of an 8-bit signed integer."
},
{
"code": null,
"e": 32773,
"s": 32765,
"text": "Syntax:"
},
{
"code": null,
"e": 32810,
"s": 32773,
"text": "public static sbyte Abs (sbyte val);"
},
{
"code": null,
"e": 32821,
"s": 32810,
"text": "Parameter:"
},
{
"code": null,
"e": 32953,
"s": 32821,
"text": "val: It is the required number which is greater than SByte.MinValue, but less than or equal to SByte.MaxValue of type System.SByte."
},
{
"code": null,
"e": 33041,
"s": 32953,
"text": "Return Type: It returns a 8-bit signed integer say r, such that 0 ≤ r ≤ SByte.MaxValue."
},
{
"code": null,
"e": 33141,
"s": 33041,
"text": "Exception: This method will give OverflowException if the value of val is equals to SByte.MinValue."
},
{
"code": null,
"e": 33150,
"s": 33141,
"text": "Example:"
},
{
"code": "// C# Program to illlustrate the// Math.Abs(SByte) Methodusing System; class Geeks { // Main Method public static void Main() { // Taking SByte values sbyte[] sb = {SByte.MaxValue, 45, -41, 0}; // using foreach loop foreach(sbyte value in sb) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}",
"e": 33601,
"s": 33150,
"text": null
},
{
"code": null,
"e": 33609,
"s": 33601,
"text": "Output:"
},
{
"code": null,
"e": 33715,
"s": 33609,
"text": "Absolute value of 127 = 127\nAbsolute value of 45 = 45\nAbsolute value of -41 = 41\nAbsolute value of 0 = 0\n"
},
{
"code": null,
"e": 33809,
"s": 33715,
"text": "This method is used to return the absolute value of a single-precision floating-point number."
},
{
"code": null,
"e": 33817,
"s": 33809,
"text": "Syntax:"
},
{
"code": null,
"e": 33854,
"s": 33817,
"text": "public static float Abs (float val);"
},
{
"code": null,
"e": 33865,
"s": 33854,
"text": "Parameter:"
},
{
"code": null,
"e": 34005,
"s": 33865,
"text": "val: It is the required number which is greater than or equal to Single.MinValue, but less than or equal to MaxValue of type System.Single."
},
{
"code": null,
"e": 34112,
"s": 34005,
"text": "Return Type: It returns a single-precision floating-point number say r, such that 0 ≤ r ≤ Single.MaxValue."
},
{
"code": null,
"e": 34118,
"s": 34112,
"text": "Note:"
},
{
"code": null,
"e": 34218,
"s": 34118,
"text": "If val is equal to NegativeInfinity or PositiveInfinity, the return value will be PositiveInfinity."
},
{
"code": null,
"e": 34276,
"s": 34218,
"text": "If the val is equal to NaN then return value will be NaN."
},
{
"code": null,
"e": 34285,
"s": 34276,
"text": "Example:"
},
{
"code": "// C# Program to illlustrate the// Math.Abs(Single) Methodusing System; class Geeks { // Main Method public static void Main() { float nan = float.NaN; // Taking float values float[] fl = {float.MinValue, 127.58f, 0.000f, 7556.48e10f, nan, float.MaxValue}; // using foreach loop foreach(float value in fl) // Displaying the result Console.WriteLine(\"Absolute value of {0} = {1}\", value, Math.Abs(value)); }}",
"e": 34827,
"s": 34285,
"text": null
},
{
"code": null,
"e": 34835,
"s": 34827,
"text": "Output:"
},
{
"code": null,
"e": 35059,
"s": 34835,
"text": "Absolute value of -3.402823E+38 = 3.402823E+38\nAbsolute value of 127.58 = 127.58\nAbsolute value of 0 = 0\nAbsolute value of 7.55648E+13 = 7.55648E+13\nAbsolute value of NaN = NaN\nAbsolute value of 3.402823E+38 = 3.402823E+38\n"
},
{
"code": null,
"e": 35154,
"s": 35059,
"text": "Reference: https://docs.microsoft.com/en-us/dotnet/api/system.math.abs?view=netframework-4.7.2"
},
{
"code": null,
"e": 35166,
"s": 35154,
"text": "CSharp-Math"
},
{
"code": null,
"e": 35180,
"s": 35166,
"text": "CSharp-method"
},
{
"code": null,
"e": 35183,
"s": 35180,
"text": "C#"
},
{
"code": null,
"e": 35281,
"s": 35183,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35290,
"s": 35281,
"text": "Comments"
},
{
"code": null,
"e": 35303,
"s": 35290,
"text": "Old Comments"
},
{
"code": null,
"e": 35341,
"s": 35303,
"text": "Program to calculate Electricity Bill"
},
{
"code": null,
"e": 35374,
"s": 35341,
"text": "Linked List Implementation in C#"
},
{
"code": null,
"e": 35417,
"s": 35374,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 35445,
"s": 35417,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 35470,
"s": 35445,
"text": "Lambda Expressions in C#"
},
{
"code": null,
"e": 35488,
"s": 35470,
"text": "Main Method in C#"
},
{
"code": null,
"e": 35538,
"s": 35488,
"text": "Difference between Hashtable and Dictionary in C#"
},
{
"code": null,
"e": 35567,
"s": 35538,
"text": "C# | Dictionary.Add() Method"
},
{
"code": null,
"e": 35585,
"s": 35567,
"text": "Collections in C#"
}
] |
Java String concat() Method
|
Concatenate two strings:
String firstName = "John ";
String lastName = "Doe";
System.out.println(firstName.concat(lastName));
The concat() method appends (concatenates) a string to the end of another string.
public String concat(String string2)
|
[
{
"code": null,
"e": 19,
"s": 0,
"text": "\n❮ String Methods\n"
},
{
"code": null,
"e": 44,
"s": 19,
"text": "Concatenate two strings:"
},
{
"code": null,
"e": 145,
"s": 44,
"text": "String firstName = \"John \";\nString lastName = \"Doe\";\nSystem.out.println(firstName.concat(lastName));"
},
{
"code": null,
"e": 165,
"s": 145,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 247,
"s": 165,
"text": "The concat() method appends (concatenate) a \nstring to the end of another string."
},
{
"code": null,
"e": 285,
"s": 247,
"text": "public String concat(String string2)\n"
},
{
"code": null,
"e": 318,
"s": 285,
"text": "We just launchedW3Schools videos"
},
{
"code": null,
"e": 360,
"s": 318,
"text": "Get certifiedby completinga course today!"
},
{
"code": null,
"e": 467,
"s": 360,
"text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"
},
{
"code": null,
"e": 486,
"s": 467,
"text": "[email protected]"
}
] |
Stemming vs Lemmatization. Truncate a word to its root or base... | by Aditya Beri | Towards Data Science
|
Stemming and Lemmatization are text normalization techniques within the field of Natural Language Processing that are used to prepare text, words, and documents for further processing. In this blog, you will study stemming and lemmatization in a practical way, covering the background, the applications of stemming and lemmatization, and how to stem and lemmatize words, sentences and documents using the Python nltk package, the Natural Language Toolkit available for Python.
In natural language processing, you may want your program to recognize that the words “kick” and “kicked” are just different tenses of the same verb. This is the concept of reducing different forms of a word to a common root.
Stemming is the process of producing morphological variants of a root/base word. Stemming programs are commonly referred to as stemming algorithms or stemmers.
Often when searching text for a certain keyword, it helps if the search returns variations of the word. For instance, searching for “boat” might also return “boats” and “boating”. Here, “boat” would be the stem for [boat, boater, boating, boats].
Stemming is a somewhat crude method for cataloging related words; it essentially chops off letters from the end until the stem is reached. This works fairly well in most cases, but unfortunately English has many exceptions where a more sophisticated process is required. In fact, spaCy doesn’t include a stemmer, opting instead to rely entirely on lemmatization.
One of the most common and effective stemming tools is Porter’s Algorithm, developed by Martin Porter in 1980. The algorithm employs five phases of word reduction, each with its own set of mapping rules. In the first phase, simple suffix mapping rules are defined, such as SSES → SS (caresses → caress), IES → I (ponies → poni) and S → “” (cats → cat). The nltk implementation can be used as follows:
# Import the toolkit and the full Porter Stemmer library
import nltk
from nltk.stem.porter import *

p_stemmer = PorterStemmer()

words = ['run','runner','running','ran','runs','easily','fairly']

for word in words:
    print(word+' --> '+p_stemmer.stem(word))
run --> run
runner --> runner
running --> run
ran --> ran
runs --> run
easily --> easili
fairly --> fairli
Note how the stemmer recognizes “runner” as a noun, not a verb form or participle. Also, the adverbs “easily” and “fairly” are stemmed to the unusual roots “easili” and “fairli”.
This is somewhat of a misnomer, as Snowball is the name of a stemming language developed by Martin Porter. The algorithm used here is more accurately called the “English Stemmer” or “Porter2 Stemmer”. It offers a slight improvement over the original Porter stemmer, both in logic and speed. Since nltk uses the name SnowballStemmer, we’ll use it here.
from nltk.stem.snowball import SnowballStemmer

# The Snowball Stemmer requires that you pass a language parameter
s_stemmer = SnowballStemmer(language='english')

words = ['run','runner','running','ran','runs','easily','fairly']

for word in words:
    print(word+' --> '+s_stemmer.stem(word))
run --> run
runner --> runner
running --> run
ran --> ran
runs --> run
easily --> easili
fairly --> fair
In this case, the stemmer performed the same as the Porter Stemmer, with the exception that it handled the stem of “fairly” more appropriately, returning “fair”.
Stemming has its drawbacks. If given the token saw, stemming might always return saw, whereas lemmatization would likely return either see or saw depending on whether the use of the token was as a verb or a noun.
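A minimal sketch of this difference, reusing the Porter stemmer from above together with the spaCy pipeline introduced in the next section (the en_core_web_sm model and the sample sentence are illustrative choices, not part of the original example):

import spacy
from nltk.stem.porter import PorterStemmer

p_stemmer = PorterStemmer()
nlp = spacy.load('en_core_web_sm')

# The stemmer has no notion of context, so "saw" always maps to "saw"
print(p_stemmer.stem('saw'))        # saw

# The lemmatizer uses the part of speech assigned in context,
# so the verb "saw" should come back as "see" and the noun "saw" as "saw"
for token in nlp(u"I saw a saw."):
    print(token.text, token.pos_, token.lemma_)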
In contrast to stemming, lemmatization looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words. The lemma of ‘was’ is ‘be’ and the lemma of ‘mice’ is ‘mouse’.
Lemmatization is typically seen as much more informative than simple stemming, which is why spaCy has opted to make only lemmatization available instead of stemming.
Lemmatization looks at the surrounding text to determine a given word’s part of speech; it does not categorize phrases.
# Perform standard imports:
import spacy
nlp = spacy.load('en_core_web_sm')

def show_lemmas(text):
    for token in text:
        print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma:<{22}} {token.lemma_}')
Here we’re using an f-string to format the printed text by setting minimum field widths and adding a left-align to the lemma hash value.
doc = nlp(u"I saw eighteen mice today!")show_lemmas(doc)
I            PRON   561228191312463089     -PRON-
saw          VERB   11925638236994514241   see
eighteen     NUM    9609336664675087640    eighteen
mice         NOUN   1384165645700560590    mouse
today        NOUN   11042482332948150395   today
!            PUNCT  17494803046312582752   !
Notice that the lemma of `saw` is `see`, `mice` is the plural form of `mouse`, and yet `eighteen` is its own number, *not* an expanded form of `eight`.
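As a further illustration of how the part of speech drives the result, the same surface form can receive two different lemmas in one sentence. This reuses the nlp object loaded above; the sample sentence is an assumption chosen for demonstration, not from the original post:

# "meeting" appears once as a verb and once as a noun
doc2 = nlp(u"I am meeting him tomorrow at the meeting.")

for token in doc2:
    print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma_}')

# The verb use should lemmatize to "meet", while the noun use should stay "meeting"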
CONCLUSION
One thing to note about lemmatization is that it is harder to create a lemmatizer for a new language than it is to create a stemming algorithm, because lemmatizers require much more knowledge about the structure of the language.
Stemming and lemmatization both generate the base form of inflected words; the only difference is that a stem may not be an actual word, whereas a lemma is an actual word of the language. Stemming follows an algorithm with steps to perform on the words, which makes it faster. In lemmatization, a corpus is also used to supply the lemma, which makes it slower than stemming. You may also have to define a part of speech to get the proper lemma, as the short sketch below shows.
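A short sketch of that contrast, assuming the p_stemmer and nlp objects from the earlier snippets are still available (the adverb examples are the same ones stemmed above; the sentence is only illustrative):

# Stems are not guaranteed to be dictionary words...
for word in ['easily', 'fairly']:
    print(word, '-->', p_stemmer.stem(word))    # easili, fairli

# ...whereas lemmas are actual words, chosen with the help of the part of speech
for token in nlp(u"She finished the race easily and fairly quickly."):
    print(token.text, token.pos_, token.lemma_)  # the adverbs should lemmatize to themselves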
The above points show that if speed is the main concern, stemming should be used, since lemmatizers scan a corpus, which consumes time and processing. Whether to use stemmers or lemmatizers ultimately depends on the problem you are working on.
Thanks to Jose Portilla’s work for helping throughout
Note- All the code explained has been given in the GitHub repo with some more examples to enhance your knowledge and get a better grip over this topic. Also, extra concepts on stop words and vocabulary have been included there. Click on the link below:-
|
[
{
"code": null,
"e": 670,
"s": 172,
"text": "Stemming and Lemmatization are text normalization techniques within the field of Natural language Processing that are used to prepare text, words, and documents for further processing. In this blog, you may study stemming and lemmatization in an exceedingly practical approach covering the background, applications of stemming and lemmatization, and the way to stem and lemmatize words, sentences and documents using the Python nltk package which is the natural language package provided by Python"
},
{
"code": null,
"e": 900,
"s": 670,
"text": "In natural language processing, you may want your program to acknowledge that the words “kick” and “kicked” are just different tenses of the same verb. this can be the concept of reducing different kinds of a word to a core root."
},
{
"code": null,
"e": 1060,
"s": 900,
"text": "Stemming is the process of producing morphological variants of a root/base word. Stemming programs are commonly referred to as stemming algorithms or stemmers."
},
{
"code": null,
"e": 1307,
"s": 1060,
"text": "Often when searching text for a certain keyword, it helps if the search returns variations of the word. For instance, searching for “boat” might also return “boats” and “boating”. Here, “boat” would be the stem for [boat, boater, boating, boats]."
},
{
"code": null,
"e": 1670,
"s": 1307,
"text": "Stemming is a somewhat crude method for cataloging related words; it essentially chops off letters from the end until the stem is reached. This works fairly well in most cases, but unfortunately English has many exceptions where a more sophisticated process is required. In fact, spaCy doesn’t include a stemmer, opting instead to rely entirely on lemmatization."
},
{
"code": null,
"e": 1947,
"s": 1670,
"text": "One of the most common — and effective — stemming tools is Porter’s Algorithm developed by Martin Porter in 1980. The algorithm employs five phases of word reduction, each with its own set of mapping rules. In the first phase, simple suffix mapping rules are defined, such as:"
},
{
"code": null,
"e": 2199,
"s": 1947,
"text": "# Import the toolkit and the full Porter Stemmer libraryimport nltkfrom nltk.stem.porter import *p_stemmer = PorterStemmer()words = ['run','runner','running','ran','runs','easily','fairly']for word in words: print(word+' --> '+p_stemmer.stem(word))"
},
{
"code": null,
"e": 2300,
"s": 2199,
"text": "run --> runrunner --> runnerrunning --> runran --> ranruns --> runeasily --> easilifairly --> fairli"
},
{
"code": null,
"e": 2477,
"s": 2300,
"text": "Note how the stemmer recognizes “runner” as a noun, not a verb form or participle. Also, the adverbs “easily” and “fairly” are stemmed to the unusual root “easili” and “fairli”"
},
{
"code": null,
"e": 2829,
"s": 2477,
"text": "This is somewhat of a misnomer, as Snowball is the name of a stemming language developed by Martin Porter. The algorithm used here is more accurately called the “English Stemmer” or “Porter2 Stemmer”. It offers a slight improvement over the original Porter stemmer, both in logic and speed. Since nltk uses the name SnowballStemmer, we’ll use it here."
},
{
"code": null,
"e": 3115,
"s": 2829,
"text": "from nltk.stem.snowball import SnowballStemmer# The Snowball Stemmer requires that you pass a language parameters_stemmer = SnowballStemmer(language='english')words = ['run','runner','running','ran','runs','easily','fairly'for word in words: print(word+' --> '+s_stemmer.stem(word))"
},
{
"code": null,
"e": 3214,
"s": 3115,
"text": "run --> runrunner --> runnerrunning --> runran --> ranruns --> runeasily --> easilifairly --> fair"
},
{
"code": null,
"e": 3369,
"s": 3214,
"text": "In this case, the stemmer performed the same as the Porter Stemmer, with the exception that it handled the stem of “fairly” more appropriately with “fair”"
},
{
"code": null,
"e": 3581,
"s": 3369,
"text": "Stemming has its drawbacks. If given the token saw, stemming might always return saw, whereas lemmatization would likely return either see or saw depending on whether the use of the token was as a verb or a noun"
},
{
"code": null,
"e": 3798,
"s": 3581,
"text": "In contrast to stemming, lemmatization looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words. The lemma of ‘was’ is ‘be’ and the lemma of ‘mice’ is ‘mouse’."
},
{
"code": null,
"e": 3963,
"s": 3798,
"text": "Lemmatization is typically seen as much more informative than simple stemming, which is why Spacy has opted to only have Lemmatization available instead of Stemming"
},
{
"code": null,
"e": 4079,
"s": 3963,
"text": "Lemmatization looks at surrounding text to determine a given word’s part of speech, it does not categorize phrases."
},
{
"code": null,
"e": 4284,
"s": 4079,
"text": "# Perform standard imports:import spacynlp = spacy.load('en_core_web_sm')def show_lemmas(text): for token in text: print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma:<{22}} {token.lemma_}')"
},
{
"code": null,
"e": 4421,
"s": 4284,
"text": "Here we’re using an f-string to format the printed text by setting minimum field widths and adding a left-align to the lemma hash value."
},
{
"code": null,
"e": 4478,
"s": 4421,
"text": "doc = nlp(u\"I saw eighteen mice today!\")show_lemmas(doc)"
},
{
"code": null,
"e": 4765,
"s": 4478,
"text": "I PRON 561228191312463089 -PRON-saw VERB 11925638236994514241 seeeighteen NUM 9609336664675087640 eighteenmice NOUN 1384165645700560590 mousetoday NOUN 11042482332948150395 today! PUNCT 17494803046312582752 !"
},
{
"code": null,
"e": 4917,
"s": 4765,
"text": "Notice that the lemma of `saw` is `see`, `mice` is the plural form of `mouse`, and yet `eighteen` is its own number, *not* an expanded form of `eight`."
},
{
"code": null,
"e": 4928,
"s": 4917,
"text": "CONCLUSION"
},
{
"code": null,
"e": 5146,
"s": 4928,
"text": "One thing to note about lemmatization is that it is harder to create a lemmatizer in a new language than it is a stemming algorithm because we require a lot more knowledge about structure of a language in lemmatizers."
},
{
"code": null,
"e": 5615,
"s": 5146,
"text": "Stemming and Lemmatization both generate the foundation sort of the inflected words and therefore the only difference is that stem may not be an actual word whereas, lemma is an actual language word.Stemming follows an algorithm with steps to perform on the words which makes it faster. Whereas, in lemmatization, you used a corpus also to supply lemma which makes it slower than stemming. you furthermore might had to define a parts-of-speech to get the proper lemma."
},
{
"code": null,
"e": 5864,
"s": 5615,
"text": "The above points show that if speed is concentrated then stemming should be used since lemmatizers scan a corpus which consumes time and processing. It depends on the problem you’re working on that decides if stemmers should be used or lemmatizers."
},
{
"code": null,
"e": 5918,
"s": 5864,
"text": "Thanks to Jose Portilla’s work for helping throughout"
}
] |
Flutter - BorderRadius Widget - GeeksforGeeks
|
21 Feb, 2022
BorderRadius is a built-in widget in Flutter. Its main functionality is to add a curve around the border-corner of a widget. There are in total five ways in which we can use this widget. The first is BorderRadius.all, where the radius for all the corners is the same. The second way is by using BorderRadius.circular, where we need to specify the radius only once as a double value. The third way is by using BorderRadius.horizontal, where we can specify a different border-radius for the left and the right side. The fourth way is by using BorderRadius.only, which can take a different radius for all four border corners. And the last way is by using BorderRadius.vertical, which can give a different radius to the upper and the lower portion of the border. Implementation of all these ways is shown below with the help of examples.
const BorderRadius.all(
Radius radius
)
BorderRadius.circular(
double radius
)
const BorderRadius.horizontal(
{Radius left: Radius.zero,
Radius right: Radius.zero}
)
const BorderRadius.only(
{Radius topLeft: Radius.zero,
Radius topRight: Radius.zero,
Radius bottomLeft: Radius.zero,
Radius bottomRight: Radius.zero}
)
const BorderRadius.vertical(
{Radius top: Radius.zero,
Radius bottom: Radius.zero}
)
bottomLeft: The bottomLeft property takes in Radius class as the object. It controls the radius of the bottom-left corner of the border.
// Implementation
final Radius bottomLeft
bottomRight: This property also holds Radius as the object to decide the radius of the bottom-right corner of the border.
topLeft: This property also holds Radius class as the object to decide the radius of the top-left corner of the border.
topRight: This property also holds Radius class as the object to decide the radius of the top-right corner of the border.
Now, we are going to see how we can add border-radius to the border using all the methods. The border in the app below is created by using the Border.all widget around a NetworkImage, which is placed inside a BoxDecoration widget.
BorderRadius Widget
This is how our border looks now. Let’s see how to add a curve to the corners.
Example 1: BorderRadius.all
Dart
import 'package:flutter/material.dart';

void main() {
  runApp(
    //Our app widget tree starts here
    MaterialApp(
      home: Scaffold(
          appBar: AppBar(
            title: Text('GeeksforGeeks'),
            backgroundColor: Colors.greenAccent[400],
            leading: IconButton(
              icon: Icon(Icons.menu),
              tooltip: 'Menu',
              onPressed: () {},
            ), //IconButton
            actions: <Widget>[
              IconButton(
                icon: Icon(Icons.comment),
                tooltip: 'Comment',
                onPressed: () {},
              ), //IconButton
            ], //<Widget>[]
          ), //AppBar
          body: Center(
            child: Padding(
              padding: const EdgeInsets.all(12.0),
              child: SizedBox(
                height: 250,
                child: Container(
                  decoration: BoxDecoration(
                    image: const DecorationImage(
                      image: NetworkImage(
                          'https://media.geeksforgeeks.org/wp-content/cdn-uploads/logo.png'),
                    ),
                    border: Border.all(
                        color: const Color(0xFF000000),
                        width: 4.0,
                        style: BorderStyle.solid), //Border.all
                    /*** The BorderRadius widget is here ***/
                    borderRadius: BorderRadius.all(
                      Radius.circular(10),
                    ), //BorderRadius.all
                  ), //BoxDecoration
                ),
              ),
            ),
          ), //Center
        ), //Scaffold
      debugShowCheckedModeBanner: false, //Debug banner is turned off
    ), //MaterialApp
  );
}
Output:
BorderRadius.all
Explanation: The curve around the corners of the border in the above app has been added using BorderRadius.all. BorderRadius.all takes Radius.circular as its object, with 10 passed as the radius, and we can see that a curve of radius 10 pixels has been added to all the corners.
Example 2: BorderRadius.circular
// Code snippet of BorderRadius.circular
...
borderRadius: BorderRadius.circular(50.0),
...
Output:
BorderRadius.circular
Explanation: The above code snippet is of BorderRadius.circular. It only takes in a double value as the object to give an equal curve to all the corners in the border. In the above app the radius is set to 50 pixels.
Example 3: BorderRadius.horizontal
// Code snippet of BorderRadius.horizontal
...
borderRadius: BorderRadius.horizontal(
left: Radius.circular(15),
right: Radius.circular(30),
), //BorderRadius.horizontal
...
Output:
BorderRadius.horizontal
Explanation: Here BorderRadius.horizontal has been used to add curves around the corners. Inside the BorderRadius.horizontal widget, the left property holds Radius.circular(15), which gives the left side of the border (i.e. the top-left and bottom-left corners) a radius of 15 pixels, and the right property holds Radius.circular(30), which in turn gives the right portion of the border a radius of 30 pixels.
Example 4: BorderRadius.only
// Code snippet of BorderRadius.only
...
borderRadius: BorderRadius.only(
topLeft: Radius.circular(5),
topRight: Radius.circular(10),
bottomLeft: Radius.circular(15),
bottomRight: Radius.circular(20),
),//BorderRadius.Only
...
Output:
BorderRadius.only
Explanation: In the above app, BorderRadius.only is used to add different curves around different corners of the border. BorderRadius.only takes in four properties, topLeft, topRight, bottomLeft and bottomRight, to add a specific amount of radius to each corner of the border. In the top-left corner the radius is 5 px, in the top-right corner the radius is 10 px, in the bottom-left corner the border-radius is 15 px and in the bottom-right corner the radius is 20 px.
Example 5: BorderRadius.vertical
// Code snippet of BorderRadius.vertical
...
borderRadius: BorderRadius.vertical(
top: Radius.circular(10),
bottom: Radius.circular(30),
),//BorderRadius.vertical
...
Output:
BorderRadius.vertical
Explanation: BorderRadius.vertical is the widget used here to add border-radius to the corners. It takes in top and the bottom radius to specify border-radius to the upper and the lower portion of the border. Here the border-radius added to the upper portion is 10 px and the border-radius added to the lower portion is 30 px.
To see the full code of all the examples used in this article click here.
|
[
{
"code": null,
"e": 24340,
"s": 24312,
"text": "\n21 Feb, 2022"
},
{
"code": null,
"e": 25186,
"s": 24340,
"text": "BorderRadius is a built-in widget in flutter. Its main functionality is to add a curve around the border-corner of a widget. There are in total of five ways in which we can use this widget, the first is by using BorderRadius.all, the radius for all the corners are the same here. The second way is by using BorderRadius.Circle, here we need to specify radius only once which would be a double value. The third way is by using BorderRadius.horizontal, here we can specify different border-radius for the left and the right side. The fourth way is by using BorderRadius.only, it can take a different radius for all four border corners. And the last way is by using BorderRadius.vertical, which can give a different radius to the upper and the lower portion of the border. Implementation of all these ways is shown below with the help of examples."
},
{
"code": null,
"e": 25226,
"s": 25186,
"text": "const BorderRadius.all(\nRadius radius\n)"
},
{
"code": null,
"e": 25265,
"s": 25226,
"text": "BorderRadius.circular(\ndouble radius\n)"
},
{
"code": null,
"e": 25352,
"s": 25265,
"text": "const BorderRadius.horizontal(\n{Radius left: Radius.zero,\nRadius right: Radius.zero}\n)"
},
{
"code": null,
"e": 25504,
"s": 25352,
"text": "const BorderRadius.only(\n{Radius topLeft: Radius.zero,\nRadius topRight: Radius.zero,\nRadius bottomLeft: Radius.zero,\nRadius bottomRight: Radius.zero}\n)"
},
{
"code": null,
"e": 25589,
"s": 25504,
"text": "const BorderRadius.vertical(\n{Radius top: Radius.zero,\nRadius bottom: Radius.zero}\n)"
},
{
"code": null,
"e": 25726,
"s": 25589,
"text": "bottomLeft: The bottomLeft property takes in Radius class as the object. It controls the radius of the bottom-left corner of the border."
},
{
"code": null,
"e": 25778,
"s": 25726,
"text": " // Implementation\n final Radius bottomLeft"
},
{
"code": null,
"e": 25900,
"s": 25778,
"text": "bottomRight: This property also holds Radius as the object to decide the radius of the bottom-right corner of the border."
},
{
"code": null,
"e": 26020,
"s": 25900,
"text": "topLeft: This property also holds Radius class as the object to decide the radius of the top-left corner of the border."
},
{
"code": null,
"e": 26142,
"s": 26020,
"text": "topRight: This property also holds Radius class as the object to decide the radius of the top-right corner of the border."
},
{
"code": null,
"e": 26367,
"s": 26142,
"text": "Now, we are going to see how we can add border-radius to the border using all the methods. The border in the app below is created by using Border.all widget, around a NetworkImage which is placed inside BoxDecoration widget."
},
{
"code": null,
"e": 26387,
"s": 26367,
"text": "BorderRadius Widget"
},
{
"code": null,
"e": 26466,
"s": 26387,
"text": "This is how our border looks now. Let’s see how to add a curve to the corners."
},
{
"code": null,
"e": 26494,
"s": 26466,
"text": "Example 1: BorderRadius.all"
},
{
"code": null,
"e": 26499,
"s": 26494,
"text": "Dart"
},
{
"code": "import 'package:flutter/material.dart'; void main() { runApp( //Our app widget tree starts here MaterialApp( home: Scaffold( appBar: AppBar( title: Text('GeeksforGeeks'), backgroundColor: Colors.greenAccent[400], leading: IconButton( icon: Icon(Icons.menu), tooltip: 'Menu', onPressed: () {}, ), //IconButton actions: <Widget>[ IconButton( icon: Icon(Icons.comment), tooltip: 'Comment', onPressed: () {}, ), //IconButton ], //<Widget>[] ), //AppBar body: Center( child: Padding( padding: const EdgeInsets.all(12.0), child: SizedBox( height: 250, child: Container( decoration: BoxDecoration( image: const DecorationImage( image: NetworkImage( 'https://media.geeksforgeeks.org/wp-content/cdn-uploads/logo.png'), ), border: Border.all( color: const Color(0xFF000000), width: 4.0, style: BorderStyle.solid), //Border.all /*** The BorderRadius widget is here ***/ borderRadius: BorderRadius.all( Radius.circular(10), ), //BorderRadius.all ), //BoxDecoration ), ), ), ), //Center ), //Scaffold debugShowCheckedModeBanner: false, //Debug banner is turned off ), //MaterialApp );}",
"e": 28121,
"s": 26499,
"text": null
},
{
"code": null,
"e": 28129,
"s": 28121,
"text": "Output:"
},
{
"code": null,
"e": 28146,
"s": 28129,
"text": "BorderRadius.all"
},
{
"code": null,
"e": 28441,
"s": 28146,
"text": "Explanation: The curve around the corners of the borders in the above app has been added using BorderRadius.all. The BorderRadius.all is taking Radius.circular as the object and 10 is the parameter assigned to that. And we can see a curve of radius 10 pixels has been added to all the corners."
},
{
"code": null,
"e": 28472,
"s": 28441,
"text": "Example 2: BorderRadius.circle"
},
{
"code": null,
"e": 28602,
"s": 28472,
"text": " // Code snippet of the BorderRadius.Circular\n ...\n borderRadius: BorderRadius.circular(50.0),\n ..."
},
{
"code": null,
"e": 28610,
"s": 28602,
"text": "Output:"
},
{
"code": null,
"e": 28632,
"s": 28610,
"text": "BorderRadius.circular"
},
{
"code": null,
"e": 28849,
"s": 28632,
"text": "Explanation: The above code snippet is of BorderRadius.circular. It only takes in a double value as the object to give an equal curve to all the corners in the border. In the above app the radius is set to 50 pixels."
},
{
"code": null,
"e": 28885,
"s": 28849,
"text": "Example 3: BorderRadous.horizontal:"
},
{
"code": null,
"e": 29167,
"s": 28885,
"text": " // Code sippet of BorderRadius.horizontal\n ...\n borderRadius: BorderRadius.horizontal(\n left: Radius.circular(15),\n right: Radius.circular(30),\n ), //BorderRadius.horizontal\n ..."
},
{
"code": null,
"e": 29175,
"s": 29167,
"text": "Output:"
},
{
"code": null,
"e": 29200,
"s": 29175,
"text": " BorderRadius.horizontal"
},
{
"code": null,
"e": 29614,
"s": 29200,
"text": "Explanation: Here BorderRadius.horizontal has been used to add a border around the corners. Inside the BorderRadius.horizontal widget the left property is holding Radius.circular(15), which gives the left side of the border (i.e. the top-left and bottom,-left) a radius of 15 pixels and the right property is holding Radius.circular(30), which in turn gives the right portion of the border a radius of 30 pixels."
},
{
"code": null,
"e": 29642,
"s": 29614,
"text": "Example 4: BorderRadus.only"
},
{
"code": null,
"e": 30008,
"s": 29642,
"text": " // Code sippet of BorderRadius.only\n ...\n borderRadius: BorderRadius.only(\n topLeft: Radius.circular(5),\n topRight: Radius.circular(10),\n bottomLeft: Radius.circular(15),\n bottomRight: Radius.circular(20),\n ),//BorderRadius.Only\n ..."
},
{
"code": null,
"e": 30017,
"s": 30008,
"text": "Output: "
},
{
"code": null,
"e": 30035,
"s": 30017,
"text": "BorderRadius.only"
},
{
"code": null,
"e": 30513,
"s": 30035,
"text": "Explanation: In the above app the BorderRadius.only is used to add different curves around different corners of the borders. BorderRadius.only takes in four properties that are topLeft, topRight, bottomLeft and bottomRight to add a specific amount of radius to the corners in the border. In the top-left corner the radius is 5px,in the top-right corner the radius is10 px, in the bottom-left corner the border-radius is 15 px and in the bottom-right corner the radius is 20 px."
},
{
"code": null,
"e": 30547,
"s": 30513,
"text": "Example 5: BorderRadius.vertical "
},
{
"code": null,
"e": 30813,
"s": 30547,
"text": " // Code sippet of BorderRadius.vertical\n ...\n borderRadius: BorderRadius.vertical(\n top: Radius.circular(10),\n bottom: Radius.circular(30),\n ),//BorderRadius.vertical\n ..."
},
{
"code": null,
"e": 30821,
"s": 30813,
"text": "Output:"
},
{
"code": null,
"e": 30845,
"s": 30821,
"text": " BorderRadius.vertical "
},
{
"code": null,
"e": 31172,
"s": 30845,
"text": "Explanation: BorderRadius.vertical is the widget used here to add border-radius to the corners. It takes in top and the bottom radius to specify border-radius to the upper and the lower portion of the border. Here the border-radius added to the upper portion is 10 px and the border-radius added to the lower portion is 30 px."
},
{
"code": null,
"e": 31247,
"s": 31172,
"text": "To see the full code of all the examples used in this article click here. "
},
{
"code": null,
"e": 31263,
"s": 31247,
"text": "simranarora5sos"
},
{
"code": null,
"e": 31271,
"s": 31263,
"text": "android"
},
{
"code": null,
"e": 31279,
"s": 31271,
"text": "Flutter"
},
{
"code": null,
"e": 31295,
"s": 31279,
"text": "Flutter-widgets"
},
{
"code": null,
"e": 31303,
"s": 31295,
"text": "Android"
},
{
"code": null,
"e": 31308,
"s": 31303,
"text": "Dart"
},
{
"code": null,
"e": 31316,
"s": 31308,
"text": "Flutter"
},
{
"code": null,
"e": 31324,
"s": 31316,
"text": "Android"
},
{
"code": null,
"e": 31422,
"s": 31324,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31480,
"s": 31422,
"text": "How to Create and Add Data to SQLite Database in Android?"
},
{
"code": null,
"e": 31523,
"s": 31480,
"text": "Broadcast Receiver in Android With Example"
},
{
"code": null,
"e": 31556,
"s": 31523,
"text": "Services in Android with Example"
},
{
"code": null,
"e": 31598,
"s": 31556,
"text": "Content Providers in Android with Example"
},
{
"code": null,
"e": 31629,
"s": 31598,
"text": "Android RecyclerView in Kotlin"
},
{
"code": null,
"e": 31661,
"s": 31629,
"text": "Flutter - DropDownButton Widget"
},
{
"code": null,
"e": 31683,
"s": 31661,
"text": "Flutter - Asset Image"
},
{
"code": null,
"e": 31722,
"s": 31683,
"text": "Flutter - Custom Bottom Navigation Bar"
},
{
"code": null,
"e": 31747,
"s": 31722,
"text": "Splash Screen in Flutter"
}
] |
How to use POST method to send data in jQuery Ajax?
|
The jQuery.post( url, [data], [callback], [type] ) method loads a page from the server using a POST HTTP request.
Here is the description of all the parameters used by this method −
url − A string containing the URL to which the request is sent
data − This optional parameter represents key/value pairs or the return value of the .serialize() function that will be sent to the server.
callback − This optional parameter represents a function to be executed whenever the data is loaded successfully.
type − This optional parameter represents a type of data to be returned to callback function: "xml", "html", "script", "json", "jsonp", or "text".
Let’s say we have the following PHP content in result.php file:
<?php
if( $_REQUEST["name"] ) {
$name = $_REQUEST['name'];
echo "Welcome ". $name;
}
?>
Here's the code snippet to implement how to use POST method to send data in jQuery Ajax:
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script>
$(document).ready(function() {
$("#driver").click(function(event){
$.post(
"result.php",
{ name: "John" },
function(data) {
$('#stage').html(data);
}
);
});
});
</script>
</head>
<body>
<p>Click on the button to load result.html file −</p>
<div id = "stage" style = "background-color:cc0;">
STAGE
</div>
<input type = "button" id = "driver" value = "Load Data" />
</body>
|
[
{
"code": null,
"e": 1176,
"s": 1062,
"text": "The jQuery.post( url, [data], [callback], [type] ) method loads a page from the server using a POST HTTP request."
},
{
"code": null,
"e": 1244,
"s": 1176,
"text": "Here is the description of all the parameters used by this method −"
},
{
"code": null,
"e": 1307,
"s": 1244,
"text": "url − A string containing the URL to which the request is sent"
},
{
"code": null,
"e": 1447,
"s": 1307,
"text": "data − This optional parameter represents key/value pairs or the return value of the .serialize() function that will be sent to the server."
},
{
"code": null,
"e": 1561,
"s": 1447,
"text": "callback − This optional parameter represents a function to be executed whenever the data is loaded successfully."
},
{
"code": null,
"e": 1708,
"s": 1561,
"text": "type − This optional parameter represents a type of data to be returned to callback function: \"xml\", \"html\", \"script\", \"json\", \"jsonp\", or \"text\"."
},
{
"code": null,
"e": 1772,
"s": 1708,
"text": "Let’s say we have the following PHP content in result.php file:"
},
{
"code": null,
"e": 1867,
"s": 1772,
"text": "<?php\nif( $_REQUEST[\"name\"] ) {\n\n $name = $_REQUEST['name'];\n echo \"Welcome \". $name;\n}\n?>"
},
{
"code": null,
"e": 1956,
"s": 1867,
"text": "Here's the code snippet to implement how to use POST method to send data in jQuery Ajax:"
},
{
"code": null,
"e": 2743,
"s": 1956,
"text": " <head>\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js\"></script>\n \n <script>\n $(document).ready(function() {\n \n $(\"#driver\").click(function(event){\n \n $.post(\n \"result.php\",\n { name: \"John\" },\n function(data) {\n $('#stage').html(data);\n }\n ); \n });\n });\n </script>\n </head>\n \n <body>\n \n <p>Click on the button to load result.html file −</p>\n \n <div id = \"stage\" style = \"background-color:cc0;\">\n STAGE\n </div>\n \n <input type = \"button\" id = \"driver\" value = \"Load Data\" />\n \n </body>"
}
] |
Essential Guide To Translating Between Python and R | by Molly Liebeskind | Towards Data Science
|
As a native English speaker, when I started learning Spanish as a child, three key practices helped me go from awkwardly trying to translate my English words into Spanish to thinking and responding (occasionally even dreaming) directly in Spanish.
Connecting the new Spanish word to the English word I already knew. Drawing parallels between Spanish and English words enables me to quickly grasp the meaning of the new word.
Repeating the word many times and using it in many different scenarios drilled the words into my mind.
Leveraging contextual clues enabled me to better understand how and why the word is used over its synonyms.
When you first learn to code, repetition and contextualizing are essential. Through consistent repetition, you begin to memorize the vocabulary and syntax. As you begin seeing the code used in different contextual environments, often through project development you are able to understand how and why different functions and techniques are used. But there isn’t necessarily an easy way to connect the new way of thinking to the language you speak, which means that you aren’t just memorizing a word but, instead you have to develop a new understanding of each programming concept. Even the first line of code you write, print(“Hello World!”) requires you to learn how the print function works, how the editor returns a print statement, and when to use quotation marks. When you learn a second programming language, you have the benefit of translating concepts from the language you know to the new language to learn more efficiently and quickly.
The world of data science is split between Python advocates and R enthusiasts. But rather than declaring a side, anyone who has learned one of these languages should capitalize on their advantage and dive into the other. There are infinite parallels between Python and R, and with both languages at your disposal, you can solve challenges in the best way possible rather than limiting yourself to half of the tool shed. Below is a simple guide to connecting R and Python for easy translation between the two. By making these connections, repeatedly interacting with the new language, and contextualizing with projects, anyone who understands either Python or R can quickly begin programming in the other.
Looking at the Python and R side-by-side, you can see that they function and appear very similar with only minor differences in their syntax.
Datatypes
# Python                          # R
type()                            class()
type(5) #int                      class(5) #numeric
type(5.5) #float                  class(5.5) #numeric
type('Hello') #string             class('Hello') #character
type(True) #bool                  class(True) #logical
Assigning variables
# Python                          # R
a = 5                             a <- 5
Importing Packages
# Python                          # R
pip install packagename           install.packages(packagename)
import packagename                library(packagename)
Math: It’s the same — math is the same in all languages!
# Python                          # R
+ - / *                           + - / *
# The same goes for logical operators
< #less than                      < #less than
> #greater than                   > #greater than
<= #less than or equal to         <= #less than or equal to
== #is equal to                   == #is equal to
!= #is not equal to               != #is not equal to
& #and                            & #and
| #or                             | #or
Calling a function
# Python                          # R
functionname(args, kwargs)        functionname(args, kwargs)
print("Hello World!")             print("Hello World!")
If / Else Statements
# Python                          # R
if True:                          if (TRUE) {
   print('Hello World!')              print('Go to sleep!')
else:                             } else {
   print('Not true!')                 print('Not true!')
                                  }
Lists & Vectors: This one is a bit of a stretch but I have found the connection to be helpful.
In Python, a list is a mutable collection of ordered items of any datatype. Indexing a list in Python starts at 0, and slice ranges exclude the end index.
In R, a vector is a mutable collection of ordered items of the same type. Indexing a vector in R starts at 1, and slice ranges include the end index.
# Python                          # R
ls = [1, 'a', 3, False]           vc <- c(1, 2, 3, 4)
# Python indexing starts at 0, R indexing starts at 1
b = ls[0]                         b = vc[1]
print(b) #returns 1               print(b) #returns 1
c = ls[0:1]                       c = vc[1:2]
print(c) #returns [1]             print(c) #returns 1, 2
For loops
# Python                          # R
for i in range(2, 5):             for(i in 1:10) {
   a = i                              a <- i
                                  }
Both python and R offer simple and streamlined data manipulation, making them essential tools for data scientists. Both languages are equipped with packages that enable loading, cleaning, and processing of data frames.
In python, pandas is the most common library used for loading and manipulating data frames using the DataFrame object.
In R, tidyverse is a similar library that enables simple data frame manipulation using the data.frame object.
Additionally, for streamlined code, both languages allow multiple operations to be piped together. In python, the . can be used to combine different operations while the %>% pipe is used in R.
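As an illustrative sketch of chaining with . on the Python side (the toy DataFrame and the column names col, col1 and col2 below are made up for the example):
import pandas as pd

df = pd.DataFrame({'col': [1, 4, 5, 2, 6],
                   'col1': ['a', 'a', 'b', 'b', 'a'],
                   'col2': [10, 20, 30, 40, 50]})

# Several operations chained together with "." in one readable pipeline
result = (
    df[df.col > 3]
    .groupby('col1')['col2']
    .mean()
    .reset_index()
)
print(result)
The equivalent R pipeline built with %>% appears in the aggregation examples further down.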
Reading, Writing, and Viewing Data
# Python                          # R
import pandas as pd               library(tidyverse)
# load and view data
df = pd.read_csv('path.csv')      df <- read_csv('path.csv')
df.head()                         head(df)
df.sample(100)                    sample(df, 100)
df.describe()                     summary(df)
# write to csv
df.to_csv('exp_path.csv')         write_csv(df, 'exp_path.csv')
Renaming & Adding Columns
# Python                              # R
df = df.rename({'a': 'b'}, axis=1)    df %>% rename(b = a)
df.newcol = [1, 2, 3]                 df$newcol <- c(1, 2, 3)
df['newcol'] = [1, 2, 3]              df %>% mutate(newcol = c(1, 2, 3))
Selecting and Filtering Columns
# Python                          # R
df[['col1', 'col2']]              df %>% select(col1, col2)
df.drop('col1', axis=1)           df %>% select(-col1)
Filtering Rows
# Python                          # R
df.drop_duplicates()              df %>% distinct()
df[df.col > 3]                    df %>% filter(col > 3)
Sorting Values
# Python                          # R
df.sort_values(by='column')       arrange(df, column)
Aggregating Data
# Python
df.groupby('col1')['agg_col'].agg(['mean']).reset_index()

# R
df %>%
  group_by(col1) %>%
  summarize(mean = mean(agg_col, na.rm=TRUE)) %>%
  ungroup() #if resetting index
Aggregating With Filter
# Python
df.groupby('col1').filter(lambda x: x.col2.mean() > 10)

# R
df %>% group_by(col1) %>% filter(mean(col2) > 10)
Merging Data Frames
# Python
pd.merge(df1, df2, left_on="df1_col", right_on="df2_col")

# R
merge(df1, df2, by.x="df1_col", by.y="df2_col")
The above examples are a starting point for creating mental parallels between Python and R. While most data scientists will lean towards one language or the other, being comfortable in both enables you to leverage the tools that best fit your needs.
Please leave any parallels that you have found helpful and additional comments below.
|
[
{
"code": null,
"e": 420,
"s": 172,
"text": "As a native English speaker, when I started learning Spanish as a child, three key practices helped me go from awkwardly trying to translate my English words into Spanish to thinking and responding (occasionally even dreaming) directly in Spanish."
},
{
"code": null,
"e": 806,
"s": 420,
"text": "Connecting the new Spanish word to the English word I already knew. Drawing parallels between Spanish and English words enables me to quickly grasp the meaning of the new word.Repeating the word many times and using it in many different scenarios drilled the words into my mind.Leveraging contextual clues enabled me to better understand how and why the word is used over its synonyms."
},
{
"code": null,
"e": 983,
"s": 806,
"text": "Connecting the new Spanish word to the English word I already knew. Drawing parallels between Spanish and English words enables me to quickly grasp the meaning of the new word."
},
{
"code": null,
"e": 1086,
"s": 983,
"text": "Repeating the word many times and using it in many different scenarios drilled the words into my mind."
},
{
"code": null,
"e": 1194,
"s": 1086,
"text": "Leveraging contextual clues enabled me to better understand how and why the word is used over its synonyms."
},
{
"code": null,
"e": 2140,
"s": 1194,
"text": "When you first learn to code, repetition and contextualizing are essential. Through consistent repetition, you begin to memorize the vocabulary and syntax. As you begin seeing the code used in different contextual environments, often through project development you are able to understand how and why different functions and techniques are used. But there isn’t necessarily an easy way to connect the new way of thinking to the language you speak, which means that you aren’t just memorizing a word but, instead you have to develop a new understanding of each programming concept. Even the first line of code you write, print(“Hello World!”) requires you to learn how the print function works, how the editor returns a print statement, and when to use quotation marks. When you learn a second programming language, you have the benefit of translating concepts from the language you know to the new language to learn more efficiently and quickly."
},
{
"code": null,
"e": 2848,
"s": 2140,
"text": "The world of data science is split between Python advocates and R enthusiasts. But rather than declaring a side, anyone who has learned one of these languages should capitalize on their advantage and dive into the other. There are infinite parallels between Python and R and with both languages at your disposal, you can solve challenges in the best way possible rather than the limiting yourself to half of the tool shed. Below is a simple guide to connecting R and Python for easy translation between the two. By making these connections, repeatedly interacting with the new language, and contextualizing with projects, anyone who understands either Python or R can quickly begin programming in the other."
},
{
"code": null,
"e": 2990,
"s": 2848,
"text": "Looking at the Python and R side-by-side, you can see that they function and appear very similar with only minor differences in their syntax."
},
{
"code": null,
"e": 3000,
"s": 2990,
"text": "Datatypes"
},
{
"code": null,
"e": 3320,
"s": 3000,
"text": "# Python # Rtype() class()type(5) #int class(5) #numerictype(5.5) #float class(5.5) #numerictype('Hello') #string class('Hello') #charactertype(True) #bool class(True) #logical"
},
{
"code": null,
"e": 3340,
"s": 3320,
"text": "Assigning variables"
},
{
"code": null,
"e": 3426,
"s": 3340,
"text": "# Python # Ra = 5 a <- 5"
},
{
"code": null,
"e": 3445,
"s": 3426,
"text": "Importing Packages"
},
{
"code": null,
"e": 3612,
"s": 3445,
"text": "# Python # Rpip install packagename install.packages(packagename)import packagename library(packagename)"
},
{
"code": null,
"e": 3669,
"s": 3612,
"text": "Math: It’s the same — math is the same in all languages!"
},
{
"code": null,
"e": 4156,
"s": 3669,
"text": "# Python # R+ - / * + - / *# The same goes for logical operators< #less than < #less than> #greater than > #greater than<= #less than or equal to <= #less than or equal to== #is equal to == #is equal to!= #is not equal to != #is not equal to& #and & #and| #or | #or"
},
{
"code": null,
"e": 4175,
"s": 4156,
"text": "Calling a function"
},
{
"code": null,
"e": 4340,
"s": 4175,
"text": "# Python # Rfunctionname(args, kwargs) functionname(args, kwargs)print(\"Hello World!\") print(\"Hello World!\")"
},
{
"code": null,
"e": 4361,
"s": 4340,
"text": "If / Else Statements"
},
{
"code": null,
"e": 4661,
"s": 4361,
"text": "# Python # Rif True: if (TRUE) { print('Hello World!') print('Go to sleep!')else: } else { print('Not true!') print('Not true!') }"
},
{
"code": null,
"e": 4756,
"s": 4661,
"text": "Lists & Vectors: This one is a bit of a stretch but I have found the connection to be helpful."
},
{
"code": null,
"e": 4892,
"s": 4756,
"text": "In python, a list is a mutable collection of ordered items of any datatype. Indexing a list in Python starts at 0 and is not inclusive."
},
{
"code": null,
"e": 5021,
"s": 4892,
"text": "In R, a vector is a a mutable collection of ordered items of the same type. Indexing a vector in R starts at 1 and is inclusive."
},
{
"code": null,
"e": 5386,
"s": 5021,
"text": "# Python # Rls = [1, 'a', 3, False] vc <- c(1, 2, 3, 4)# Python indexing starts at 0, R indexing starts at 1b = ls[0] b = vc[1]print(b) #returns 1 print(b) #returns 1c = ls[0:1] c = vc[1:2]print(c) #returns 1 print(c) #returns 1, 2"
},
{
"code": null,
"e": 5396,
"s": 5386,
"text": "For loops"
},
{
"code": null,
"e": 5542,
"s": 5396,
"text": "# Python # Rfor i in range(2, 5): for(i in 1:10) { a = i a <- i }"
},
{
"code": null,
"e": 5761,
"s": 5542,
"text": "Both python and R offer simple and streamlined data manipulation, making them essential tools for data scientists. Both languages are equipped with packages that enable loading, cleaning, and processing of data frames."
},
{
"code": null,
"e": 5880,
"s": 5761,
"text": "In python, pandas is the most common library used for loading and manipulating data frames using the DataFrame object."
},
{
"code": null,
"e": 5990,
"s": 5880,
"text": "In R, tidyverse is a similar library that enables simple data frame manipulation using the data.frame object."
},
{
"code": null,
"e": 6183,
"s": 5990,
"text": "Additionally, for streamlined code, both languages allow multiple operations to be piped together. In python, the . can be used to combine different operations while the %>% pipe is used in R."
},
{
"code": null,
"e": 6218,
"s": 6183,
"text": "Reading, Writing, and Viewing Data"
},
{
"code": null,
"e": 6629,
"s": 6218,
"text": "# Python # Rimport pandas as pd library(tidyverse)# load and view datadf = pd.read_csv('path.csv') df <- read_csv('path.csv')df.head() head(df)df.sample(100) sample(df, 100)df.describe() summary(df)# write to csvdf.to_csv('exp_path.csv') write_csv(df, 'exp_path.csv')"
},
{
"code": null,
"e": 6655,
"s": 6629,
"text": "Renaming & Adding Columns"
},
{
"code": null,
"e": 6929,
"s": 6655,
"text": "# Python # Rdf = df.rename({'a': 'b'}, axis=1) df %>% rename(a = b)df.newcol = [1, 2, 3] df$newcol <- c(1, 2, 3)df['newcol'] = [1, 2, 3] df %>% mutate(newcol = c(1, 2, 3))"
},
{
"code": null,
"e": 6961,
"s": 6929,
"text": "Selecting and Filtering Columns"
},
{
"code": null,
"e": 7123,
"s": 6961,
"text": "# Python # Rdf['col1', 'col2'] df %<% select(col1,col2)df.drop('col1') df %<% select(-col1)"
},
{
"code": null,
"e": 7138,
"s": 7123,
"text": "Filtering Rows"
},
{
"code": null,
"e": 7295,
"s": 7138,
"text": "# Python # Rdf.drop_duplicates() df %<% distinct()df[df.col > 3] df %<% filter(col > 3)"
},
{
"code": null,
"e": 7310,
"s": 7295,
"text": "Sorting Values"
},
{
"code": null,
"e": 7409,
"s": 7310,
"text": "# Python # Rdf.sort_values(by='column') arrange(df, column)"
},
{
"code": null,
"e": 7426,
"s": 7409,
"text": "Aggregating Data"
},
{
"code": null,
"e": 7611,
"s": 7426,
"text": "# Pythondf.groupby('col1')['agg_col').agg(['mean()']).reset_index()# Rdf %>% group_by(col1) %>% summarize(mean = mean(agg_col, na.rm=TRUE)) %>% ungroup() #if resetting index"
},
{
"code": null,
"e": 7635,
"s": 7611,
"text": "Aggregating With Filter"
},
{
"code": null,
"e": 7758,
"s": 7635,
"text": "# Pythondf.groupby('col1').filter(lambda x: x.col2.mean() > 10)# Rdf %>% group_by(col1) %>% filter(mean(col2) >10)"
},
{
"code": null,
"e": 7778,
"s": 7758,
"text": "Merging Data Frames"
},
{
"code": null,
"e": 7898,
"s": 7778,
"text": "# Pythonpd.merge(df1, df2, left_on=\"df1_col\", right_on=\"df2_col\")# Rmerge(df1, df2, by.df1=\"df1_col\", by.df2=\"df2_col\")"
},
{
"code": null,
"e": 8148,
"s": 7898,
"text": "The above examples are a starting point for creating mental parallels between Python and R. While most data scientists will lean towards one language or the other, being comfortable in both enables you to leverage the tools that best fit your needs."
}
] |
Python String startswith() Method
|
Python string method startswith() checks whether string starts with str, optionally restricting the matching with the given indices start and end.
Following is the syntax for startswith() method −
str.startswith(str, beg=0,end=len(string));
str − This is the string to be checked.
beg − This is the optional parameter to set the start index of the matching boundary.
end − This is the optional parameter to set the end index of the matching boundary.
This method returns true if found matching string otherwise false.
The following example shows the usage of startswith() method.
#!/usr/bin/python
str = "this is string example....wow!!!";
print str.startswith( 'this' )
print str.startswith( 'is', 2, 4 )
print str.startswith( 'this', 2, 4 )
When we run the above program, it produces the following result −
True
True
False
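The example above uses Python 2 print statements; on Python 3 the same checks read as follows (a minimal sketch producing the same output):
s = "this is string example....wow!!!"
print(s.startswith('this'))         # True
print(s.startswith('is', 2, 4))     # True
print(s.startswith('this', 2, 4))   # False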
|
[
{
"code": null,
"e": 2392,
"s": 2244,
"text": "Python string method startswith() checks whether string starts with str, optionally restricting the matching with the given indices start and end."
},
{
"code": null,
"e": 2442,
"s": 2392,
"text": "Following is the syntax for startswith() method −"
},
{
"code": null,
"e": 2487,
"s": 2442,
"text": "str.startswith(str, beg=0,end=len(string));\n"
},
{
"code": null,
"e": 2527,
"s": 2487,
"text": "str − This is the string to be checked."
},
{
"code": null,
"e": 2567,
"s": 2527,
"text": "str − This is the string to be checked."
},
{
"code": null,
"e": 2649,
"s": 2567,
"text": "beg − This is the optional parameter to set start index of the matching boundary."
},
{
"code": null,
"e": 2731,
"s": 2649,
"text": "beg − This is the optional parameter to set start index of the matching boundary."
},
{
"code": null,
"e": 2813,
"s": 2731,
"text": "end − This is the optional parameter to end start index of the matching boundary."
},
{
"code": null,
"e": 2895,
"s": 2813,
"text": "end − This is the optional parameter to end start index of the matching boundary."
},
{
"code": null,
"e": 2962,
"s": 2895,
"text": "This method returns true if found matching string otherwise false."
},
{
"code": null,
"e": 3024,
"s": 2962,
"text": "The following example shows the usage of startswith() method."
},
{
"code": null,
"e": 3188,
"s": 3024,
"text": "#!/usr/bin/python\n\nstr = \"this is string example....wow!!!\";\nprint str.startswith( 'this' )\nprint str.startswith( 'is', 2, 4 )\nprint str.startswith( 'this', 2, 4 )"
},
{
"code": null,
"e": 3246,
"s": 3188,
"text": "When we run above program, it produces following result −"
},
{
"code": null,
"e": 3263,
"s": 3246,
"text": "True\nTrue\nFalse\n"
},
{
"code": null,
"e": 3300,
"s": 3263,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 3316,
"s": 3300,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 3349,
"s": 3316,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3368,
"s": 3349,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3403,
"s": 3368,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 3425,
"s": 3403,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3459,
"s": 3425,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3487,
"s": 3459,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3522,
"s": 3487,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3536,
"s": 3522,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3569,
"s": 3536,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3586,
"s": 3569,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 3593,
"s": 3586,
"text": " Print"
},
{
"code": null,
"e": 3604,
"s": 3593,
"text": " Add Notes"
}
] |
Operation Counts Method in Algorithm
|
There are different methods to estimate the cost of an algorithm. One of them is the operation count. We can estimate the time complexity of an algorithm by choosing one of its operations, such as add, subtract or compare, and checking how many times that operation is performed. The success of this method depends on our ability to identify the operations that contribute most to the time complexity.
Suppose we have an array of size n [0 to n - 1], and our algorithm finds the index of the largest element. We can estimate the cost by counting how many comparison operations are performed between elements of the array. We have to remember that we choose only one operation. In this algorithm there are a few more operations, like incrementing the iteration variable i or assigning values to index, but they are not considered in this case.
getMax(arr, n):
index := 0
max := arr[0]
for i in range 1 to n - 1, do
if arr[i] > max, then
max := arr[i]
index := i
end if
done
return index
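As a sketch of the same idea in Python (the pseudocode above is not tied to any language), we can instrument the algorithm so that only the chosen operation, the comparison, is counted:
def get_max_with_count(arr):
    comparisons = 0
    index = 0
    max_val = arr[0]
    for i in range(1, len(arr)):
        comparisons += 1          # the one operation we chose to count
        if arr[i] > max_val:
            max_val = arr[i]
            index = i
    return index, comparisons

idx, ops = get_max_with_count([3, 7, 2, 9, 4])
print(idx, ops)  # 3 4 -> always n - 1 comparisons for an array of size n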
To estimate the cost, we have to choose the operations that are performed the greatest number of times. Suppose we have a bubble sort algorithm and we count the swap operation. Then we have to keep in mind when the number of swaps will be at its maximum, because that worst case is what bounds the result of the analysis. A sketch of this is shown below.
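For example, a bubble sort that counts swaps reaches its maximum of n(n - 1)/2 swaps on a reverse-sorted input (a small illustrative sketch in Python):
def bubble_sort_swap_count(arr):
    a = list(arr)
    swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

print(bubble_sort_swap_count([5, 4, 3, 2, 1]))  # 10, i.e. n(n - 1)/2 for n = 5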
|
[
{
"code": null,
"e": 1477,
"s": 1062,
"text": "There are different methods to estimate the cost of some algorithm. One of them by using the operation count. We can estimate the time complexity of an algorithm by choosing one of different operations. These are like add, subtract etc. We have to check how many of these operations are done. The success of this method depends on our ability to identify the operations that contribute most of the time complexity."
},
{
"code": null,
"e": 1929,
"s": 1477,
"text": "Suppose we have an array, of size n [0 to n - 1]. Our algorithm will find the index of largest element. We can estimate the cost by counting number of comparison operation is performed between each pair of elements of the array. We have to remember that, we will choose only one operation. In this algorithm there are few more operations like increment of iteration variable i, or assign values for index etc. But they are not considered in this case."
},
{
"code": null,
"e": 2117,
"s": 1929,
"text": "getMax(arr, n):\n index := 0\n max := arr[0]\n for i in range 1 to n - 1, do\n if arr[i] > max, then\n max := arr[i]\n index := i\n end if\n done\n return index"
},
{
"code": null,
"e": 2401,
"s": 2117,
"text": "We have to choose those operations that are performed maximum amount of time to estimate the cost. Suppose we have one bubble sort algorithm, and we count the swap operation. Then we have to keep in mind that when it will be maximum. That will give us maximum result during analysis."
}
] |
VBScript InStrRev Function
|
The InStrRev Function returns the first occurrence of one string within another string. The Search happens from right to Left.
InStrRev(string1,string2[,start,[compare]])
String1, a Required Parameter. The string to be searched.
String2, a Required Parameter. The string expression being searched for within String1.
Start, an Optional Parameter. Specifies the starting position for the search. By default the search begins at the last character and moves from right to left.
Compare, an Optional Parameter. Specifies the string comparison to be used. It can take the below mentioned values −
0 = vbBinaryCompare - Performs Binary Comparison (Default)
1 = vbTextCompare - Performs Text Comparison
<!DOCTYPE html>
<html>
<body>
<script language = "vbscript" type = "text/vbscript">
var = "Microsoft VBScript"
document.write("Line 1 : " & InStrRev(var,"s",10) & "<br />")
document.write("Line 2 : " & InStrRev(var,"s",7) & "<br />")
document.write("Line 3 : " & InStrRev(var,"f",-1,1) & "<br />")
document.write("Line 4 : " & InStrRev(var,"t",5) & "<br />")
document.write("Line 5 : " & InStrRev(var,"i",7) & "<br />")
document.write("Line 6 : " & InStrRev(var,"i",7) & "<br />")
document.write("Line 7 : " & InStrRev(var,"VB",1))
</script>
</body>
</html>
When you save it as .html and execute it in Internet Explorer, then the above script will produce the following result −
Line 1 : 6
Line 2 : 6
Line 3 : 8
Line 4 : 0
Line 5 : 2
Line 6 : 2
Line 7 : 0
|
[
{
"code": null,
"e": 2207,
"s": 2080,
"text": "The InStrRev Function returns the first occurrence of one string within another string. The Search happens from right to Left."
},
{
"code": null,
"e": 2252,
"s": 2207,
"text": "InStrRev(string1,string2[,start,[compare]])\n"
},
{
"code": null,
"e": 2306,
"s": 2252,
"text": "String1, a Required Parameter. String to be searched."
},
{
"code": null,
"e": 2360,
"s": 2306,
"text": "String1, a Required Parameter. String to be searched."
},
{
"code": null,
"e": 2433,
"s": 2360,
"text": "String2, a Required Parameter. String against which String1 is searched."
},
{
"code": null,
"e": 2506,
"s": 2433,
"text": "String2, a Required Parameter. String against which String1 is searched."
},
{
"code": null,
"e": 2644,
"s": 2506,
"text": "Start, an Optional Parameter. Specifies the Starting position for the search. The Search begins at the first position from right to left."
},
{
"code": null,
"e": 2782,
"s": 2644,
"text": "Start, an Optional Parameter. Specifies the Starting position for the search. The Search begins at the first position from right to left."
},
{
"code": null,
"e": 3005,
"s": 2782,
"text": "Compare, an Optional Parameter. Specifies the String Comparison to be used. It can take the below mentioned values −\n\n0 = vbBinaryCompare - Performs Binary Comparison(Default)\n1 = vbTextCompare - Performs Text Comparison\n\n"
},
{
"code": null,
"e": 3122,
"s": 3005,
"text": "Compare, an Optional Parameter. Specifies the String Comparison to be used. It can take the below mentioned values −"
},
{
"code": null,
"e": 3180,
"s": 3122,
"text": "0 = vbBinaryCompare - Performs Binary Comparison(Default)"
},
{
"code": null,
"e": 3238,
"s": 3180,
"text": "0 = vbBinaryCompare - Performs Binary Comparison(Default)"
},
{
"code": null,
"e": 3283,
"s": 3238,
"text": "1 = vbTextCompare - Performs Text Comparison"
},
{
"code": null,
"e": 3328,
"s": 3283,
"text": "1 = vbTextCompare - Performs Text Comparison"
},
{
"code": null,
"e": 3976,
"s": 3328,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <script language = \"vbscript\" type = \"text/vbscript\">\n var = \"Microsoft VBScript\"\n document.write(\"Line 1 : \" & InStrRev(var,\"s\",10) & \"<br />\")\n document.write(\"Line 2 : \" & InStrRev(var,\"s\",7) & \"<br />\")\n document.write(\"Line 3 : \" & InStrRev(var,\"f\",-1,1) & \"<br />\")\n document.write(\"Line 4 : \" & InStrRev(var,\"t\",5) & \"<br />\")\n document.write(\"Line 5 : \" & InStrRev(var,\"i\",7) & \"<br />\")\n document.write(\"Line 6 : \" & InStrRev(var,\"i\",7) & \"<br />\")\n document.write(\"Line 7 : \" & InStrRev(var,\"VB\",1))\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 4097,
"s": 3976,
"text": "When you save it as .html and execute it in Internet Explorer, then the above script will produce the following result −"
},
{
"code": null,
"e": 4176,
"s": 4097,
"text": "Line 1 : 6\nLine 2 : 6\nLine 3 : 8\nLine 4 : 0\nLine 5 : 2\nLine 6 : 2\nLine 7 : 0 \n"
},
{
"code": null,
"e": 4209,
"s": 4176,
"text": "\n 63 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 4226,
"s": 4209,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 4233,
"s": 4226,
"text": " Print"
},
{
"code": null,
"e": 4244,
"s": 4233,
"text": " Add Notes"
}
] |
The Gamification Of Fitbit. “Anyone can look for fashion in a... | by Matt.0 | Towards Data Science
|
“Anyone can look for fashion in a boutique or history in a museum. The creative explorer looks for history in a hardware store and fashion in an airport” — Robert Wieder
You may, or may not, have heard of the term gamification but chances are you’ve experienced it.
Gamification is the application of game-design elements, and game principles, in non-game contexts. The idea is, if you use elements of games, like linking rules and rewards into a feedback system, you can make (almost) any activity motivating and fun.
Gamification is the concept behind eLearning. In elementary school I remember all the students wanted to play The Oregon Trail in computer class. I also remember another game where you had to solve math problems before something hit the floor. Okay, maybe it wasn’t the most thrilling introduction to gamification but I remember it nonetheless.
At some point in my career, I got tired of using nano and decided I wanted to try to learn Vim.
It was then that I discovered two very enjoyable examples of gamification:
Vim Adventures is kind of like Zelda for Gameboy where you have to move through the environment and solve riddles - except with Vim commands! You can watch it being played on Twitch here.
shortcutFoo teaches you shortcuts for Vim, Emacs, Command Line, Regex etc. via interval training, which is essentially spaced repetition. This helps you memorize shortcuts more efficiently.
Today, I enjoy eLearning-gamification on platforms like DuoLingo, and DataCamp.
I’ve also recently started to participate in a Kaggle competition, “PUBG Finish Placement Prediction”. Kaggle is a Google owned hangout for data science enthusiasts where they can use machine learning to solve predictive analytics problems for cash and clout. Similar to chess there are so-called Kaggle Grandmasters.
Our laboratory studies perinatal influences on the biological embedding of early adversity on mental health outcomes. We combine genetic, epigenetic and epidemiological approaches to identify pregnant women whose offspring may potentially be at risk for adverse mental health outcomes.
My supervisor approached me with a challenge; how feasible would it be to access biometric data from 200 Fitbits?
So I bought myself a Fitbit Charge2 fitness tracker and hit the gym!
At some point I think we both realized that this project was going to be a big undertaking. Perhaps R isn’t really intended to do large scale real-time data management from API services. It’s great for static files, or static endpoints, but if you’re working with multiple participants a dedicated solution like Fitabase may work the best - or so they claim.
Nonetheless, I wanted to try out a bunch of cool new things in R like making a personal website using blogdown, using gganimate with Rokemon, accessing the Fitbit API with httr as well as adding a background image with some custom CSS/HTML. Is there possibly a better way to gamify my leaRning curve - I think not.
The following was my attempt at e-learning gamification for R.
I used the blogdown package to allow me to write blog posts as R Markdown documents, knitting everything to a nice neat static website that I can push online. It was a nice opportunity to learn a bit about pandoc, Hugo, CSS/HTML lurking beneath the server side code. I decided to go with the Academic theme for Hugo, pull in as much data as I could from the Fitbit API, clean it up, and then perform some exploratory data analysis. In the process, I generated some cool animated sprites and use video game inspired visualizations.
Fitbit uses an OAuth 2.0 access token for making HTTP requests to the Fitbit API. You need to set up an account to use the API and include your token in R. Instead of reading the FITBIT DEV HELP section, I would direct the reader to better, more concise instructions here.
Now that you have an account we’re ready to do stuff in R.
Set your token up:
# You Found A Secret Area!
token = "yourToken"
I had never made an HTTP request before and although the process is officially documented here it can be a tad overwhelming. Therefore, I initially resorted to using an R package built to access the Fitbit API under the hood, called fitbitr.
Unfortunately this would limit me to only accessing some basic user information, heart rate and step count data.
The first function in this package sends a GET request to the Get Profile resource URL.
# Extracting Resources
# Get userInfo
user_info <- fitbitr::getUserInfo(token)

# Hailing a Chocobo!
# What is my walking stride length (in centimeters)?
strideLengthWalking <- user_info$strideLengthWalking
My walking stride length is 68.5 cm.
Stride length is measured from heel to heel and determines how far you walk with each step. On average, a man’s walking stride length is 2.5 feet, or 30 inches, while a woman’s average stride length is 2.2 feet, or 26.4 inches, according to this report.
# Hitting 80 MPH
# What is my running stride length?
strideLengthRunning <- user_info$strideLengthRunning
My running stride length is 105.5 cm.
By default, Fitbit uses your sex and height to estimate your stride length, which could potentially be inaccurate.
# Looking for the fourth chaos emerald
# What is my average daily step count?
averageDailySteps <- user_info$averageDailySteps
My average daily step count is 14,214.
Considering that the commonly recommended daily step count is 10,000, I’d say that’s acceptable. That being said, there’s always room for improvement.
I’m going to grab a week’s worth of data for a very preliminary EDA.
# Smashing buttons
days <- c("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday")

monday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min",
                              date = "2018-08-20", startTime = "00:00", endTime = "23:59")
monday_heart %<>% mutate(date = "2018-08-20")
monday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-20")
monday_steps %<>% mutate(date = "2018-08-20")
monday <- monday_heart %>% full_join(monday_steps)
monday %<>% mutate(week_date = "Monday")
monday %<>% mutate(day_of_week = "1")

tuesday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min", date = "2018-08-21")
tuesday_heart %<>% mutate(date = "2018-08-21")
tuesday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-21")
tuesday_steps %<>% mutate(date = "2018-08-21")
tuesday <- tuesday_heart %>% full_join(tuesday_steps)
tuesday %<>% mutate(week_date = "Tuesday")
tuesday %<>% mutate(day_of_week = "2")

wednesday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min", date = "2018-08-22")
wednesday_heart %<>% mutate(date = "2018-08-22")
wednesday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-22")
wednesday_steps %<>% mutate(date = "2018-08-22")
wednesday <- wednesday_heart %>% full_join(wednesday_steps)
wednesday %<>% mutate(week_date = "Wednesday")
wednesday %<>% mutate(day_of_week = "3")

thursday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min", date = "2018-08-23")
thursday_heart %<>% mutate(date = "2018-08-23")
thursday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-23")
thursday_steps %<>% mutate(date = "2018-08-23")
thursday <- thursday_heart %>% full_join(thursday_steps)
thursday %<>% mutate(week_date = "Thursday")
thursday %<>% mutate(day_of_week = "4")

friday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min", date = "2018-08-24")
friday_heart %<>% mutate(date = "2018-08-24")
friday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-24")
friday_steps %<>% mutate(date = "2018-08-24")
friday <- friday_heart %>% full_join(friday_steps)
friday %<>% mutate(week_date = "Friday")
friday %<>% mutate(day_of_week = "5")

saturday_heart <- getTimeSeries(token, type = "heart", activityDetail = "1min", date = "2018-08-25")
saturday_heart %<>% mutate(date = "2018-08-25")
saturday_steps <- getTimeSeries(token, type = "steps", activityDetail = "1min", date = "2018-08-25")
saturday_steps %<>% mutate(date = "2018-08-25")
saturday <- saturday_heart %>% full_join(saturday_steps)
saturday %<>% mutate(week_date = "Saturday")
saturday %<>% mutate(day_of_week = "6")

week <- monday %>%
  bind_rows(tuesday) %>%
  bind_rows(wednesday) %>%
  bind_rows(thursday) %>%
  bind_rows(friday) %>%
  bind_rows(saturday)

week$date <- as.Date(week$date)
# Opening pod bay doors
week %>%
  group_by(type) %>%
  summarise(
    total = sum(value),
    minimum = min(value),
    mean = mean(value),
    median = median(value),
    maximum = max(value),
    max_time = max(time)
  ) %>%
  knitr::kable(digits = 3) %>%
  kable_styling(full_width = F)
Since this is a post about gamification I decided to do something fun with my exploratory data visualizations. I wanted to use the Rokemon package which allows me to set the theme of ggplot2 (and ggplot2 extensions) to Game Boy and Game Boy Advance themes! When convenient, I’ve combined plots with cowplot.
Let’s take a quick look at the relationship and distribution of heart rate and step count.
# Doing the thing...
g <- week %>%
  spread(type, value) %>%
  rename(hear_rate = "heart rate") %>%
  na.omit() %>%
  ggplot(aes(steps, hear_rate)) +
  geom_point() +
  geom_smooth(method = "lm", se = F, colour = "#DE7243")

gb <- g + theme_gameboy()
gba <- g + theme_gba()

plot_grid(gb, gba, labels = c("", ""), align = "h")
Alternatively, we could get a better look at the data by adding marginal density plots to the scatterplots with the ggMarginal() function from the ggExtra package.
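Something like the sketch below would do it. This is not from the original post; it assumes the week data frame and Rokemon theme built above, plus the ggExtra package.

# Sketch: scatterplot with marginal densities via ggExtra::ggMarginal()
library(ggExtra)

p <- week %>%
  spread(type, value) %>%
  rename(heart_rate = "heart rate") %>%
  na.omit() %>%
  ggplot(aes(steps, heart_rate)) +
  geom_point() +
  theme_gameboy()

ggMarginal(p, type = "density", fill = "#DE7243")  # density plots on both margins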
Let’s take a quick look at the distribution of the continuous variables to get a better idea than the mean and median give us.
# Loading..... Wait, what else were you expecting?
annotations_steps <- data_frame(
  x = c(45, 100, 165),
  y = c(0.01, 0.01, 0.01),
  label = c('walking pace', 'brisk walking pace', 'running pace'),
  type = c('steps', 'steps', 'steps')
)

g <- week %>%
  ggplot(aes(value)) +
  geom_density(fill = "#DE7243") +
  geom_text(data = annotations_steps, aes(x = x, y = y, label = label),
            angle = -30, hjust = 1) +
  facet_grid(. ~ type, scales = 'free_x') +
  labs(title = 'Heart Rate and Steps-per-minute over two months',
       subtitle = 'Data gathered from Fitbit Charge2')

g + theme_gameboy()
g + theme_gba()
Heart rate is a little right-skewed, probably due to sleep and sedentary work. Similarly, for step count you see only a small bump just under brisk walking pace, from when I skateboarded to work.
This week I didn’t work out, so I thought I’d at least look at when I was on my way to work. The figure below shows heart rate per minute as a line and the number of steps per minute as bars.
# You are carrying too much to be able to run
between_six_nine <- function(time) time > 7*60*60 & time < 10*60*60
is_weekday <- function(day_of_week) day_of_week %in% 1:6

week$week_date_f <- factor(week$week_date,
                           levels = c("Monday", "Tuesday", "Wednesday",
                                      "Thursday", "Friday", "Saturday"))

g <- week %>%
  filter(between_six_nine(time) & is_weekday(day_of_week)) %>%
  spread(type, value) %>%
  ggplot(aes(x = time)) +
  geom_bar(aes(y = steps), color = '#DE7243', alpha = 0.3, stat = 'identity') +
  geom_line(aes(y = `heart rate`), color = '#E3F24D', size = 0.8) +
  facet_grid(~week_date_f) +
  scale_x_continuous(breaks = c(27000, 30000, 33000, 36000),
                     labels = c("7am", "8am", "9am", "10am"))

g + theme_gameboy()
g + theme_gba()
My activity has been pretty much the same all week since I skateboard to work every morning.
# 60% of the time, it loads ALL the time
step_counts <- week %>%
  filter(type == 'steps') %>%
  group_by(day_of_week) %>%
  summarise(
    type = last(type),
    avg_num_steps = sprintf('avg num steps: %3.0f', sum(value)/52)
  )

g <- week %>%
  ggplot(aes(x = value, y = fct_rev(factor(day_of_week)))) +
  geom_density_ridges(scale = 2.5, fill = "#DE7243") +
  geom_text(data = step_counts, nudge_y = 0.15, hjust = 0,
            aes(x = 85, y = fct_rev(factor(day_of_week)), label = avg_num_steps)) +
  scale_y_discrete(breaks = 1:6,
                   labels = c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday")) +
  facet_grid(. ~ type, scales = "free") +
  labs(x = '', y = "Day of the Week")

g + theme_gameboy()
g + theme_gba()
The distribution of steps per minute was pretty constant because, as I said, I didn’t work out; it likely just reflects me shuffling off to get tea.
It looks like Monday was the day I got my heart rate up the most; the bimodal peak is probably from when I was running around looking for a rental property.
Eventually, I found an excellent tutorial by obrl-soil which introduced me to the httr package and gave me the confidence I needed to peruse the Fitbit DEV web API reference. Now I was able to gain access to far more sources of data.
A brief overview of what data is available from the Fitbit API:
# Your Boko Club is badly damaged# make a kable table for data you can access from Fitbit APIdt01 <- data.frame(Scope = c("activity", "heartrate", "location", "nutrition", "profile", "settings", "sleep", "social", "weight"), Description = c("The activity scope includes activity data and exercise log related features, such as steps, distance, calories burned, and active minutes", "The heartrate scope includes the continuous heart rate data and related analysis", "The location scope includes the GPS and other location data", "The nutrition scope includes calorie consumption and nutrition related features, such as food/water logging, goals, and plans", "The profile scope is the basic user information", "The settings scope includes user account and device settings, such as alarms", "The sleep scope includes sleep logs and related sleep analysis", "The social scope includes friend-related features, such as friend list, invitations, and leaderboard", "The weight scope includes weight and related information, such as body mass index, body fat percentage, and goals"))dt01 %>% kable("html") %>% kable_styling(full_width = F) %>% column_spec(1, bold = T, border_right = T) %>% column_spec(2, width = "30em", background = "#E3F24D")
What are the units of measurement?
# Loading Cutscenes You Can't Skip
# make a kable table of measurement information
dt03 <- data.frame(
  unitType = c("duration", "distance", "elevation", "height", "weight",
               "body measurements", "liquids", "blood glucose"),
  unit = c("milliseconds", "kilometers", "meters", "centimeters", "kilograms",
           "centimeters", "milliliters", "millimoles per liter")
)

dt03 %>%
  kable("html") %>%
  kable_styling(full_width = F) %>%
  column_spec(1, bold = T, border_right = T) %>%
  column_spec(2, width = "30em", background = "#E3F24D")
Define a function for turning a json list into a dataframe.
# Inserting last-minute subroutines into program...
# json-as-list to dataframe (for simple cases without nesting!)
jsonlist_to_df <- function(data = NULL) {
  purrr::transpose(data) %>%
    purrr::map(., unlist) %>%
    as_tibble(., stringsAsFactors = FALSE)
}
GET request to retrieve minute-by-minute heart rate data for my 10km run.
# Preparing for the mini-boss
get_workout <- function(date = NULL, start_time = NULL, end_time = NULL,
                        token = Sys.getenv('FITB_AUTH')) {
  GET(url = paste0('https://api.fitbit.com/1/user/-/activities/heart/date/',
                   date, '/1d/1min/time/', start_time, '/', end_time, '.json'),
      add_headers(Authorization = paste0("Bearer ", token)))
}

# Get the workout for my 10Km run
got_workout <- get_workout(date = '2018-10-21', start_time = '09:29', end_time = '10:24')
workout <- content(got_workout)

# summary
workout[['activities-heart']][[1]][['heartRateZones']] <-
  jsonlist_to_df(workout[['activities-heart']][[1]][['heartRateZones']])

# the dataset
workout[['activities-heart-intraday']][['dataset']] <-
  jsonlist_to_df(workout[['activities-heart-intraday']][['dataset']])

# format the time
workout$`activities-heart-intraday`$dataset$time <-
  as.POSIXlt(workout$`activities-heart-intraday`$dataset$time, format = '%H:%M:%S')
lubridate::date(workout$`activities-heart-intraday`$dataset$time) <- '2018-10-21'

# find time zone
# grep("Canada", OlsonNames(), value=TRUE)
lubridate::tz(workout$`activities-heart-intraday`$dataset$time) <- 'Canada/Eastern'
Let’s take a look at the summary for my 10Km run:
# Farming Hell Cows
workout$`activities-heart`[[1]]$heartRateZones %>%
  kable() %>%
  kable_styling(full_width = F)
obrl-soil used the MyZone Efforts Points (MEPS), which are calculated minute-by-minute as a percentage of max heart rate. They measure the effort put in - the more points, the better. Another example of gamification in action.
# Looting a chest
meps_max <- function(age = NULL) { 207 - (0.7 * age) }
Mine is 186 (207 - 0.7 * 30).
Now we create a tribble with 4 heart rate ranges, giving the lower and upper bounds as fractions of max heart rate, and use mutate() with the function above to calculate my own lower and upper bounds for each range.
# Taking the hobbits to Isengard
my_MEPS <- tribble(~MEPS, ~hr_range, ~hr_lo, ~hr_hi,
                   1, '50-59%', 0.50, 0.59,
                   2, '60-69%', 0.60, 0.69,
                   3, '70-79%', 0.70, 0.79,
                   4, '>=80',   0.80, 1.00) %>%
  mutate(my_hr_low = floor(meps_max(30) * hr_lo),
         my_hr_hi = ceiling(meps_max(30) * hr_hi))

my_MEPS
## # A tibble: 4 x 6
##    MEPS hr_range hr_lo hr_hi my_hr_low my_hr_hi
##   <dbl> <chr>    <dbl> <dbl>     <dbl>    <dbl>
## 1     1 50-59%     0.5  0.59        93      110
## 2     2 60-69%     0.6  0.69       111      129
## 3     3 70-79%     0.7  0.79       130      147
## 4     4 >=80       0.8  1          148      186
With the equation now defined let’s calculate my total MEPS:
# Checkpoint!
mep <- mutate(workout$`activities-heart-intraday`$dataset,
              meps = case_when(value >= 146 ~ 4,
                               value >= 128 ~ 3,
                               value >= 109 ~ 2,
                               value >= 91 ~ 1,
                               TRUE ~ 0)) %>%
  summarise("Total MEPS" = sum(meps))
Wow it’s 216!
I’m not sure exactly what that means, but apparently the maximum possible MEPS in a 42-minute workout is 168, and since I ran this 10Km in 54:35, I guess that’s good?
I’d like to post a sub-50-minute time on my next 10Km run, but I’m not sure whether I should be aiming for a greater percentage of peak-heart-rate minutes or not - guess I will need to look into this.
Let’s examine my sleep patterns last night.
# Resting at Campfire
get_sleep <- function(startDate = NULL, endDate = NULL, token = Sys.getenv('FITB_AUTH')){
  GET(url = paste0('https://api.fitbit.com/1.2/user/-/sleep/date/',
                   startDate, "/", endDate, '.json'),
      add_headers(Authorization = paste0("Bearer ", token)))
}

# make sure that there is data for those days otherwise it tosses an error
got_sleep <- get_sleep(startDate = "2018-08-21", endDate = "2018-08-22")
sleep <- content(got_sleep)

dateRange <- seq(as.Date("2018-08-21"), as.Date("2018-08-22"), "days")

sleep_pattern <- NULL
for(j in 1:length(dateRange)){
  sleep[['sleep']][[j]][['levels']][['data']] <-
    jsonlist_to_df(sleep[['sleep']][[j]][['levels']][['data']])
  tmp <- sleep$sleep[[j]]$levels$`data`
  sleep_pattern <- bind_rows(sleep_pattern, tmp)
}
Okay now that the data munging is complete, let’s look at my sleep pattern.
# Now entering... The Twilight Zone
g <- sleep_pattern %>%
  group_by(level, seconds) %>%
  summarise() %>%
  summarise(seconds = sum(seconds)) %>%
  mutate(percentage = seconds/sum(seconds)) %>%
  ggplot(aes(x = "", y = percentage, fill = c("S", "A", "R"))) +
  geom_bar(width = 1, stat = "identity") +
  theme(axis.text.y = element_blank(),
        axis.text.x = element_blank(),
        axis.line = element_blank(),
        plot.caption = element_text(size = 5),
        plot.title = element_blank()) +
  labs(fill = "class", x = NULL, y = NULL, title = "Sleep stages",
       caption = "A = Awake; R = Restless; S = Asleep") +
  coord_polar(theta = "y", start = 0) +
  scale_fill_manual(values = c("#FF3F3F", "#2BD1FC", "#BA90A6"))

g + theme_gameboy()
g + theme_gba()
A pie chart is probably not the best way to show this data. Let’s visualize the distribution with a box plot.
# Entering Cheat Codes!
g <- ggplot(sleep_pattern, aes(y = log10(seconds), x = level)) +
  geom_boxplot(color = "#031300", fill = '#152403') +
  labs(x = "", title = 'Sleep patterns over a month',
       subtitle = 'Data gathered from Fitbit Charge2') +
  theme(legend.position = "none")

g + theme_gameboy()
g + theme_gba()
An even better way to visualize the distribution would be to use a violin plot with the raw data points overlaid.
# Neglecting Sleep...
g <- ggplot(sleep_pattern, aes(y = log10(seconds), x = level)) +
  geom_violin(color = "#031300", fill = '#152403') +
  geom_point() +
  labs(x = "", title = 'Sleep patterns over a month',
       subtitle = 'Data gathered from Fitbit Charge2') +
  theme(legend.position = "none")

g + theme_gameboy()
g + theme_gba()
You can make API requests for various periods from the Fitbit Activity and Exercise Logs, but since I’ve only had mine a couple of months I’ll use the 3m period.
I will also need to trim off any days which are in the future, otherwise they’ll appear as 0 calories in the figures. It’s best to use the Sys.Date() function rather than hardcoding the date when doing EDA, making a Shiny app, or parameterizing an R Markdown file. This way you can explore different time periods without anything breaking.
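As a sketch of what that looks like, the hardcoded dates further down could be swapped out like this (get_calories() and activity_df are defined below; shown here only to illustrate the Sys.Date() idea):

# Sketch: use today's date rather than hardcoding one
got_calories <- get_calories(baseDate = Sys.Date(), period = "3m")
activity_df %<>% filter(dateTime <= Sys.Date())   # trim anything in the future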
I cannot remember when I started wearing my Fitbit but we can figure that out with the following code:
# ULTIMATE IS READY!
# Query how many days since you've had fitbit for
inception <- user_info$memberSince
I’ve had my Fitbit since 2018–08–20.
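To actually answer the “how many days” question in that comment, here’s a quick sketch (assuming inception comes back as a "YYYY-MM-DD" string, as above):

# Sketch: days since the membership start date returned by the API
days_with_fitbit <- as.numeric(Sys.Date() - as.Date(inception))
days_with_fitbit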
Let’s gather data from late August, when I started wearing the device, until November 6th, 2018.
# Catching them all!### Caloriesget_calories <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/calories/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_calories <- get_calories(baseDate = "2018-11-20", period = "3m")calories <- content(got_calories)# turn into dfcalories[['activities-calories']] <- jsonlist_to_df(calories[['activities-calories']])# assign easy object and renamecalories <- calories[['activities-calories']]colnames(calories) <- c("dateTime", "calories")### STEPSget_steps <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/steps/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_steps <- get_steps(baseDate = "2018-11-20", period = "3m")steps <- content(got_steps)# turn into dfsteps[['activities-steps']] <- jsonlist_to_df(steps[['activities-steps']])# assign easy object and renamesteps <- steps[['activities-steps']]colnames(steps) <- c("dateTime", "steps")### DISTANCEget_distance <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/distance/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_distance <- get_distance(baseDate = "2018-11-20", period = "3m")distance <- content(got_distance)# turn into dfdistance[['activities-distance']] <- jsonlist_to_df(distance[['activities-distance']])# assign easy object and renamedistance <- distance[['activities-distance']]colnames(distance) <- c("dateTime", "distance")### FLOORSget_floors <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/floors/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_floors <- get_floors(baseDate = "2018-11-20", period = "3m")floors <- content(got_floors)# turn into dffloors[['activities-floors']] <- jsonlist_to_df(floors[['activities-floors']])# assign easy object and renamefloors <- floors[['activities-floors']]colnames(floors) <- c("dateTime", "floors")### ELEVATIONget_elevation <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/elevation/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_elevation <- get_elevation(baseDate = "2018-11-20", period = "3m")elevation <- content(got_elevation)# turn into dfelevation[['activities-elevation']] <- jsonlist_to_df(elevation[['activities-elevation']])# assign easy object and renameelevation <- elevation[['activities-elevation']]colnames(elevation) <- c("dateTime", "elevation")### minutesSedentaryget_minutesSedentary <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/minutesSedentary/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_minutesSedentary <- get_minutesSedentary(baseDate = "2018-11-20", period = "3m")minutesSedentary <- content(got_minutesSedentary)# turn into dfminutesSedentary[['activities-minutesSedentary']] <- jsonlist_to_df(minutesSedentary[['activities-minutesSedentary']])# assign easy object and renameminutesSedentary <- 
minutesSedentary[['activities-minutesSedentary']]colnames(minutesSedentary) <- c("dateTime", "minutesSedentary")### minutesLightlyActiveget_minutesLightlyActive <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/minutesLightlyActive/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_minutesLightlyActive <- get_minutesLightlyActive(baseDate = "2018-11-20", period = "3m")minutesLightlyActive <- content(got_minutesLightlyActive)# turn into dfminutesLightlyActive[['activities-minutesLightlyActive']] <- jsonlist_to_df(minutesLightlyActive[['activities-minutesLightlyActive']])# assign easy object and renameminutesLightlyActive <- minutesLightlyActive[['activities-minutesLightlyActive']]colnames(minutesLightlyActive) <- c("dateTime", "minutesLightlyActive")### minutesFairlyActiveget_minutesFairlyActive <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/minutesFairlyActive/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_minutesFairlyActive <- get_minutesFairlyActive(baseDate = "2018-11-20", period = "3m")minutesFairlyActive <- content(got_minutesFairlyActive)# turn into dfminutesFairlyActive[['activities-minutesFairlyActive']] <- jsonlist_to_df(minutesFairlyActive[['activities-minutesFairlyActive']])# assign easy object and renameminutesFairlyActive <- minutesFairlyActive[['activities-minutesFairlyActive']]colnames(minutesFairlyActive) <- c("dateTime", "minutesFairlyActive")### minutesVeryActiveget_minutesVeryActive <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/minutesVeryActive/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_minutesVeryActive <- get_minutesVeryActive(baseDate = "2018-11-20", period = "3m")minutesVeryActive <- content(got_minutesVeryActive)# turn into dfminutesVeryActive[['activities-minutesVeryActive']] <- jsonlist_to_df(minutesVeryActive[['activities-minutesVeryActive']])# assign easy object and renameminutesVeryActive <- minutesVeryActive[['activities-minutesVeryActive']]colnames(minutesVeryActive) <- c("dateTime", "minutesVeryActive")### activityCaloriesget_activityCalories <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/activityCalories/date/', baseDate, "/", period, '.json'), add_headers(Authorization = paste0("Bearer ", token)))} got_activityCalories <- get_activityCalories(baseDate = "2018-11-20", period = "3m")activityCalories <- content(got_activityCalories)# turn into dfactivityCalories[['activities-activityCalories']] <- jsonlist_to_df(activityCalories[['activities-activityCalories']])# assign easy object and renameactivityCalories <- activityCalories[['activities-activityCalories']]colnames(activityCalories) <- c("dateTime", "activityCalories")##### Join multiple dataframes with purrr::reduce and dplyr::left_joinactivity_df <- list(calories, steps, distance, floors, elevation, activityCalories, minutesSedentary, minutesLightlyActive, minutesFairlyActive, minutesVeryActive) %>% purrr::reduce(left_join, by = "dateTime")# Add the dateTime to this dataframeactivity_df$dateTime <- as.Date(activity_df$dateTime)names <- c(2:ncol(activity_df))activity_df[,names] <- 
lapply(activity_df[,names], as.numeric)# trim off any days that haven't happened yetactivity_df %<>% filter(dateTime <= "2018-11-06")
# We're giving it all she's got!get_frequentActivities <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/recent.json'), add_headers(Authorization = paste0("Bearer ", token)))}got_frequentActivities <- get_frequentActivities(baseDate = "2018-11-20", period = "3m")frequentActivities <- content(got_frequentActivities)# This is a list object let's look at how many frequent activities are loggedlength(frequentActivities)## [1] 5# Take a look at the object with str()str(frequentActivities)## List of 5## $ :List of 6## ..$ activityId : int 2131## ..$ calories : int 0## ..$ description: chr ""## ..$ distance : int 0## ..$ duration : int 3038000## ..$ name : chr "Weights"## $ :List of 6## ..$ activityId : int 90009## ..$ calories : int 0## ..$ description: chr "Running - 5 mph (12 min/mile)"## ..$ distance : int 0## ..$ duration : int 1767000## ..$ name : chr "Run"## $ :List of 6## ..$ activityId : int 90013## ..$ calories : int 0## ..$ description: chr "Walking less than 2 mph, strolling very slowly"## ..$ distance : int 0## ..$ duration : int 2407000## ..$ name : chr "Walk"## $ :List of 6## ..$ activityId : int 90001## ..$ calories : int 0## ..$ description: chr "Very Leisurely - Less than 10 mph"## ..$ distance : int 0## ..$ duration : int 4236000## ..$ name : chr "Bike"## $ :List of 6## ..$ activityId : int 15000## ..$ calories : int 0## ..$ description: chr ""## ..$ distance : int 0## ..$ duration : int 1229000## ..$ name : chr "Sport"
I would never have considered myself a Darwin or a Thoreau but apparently strolling very slowly is my favorite activity in terms of time spent.
You can see that my Fitbit has also logged times for Weights, Sports and Biking which is likely from when I’ve manually logged my activities. There’s a possibility that Fitbit is registering Biking for when I skateboard.
Previously, I had always used the corrplot package to create correlation plots; however, it doesn’t play nicely with ggplot, meaning you cannot add the Game Boy themes easily. Nonetheless, I was able to give it a retro-looking palette with some minor tweaking.
Since I had two colors in mind from the original Game Boy, and knew their hex codes, I was able to generate a palette from this website.
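If you’d rather not leave R, colorRampPalette() can interpolate the same kind of ramp between two hex codes. A sketch using the lightest and darkest greens from the corrplot call below (it won’t reproduce the website’s palette exactly):

# Sketch: a 12-step ramp between two Game Boy greens, generated in R
gb_greens <- colorRampPalette(c("#CADCA0", "#0F380E"))(12)
gb_greens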
# Aligning Covariance Matrices
# drop dateTime
corr_df <- activity_df[, 2:11]

# Correlation matrix
corr <- cor(na.omit(corr_df))

corrplot(corr, type = "upper", bg = "#9BBB0E", tl.col = "#565656",
         col = c("#CADCA0", "#B9CD93", "#A8BE85", "#97AF78", "#86A06B", "#75915E",
                 "#648350", "#537443", "#426536", "#315629", "#20471B", "#0F380E"))
In a correlation plot, the color of each circle indicates the direction and strength of the correlation, and the size of the circle scales with its magnitude.
After a bit of searching for a ggplot2 extension, I found ggcorrplot, which allowed me to use the Game Boy themes again!
# Generating textures...
ggcorrplot(corr, hc.order = TRUE, type = "lower", lab = TRUE, lab_size = 2,
           tl.cex = 8, show.legend = FALSE,
           colors = c("#306230", "#306230", "#0F380F"),
           title = "Correlogram", ggtheme = theme_gameboy)
# Game Over. Loading previous save
ggcorrplot(corr, hc.order = TRUE, type = "lower", lab = TRUE, lab_size = 2,
           tl.cex = 8, show.legend = FALSE,
           colors = c("#3B7AAD", "#56B1F7", "#1D3E5D"),
           title = "Correlogram", ggtheme = theme_gba)
# Link saying "hyahhh!"
# Static
g <- activity_df %>%
  ggplot(aes(x = dateTime, y = calories)) +
  geom_line(colour = "black") +
  geom_point(shape = 21, colour = "black", aes(fill = calories), size = 5, stroke = 1) +
  xlab("") +
  ylab("Calorie Expenditure")

g + theme_gameboy() + theme(legend.position = "none")
g + theme_gba() + theme(legend.position = "none")
# Panick! at the Discord...
# gganimate
g <- activity_df %>%
  ggplot(aes(x = dateTime, y = calories)) +
  geom_line(colour = "black") +
  geom_point(shape = 21, colour = "black", aes(fill = calories), size = 5, stroke = 1) +
  transition_time(dateTime) +
  shadow_mark() +
  ease_aes('linear') +
  xlab("") +
  ylab("Calorie Expenditure")

g + theme_gba() + theme(legend.position = "none")
Distance is estimated using your steps and your estimated stride length (based on the height you put in).
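As a rough back-of-the-envelope check (a sketch, not Fitbit’s actual algorithm), the walking stride length and average daily step count pulled from the profile earlier imply:

# Sketch: approximate daily walking distance from steps and stride length
stride_m <- strideLengthWalking / 100      # 68.5 cm -> 0.685 m
averageDailySteps * stride_m / 1000        # ~9.7 km on an average day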
I’ve also made plots for Distance, Steps, Elevation and Floors, but you’ll have to check out this page to see them.
Even though Fitbit offers a nice dashboard for a single user, it’s not scalable. By accessing the data directly, one can ask the questions one wants of 200 individuals — or more. If one were so inclined, they could even build a fancy Shiny dashboard with bespoke visualizations.
If you have any questions or comments you can always reach me on LinkedIn. Till then, see you in the next post!
# Wubba Lubba Dub Dub# https://www.spriters-resource.com/game_boy_advance/kirbynim/sheet/15585/sprite_sheet <- png::readPNG("kirby.png")Nframes <- 11 # number of frames to extractwidth <- 29 # width of a framesprite_frames <- list() # storage for the extracted frames# Not equal sized frames in the sprite sheet. Need to compensate for each frameoffset <- c(0, -4, -6, -7, -10, -16, -22, -26, -28, -29, -30)# Manually extract each framefor (i in seq(Nframes)) { sprite_frames[[i]] <- sprite_sheet[120:148, (width*(i-1)) + (1:width) + offset[i], 1:3]}# Function to convert a sprite frame to a data.frame# and remove any background pixels i.e. #00DBFFsprite_frame_to_df <- function(frame) { plot_df <- data_frame( fill = as.vector(as.raster(frame)), x = rep(1:width, width), y = rep(width:1, each=width) ) %>% filter(fill != '#00DBFF')}sprite_dfs <- sprite_frames %>% map(sprite_frame_to_df) %>% imap(~mutate(.x, idx=.y))fill_manual_values <- unique(sprite_dfs[[1]]$fill)fill_manual_values <- setNames(fill_manual_values, fill_manual_values)mega_df <- dplyr::bind_rows(sprite_dfs)p <- ggplot(mega_df, aes(x, y, fill=fill)) + geom_tile(width=0.9, height=0.9) + coord_equal(xlim=c(1, width), ylim=c(1, width)) + scale_fill_manual(values = fill_manual_values) + theme_gba() + xlab("") + ylab("") + theme(legend.position = 'none', axis.text=element_blank(), axis.ticks = element_blank())panim <- p + transition_manual(idx, seq_along(sprite_frames)) + labs(title = "gganimate Kirby")gganimate::animate(panim, fps=30, width=400, height=400)
},
{
"code": null,
"e": 32151,
"s": 30471,
"text": "# We're giving it all she's got!get_frequentActivities <- function(baseDate = NULL, period = NULL, token = Sys.getenv('FITB_AUTH')){ GET(url = paste0('https://api.fitbit.com/1/user/-/activities/recent.json'), add_headers(Authorization = paste0(\"Bearer \", token)))}got_frequentActivities <- get_frequentActivities(baseDate = \"2018-11-20\", period = \"3m\")frequentActivities <- content(got_frequentActivities)# This is a list object let's look at how many frequent activities are loggedlength(frequentActivities)## [1] 5# Take a look at the object with str()str(frequentActivities)## List of 5## $ :List of 6## ..$ activityId : int 2131## ..$ calories : int 0## ..$ description: chr \"\"## ..$ distance : int 0## ..$ duration : int 3038000## ..$ name : chr \"Weights\"## $ :List of 6## ..$ activityId : int 90009## ..$ calories : int 0## ..$ description: chr \"Running - 5 mph (12 min/mile)\"## ..$ distance : int 0## ..$ duration : int 1767000## ..$ name : chr \"Run\"## $ :List of 6## ..$ activityId : int 90013## ..$ calories : int 0## ..$ description: chr \"Walking less than 2 mph, strolling very slowly\"## ..$ distance : int 0## ..$ duration : int 2407000## ..$ name : chr \"Walk\"## $ :List of 6## ..$ activityId : int 90001## ..$ calories : int 0## ..$ description: chr \"Very Leisurely - Less than 10 mph\"## ..$ distance : int 0## ..$ duration : int 4236000## ..$ name : chr \"Bike\"## $ :List of 6## ..$ activityId : int 15000## ..$ calories : int 0## ..$ description: chr \"\"## ..$ distance : int 0## ..$ duration : int 1229000## ..$ name : chr \"Sport\""
},
{
"code": null,
"e": 32295,
"s": 32151,
"text": "I would never have considered myself a Darwin or a Thoreau but apparently strolling very slowly is my favorite activity in terms of time spent."
},
{
"code": null,
"e": 32516,
"s": 32295,
"text": "You can see that my Fitbit has also logged times for Weights, Sports and Biking which is likely from when I’ve manually logged my activities. There’s a possibility that Fitbit is registering Biking for when I skateboard."
},
{
"code": null,
"e": 32772,
"s": 32516,
"text": "Previously I had always used the corrplot package to create a correlation plot; however, it doesn’t play nicely with ggplot meaning you cannot add Game Boy themes easily. Nonetheless, I was able to give it a retro-looking palette with some minor tweaking."
},
{
"code": null,
"e": 32907,
"s": 32772,
"text": "Since I had two colors in mind from the original gameboy, and knew their hex code, I was able to generate a palette from this website."
},
{
"code": null,
"e": 33239,
"s": 32907,
"text": "# Aligning Covariance Matrices # drop dateTimecorr_df <- activity_df[,2:11]# Correlation matrixcorr <- cor(na.omit(corr_df))corrplot(corr, type = \"upper\", bg = \"#9BBB0E\", tl.col = \"#565656\", col = c(\"#CADCA0\", \"#B9CD93\", \"#A8BE85\", \"#97AF78\", \"#86A06B\", \"#75915E\", \"#648350\", \"#537443\", \"#426536\", \"#315629\", \"#20471B\", \"#0F380E\"))"
},
{
"code": null,
"e": 33385,
"s": 33239,
"text": "In a correlation plot the color of each circle indicates the magnitude of the correlation, and the size of the circle indicates its significance."
},
{
"code": null,
"e": 33509,
"s": 33385,
"text": "After a bit of searching for a ggplot2 extension I was able to use ggcorrplot which allowed me to use gameboy themes again!"
},
{
"code": null,
"e": 33817,
"s": 33509,
"text": "# Generating textures...ggcorrplot(corr, hc.order = TRUE, type = \"lower\", lab = TRUE, lab_size = 2, tl.cex = 8, show.legend = FALSE, colors = c( \"#306230\", \"#306230\", \"#0F380F\" ), title=\"Correlogram\", ggtheme=theme_gameboy)"
},
{
"code": null,
"e": 34131,
"s": 33817,
"text": "# Game Over. Loading previous saveggcorrplot(corr, hc.order = TRUE, type = \"lower\", lab = TRUE, lab_size = 2, tl.cex = 8, show.legend = FALSE, colors = c( \"#3B7AAD\", \"#56B1F7\", \"#1D3E5D\" ), title=\"Correlogram\", ggtheme=theme_gba)"
},
{
"code": null,
"e": 34516,
"s": 34131,
"text": "# Link saying \"hyahhh!\"# Staticg <- activity_df %>% ggplot(aes(x=dateTime, y=calories)) + geom_line(colour = \"black\") + geom_point(shape = 21, colour = \"black\", aes(fill = calories), size = 5, stroke = 1) + xlab(\"\") + ylab(\"Calorie Expenditure\")g + theme_gameboy() + theme(legend.position = \"none\")g + theme_gba() + theme(legend.position = \"none\")"
},
{
"code": null,
"e": 34942,
"s": 34516,
"text": "# Panick! at the Discord...# gganimateg <- activity_df %>% ggplot(aes(x=dateTime, y=calories)) + geom_line(colour = \"black\") + geom_point(shape = 21, colour = \"black\", aes(fill = calories), size = 5, stroke = 1) + transition_time(dateTime) + shadow_mark() + ease_aes('linear') + xlab(\"\") + ylab(\"Calorie Expenditure\") g + theme_gba() + theme(legend.position = \"none\")"
},
{
"code": null,
"e": 35047,
"s": 34942,
"text": "Distance is determined by using your steps and your estimated stride length (for the height you put in)."
},
{
"code": null,
"e": 35160,
"s": 35047,
"text": "I’ve also made plots for Distance, Steps, Elevationand Floorsbut you’ll have to check out this page to see them."
},
{
"code": null,
"e": 35437,
"s": 35160,
"text": "Even though Fitbit offers a nice dashboard for a single user it’s not scale-able. By accessing the data directly one can ask the questions they want from 200 individuals — or more. If one was inclined, they could even build a fancy Shiny dashboard with bespoke visualizations."
},
{
"code": null,
"e": 35549,
"s": 35437,
"text": "If you have any questions or comments you can always reach me on LinkedIn. Till then, see you in the next post!"
}
] |
Java Program to Add Two Binary Strings - GeeksforGeeks
|
28 Mar, 2021
When two binary strings are added, then the sum returned is also a binary string.
Example:
Input : x = "10", y = "01"
Output: "11"
Input : x = "110", y = "011"
Output: "1001"
Explanation:
110
+ 011
=1001
Here, we need to start adding from the rightmost digits; whenever the sum of the current digits (plus any carry) is more than one, we store the carry for the next digit.
Let’s look at a program to get a clear picture of the approach described above.
Example:
Java
// java program to add two binary strings

public class gfg {

    // Function to add two binary strings
    static String add_Binary(String x, String y)
    {
        // Initializing result
        String res = "";

        // Initializing digit sum
        int d = 0;

        // Traversing both the strings starting
        // from the last characters
        int k = x.length() - 1, l = y.length() - 1;
        while (k >= 0 || l >= 0 || d == 1) {

            // Computing the sum of last
            // digits and the carry
            d += ((k >= 0) ? x.charAt(k) - '0' : 0);
            d += ((l >= 0) ? y.charAt(l) - '0' : 0);

            // When the current digit's sum is either
            // 1 or 3 then add 1 to the result
            res = (char)(d % 2 + '0') + res;

            // Computing carry
            d /= 2;

            // Moving to the next digits
            k--;
            l--;
        }
        return res;
    }

    // The Driver code
    public static void main(String args[])
    {
        String x = "011011", y = "1010111";
        System.out.print(add_Binary(x, y));
    }
}
Output:
1110010
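As a quick sanity check (this snippet is an addition, not part of the original article), we can confirm the result above by parsing both operands with Java’s built-in Integer.parseInt and printing the binary form of their sum:

// Verification sketch: only valid while the operands fit in an int;
// the string-based add_Binary() above also works for arbitrarily long inputs.
public class VerifyBinarySum {
    public static void main(String[] args) {
        int a = Integer.parseInt("011011", 2);  // 27
        int b = Integer.parseInt("1010111", 2); // 87
        // 27 + 87 = 114, and 114 in binary is 1110010
        System.out.println(Integer.toBinaryString(a + b)); // prints 1110010
    }
}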
|
[
{
"code": null,
"e": 23972,
"s": 23944,
"text": "\n28 Mar, 2021"
},
{
"code": null,
"e": 24054,
"s": 23972,
"text": "When two binary strings are added, then the sum returned is also a binary string."
},
{
"code": null,
"e": 24063,
"s": 24054,
"text": "Example:"
},
{
"code": null,
"e": 24182,
"s": 24063,
"text": "Input : x = \"10\", y = \"01\"\nOutput: \"11\"\n\nInput : x = \"110\", y = \"011\"\nOutput: \"1001\"\nExplanation:\n 110 \n+ 011\n=1001"
},
{
"code": null,
"e": 24317,
"s": 24182,
"text": "Here, we need to start adding from the right side and when the sum returned is more than one then store the carry for the next digits."
},
{
"code": null,
"e": 24387,
"s": 24317,
"text": "Let’s see a program in order to get the clear concept of above topic."
},
{
"code": null,
"e": 24396,
"s": 24387,
"text": "Example:"
},
{
"code": null,
"e": 24401,
"s": 24396,
"text": "Java"
},
{
"code": "// java program to add two binary strings public class gfg { // Function to add two binary strings static String add_Binary(String x, String y) { // Initializing result String res = \"\"; // Initializing digit sum int d = 0; // Traversing both the strings starting // from the last characters int k = x.length() - 1, l = y.length() - 1; while (k >= 0 || l >= 0 || d == 1) { // Computing the sum of last // digits and the carry d += ((k >= 0) ? x.charAt(k) - '0' : 0); d += ((l >= 0) ? y.charAt(l) - '0' : 0); // When the current digit's sum is either // 1 or 3 then add 1 to the result res = (char)(d % 2 + '0') + res; // Computing carry d /= 2; // Moving to the next digits k--; l--; } return res; } // The Driver code public static void main(String args[]) { String x = \"011011\", y = \"1010111\"; System.out.print(add_Binary(x, y)); }}",
"e": 25498,
"s": 24401,
"text": null
},
{
"code": null,
"e": 25506,
"s": 25498,
"text": "1110010"
},
{
"code": null,
"e": 25527,
"s": 25506,
"text": "Java-String-Programs"
},
{
"code": null,
"e": 25532,
"s": 25527,
"text": "Java"
},
{
"code": null,
"e": 25546,
"s": 25532,
"text": "Java Programs"
},
{
"code": null,
"e": 25551,
"s": 25546,
"text": "Java"
},
{
"code": null,
"e": 25649,
"s": 25551,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25658,
"s": 25649,
"text": "Comments"
},
{
"code": null,
"e": 25671,
"s": 25658,
"text": "Old Comments"
},
{
"code": null,
"e": 25717,
"s": 25671,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 25732,
"s": 25717,
"text": "Stream In Java"
},
{
"code": null,
"e": 25753,
"s": 25732,
"text": "Constructors in Java"
},
{
"code": null,
"e": 25770,
"s": 25753,
"text": "Generics in Java"
},
{
"code": null,
"e": 25789,
"s": 25770,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 25833,
"s": 25789,
"text": "Convert a String to Character array in Java"
},
{
"code": null,
"e": 25859,
"s": 25833,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 25893,
"s": 25859,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 25940,
"s": 25893,
"text": "Implementing a Linked List in Java using Class"
}
] |
Apache HttpClient - User Authentication
|
Using HttpClient, you can connect to a website that requires a username and password. This chapter explains how to execute a client request against a site that asks for a username and password.
The CredentialsProvider Interface maintains a collection to hold the user login credentials. You can create its object by instantiating the BasicCredentialsProvider class, the default implementation of this interface.
CredentialsProvider credentialsPovider = new BasicCredentialsProvider();
You can set the required credentials to the CredentialsProvider object using the setCredentials() method.
This method accepts two objects as given below −
AuthScope object − Authentication scope specifying the details like hostname, port number, and authentication scheme name.
Credentials object − Specifying the credentials (username, password).
Set the credentials using the setCredentials() method for both host and proxy as shown below −
credsProvider.setCredentials(new AuthScope("example.com", 80),
new UsernamePasswordCredentials("user", "mypass"));
credsProvider.setCredentials(new AuthScope("localhost", 8000),
new UsernamePasswordCredentials("abc", "passwd"));
Create a HttpClientBuilder using the custom() method of the HttpClients class.
//Creating the HttpClientBuilder
HttpClientBuilder clientbuilder = HttpClients.custom();
You can set the above-created credentialsPovider object on a HttpClientBuilder using the setDefaultCredentialsProvider() method. Pass the CredentialsProvider object created in the previous step to this method as shown below.
clientbuilder = clientbuilder.setDefaultCredentialsProvider(credsProvider);
Build the CloseableHttpClient object using the build() method of the HttpClientBuilder class.
CloseableHttpClient httpclient = clientbuilder.build()
Create a HttpRequest object by instantiating the HttpGet class. Execute this request using the execute() method.
//Creating a HttpGet object
HttpGet httpget = new HttpGet("https://www.tutorialspoint.com/ ");
//Executing the Get request
HttpResponse httpresponse = httpclient.execute(httpget);
Following is an example program which demonstrates the execution of an HTTP request against a target site that requires user authentication.
import org.apache.http.Header;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.Credentials;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.client.HttpClients;
public class UserAuthenticationExample {
   public static void main(String args[]) throws Exception{

      //Create an object of credentialsProvider
      CredentialsProvider credentialsPovider = new BasicCredentialsProvider();

      //Set the credentials
      AuthScope scope = new AuthScope("https://www.tutorialspoint.com/questions/", 80);
      Credentials credentials = new UsernamePasswordCredentials("USERNAME", "PASSWORD");
      credentialsPovider.setCredentials(scope,credentials);

      //Creating the HttpClientBuilder
      HttpClientBuilder clientbuilder = HttpClients.custom();

      //Setting the credentials
      clientbuilder = clientbuilder.setDefaultCredentialsProvider(credentialsPovider);

      //Building the CloseableHttpClient object
      CloseableHttpClient httpclient = clientbuilder.build();

      //Creating a HttpGet object
      HttpGet httpget = new HttpGet("https://www.tutorialspoint.com/questions/index.php");

      //Printing the method used
      System.out.println(httpget.getMethod());

      //Executing the Get request
      HttpResponse httpresponse = httpclient.execute(httpget);

      //Printing the status line
      System.out.println(httpresponse.getStatusLine());
      int statusCode = httpresponse.getStatusLine().getStatusCode();
      System.out.println(statusCode);

      Header[] headers= httpresponse.getAllHeaders();
      for (int i = 0; i<headers.length;i++) {
         System.out.println(headers[i].getName());
      }
   }
}
On executing, the above program generates the following output.
GET
HTTP/1.1 200 OK
200
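One small addition that the tutorial above does not show: CloseableHttpClient holds connection resources, and since it implements Closeable it can be released automatically with try-with-resources. The following sketch is only an illustration (it uses HttpClients.createDefault() instead of the credential-aware builder above to keep it short):

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ClosingClientExample {
   public static void main(String[] args) throws Exception {
      // try-with-resources closes the client (and its connections)
      // even if execute() throws an exception.
      try (CloseableHttpClient client = HttpClients.createDefault()) {
         HttpResponse httpresponse = client.execute(
            new HttpGet("https://www.tutorialspoint.com/"));
         System.out.println(httpresponse.getStatusLine());
      }
   }
}

The same pattern applies to the clientbuilder.build() call used throughout this chapter, since it also returns a CloseableHttpClient.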
|
[
{
"code": null,
"e": 2017,
"s": 1827,
"text": "Using HttpClient, you can connect to a website which needed username and password. This chapter explains, how to execute a client request against a site that asks for username and password."
},
{
"code": null,
"e": 2235,
"s": 2017,
"text": "The CredentialsProvider Interface maintains a collection to hold the user login credentials. You can create its object by instantiating the BasicCredentialsProvider class, the default implementation of this interface."
},
{
"code": null,
"e": 2309,
"s": 2235,
"text": "CredentialsProvider credentialsPovider = new BasicCredentialsProvider();\n"
},
{
"code": null,
"e": 2415,
"s": 2309,
"text": "You can set the required credentials to the CredentialsProvider object using the setCredentials() method."
},
{
"code": null,
"e": 2464,
"s": 2415,
"text": "This method accepts two objects as given below −"
},
{
"code": null,
"e": 2587,
"s": 2464,
"text": "AuthScope object − Authentication scope specifying the details like hostname, port number, and authentication scheme name."
},
{
"code": null,
"e": 2710,
"s": 2587,
"text": "AuthScope object − Authentication scope specifying the details like hostname, port number, and authentication scheme name."
},
{
"code": null,
"e": 2780,
"s": 2710,
"text": "Credentials object − Specifying the credentials (username, password)."
},
{
"code": null,
"e": 2850,
"s": 2780,
"text": "Credentials object − Specifying the credentials (username, password)."
},
{
"code": null,
"e": 2945,
"s": 2850,
"text": "Set the credentials using the setCredentials() method for both host and proxy as shown below −"
},
{
"code": null,
"e": 3183,
"s": 2945,
"text": "credsProvider.setCredentials(new AuthScope(\"example.com\", 80), \n new UsernamePasswordCredentials(\"user\", \"mypass\"));\ncredsProvider.setCredentials(new AuthScope(\"localhost\", 8000), \n new UsernamePasswordCredentials(\"abc\", \"passwd\"));\n"
},
{
"code": null,
"e": 3262,
"s": 3183,
"text": "Create a HttpClientBuilder using the custom() method of the HttpClients class."
},
{
"code": null,
"e": 3352,
"s": 3262,
"text": "//Creating the HttpClientBuilder\nHttpClientBuilder clientbuilder = HttpClients.custom();\n"
},
{
"code": null,
"e": 3481,
"s": 3352,
"text": "You can set the above created credentialsPovider object to a HttpClientBuilder using the setDefaultCredentialsProvider() method."
},
{
"code": null,
"e": 3639,
"s": 3481,
"text": "Set the CredentialProvider object created in the previous step to the client builder by passing it to the CredentialsProvider object() method as shown below."
},
{
"code": null,
"e": 3716,
"s": 3639,
"text": "clientbuilder = clientbuilder.setDefaultCredentialsProvider(credsProvider);\n"
},
{
"code": null,
"e": 3810,
"s": 3716,
"text": "Build the CloseableHttpClient object using the build() method of the HttpClientBuilder class."
},
{
"code": null,
"e": 3866,
"s": 3810,
"text": "CloseableHttpClient httpclient = clientbuilder.build()\n"
},
{
"code": null,
"e": 3979,
"s": 3866,
"text": "Create a HttpRequest object by instantiating the HttpGet class. Execute this request using\nthe execute() method."
},
{
"code": null,
"e": 4161,
"s": 3979,
"text": "//Creating a HttpGet object\nHttpGet httpget = new HttpGet(\"https://www.tutorialspoint.com/ \");\n\n//Executing the Get request\nHttpResponse httpresponse = httpclient.execute(httpget);\n"
},
{
"code": null,
"e": 4301,
"s": 4161,
"text": "Following is an example program which demonstrates the execution of a HTTP request against a target site that requires user authentication."
},
{
"code": null,
"e": 6339,
"s": 4301,
"text": "import org.apache.http.Header;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.auth.AuthScope;\nimport org.apache.http.auth.Credentials;\nimport org.apache.http.auth.UsernamePasswordCredentials;\nimport org.apache.http.client.CredentialsProvider;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.impl.client.BasicCredentialsProvider;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.apache.http.impl.client.HttpClientBuilder;\nimport org.apache.http.impl.client.HttpClients;\n\npublic class UserAuthenticationExample {\n \n public static void main(String args[]) throws Exception{\n \n //Create an object of credentialsProvider\n CredentialsProvider credentialsPovider = new BasicCredentialsProvider();\n\n //Set the credentials\n AuthScope scope = new AuthScope(\"https://www.tutorialspoint.com/questions/\", 80);\n \n Credentials credentials = new UsernamePasswordCredentials(\"USERNAME\", \"PASSWORD\");\n credentialsPovider.setCredentials(scope,credentials);\n\n //Creating the HttpClientBuilder\n HttpClientBuilder clientbuilder = HttpClients.custom();\n\n //Setting the credentials\n clientbuilder = clientbuilder.setDefaultCredentialsProvider(credentialsPovider);\n\n //Building the CloseableHttpClient object\n CloseableHttpClient httpclient = clientbuilder.build();\n\n //Creating a HttpGet object\n HttpGet httpget = new HttpGet(\"https://www.tutorialspoint.com/questions/index.php\");\n\n //Printing the method used\n System.out.println(httpget.getMethod()); \n\n //Executing the Get request\n HttpResponse httpresponse = httpclient.execute(httpget);\n\n //Printing the status line\n System.out.println(httpresponse.getStatusLine());\n int statusCode = httpresponse.getStatusLine().getStatusCode();\n System.out.println(statusCode);\n\n Header[] headers= httpresponse.getAllHeaders();\n for (int i = 0; i<headers.length;i++) {\n System.out.println(headers[i].getName());\n }\n }\n}"
},
{
"code": null,
"e": 6403,
"s": 6339,
"text": "On executing, the above program generates the following output."
},
{
"code": null,
"e": 6428,
"s": 6403,
"text": "GET\nHTTP/1.1 200 OK\n200\n"
},
{
"code": null,
"e": 6463,
"s": 6428,
"text": "\n 46 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 6482,
"s": 6463,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 6517,
"s": 6482,
"text": "\n 23 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6538,
"s": 6517,
"text": " Mukund Kumar Mishra"
},
{
"code": null,
"e": 6571,
"s": 6538,
"text": "\n 16 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 6584,
"s": 6571,
"text": " Nilay Mehta"
},
{
"code": null,
"e": 6619,
"s": 6584,
"text": "\n 52 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6637,
"s": 6619,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 6670,
"s": 6637,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 6688,
"s": 6670,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 6721,
"s": 6688,
"text": "\n 23 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 6739,
"s": 6721,
"text": " Bigdata Engineer"
},
{
"code": null,
"e": 6746,
"s": 6739,
"text": " Print"
},
{
"code": null,
"e": 6757,
"s": 6746,
"text": " Add Notes"
}
] |
GPU Information in ElectronJS - GeeksforGeeks
|
22 Sep, 2021
ElectronJS is an Open Source Framework used for building Cross-Platform native desktop applications using web technologies such as HTML, CSS, and JavaScript which are capable of running on Windows, macOS, and Linux operating systems. It combines the Chromium engine and NodeJS into a Single Runtime.
GPU (Graphics Processing Unit) is a specialized programmable processor used for rendering all graphical content such as images on the computer’s screen. It is designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output on a display device. All modern computer systems come with a built-in GPU component: it can either be part of the motherboard circuitry or a separate component connected to the motherboard externally.

Chromium extensively uses this GPU component when displaying GPU-accelerated content. Chromium uses the GPU to accelerate web-page rendering of HTML, CSS, and other graphical elements within the browser, and the latest versions of Chromium use the GPU for video rendering and processing as well. The GPU consumes less power than the CPU, which reduces overall power consumption and heat. It also helps balance the load on the CPU through resource-sharing, allowing the CPU to perform faster and take up heavier computational tasks.

Chromium browsers have a dedicated GPU tab for monitoring and displaying all GPU related information in the system; it can be accessed by visiting the chrome://gpu/ page in the browser. Electron can also access and use this GPU related information for the application by using the Instance events and methods of the app module. This tutorial will demonstrate how to fetch, display, and control GPU related information in Electron.
We assume that you are familiar with the prerequisites as covered in the above-mentioned link. For Electron to work, node and npm need to be pre-installed in the system.
Project Structure:
Example: Follow the Steps given in Printing in ElectronJS to set up the basic Electron Application. Copy the Boilerplate code for the main.js file and the index.html file as provided in the article. Also, perform the necessary changes mentioned for the package.json file to launch the Electron Application. We will continue building our application using the same code base. The basic steps required to set up the Electron application remain the same.
package.json:
{
"name": "electron-gpu",
"version": "1.0.0",
"description": "GPU Information in Electron",
"main": "main.js",
"scripts": {
"start": "electron ."
},
"keywords": [
"electron"
],
"author": "Radhesh Khanna",
"license": "ISC",
"dependencies": {
"electron": "^8.3.0"
}
}
Output:
GPU Information in Electron: The app module is used to control the application’s event lifecycle. This module is part of the Main Process. To import and use the app module in the Renderer Process, we will be using Electron remote module.
index.html: Add the following snippet in that file.
HTML
<h3>GPU Information in Electron</h3>

<button id="metrics">
    Fetch App Metrics
</button>
<br><br>
<button id="basic">
    Get Basic GPU Information
</button>
<br><br>
<button id="complete">
    Get Complete GPU Information
</button>
<br><br>
<button id="features">
    Get GPU Feature Status
</button>
index.js: All the buttons created in the index.html will be used to display different bits of information relating to the GPU and the application. These buttons do not have any functionality associated with them yet. To change this, add the following code in the index.js file.
Javascript
const electron = require('electron')

// Importing the app module using Electron remote
const app = electron.remote.app;

app.on('gpu-info-update', () => {
    console.log('GPU Information has been Updated');
});

app.on('gpu-process-crashed', (event, killed) => {
    console.log('GPU Process has crashed');
    console.log(event);
    console.log('Whether GPU Process was killed - ', killed);
});

var metrics = document.getElementById('metrics');
metrics.addEventListener('click', () => {
    console.dir(app.getAppMetrics());
});

var basic = document.getElementById('basic');
basic.addEventListener('click', () => {
    app.getGPUInfo('basic').then(basicObj => {
        console.dir(basicObj);
    });
});

var complete = document.getElementById('complete');
complete.addEventListener('click', () => {
    app.getGPUInfo('complete').then(completeObj => {
        console.dir(completeObj);
    });
});

var features = document.getElementById('features');
features.addEventListener('click', () => {
    console.dir(app.getGPUFeatureStatus());
});
The app.getAppMetrics() Instance method of the app module is used to return an Array of ProcessMetric objects that correspond to memory and CPU usage statistics of all the processes associated with the application. The ProcessMetric object consists of the following parameters.
pid: Integer The Process ID (PID) of the process. Every process running within the application is represented by a separate ProcessMetric object within the array. This parameter is important because several Instance methods of the webFrame module use the PID as an input argument. This parameter is also important for debugging and checking the system metrics of the various ongoing processes within the native system OS that are associated with the application.
type: String This parameter represents the type of the process running within the application. This parameter can hold any one of the following values:
Browser
Tab
Utility
Zygote
Sandbox helper
GPU
Pepper Plugin
Pepper Plugin Broker
Unknown
cpu: Object This parameter returns a CPUUsage object which represents the CPU usage of the process. This object can also be obtained from the process.getCPUUsage() Instance method of the global Process object and behaves exactly in the same manner. For more detailed Information on the CPUUsage object and its behaviour, Refer to the article: Process Object in ElectronJS.
creationTime: Integer This parameter represents the creation time of the process. The time is represented as number of milliseconds since the epoch. This parameter can also be obtained from the process.getCreationTime() Instance method of the global Process object and behaves exactly in the same manner. For more detailed Information on the creationTime parameter and its behaviour, refer to the article: Process Object in ElectronJS. Note: Since the PID can be reused again by the OS after a process dies, it is useful to use both the pid parameter and the creationTime parameter to uniquely identify and distinguish a process.
memory This parameter returns a MemoryInfo object which represents the Memory Information of the process. This object consists of detailed information of the memory being used on the actual physical RAM by the process. It consists of the following parameters:
workingSetSize: Integer This parameter represents the amount of memory currently pinned to actual physical RAM by the process.
peakWorkingSetSize: Integer This parameter represents the maximum amount of memory that has ever been pinned to actual physical RAM by the process.
privateBytes: Integer (Optional) This parameter is supported in Windows OS only. This parameter represents the amount of memory not shared by other processes, such as V8 Engine Memory Heap or HTML content.
sandboxed: Boolean (Optional) This parameter is supported in Windows and macOS only. This parameter represents whether the process is sandboxed on the OS level.
integrityLevel: String (Optional) This parameter is supported in Windows OS only. This parameter can hold any one of the following values:
untrusted
low
medium
high
unknown
The app.getGPUFeatureStatus() Instance method of the app module returns a GPUFeatureStatus object. This object represents the Graphics Feature Status of the GPU from the chrome://gpu page in the Chromium browser.
Note: The GPUFeatureStatus object consists of the same parameters that appear under the Graphics Feature Status section of the chrome://gpu page. The values returned for these parameters are in an abbreviated format and may differ from what the chrome://gpu page displays. These parameters can hold any one of the following values with their respective color codes:
disabled_software: Yellow Software only. Hardware acceleration is disabled.
disabled_off: Red
disabled_off_ok: Yellow
unavailable_software: Yellow Software only, hardware acceleration unavailable.
unavailable_off: Red
unavailable_off_ok: Yellow
enabled_readback: Yellow Hardware-accelerated but at reduced performance.
enabled_force: Green Hardware-accelerated on all pages.
enabled: Green Hardware accelerated.
enabled_on: Green
enabled_force_on: Green Force enabled.
The app.disableHardwareAcceleration() Instance method of the app module disables Hardware acceleration for the entire application. This Instance method can only be used before the ready event of the app module is emitted. Hence, this method needs to be called in the main.js file (Main Process).
Javascript
const { app, BrowserWindow } = require('electron')
app.disableHardwareAcceleration();
In an Electron application, Chromium disables 3D APIs (e.g. WebGL) until a restart of the application on a per-domain basis if the GPU processes crash too frequently. This is the default behavior of Chromium. There can be a variety of reasons which can cause the GPU processes to crash frequently including problems in the System hardware or overuse of system resources. The app.disableDomainBlockingFor3DAPIs() Instance method of the app module disables this default behavior of Chromium. This Instance method can only be used before the ready event of the app module is emitted. Hence, this method needs to be called in the main.js file (Main Process).
Javascript
const { app, BrowserWindow } = require('electron')
app.disableHardwareAcceleration();
app.disableDomainBlockingFor3DAPIs();
The app.getGPUInfo(info) Instance method fetches and returns the GPU Information from Chromium related to the Electron application. This Instance method returns a Promise and it is resolved to an object containing the relevant information based on the info: String parameter provided. Refer to the output for a better understanding. The info parameter can hold any one of the following values:
complete: The Promise returned is fulfilled with an object containing all the GPU Information as mentioned in the official Chromium’s GPUInfo object documentation. This includes the version and driver information that’s shown on chrome://gpu page of the Chromium browser. When info: complete, the Instance method takes a much longer time to execute as compared to info: basic. Refer to the Output for better understanding.
GPU Driver and Version Information – 1
GPU Driver and Version Information – 2
basic: The Promise returned is fulfilled with an object containing fewer, more essential parameters than the object returned when requested with info: complete. This value should be used if only basic information like the vendorId parameter or the driverId parameter is needed. The sample parameters returned are displayed below:
{ auxAttributes:
{ amdSwitchable: true,
canSupportThreadedTextureMailbox: false,
directComposition: false,
directRendering: true,
glResetNotificationStrategy: 0,
inProcessGpu: true,
initializationTime: 0,
jpegDecodeAcceleratorSupported: false,
optimus: false,
passthroughCmdDecoder: false,
sandboxed: false,
softwareRendering: false,
supportsOverlays: false,
videoDecodeAcceleratorFlags: 0 },
gpuDevice:
[ { active: true, deviceId: 26657, vendorId: 4098 },
{ active: false, deviceId: 3366, vendorId: 32902 } ],
machineModelName: 'MacBookPro',
machineModelVersion: '11.5' }
The app module emits the following Instance Events which are related to the GPU.
gpu-info-update: Event This Instance event is emitted whenever there is a change in any GPU related information for the different processes within the application. Based on the usage, functionality and the number of processes active within the application, this Instance event can be emitted multiple times.
gpu-process-crashed: Event This Instance event is emitted whenever the GPU process crashes or is killed by the native OS. This can cause the application to hang if not handled and hence we can use this Instance event to take necessary action and make the application exit cleanly. This event returns the following parameters:
event: Event The global Event object.
killed: Boolean This parameter represents whether the Process was killed.
At this point, upon launching the Electron application, we should be able to fetch and display all GPU related Information in the console output.
Output:
Note – We have used the console.dir() JavaScript method to output and display an object in the Console window of Chrome DevTools. To display objects, this method is preferred over the console.log() method.
For more detailed information. Refer to the article: Difference between console.dir and console.log.
|
[
{
"code": null,
"e": 25042,
"s": 25014,
"text": "\n22 Sep, 2021"
},
{
"code": null,
"e": 25342,
"s": 25042,
"text": "ElectronJS is an Open Source Framework used for building Cross-Platform native desktop applications using web technologies such as HTML, CSS, and JavaScript which are capable of running on Windows, macOS, and Linux operating systems. It combines the Chromium engine and NodeJS into a Single Runtime."
},
{
"code": null,
"e": 26852,
"s": 25342,
"text": "GPU (Graphics Processing Unit) is a specialized programmable processor used for rendering all graphical content such as images on the computer’s screen. It is designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output on a display device. All modern Computer systems come with built-in GPU components i.e. they can either be a part of the motherboard circuitry or they can be another component altogether which is connected to the motherboard externally. Chromium extensively uses this GPU component when displaying GPU accelerated content. Chromium uses GPUs to accelerate web-page rendering, HTML, CSS, and other graphical elements within the browser. The latest versions of Chromium use the GPU component for video rendering and processing as well. GPU consumes less power than the CPU which reduces power consumption and generates less heat overall. GPU also helps in balancing the load on the CPU by resource-sharing, therefore, allowing the CPU to perform faster and take up heavier computational tasks. Chromium browsers have a dedicated GPU tab for monitoring and displaying all GPU related information in the system. It can be accessed by visiting the chrome://gpu/ page in the browser. The electron can also access and use this GPU related Information for the application by using the Instance events and methods of the app module. This tutorial will demonstrate how to fetch, display, and control GPU related Information in Electron. "
},
{
"code": null,
"e": 27022,
"s": 26852,
"text": "We assume that you are familiar with the prerequisites as covered in the above-mentioned link. For Electron to work, node and npm need to be pre-installed in the system."
},
{
"code": null,
"e": 27042,
"s": 27022,
"text": "Project Structure: "
},
{
"code": null,
"e": 27495,
"s": 27042,
"text": "Example: Follow the Steps given in Printing in ElectronJS to set up the basic Electron Application. Copy the Boilerplate code for the main.js file and the index.html file as provided in the article. Also, perform the necessary changes mentioned for the package.json file to launch the Electron Application. We will continue building our application using the same code base. The basic steps required to set up the Electron application remain the same. "
},
{
"code": null,
"e": 27510,
"s": 27495,
"text": "package.json: "
},
{
"code": null,
"e": 27812,
"s": 27510,
"text": "{\n \"name\": \"electron-gpu\",\n \"version\": \"1.0.0\",\n \"description\": \"GPU Information in Electron\",\n \"main\": \"main.js\",\n \"scripts\": {\n \"start\": \"electron .\"\n },\n \"keywords\": [\n \"electron\"\n ],\n \"author\": \"Radhesh Khanna\",\n \"license\": \"ISC\",\n \"dependencies\": {\n \"electron\": \"^8.3.0\"\n }\n}"
},
{
"code": null,
"e": 27820,
"s": 27812,
"text": "Output:"
},
{
"code": null,
"e": 28058,
"s": 27820,
"text": "GPU Information in Electron: The app module is used to control the application’s event lifecycle. This module is part of the Main Process. To import and use the app module in the Renderer Process, we will be using Electron remote module."
},
{
"code": null,
"e": 28110,
"s": 28058,
"text": "index.html: Add the following snippet in that file."
},
{
"code": null,
"e": 28115,
"s": 28110,
"text": "HTML"
},
{
"code": "<h3>GPU Information in Electron</h3> <button id=\"metrics\"> Fetch App Metrics </button> <br><br> <button id=\"basic\"> Get Basic GPU Information </button> <br><br> <button id=\"complete\"> Get Complete GPU Information </button> <br><br> <button id=\"features\"> Get GPU Feature Status </button>",
"e": 28426,
"s": 28115,
"text": null
},
{
"code": null,
"e": 28705,
"s": 28426,
"text": "index.js: All the buttons created in the index.html will be used to display different bits of information relating to the GPU and the application. These buttons do not have any functionality associated with them yet. To change this, add the following code in the index.js file. "
},
{
"code": null,
"e": 28716,
"s": 28705,
"text": "Javascript"
},
{
"code": "const electron = require('electron')// Importing the app module using Electron remoteconst app = electron.remote.app; app.on('gpu-info-update', () => { console.log('GPU Information has been Updated');}); app.on('gpu-process-crashed', (event, killed) => { console.log('GPU Process has crashed'); console.log(event); console.log('Whether GPU Process was killed - ', killed);}); var metrics = document.getElementById('metrics');metrics.addEventListener('click', () => { console.dir(app.getAppMetrics());}); var basic = document.getElementById('basic');basic.addEventListener('click', () => { app.getGPUInfo('basic').then(basicObj => { console.dir(basicObj); });}); var complete = document.getElementById('complete');complete.addEventListener('click', () => { app.getGPUInfo('complete').then(completeObj => { console.dir(completeObj); });}); var features = document.getElementById('features');features.addEventListener('click', () => { console.dir(app.getGPUFeatureStatus());});",
"e": 29735,
"s": 28716,
"text": null
},
{
"code": null,
"e": 30013,
"s": 29735,
"text": "The app.getAppMetrics() Instance method of the app module is used to return an Array of ProcessMetric objects that correspond to memory and CPU usage statistics of all the processes associated with the application. The ProcessMetric object consists of the following parameters."
},
{
"code": null,
"e": 30476,
"s": 30013,
"text": "pid: Integer The Process ID (PID) of the process. Every process running within the application is represented by a separate ProcessMetric object within the array. This parameter is important because several Instance methods of the webFrame module use the PID as an input argument. This parameter is also important for debugging and checking the system metrics of the various ongoing processes within the native system OS that are associated with the application."
},
{
"code": null,
"e": 30708,
"s": 30476,
"text": "type: String This parameter represents the type of the process running within the application. This parameter can hold any one of the following values:BrowserTabUtilityZygoteSandbox helperGPUPepper PluginPepper Plugin BrokerUnknown"
},
{
"code": null,
"e": 30716,
"s": 30708,
"text": "Browser"
},
{
"code": null,
"e": 30720,
"s": 30716,
"text": "Tab"
},
{
"code": null,
"e": 30728,
"s": 30720,
"text": "Utility"
},
{
"code": null,
"e": 30735,
"s": 30728,
"text": "Zygote"
},
{
"code": null,
"e": 30750,
"s": 30735,
"text": "Sandbox helper"
},
{
"code": null,
"e": 30754,
"s": 30750,
"text": "GPU"
},
{
"code": null,
"e": 30768,
"s": 30754,
"text": "Pepper Plugin"
},
{
"code": null,
"e": 30789,
"s": 30768,
"text": "Pepper Plugin Broker"
},
{
"code": null,
"e": 30797,
"s": 30789,
"text": "Unknown"
},
{
"code": null,
"e": 31170,
"s": 30797,
"text": "cpu: Object This parameter returns a CPUUsage object which represents the CPU usage of the process. This object can also be obtained from the process.getCPUUsage() Instance method of the global Process object and behaves exactly in the same manner. For more detailed Information on the CPUUsage object and its behaviour, Refer to the article: Process Object in ElectronJS."
},
{
"code": null,
"e": 31800,
"s": 31170,
"text": "creationTime: Integer This parameter represents the creation time of the process. The time is represented as number of milliseconds since the epoch. This parameter can also be obtained from the process.getCreationTime() Instance method of the global Process object and behaves exactly in the same manner. For more detailed Information on the creationTime parameter and its behaviour, refer to the article: Process Object in ElectronJS. Note: Since the PID can be reused again by the OS after a process dies, it is useful to use both the pid parameter and the creationTime parameter to uniquely identify and distinguish a process."
},
{
"code": null,
"e": 32538,
"s": 31800,
"text": "memory This parameter returns a MemoryInfo object which represents the Memory Information of the process. This object consists of detailed information of the memory being used on the actual physical RAM by the process. It consists of the following parameters.workingSetSize: Integer This parameter represents the amount of memory currently pinned to actual physical RAM by the process.peakWorkingSetSize: Integer This parameter represents the maximum amount of memory that has ever been pinned to actual physical RAM by the process.privateBytes: Integer (Optional) This parameter is supported in Windows OS only. This parameter represents the amount of memory not shared by other processes, such as V8 Engine Memory Heap or HTML content."
},
{
"code": null,
"e": 32665,
"s": 32538,
"text": "workingSetSize: Integer This parameter represents the amount of memory currently pinned to actual physical RAM by the process."
},
{
"code": null,
"e": 32813,
"s": 32665,
"text": "peakWorkingSetSize: Integer This parameter represents the maximum amount of memory that has ever been pinned to actual physical RAM by the process."
},
{
"code": null,
"e": 33019,
"s": 32813,
"text": "privateBytes: Integer (Optional) This parameter is supported in Windows OS only. This parameter represents the amount of memory not shared by other processes, such as V8 Engine Memory Heap or HTML content."
},
{
"code": null,
"e": 33180,
"s": 33019,
"text": "sandboxed: Boolean (Optional) This parameter is supported in Windows and macOS only. This parameter represents whether the process is sandboxed on the OS level."
},
{
"code": null,
"e": 33348,
"s": 33180,
"text": "integrityLevel: String (Optional) This parameter is supported in Windows OS only. This parameter can hold any one of the following values:untrustedlowmediumhighunknown"
},
{
"code": null,
"e": 33358,
"s": 33348,
"text": "untrusted"
},
{
"code": null,
"e": 33362,
"s": 33358,
"text": "low"
},
{
"code": null,
"e": 33369,
"s": 33362,
"text": "medium"
},
{
"code": null,
"e": 33374,
"s": 33369,
"text": "high"
},
{
"code": null,
"e": 33382,
"s": 33374,
"text": "unknown"
},
{
"code": null,
"e": 33596,
"s": 33382,
"text": "The app.getGPUFeatureStatus() Instance method of the app module returns a GPUFeatureStatus object. This object represents the Graphics Feature Status of the GPU from the chrome://gpu page in the Chromium browser. "
},
{
"code": null,
"e": 33938,
"s": 33596,
"text": "Note: The GPUFeatureStatus object consists of the exact same parameters as shown in the above image. The values displayed for the parameters of the GPUFeatureStatus object are in abbreviated format and might differ from what is shown in the image. These parameters can hold any one of the following values with their respective color codes: "
},
{
"code": null,
"e": 34014,
"s": 33938,
"text": "disabled_software: Yellow Software only. Hardware acceleration is disabled."
},
{
"code": null,
"e": 34032,
"s": 34014,
"text": "disabled_off: Red"
},
{
"code": null,
"e": 34056,
"s": 34032,
"text": "disabled_off_ok: Yellow"
},
{
"code": null,
"e": 34135,
"s": 34056,
"text": "unavailable_software: Yellow Software only, hardware acceleration unavailable."
},
{
"code": null,
"e": 34156,
"s": 34135,
"text": "unavailable_off: Red"
},
{
"code": null,
"e": 34183,
"s": 34156,
"text": "unavailable_off_ok: Yellow"
},
{
"code": null,
"e": 34257,
"s": 34183,
"text": "enabled_readback: Yellow Hardware-accelerated but at reduced performance."
},
{
"code": null,
"e": 34313,
"s": 34257,
"text": "enabled_force: Green Hardware-accelerated on all pages."
},
{
"code": null,
"e": 34350,
"s": 34313,
"text": "enabled: Green Hardware accelerated."
},
{
"code": null,
"e": 34368,
"s": 34350,
"text": "enabled_on: Green"
},
{
"code": null,
"e": 34407,
"s": 34368,
"text": "enabled_force_on: Green Force enabled."
},
{
"code": null,
"e": 34704,
"s": 34407,
"text": "The app.disableHardwareAcceleration() Instance method of the app module disables Hardware acceleration for the entire application. This Instance method can only be used before the ready event of the app module is emitted. Hence, this method needs to be called in the main.js file (Main Process). "
},
{
"code": null,
"e": 34715,
"s": 34704,
"text": "Javascript"
},
{
"code": "const { app, BrowserWindow } = require('electron')app.disableHardwareAcceleration();",
"e": 34800,
"s": 34715,
"text": null
},
{
"code": null,
"e": 35456,
"s": 34800,
"text": "In an Electron application, Chromium disables 3D APIs (e.g. WebGL) until a restart of the application on a per-domain basis if the GPU processes crash too frequently. This is the default behavior of Chromium. There can be a variety of reasons which can cause the GPU processes to crash frequently including problems in the System hardware or overuse of system resources. The app.disableDomainBlockingFor3DAPIs() Instance method of the app module disables this default behavior of Chromium. This Instance method can only be used before the ready event of the app module is emitted. Hence, this method needs to be called in the main.js file (Main Process). "
},
{
"code": null,
"e": 35467,
"s": 35456,
"text": "Javascript"
},
{
"code": "const { app, BrowserWindow } = require('electron')app.disableHardwareAcceleration();app.disableDomainBlockingFor3DAPIs();",
"e": 35589,
"s": 35467,
"text": null
},
{
"code": null,
"e": 35984,
"s": 35589,
"text": "The app.getGPUInfo(info) Instance method fetches and returns the GPU Information from Chromium related to the Electron application. This Instance method returns a Promise and it is resolved to an object containing the relevant information based on the info: String parameter provided. Refer to the output for a better understanding. The info parameter can hold any one of the following values: "
},
{
"code": null,
"e": 36408,
"s": 35984,
"text": "complete: The Promise returned is fulfilled with an object containing all the GPU Information as mentioned in the official Chromium’s GPUInfo object documentation. This includes the version and driver information that’s shown on chrome://gpu page of the Chromium browser. When info: complete, the Instance method takes a much longer time to execute as compared to info: basic. Refer to the Output for better understanding. "
},
{
"code": null,
"e": 36447,
"s": 36408,
"text": "GPU Driver and Version Information – 1"
},
{
"code": null,
"e": 36486,
"s": 36447,
"text": "GPU Driver and Version Information – 2"
},
{
"code": null,
"e": 36818,
"s": 36486,
"text": "basic: The Promise returned is fulfilled with an object containing only a fewer and more essential parameters than the object returned when requested with info: complete. This value should be used if only basic information like vendorId parameter or driverId parameter is needed. The sample parameters returned are displayed below:"
},
{
"code": null,
"e": 37467,
"s": 36818,
"text": "{ auxAttributes:\n { amdSwitchable: true,\n canSupportThreadedTextureMailbox: false,\n directComposition: false,\n directRendering: true,\n glResetNotificationStrategy: 0,\n inProcessGpu: true,\n initializationTime: 0,\n jpegDecodeAcceleratorSupported: false,\n optimus: false,\n passthroughCmdDecoder: false,\n sandboxed: false,\n softwareRendering: false,\n supportsOverlays: false,\n videoDecodeAcceleratorFlags: 0 },\ngpuDevice:\n [ { active: true, deviceId: 26657, vendorId: 4098 },\n { active: false, deviceId: 3366, vendorId: 32902 } ],\nmachineModelName: 'MacBookPro',\nmachineModelVersion: '11.5' }"
},
{
"code": null,
"e": 37549,
"s": 37467,
"text": "The app module emits the following Instance Events which are related to the GPU. "
},
{
"code": null,
"e": 37857,
"s": 37549,
"text": "gpu-info-update: Event This Instance event is emitted whenever there is a change in any GPU related information for the different processes within the application. Based on the usage, functionality and the number of processes active within the application, this Instance event can be emitted multiple times."
},
{
"code": null,
"e": 38293,
"s": 37857,
"text": "gpu-process-crashed: Event This Instance event is emitted whenever the GPU process crashes or is killed by the native OS. This can cause the application to hang if not handled and hence we can use this Instance event to take necessary action and make the application exit cleanly. This event returns the following parameters.event: Event The global Event object.killed: Boolean This parameter represents whether the Process was killed."
},
{
"code": null,
"e": 38331,
"s": 38293,
"text": "event: Event The global Event object."
},
{
"code": null,
"e": 38405,
"s": 38331,
"text": "killed: Boolean This parameter represents whether the Process was killed."
},
{
"code": null,
"e": 38552,
"s": 38405,
"text": "At this point, upon launching the Electron application, we should be able to fetch and display all GPU related Information in the console output. "
},
{
"code": null,
"e": 38561,
"s": 38552,
"text": "Output: "
},
{
"code": null,
"e": 38768,
"s": 38561,
"text": "Note – We have used the console.dir() JavaScript method to output and display an object in the Console window of Chrome DevTools. To display objects, this method is preferred over the console.log() method. "
},
{
"code": null,
"e": 38869,
"s": 38768,
"text": "For more detailed information. Refer to the article: Difference between console.dir and console.log."
}
] |
Deploy Machine Learning applications to Kubernetes using Streamlit and Polyaxon | by Mourad Mourafiq | Towards Data Science
|
Brief introduction to containers, Kubernetes, Streamlit, and Polyaxon.
Create a Kubernetes cluster and deploy Polyaxon with Helm.
How to explore datasets on a Jupyter Notebook running on a Kubernetes cluster.
How to train multiple versions of a machine learning model using Polyaxon on Kubernetes.
How to save a machine learning model.
How to analyze the models using Polyaxon UI.
How to expose the model with a user interface using Streamlit and make new predictions.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.
In our guide we will use containers to package our code and dependencies and easily deploy them on Kubernetes.
Kubernetes is a powerful open-source distributed system for managing containerized applications. In simple words, Kubernetes is a system for running and orchestrating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications.
Why should I use Kubernetes?
Load Balancing: Automatically distributes the load between containers.
Scaling: Automatically scale up or down by adding or removing containers when demand changes such as peak hours, weekends and holidays.
Storage: Keeps storage consistent with multiple instances of an application.
Self-healing: Automatically restarts containers that fail and kills containers that don’t respond to your user-defined health check.
Automated Rollouts: You can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources to the new containers.
Streamlit is an open-source framework for creating interactive, beautiful visualization apps, all in Python! (A minimal sketch follows the list below.)
Streamlit provides many useful features that can be very helpful in making visualizations for data-driven projects.
Example of Face-GAN explorer using Streamlit
Why should I use Streamlit?
Simple and easy way to create an interactive user interface
Requires zero development experience
It’s fun making use of different functions in your data-driven projects :)
Comprehensive documentation
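As a quick illustration of how little code an interactive app takes, here is a minimal, hypothetical Streamlit sketch (not part of this guide); it assumes only the streamlit package and would be launched with streamlit run app.py:

import streamlit as st

st.title("Hello, Streamlit")
name = st.text_input("Your name", value="world")
st.write(f"Hello, {name}!")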
Polyaxon is an open-source, cloud-native machine learning platform that provides simple interfaces to train, monitor, and manage models.
Polyaxon runs on top of Kubernetes to allow scaling up and down of your cluster’s resources, and provides tools to automate the process of experimentation, while tracking information about models, configurations, parameters, and code.
Why should I use Polyaxon?
Automatically track key model metrics, hyperparameters, visualizations, artifacts and resources, and version control code and data.
Maximize the usage of your cluster by scheduling jobs and experiments via the CLI, dashboard, SDKs, or REST API.
Use optimization algorithms to effectively run parallel experiments and find the best model.
Visualize, search, and compare experiment results, hyperparams, training data and source code versions, so you can quickly analyze what worked and what didn’t.
Consistently develop, validate, deliver, and monitor models to create a competitive advantage.
Scale your resources as needed, and run jobs and experiments on any platform (AWS, Microsoft Azure, Google Cloud Platform, and on-premises hardware).
Helm is the package manager for Kubernetes; it allows us to deploy and manage the life cycle of cloud-native projects like Polyaxon.
In this tutorial we will be using Azure Kubernetes Service (AKS), a fully managed Kubernetes service on Azure. If you do not have an account with Azure, you can sign-up here for a free account.
In future posts, we will provide similar instructions for running this guide on Google Cloud Platform (GKE), AWS (EKS), and a local cluster with Minikube.
The purpose of this tutorial is to get hands-on experience of running machine learning experimentation and deployment on Kubernetes. Let’s get started by creating our workspace.
Let’s create a simple Kubernetes cluster on AKS with a single node:
az aks create --resource-group myResourceGroup --name streamlit-polyaxon --node-count 1 --enable-addons monitoring --generate-ssh-keys
To make sure you are on the right cluster, you can execute the command:
az aks get-credentials --resource-group myResourceGroup --name streamlit-polyaxon
Install Helm on your local machine to be able to manage Polyaxon as well as other cloud native projects that you might want to run on Kubernetes.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add polyaxon https://charts.polyaxon.com
pip install -U polyaxon
polyaxon admin deploy
kubectl get deployment -n polyaxon -w
This should take about 3 min:
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
polyaxon-polyaxon-api        1/1     1            1           3m17s
polyaxon-polyaxon-gateway    1/1     1            1           3m17s
polyaxon-polyaxon-operator   1/1     1            1           3m17s
polyaxon-polyaxon-streams    1/1     1            1           3m17s
Polyaxon provides a simple command to expose the dashboard and the API in a secure way on your localhost:
polyaxon port-forward
In a different terminal session than the one used for exposing the dashboard, run:
polyaxon project create --name=streamlit-app
You should see:
Project `streamlit-app` was created successfully.
You can view this project on Polyaxon UI: http://localhost:8000/ui/root/streamlit-app/
Now we can move to the next section: training and analyzing a model.
In this tutorial we will train a model to classify Iris flower species from its features.
Iris features: Sepal, Petal, lengths, and widths
We will start first by exploring the iris dataset in a notebook session running on our Kubernetes cluster.
Let’s start a new notebook session and wait until it reaches the running state:
polyaxon run --hub jupyterlab -w
Polyaxon provides a list of highly productive components, called hub, and allows you to start a notebook session with a single command. Behind the scenes, Polyaxon will create a Kubernetes deployment and a headless service, and will expose the service through Polyaxon’s API. For more details, please check Polyaxon’s open-source hub.
After a couple of seconds the notebook will be running.
Note: if you stopped the previous command, you can always get the last (cached) running operation by executing the command:
polyaxon ops service
Let’s create a new notebook and start by examining the dataset’s features:
Commands executed:
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.feature_names)
print(iris.target_names)
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
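On a standard scikit-learn installation, these commands should print output along the following lines (reproduced here for reference, not taken from the original notebook; the target array is abbreviated):

['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
['setosa' 'versicolor' 'virginica']
(150, 4)
(150,)
[0 0 0 ... 2 2 2]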
The dataset is about the Iris flower species:
scikit-learn offers different classes of algorithms; in the scope of this tutorial, we will use the Nearest Neighbors algorithm.
Before we create a robust script, we will play around with a simple model in our notebook session:
Commands executed:
from sklearn.neighbors import KNeighborsClassifier

X = iris.data
y = iris.target
classifier = KNeighborsClassifier(n_neighbors=3)

# Fit the model
classifier.fit(X, y)

# Predict new data
new_data = [[3, 2, 5.3, 2.9]]
print(classifier.predict(new_data))

# Show the results
print(iris.target_names[classifier.predict(new_data)])
In this case we used n_neighbors=3 and the complete dataset for training the model.
In order to explore different variants of our model, we need to turn it into a script and parametrize the inputs and outputs so that we can easily change parameters such as n_neighbors. We also need to establish a rigorous way of estimating the performance of the model.
A practical way of doing that is to create an evaluation procedure in which we split the dataset into a training set and a testing set. We train the model on the training set and evaluate it on the testing set.
scikit-learn provides methods to split a dataset:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1012)
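Continuing from that split, a quick sketch (not from the original notebook) of the evaluation procedure itself would look like this:

from sklearn import metrics

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X_train, y_train)               # fit only on the training split
y_pred = classifier.predict(X_test)            # predict on data the model has not seen
print(metrics.accuracy_score(y_test, y_pred))  # accuracy on the held-out test set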
Now that we have established some practices, let’s create a function that accepts parameters, trains the model, saves it, and returns the resulting scores:
Commands executed:
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
from sklearn.datasets import load_iris

try:
    from sklearn.externals import joblib
except ImportError:
    import joblib  # newer scikit-learn versions ship joblib as a separate package


def train_and_eval(
    n_neighbors=3,
    leaf_size=30,
    metric='minkowski',
    p=2,
    weights='uniform',
    test_size=0.3,
    random_state=1012,
    model_path=None,
):
    iris = load_iris()
    X = iris.data
    y = iris.target
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
    classifier = KNeighborsClassifier(n_neighbors=n_neighbors, leaf_size=leaf_size, metric=metric, p=p, weights=weights)
    classifier.fit(X_train, y_train)
    y_pred = classifier.predict(X_test)
    # Score the predictions against the held-out labels
    accuracy = metrics.accuracy_score(y_test, y_pred)
    recall = metrics.recall_score(y_test, y_pred, average='weighted')
    f1 = metrics.f1_score(y_test, y_pred, average='weighted')
    results = {
        'accuracy': accuracy,
        'recall': recall,
        'f1': f1,
    }
    if model_path:
        joblib.dump(classifier, model_path)
    return results
Now we have a function that accepts parameters, evaluates the model based on different inputs, saves the model, and returns the results. However, this is still very manual, and for larger and more complex models it quickly becomes impractical.
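For example, trying a couple of configurations by hand would look something like this (illustrative only; the parameter values are arbitrary):

results = train_and_eval(n_neighbors=3)
print(results)  # e.g. {'accuracy': ..., 'recall': ..., 'f1': ...}

results = train_and_eval(n_neighbors=50, weights='distance', model_path='model.joblib')
print(results)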
Instead of running the model by manually changing the values in the notebook, we will create a script and run the model using Polyaxon. We will also log the resulting metrics and model using Polyaxon’s tracking module.
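A rough sketch of how the metrics and model might be logged from the training script is shown below; the tracking.init and log_metrics calls are assumptions based on Polyaxon's tracking client, while log_model matches the call shown later in this article, and the actual script lives in the repository linked below:

from polyaxon import tracking

tracking.init()                                   # attach this run to Polyaxon
results = train_and_eval(n_neighbors=3, model_path="model.joblib")
tracking.log_metrics(**results)                   # log accuracy, recall, and f1
tracking.log_model("model.joblib", name="iris-model", framework="scikit-learn")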
The code for the model that we will train can be found in this github repo.
Running the example with the default parameters:
polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/polyaxonfile.yml -l
Running with different parameters:
polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/polyaxonfile.yml -l -P n_neighbors=50
Instead of manually changing the parameters, we will automate this process by exploring a space of configurations:
polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/hyper-polyaxonfile.yml --eager
You will see the CLI creating several experiments that will run in parallel:
Starting eager mode...
Creating 15 operations
A new run `b6cdaaee8ce74e25bc057e23196b24e6` was created
...
Sorting the experiments based on their accuracy metric
Comparing accuracy against n_neighbors
In our script we used Polyaxon to log a model every time we run an experiment:
# Logging the model
tracking.log_model(model_path, name="iris-model", framework="scikit-learn")
We will deploy a simple Streamlit app that loads our model, makes a prediction based on the input features, and displays an image corresponding to the predicted flower class.
import streamlit as st
import pandas as pd
import joblib
import argparse
from PIL import Image


def load_model(model_path: str):
    model = open(model_path, "rb")
    return joblib.load(model)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model-path',
        type=str,
    )
    args = parser.parse_args()

    setosa = Image.open("images/iris-setosa.png")
    versicolor = Image.open("images/iris-versicolor.png")
    virginica = Image.open("images/iris-virginica.png")

    classifier = load_model(args.model_path)
    print(classifier)

    st.title("Iris flower species Classification")
    st.sidebar.title("Features")
    parameter_list = [
        "Sepal length (cm)",
        "Sepal Width (cm)",
        "Petal length (cm)",
        "Petal Width (cm)"
    ]
    sliders = []
    for parameter, parameter_df in zip(parameter_list, ['5.2', '3.2', '4.2', '1.2']):
        values = st.sidebar.slider(
            label=parameter,
            key=parameter,
            value=float(parameter_df),
            min_value=0.0,
            max_value=8.0,
            step=0.1
        )
        sliders.append(values)

    input_variables = pd.DataFrame([sliders], columns=parameter_list)
    prediction = classifier.predict(input_variables)
    # Display the image that matches the predicted class
    if prediction == 0:
        st.image(setosa)
    elif prediction == 1:
        st.image(versicolor)
    else:
        st.image(virginica)
Let’s schedule the app with Polyaxon
polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/streamlit-polyaxonfile.yml -P uuid=86ffaea976c647fba813fca9153781ff
Note that the uuid 86ffaea976c647fba813fca9153781ff will be different in your use case.
In this tutorial, we went through an end-to-end process of training and deploying a simple classification app using Kubernetes, Streamlit, and Polyaxon.
You can find the source code for this tutorial in this repo.
|
[
{
"code": null,
"e": 243,
"s": 172,
"text": "Brief introduction to containers, Kubernetes, Streamlit, and Polyaxon."
},
{
"code": null,
"e": 302,
"s": 243,
"text": "Create a Kubernetes cluster and deploy Polyaxon with Helm."
},
{
"code": null,
"e": 381,
"s": 302,
"text": "How to explore datasets on a Jupyter Notebook running on a Kubernetes cluster."
},
{
"code": null,
"e": 470,
"s": 381,
"text": "How to train multiple versions of a machine learning model using Polyaxon on Kubernetes."
},
{
"code": null,
"e": 508,
"s": 470,
"text": "How to save a machine learning model."
},
{
"code": null,
"e": 553,
"s": 508,
"text": "How to analyze the models using Polyaxon UI."
},
{
"code": null,
"e": 641,
"s": 553,
"text": "How to expose the model with a user interface using Streamlit and make new predictions."
},
{
"code": null,
"e": 924,
"s": 641,
"text": "A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers."
},
{
"code": null,
"e": 1035,
"s": 924,
"text": "In our guide we will use containers to package our code and dependencies and easily deploy them on Kubernetes."
},
{
"code": null,
"e": 1352,
"s": 1035,
"text": "Kubernetes is a powerful open-source distributed system for managing containerized applications. In simple words, Kubernetes is a system for running and orchestrating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications."
},
{
"code": null,
"e": 1381,
"s": 1352,
"text": "Why should I use Kubernetes."
},
{
"code": null,
"e": 1452,
"s": 1381,
"text": "Load Balancing: Automatically distributes the load between containers."
},
{
"code": null,
"e": 1588,
"s": 1452,
"text": "Scaling: Automatically scale up or down by adding or removing containers when demand changes such as peak hours, weekends and holidays."
},
{
"code": null,
"e": 1665,
"s": 1588,
"text": "Storage: Keeps storage consistent with multiple instances of an application."
},
{
"code": null,
"e": 1797,
"s": 1665,
"text": "Self-healing Automatically restarts containers that fail and kills containers that don’t respond to your user-defined health check."
},
{
"code": null,
"e": 1972,
"s": 1797,
"text": "Automated Rollouts you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all of their resources to the new container."
},
{
"code": null,
"e": 2080,
"s": 1972,
"text": "Streamlit is an open-source framework to create an interactive, beautiful visualization app. All in python!"
},
{
"code": null,
"e": 2196,
"s": 2080,
"text": "Streamlit provides many useful features that can be very helpful in making visualizations for data-driven projects."
},
{
"code": null,
"e": 2241,
"s": 2196,
"text": "Example of Face-GAN explorer using Streamlit"
},
{
"code": null,
"e": 2269,
"s": 2241,
"text": "Why should I use Streamlit?"
},
{
"code": null,
"e": 2329,
"s": 2269,
"text": "Simple and easy way to create an interactive user interface"
},
{
"code": null,
"e": 2366,
"s": 2329,
"text": "Requires zero development experience"
},
{
"code": null,
"e": 2440,
"s": 2366,
"text": "It’s fun making use of different function in your data-driven projects :)"
},
{
"code": null,
"e": 2468,
"s": 2440,
"text": "Comprehensive documentation"
},
{
"code": null,
"e": 2605,
"s": 2468,
"text": "Polyaxon is an open-source cloud native machine learning platform, that provides simple interfaces to train, monitor, and manage models."
},
{
"code": null,
"e": 2840,
"s": 2605,
"text": "Polyaxon runs on top of Kubernetes to allow scaling up and down of your cluster’s resources, and provides tools to automate the process of experimentation, while tracking information about models, configurations, parameters, and code."
},
{
"code": null,
"e": 2867,
"s": 2840,
"text": "Why should I use Polyaxon?"
},
{
"code": null,
"e": 2999,
"s": 2867,
"text": "Automatically track key model metrics, hyperparameters, visualizations, artifacts and resources, and version control code and data."
},
{
"code": null,
"e": 3112,
"s": 2999,
"text": "Maximize the usage of your cluster by scheduling jobs and experiments via the CLI, dashboard, SDKs, or REST API."
},
{
"code": null,
"e": 3205,
"s": 3112,
"text": "Use optimization algorithms to effectively run parallel experiments and find the best model."
},
{
"code": null,
"e": 3365,
"s": 3205,
"text": "Visualize, search, and compare experiment results, hyperparams, training data and source code versions, so you can quickly analyze what worked and what didn’t."
},
{
"code": null,
"e": 3460,
"s": 3365,
"text": "Consistently develop, validate, deliver, and monitor models to create a competitive advantage."
},
{
"code": null,
"e": 3610,
"s": 3460,
"text": "Scale your resources as needed, and run jobs and experiments on any platform (AWS, Microsoft Azure, Google Cloud Platform, and on-premises hardware)."
},
{
"code": null,
"e": 3743,
"s": 3610,
"text": "Helm is the package manager for Kubernetes, it allows us to deploy and manage the life cycle of cloud native projects like Polyaxon."
},
{
"code": null,
"e": 3937,
"s": 3743,
"text": "In this tutorial we will be using Azure Kubernetes Service (AKS), a fully managed Kubernetes service on Azure. If you do not have an account with Azure, you can sign-up here for a free account."
},
{
"code": null,
"e": 4091,
"s": 3937,
"text": "In future posts, we will provide similar instructions of running this guide on Google Cloud Platform (GKE), AWS (EKS), and a local cluster with Minikube."
},
{
"code": null,
"e": 4269,
"s": 4091,
"text": "The purpose of this tutorial is to get hands-on experience of running machine learning experimentation and deployment on Kubernetes. Let’s get started by creating our workspace."
},
{
"code": null,
"e": 4337,
"s": 4269,
"text": "Let’s create a simple Kubernetes cluster on AKS with a single node:"
},
{
"code": null,
"e": 4472,
"s": 4337,
"text": "az aks create --resource-group myResourceGroup --name streamlit-polyaxon --node-count 1 --enable-addons monitoring --generate-ssh-keys"
},
{
"code": null,
"e": 4542,
"s": 4472,
"text": "To make sure you are on the right cluster you can execute the command"
},
{
"code": null,
"e": 4624,
"s": 4542,
"text": "az aks get-credentials --resource-group myResourceGroup --name streamlit-polyaxon"
},
{
"code": null,
"e": 4770,
"s": 4624,
"text": "Install Helm on your local machine to be able to manage Polyaxon as well as other cloud native projects that you might want to run on Kubernetes."
},
{
"code": null,
"e": 4900,
"s": 4770,
"text": "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3chmod 700 get_helm.sh./get_helm.sh"
},
{
"code": null,
"e": 4951,
"s": 4900,
"text": "helm repo add polyaxon https://charts.polyaxon.com"
},
{
"code": null,
"e": 4975,
"s": 4951,
"text": "pip install -U polyaxon"
},
{
"code": null,
"e": 4997,
"s": 4975,
"text": "polyaxon admin deploy"
},
{
"code": null,
"e": 5035,
"s": 4997,
"text": "kubectl get deployment -n polyaxon -w"
},
{
"code": null,
"e": 5065,
"s": 5035,
"text": "This should take about 3 min:"
},
{
"code": null,
"e": 5399,
"s": 5065,
"text": "NAME READY UP-TO-DATE AVAILABLE AGEpolyaxon-polyaxon-api 1/1 1 1 3m17spolyaxon-polyaxon-gateway 1/1 1 1 3m17spolyaxon-polyaxon-operator 1/1 1 1 3m17spolyaxon-polyaxon-streams 1/1 1 1 3m17s"
},
{
"code": null,
"e": 5505,
"s": 5399,
"text": "Polyaxon provides a simple command to expose the dashboard and the API in a secure way on your localhost:"
},
{
"code": null,
"e": 5527,
"s": 5505,
"text": "polyaxon port-forward"
},
{
"code": null,
"e": 5610,
"s": 5527,
"text": "In a different terminal session than the one used for exposing the dashboard, run:"
},
{
"code": null,
"e": 5655,
"s": 5610,
"text": "polyaxon project create --name=streamlit-app"
},
{
"code": null,
"e": 5671,
"s": 5655,
"text": "You should see:"
},
{
"code": null,
"e": 5807,
"s": 5671,
"text": "Project `streamlit-app` was created successfully.You can view this project on Polyaxon UI: http://localhost:8000/ui/root/streamlit-app/"
},
{
"code": null,
"e": 5876,
"s": 5807,
"text": "Now we can move to the next section: training and analyzing a model."
},
{
"code": null,
"e": 5966,
"s": 5876,
"text": "In this tutorial we will train a model to classify Iris flower species from its features."
},
{
"code": null,
"e": 6015,
"s": 5966,
"text": "Iris features: Sepal, Petal, lengths, and widths"
},
{
"code": null,
"e": 6122,
"s": 6015,
"text": "We will start first by exploring the iris dataset in a notebook session running on our Kubernetes cluster."
},
{
"code": null,
"e": 6202,
"s": 6122,
"text": "Let’s start a new notebook session and wait until it reaches the running state:"
},
{
"code": null,
"e": 6235,
"s": 6202,
"text": "polyaxon run --hub jupyterlab -w"
},
{
"code": null,
"e": 6562,
"s": 6235,
"text": "Polyaxon provides a list of highly productive components, called hub, and allows to start a notebook session using a single command. behind the scene Polyaxon will create a Kubernetes deployment and a headless service, and will expose the service using Polyaxon’s API. For more details please check Polyaxon’s open-source hub."
},
{
"code": null,
"e": 6618,
"s": 6562,
"text": "After a couple of seconds the notebook will be running."
},
{
"code": null,
"e": 6742,
"s": 6618,
"text": "Note: if you stopped the previous command, you can always get the last (cached) running operation by executing the command:"
},
{
"code": null,
"e": 6763,
"s": 6742,
"text": "polyaxon ops service"
},
{
"code": null,
"e": 6838,
"s": 6763,
"text": "Let’s create a new notebook and start by examining the dataset’s features:"
},
{
"code": null,
"e": 6857,
"s": 6838,
"text": "Commands executed:"
},
{
"code": null,
"e": 7026,
"s": 6857,
"text": "from sklearn.datasets import load_irisiris= load_iris()print(iris.feature_names)print(iris.target_names)print(iris.data.shape)print(iris.target.shape)print(iris.target)"
},
{
"code": null,
"e": 7072,
"s": 7026,
"text": "The dataset is about the Iris flower species:"
},
{
"code": null,
"e": 7212,
"s": 7072,
"text": "There are different classes of algorithms that scikit-learn offers, in the scope of this tutorial, we will use Nearest Neighbors algorithm."
},
{
"code": null,
"e": 7311,
"s": 7212,
"text": "Before we create a robust script, we will play around with a simple model in our notebook session:"
},
{
"code": null,
"e": 7330,
"s": 7311,
"text": "Commands executed:"
},
{
"code": null,
"e": 7646,
"s": 7330,
"text": "from sklearn.neighbors import KNeighborsClassifierX = iris.datay = iris.targetclassifier = KNeighborsClassifier(n_neighbors=3)# Fit the modelclassifier.fit(X, y)# Predict new datanew_data = [[3, 2, 5.3, 2.9]]print(classifier.predict(new_data))# Show the resultsprint(iris.target_names[classifier.predict(new_data)])"
},
{
"code": null,
"e": 7730,
"s": 7646,
"text": "In this case we used n_neighbors=3 and the complete dataset for training the model."
},
{
"code": null,
"e": 8003,
"s": 7730,
"text": "In order to explore different variants of our model, we need to make a script for our model, and parametrize the inputs and outputs, to easily change the parameters such as n_neighbors we also need to establish some rigorous way of estimating the performance of the model."
},
{
"code": null,
"e": 8206,
"s": 8003,
"text": "A practical way of doing that, is by creating an evaluation procedure where we would split the dataset to training and testing. We train the model on the training set and evaluate it on the testing set."
},
{
"code": null,
"e": 8256,
"s": 8206,
"text": "scikit-learn provides methods to split a dataset:"
},
{
"code": null,
"e": 8400,
"s": 8256,
"text": "from sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1012)"
},
{
"code": null,
"e": 8537,
"s": 8400,
"text": "Now that we established some practices let’s create a function that accepts parameters, trains the model, and saves the resulting score:"
},
{
"code": null,
"e": 8556,
"s": 8537,
"text": "Commands executed:"
},
{
"code": null,
"e": 9667,
"s": 8556,
"text": "from sklearn.model_selection import train_test_splitfrom sklearn.neighbors import KNeighborsClassifierfrom sklearn import metricsfrom sklearn.datasets import load_iristry: from sklearn.externals import joblibexcept: passdef train_and_eval( n_neighbors=3, leaf_size=30, metric='minkowski', p=2, weights='uniform', test_size=0.3, random_state=1012, model_path=None,): iris = load_iris() X = iris.data y = iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state) classifier = KNeighborsClassifier(n_neighbors=n_neighbors, leaf_size=leaf_size, metric=metric, p=p, weights=weights) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) accuracy = metrics.accuracy_score(y_test, y_pred) recall = metrics.recall_score(y_test, y_pred, average='weighted') f1 = metrics.f1_score(y_pred, y_pred, average='weighted') results = { 'accuracy': accuracy, 'recall': recall, 'f1': f1, } if model_path: joblib.dump(classifier, model_path) return results"
},
{
"code": null,
"e": 9898,
"s": 9667,
"text": "Now we have a script that accepts parameters to evaluate the model based on different inputs, saves the model and returns the results, but this is still very manual, and for larger and more complex models this is very impractical."
},
{
"code": null,
"e": 10117,
"s": 9898,
"text": "Instead of running the model by manually changing the values in the notebook, we will create a script and run the model using Polyaxon. We will also log the resulting metrics and model using Polyaxon’s tracking module."
},
{
"code": null,
"e": 10193,
"s": 10117,
"text": "The code for the model that we will train can be found in this github repo."
},
{
"code": null,
"e": 10242,
"s": 10193,
"text": "Running the example with the default parameters:"
},
{
"code": null,
"e": 10373,
"s": 10242,
"text": "polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/polyaxonfile.yml -l"
},
{
"code": null,
"e": 10410,
"s": 10373,
"text": "Running with a different parameters:"
},
{
"code": null,
"e": 10559,
"s": 10410,
"text": "polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/polyaxonfile.yml -l -P n_neighbors=50"
},
{
"code": null,
"e": 10674,
"s": 10559,
"text": "Instead of manually changing the parameters, we will automate this process by exploring a space of configurations:"
},
{
"code": null,
"e": 10816,
"s": 10674,
"text": "polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/hyper-polyaxonfile.yml --eager"
},
{
"code": null,
"e": 10893,
"s": 10816,
"text": "You will see the CLI creating several experiments that will run in parallel:"
},
{
"code": null,
"e": 10997,
"s": 10893,
"text": "Starting eager mode...Creating 15 operationsA new run `b6cdaaee8ce74e25bc057e23196b24e6` was created..."
},
{
"code": null,
"e": 11052,
"s": 10997,
"text": "Sorting the experiments based on their accuracy metric"
},
{
"code": null,
"e": 11091,
"s": 11052,
"text": "Comparing accuracy against n_neighbors"
},
{
"code": null,
"e": 11170,
"s": 11091,
"text": "In our script we used Polyaxon to log a model every time we run an experiment:"
},
{
"code": null,
"e": 11265,
"s": 11170,
"text": "# Logging the modeltracking.log_model(model_path, name=\"iris-model\", framework=\"scikit-learn\")"
},
{
"code": null,
"e": 11450,
"s": 11265,
"text": "We will deploy a simple streamlit app that will load our model and display an app that makes a prediction based on the features and displays an image corresponding to the flower class."
},
{
"code": null,
"e": 12815,
"s": 11450,
"text": "import streamlit as stimport pandas as pdimport joblibimport argparsefrom PIL import Imagedef load_model(model_path: str): model = open(model_path, \"rb\") return joblib.load(model)if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( '--model-path', type=str, ) args = parser.parse_args() setosa = Image.open(\"images/iris-setosa.png\") versicolor = Image.open(\"images/iris-versicolor.png\") virginica = Image.open(\"images/iris-virginica.png\") classifier = load_model(args.model_path) print(classifier) st.title(\"Iris flower species Classification\") st.sidebar.title(\"Features\") parameter_list = [ \"Sepal length (cm)\", \"Sepal Width (cm)\", \"Petal length (cm)\", \"Petal Width (cm)\" ] sliders = [] for parameter, parameter_df in zip(parameter_list, ['5.2', '3.2', '4.2', '1.2']): values = st.sidebar.slider( label=parameter, key=parameter, value=float(parameter_df), min_value=0.0, max_value=8.0, step=0.1 ) sliders.append(values) input_variables = pd.DataFrame([sliders], columns=parameter_list) prediction = classifier.predict(input_variables) if prediction == 0: elif prediction == 1: st.image(versicolor) else: st.image(virginica)"
},
{
"code": null,
"e": 12852,
"s": 12815,
"text": "Let’s schedule the app with Polyaxon"
},
{
"code": null,
"e": 13031,
"s": 12852,
"text": "polyaxon run --url=https://raw.githubusercontent.com/polyaxon/polyaxon-examples/master/in_cluster/sklearn/iris/streamlit-polyaxonfile.yml -P uuid=86ffaea976c647fba813fca9153781ff"
},
{
"code": null,
"e": 13119,
"s": 13031,
"text": "Note that the uuid 86ffaea976c647fba813fca9153781ff will be different in your use case."
},
{
"code": null,
"e": 13272,
"s": 13119,
"text": "In this tutorial, we went through an end-to-end process of training and deploying a simple classification app using Kubernetes, Streamlit, and Polyaxon."
}
] |
Python Program to Accept Three Digits and Print all Possible Combinations from the Digits
|
When it is required to print all possible combinations of the digits taken as input from the user, nested loops are used.
Below is a demonstration of the same −
first_num = int(input("Enter the first number..."))
second_num = int(input("Enter the second number..."))
third_num = int(input("Enter the third number..."))
my_list = []
print("The first number is ")
print(first_num)
print("The second number is ")
print(second_num)
print("The third number is ")
print(third_num)
my_list.append(first_num)
my_list.append(second_num)
my_list.append(third_num)
for i in range(0, 3):
    for j in range(0, 3):
        for k in range(0, 3):
            if i != j and j != k and k != i:
                print(my_list[i], my_list[j], my_list[k])
Enter the first number...3
Enter the second number...5
Enter the third number...8
The first number is
3
The second number is
5
The third number is
8
3 5 8
3 8 5
5 3 8
5 8 3
8 3 5
8 5 3
The three numbers are taken as input from the user.
An empty list is created.
The three numbers are displayed on the console.
These numbers are appended to the empty list.
Three nested loops are used, and the indices are iterated over.
When the indices are not equal to each other, the corresponding combinations are displayed as output on the console.
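As an aside, the same output can be produced more concisely with itertools.permutations from the standard library (this variant is not part of the original program):

from itertools import permutations

first_num = int(input("Enter the first number..."))
second_num = int(input("Enter the second number..."))
third_num = int(input("Enter the third number..."))

# Print every ordering of the three distinct positions
for combination in permutations([first_num, second_num, third_num], 3):
    print(*combination)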
|
[
{
"code": null,
"e": 1186,
"s": 1062,
"text": "When it is required to print all possible combination of digits when the input is taken from the user, nested loop is used."
},
{
"code": null,
"e": 1225,
"s": 1186,
"text": "Below is a demonstration of the same −"
},
{
"code": null,
"e": 1236,
"s": 1225,
"text": " Live Demo"
},
{
"code": null,
"e": 1784,
"s": 1236,
"text": "first_num = int(input(\"Enter the first number...\"))\nsecond_num = int(input(\"Enter the second number...\"))\nthird_num = int(input(\"Enter the third number...\"))\nmy_list = []\nprint(\"The first number is \")\nprint(first_num)\nprint(\"The second number is \")\nprint(second_num)\nprint(\"The third number is \")\nprint(third_num)\n\nmy_list.append(first_num)\nmy_list.append(second_num)\nmy_list.append(third_num)\n\nfor i in range(0,3):\n for j in range(0,3):\n for k in range(0,3):\n if(i!=j&j!=k&k!=i):\n print(my_list[i],my_list[j],my_list[k])"
},
{
"code": null,
"e": 1969,
"s": 1784,
"text": "Enter the first number...3\nEnter the second number...5\nEnter the third number...8\nThe first number is\n3\nThe second number is\n5\nThe third number is\n8\n3 5 8\n3 8 5\n5 3 8\n5 8 3\n8 3 5\n8 5 3"
},
{
"code": null,
"e": 2020,
"s": 1969,
"text": "The three numbers re taken as input from the user."
},
{
"code": null,
"e": 2071,
"s": 2020,
"text": "The three numbers re taken as input from the user."
},
{
"code": null,
"e": 2097,
"s": 2071,
"text": "An empty list is created."
},
{
"code": null,
"e": 2123,
"s": 2097,
"text": "An empty list is created."
},
{
"code": null,
"e": 2171,
"s": 2123,
"text": "The three numbers are displayed on the console."
},
{
"code": null,
"e": 2219,
"s": 2171,
"text": "The three numbers are displayed on the console."
},
{
"code": null,
"e": 2265,
"s": 2219,
"text": "These numbers are appended to the empty list."
},
{
"code": null,
"e": 2311,
"s": 2265,
"text": "These numbers are appended to the empty list."
},
{
"code": null,
"e": 2375,
"s": 2311,
"text": "Three nested loops are used, and the numbers are iterated over."
},
{
"code": null,
"e": 2439,
"s": 2375,
"text": "Three nested loops are used, and the numbers are iterated over."
},
{
"code": null,
"e": 2523,
"s": 2439,
"text": "When they are not equal, their combinations are displayed as output on the console."
},
{
"code": null,
"e": 2607,
"s": 2523,
"text": "When they are not equal, their combinations are displayed as output on the console."
}
] |