Dataset columns (name: dtype, value range or string length range):
GUI and Desktop Applications: int64, 0 to 1
A_Id: int64, 5.3k to 72.5M
Networking and APIs: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Available Count: int64, 1 to 13
is_accepted: bool, 2 classes
Q_Score: int64, 0 to 1.72k
CreationDate: string, lengths 23 to 23
Users Score: int64, -11 to 327
AnswerCount: int64, 1 to 31
System Administration and DevOps: int64, 0 to 1
Title: string, lengths 15 to 149
Q_Id: int64, 5.14k to 60M
Score: float64, -1 to 1.2
Tags: string, lengths 6 to 90
Answer: string, lengths 18 to 5.54k
Question: string, lengths 49 to 9.42k
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 1 to 1
ViewCount: int64, 7 to 3.27M
0
57,139,784
0
0
0
0
1
false
0
2019-07-19T10:48:00.000
0
1
0
ValueError: could not broadcast input array from shape (848,837,8) into shape (800,800,8)
57,110,876
0
python,deep-learning,valueerror
In my case, I was running the code in Python 2.7, which gives a different ceil value than Python 3.4, so the two versions report different answers.
When I run the code in a Kaggle kernel it works fine, but when executing the code on my machine it throws this error. Please help.
0
1
53
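The answer above hinges on the change in division behaviour between Python 2 and Python 3. A minimal sketch of that difference (not the asker's actual code, which isn't shown):

```python
import math

# Python 2 floors integer division, Python 3 returns a float
print(849 / 2)             # 424.5 on Python 3, 424 on Python 2
print(math.ceil(849 / 2))  # 425 on Python 3, 424(.0) on Python 2
print(849 // 2)            # 424 on both: use // when you want floor division everywhere
```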
0
58,452,764
0
0
0
0
1
false
13
2019-07-19T13:13:00.000
-1
3
0
import pandas results in ModuleNotFoundError :_lzma
57,113,269
-0.066568
pandas,python-3.7,ubuntu-18.04,lzma
I just upgraded pandas to version 0.25.1 and it works well.
On Ubuntu 18.04 with python 3.7.3, I'm attempting to import pandas but this fails because it can't find _lzma. I've verified that _lzma is installed with dpkg: /usr/lib/python3.7/lib-dynload/_lzma.cpython-37m-x86_64-linux-gnu.so. Oddly, _lzma is not a dependency of pandas (as specified by pip3).
0
1
19,345
0
57,150,251
0
0
0
0
1
false
1
2019-07-19T15:48:00.000
0
2
0
Automating both Uploading the training data label csv and Training processed the model in AutoML Vision Image classification
57,115,839
0
python,google-cloud-platform,automl,google-cloud-automl
I used the AutoML REST API for creating datasets and training the model. However, if I want to retrain a model on more data, I have to delete the previously trained model and then create and train a new one.
I have to manually upload the training data label CSV and click on 'train' to train the model. I want to automate all of this, preferably with Python.
0
1
109
0
57,118,303
0
0
0
0
1
false
1
2019-07-19T18:00:00.000
1
1
0
Spark read from second row like Pandas header=1
57,117,577
0.197375
python,csv,apache-spark,pyspark,apache-spark-sql
Looks like there's no option in Spark CSV to specify how many lines to skip. Here are some alternatives you can try: read with option("header", "true") and rename the columns using withColumnRenamed; read with option("header", "false") and select rows from the 2nd line onward using select; or, if the first character of the first line is different from all other lines, use the comment option to skip it. For example, if the first character of line #1 is D, set comment='D'. Just be careful: comment will skip any line that starts with D. Hope this helps.
In Pandas with Python I could use: for item in read_csv(csv_file, header=1) And in Spark I only have the option of true/false? df = spark.read.format("csv").option("header", "true").load('myfile.csv') How can I read starting from the second row in Spark? The suggested duplicate post is an outdated version of Spark. I am using the latest, 2.4.3.
0
1
1,631
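A minimal PySpark sketch of the first alternatives from the answer above; the file name comes from the question, while the column names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Option 1: let Spark consume line 1 as the header, then rename columns as needed
df = (spark.read.format("csv")
      .option("header", "true")
      .load("myfile.csv")
      .withColumnRenamed("old_col", "new_col"))  # placeholder column names

# Option 3: if every line you want to skip starts with a unique character (e.g. 'D'),
# the comment option drops those lines while reading
df2 = (spark.read.format("csv")
       .option("header", "false")
       .option("comment", "D")
       .load("myfile.csv"))
```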
0
57,124,598
0
0
0
0
1
false
2
2019-07-19T18:13:00.000
0
1
0
How does log in spark stage/tasks help in understanding actual spark transformation it corresponds to
57,117,739
0
python,scala,apache-spark,pyspark,apache-spark-sql
Break your execution down. It's the easiest way to understand where the error might be coming from. Running 500+ lines of code for the first time is never a good idea; you want to have intermediate results while you are working with it. Another way is to use an IDE and step through the code, which can help you understand where the error originated. I prefer PyCharm (Community Edition is free), but VS Code might be a good alternative too.
Often when debugging Spark job failures we can find the stage and task responsible for the failure, such as a String Index Out of Bounds exception, but it is difficult to understand which transformation caused it. The UI shows information such as Exchange/HashAggregate/Aggregate, but finding the actual transformation responsible becomes really difficult in 500+ lines of code. How can Spark task failures be debugged and traced back to the transformation responsible for them?
0
1
117
0
57,139,328
0
0
0
0
1
true
3
2019-07-20T16:32:00.000
3
1
0
Can CNN autoencoders have different input and output dimensions?
57,126,626
1.2
python,tensorflow,keras,deep-learning
An auto-encoder (AE) is an architecture that tries to encode your image into a lower-dimensional representation by simultaneously learning to reconstruct the data from that representation. Therefore AEs rely on unsupervised data (no labels needed) that is used both as the input and as the target (used in the loss). You can try using a U-Net based architecture for your use case. A U-Net forwards intermediate data representations to later layers of the network, which should assist with faster learning/mapping of the inputs into a new domain. You can also experiment with a simple architecture containing a few ResNet blocks without any downsampling layers, which might or might not be enough for your use case. If you want to dig a little deeper you can look into Disco-GAN and related methods; they explicitly try to map an image into a new domain while maintaining image information.
I am working on a problem which requires me to build a deep learning model that, based on a certain input image, has to output another image. It is worth noting that these two images are conceptually related but they don't have the same dimensions. At first I thought that a classical CNN with a final dense layer whose size is the product of the height and width of the output image would suit this case, but when training it was giving strange figures such as an accuracy of 0. While looking for some answers on the Internet I discovered the concept of CNN autoencoders and I was wondering if this approach could help me solve my problem. Among all the examples I saw, the input and output of an autoencoder had the same size and dimensions. At this point I wanted to ask if there is a type of CNN autoencoder that produces an output image with different dimensions compared to the input image.
0
1
1,346
0
57,135,408
0
0
0
0
1
false
0
2019-07-21T15:59:00.000
0
1
0
Out of sample predictions with LSTM
57,134,812
0
python,tensorflow,keras,deep-learning,time-series
There are two main ways to create a train/validation set in a time series situation: splitting your samples (taking, for example, 80% of the time series for training and 20% for validation), or splitting your time series in order (training your model on the first n-k values of the time series and validating on the remaining k values).
This is a general question about making real future predictions with an LSTM model using Keras & TensorFlow in Python (optionally R), for example for stock prices. I know there is a train/test split to measure the accuracy/performance of the model by comparing my results with the test prices. But I want to make real future / out-of-sample predictions. Does anyone have an idea and would like to share some thoughts on it? The only thing that came to mind was to use a rolling window, but that didn't work at all. So I'm glad about every tip you guys have.
0
1
279
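A small sketch of the two splitting strategies described in the answer above, assuming the series is already a NumPy array (the variable names and sizes are illustrative):

```python
import numpy as np

series = np.arange(100, dtype=float)  # stand-in for the real time series
k = 20

# 1) split by sampling: random 80/20 split of the samples
idx = np.random.permutation(len(series))
train_random, val_random = series[idx[:80]], series[idx[80:]]

# 2) split by time: train on the first n-k values, validate on the last k
train_time, val_time = series[:-k], series[-k:]
```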
0
57,139,854
0
0
0
0
1
false
0
2019-07-22T05:27:00.000
0
1
0
Can I use GCP for training only but predict with my own AI machine?
57,139,636
0
python,machine-learning,google-cloud-platform
Decide if you want to use Tensorflow or Keras etc. Prepare scripts to train and save model, and another script to use it for prediction. It should be simple enough to use GCP for training and download the model to use on your machine. You can choose to use a high end machine (lot of memory, cores, GPU) on GCP. Training in distributed mode may be more complex. Then download the model and use it on local machine. If you run into issues, post your scripts and ask another question.
My laptop has a problem training a big dataset, but not with predicting. Can I use Google Cloud Platform for training only, then export and download some sort of weights or model, so I can use it on my own laptop? If so, how do I do it?
0
1
35
0
57,216,482
0
0
0
0
1
false
0
2019-07-22T12:48:00.000
0
1
0
Weighted clustering using NearestNeighbors
57,146,403
0
python,scikit-learn,cluster-analysis
If you have an inverted index, enforcing a certain value to be required, while having other values optional and only used for similarity should be straightforward. Just think of the full-text search example with required and optional terms. Depending on how many queries you do, linear search as well as a "group by" approach may be fine.
I have a use case wherein I need to cluster N transactions, but with a constraint that a particular column value in the resultant clusters should be the same within each individual cluster. I have been using NearestNeighbors (NN) from sklearn for this purpose and it seems to work out to an extent. The distance metric chosen is cosine and the type of data is categorical; one-hot encoding is done before the actual clustering. Now if I have columns c1, c2, ..., cn which are used along with NN for clustering, and I want to enforce the criterion that for a particular derived cluster Gi there should be a single unique value for column cx within Gi, how would I enforce this? I went through a couple of documents, and some of the techniques indirectly suggest grouping by the column cx and then doing the clustering, or duplicating the column cx in the data and clustering. Are these valid approaches to tackle the problem?
0
1
36
0
57,148,940
0
0
0
0
1
false
0
2019-07-22T14:15:00.000
1
1
0
Custom data prediction using decision trees
57,147,986
0.197375
python,machine-learning,scikit-learn
You can use the custom data for prediction as long as it has the same number, order and type of features as your training data, passed as an array rather than a list. If you meet these conditions, you can send that array to the model with the normal predict() method.
I am using train_test_split to train the model and check the results using predict. How do I proceed to predict the labels of additional data, for example, from a test set or from user inputs?
0
1
35
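A minimal scikit-learn sketch of the workflow described in the answer above; the classifier and the new sample are illustrative, not taken from the question:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# new data must be a 2-D array with the same number/order/type of features
new_samples = np.array([[5.1, 3.5, 1.4, 0.2]])
print(model.predict(new_samples))
```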
0
57,183,592
0
0
0
0
1
false
0
2019-07-22T23:56:00.000
0
3
0
obj file loaded as Scene object instead of Trimesh object
57,155,089
0
python,trimesh
I have the same problem. You can access the "geometry" member of the scene, but I have no idea how the original mesh is split into multiple ones. Following.
I'm trying to load a mesh from an .obj file (from the ShapeNet dataset) using Trimesh, and then use the repair.fix_winding(mesh) function. But when I load the mesh, via trimesh.load('/path/to/file.obj') or trimesh.load_mesh('/path/to/file.obj'), the object class returned is Scene, which is incompatible with repair.fix_winding(mesh); only Trimesh objects are accepted. How can I force it to load and return a Trimesh object, or parse the Scene object into a Trimesh object? Or is there any other way to fix the winding of the triangles? Using: Python 3.6.5, Trimesh 3.0.14, macOS 10.14.5
0
1
2,337
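A hedged sketch of the workaround hinted at in the answer above (accessing the Scene's geometry dict and merging its parts into one mesh); whether the merged result is equivalent to the original OBJ depends on how Trimesh split it:

```python
import trimesh
from trimesh import repair, util

loaded = trimesh.load('/path/to/file.obj')
if isinstance(loaded, trimesh.Scene):
    # scene.geometry maps names to Trimesh objects; concatenate them into one mesh
    mesh = util.concatenate(list(loaded.geometry.values()))
else:
    mesh = loaded

repair.fix_winding(mesh)
```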
0
57,162,164
0
0
0
0
5
false
0
2019-07-23T01:45:00.000
2
6
0
Generating random division problems in python
57,155,636
0.066568
python,math
1) Take any non-zero randomized divisor (x). // say 5 2) Take any randomized temporary dividend (D). // say 24 3) Calculate R = D % x; // => 4 4) Return the dividend as (D - R) // returns 20 Now your dividend will always be perfectly divisible by the divisor.
I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders. With addition, subtraction and multiplication, I could just do [random number] times or plus or subtract [random number]. It is not so easy if I want to do division problems. Can anyone help? Thanks in advance
0
1
766
0
57,155,682
0
0
0
0
5
true
0
2019-07-23T01:45:00.000
2
6
0
Generating random division problems in python
57,155,636
1.2
python,math
x / y = z, so y * z = x. Generate y and z as integers, then calculate x.
I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders. With addition, subtraction and multiplication, I could just do [random number] times or plus or subtract [random number]. It is not so easy if I want to do division problems. Can anyone help? Thanks in advance
0
1
766
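A short sketch of the accepted idea: pick the divisor and quotient first, then multiply to get a dividend that divides evenly (the value ranges are arbitrary choices for an educational game):

```python
import random

def make_division_problem(max_divisor=12, max_quotient=12):
    y = random.randint(1, max_divisor)   # divisor (non-zero)
    z = random.randint(0, max_quotient)  # quotient
    x = y * z                            # dividend, guaranteed to divide evenly
    return x, y, z

x, y, z = make_division_problem()
print(f"{x} / {y} = ?  (answer: {z})")
```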
0
57,165,189
0
0
0
0
5
false
0
2019-07-23T01:45:00.000
0
6
0
Generating random division problems in python
57,155,636
0
python,math
I think that @Vira has the right idea. If you want to generate a and b such that a = b * q + r with r = 0, the way to do it is: generate b randomly, generate q randomly, compute a = b * q, then ask to compute the division a divided by b. The answer is q.
I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders. With addition, subtraction and multiplication, I could just do [random number] times or plus or subtract [random number]. It is not so easy if I want to do division problems. Can anyone help? Thanks in advance
0
1
766
0
57,155,658
0
0
0
0
5
false
0
2019-07-23T01:45:00.000
1
6
0
Generating random division problems in python
57,155,636
0.033321
python,math
You can simply generate the divisor and quotient randomly and then compute the dividend. Note that the divisor must be nonzero (thanks to @o11c's reminder).
I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders. With addition, subtraction and multiplication, I could just do [random number] times or plus or subtract [random number]. It is not so easy if I want to do division problems. Can anyone help? Thanks in advance
0
1
766
0
57,155,672
0
0
0
0
5
false
0
2019-07-23T01:45:00.000
-1
6
0
Generating random division problems in python
57,155,636
-0.033321
python,math
You can generate the number to be divided as [random number1] x [random number2]. The problem will then be [random number1] x [random number2] divided by [random number1].
I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders. With addition, subtraction and multiplication, I could just do [random number] times or plus or subtract [random number]. It is not so easy if I want to do division problems. Can anyone help? Thanks in advance
0
1
766
0
57,165,277
0
1
0
0
1
true
0
2019-07-23T05:02:00.000
1
1
0
Bamboo error - ImportError: no module named pandas
57,156,974
1.2
python,bamboo
It sounds like you are running this Bamboo plan using the agent host and not a Docker container. As such you will need to: remote/log into the Bamboo server, then use pip or some other package tool to install pandas and any other missing packages. Alternatively, you could set up an isolated Docker image that has all these dependencies and build within that.
I have created a Bamboo task which runs the python code from a BitBucket Repo. Bamboo config: I am running the script as a file. I have selected interpreter as Shell and given this in the Script Body to execute the script python create_issue.py -c conf.yml After I click on 'Run Plan', the build fails with ImportError: No module named pandas. The rest of the libraries are working fine, like, requests, itertools, etc.
0
1
485
0
57,157,989
0
0
0
0
1
false
0
2019-07-23T06:31:00.000
0
1
0
how to get best fit line when we have data on vertical line?
57,157,943
0
python,linear-regression
Instead of fitting y as a function of x, in this case you should fit x as a function of y.
I started learning linear regression and I was solving this problem. When I draw a scatter plot between the independent variable and the dependent variable, I get vertical lines. I have 0.5M sample data points. The X-axis data is given within a range of, let's say, 0-20. In this case I am getting multiple target values for the same x-axis point, hence it draws a vertical line. My question is: is there any way I can transform the data so that it doesn't form a vertical line and I can get my model working? There are 5-6 such independent variables that show the same pattern. Thanks in advance.
0
1
850
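A tiny NumPy sketch of the suggestion above: when the points stack vertically, swap the roles of the variables and fit x as a function of y (the data here is synthetic):

```python
import numpy as np

# synthetic data where many y values share the same x
x = np.repeat(np.arange(0, 20), 50).astype(float)
y = 3.0 * x + np.random.normal(scale=2.0, size=x.size)

# fit x = a*y + b instead of y = a*x + b
a, b = np.polyfit(y, x, deg=1)
print(a, b)
```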
0
57,170,037
0
0
0
0
2
true
0
2019-07-23T12:23:00.000
0
2
0
detect objects in only specific region of the frame-yolo-opencv
57,164,149
1.2
python,opencv,image-processing,yolo
You can perform YOLO on the entire image as usual, but add an if condition to only draw boxes whose center falls in a specific region. Or you can add this condition (position) next to the IoU conditions (where detected boxes are filtered). You can also separate the counting based on the direction of the moving vehicles and use two different counters for the two directions. If you don't mind me asking, how are you tracking the vehicles?
I am counting the total number of vehicles in a video, but I want to detect only the vehicles which are travelling up (the roads have a divider). So my point is: can I use YOLO only on a rectangle where vehicles are moving up? I don't want to detect vehicles that are on the other side of the road. Is there a way I can draw a rectangle and only detect objects in that specific rectangle? The best I can think of is that for every frame I'll have to crop the frame, perform all the operations and stitch it back to the original frame. I am expecting an easier alternative for the same. Any help is appreciated. Thanks
0
1
955
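A sketch of the filtering idea from the answer above: keep only detections whose box centre falls inside the region of interest. The (x1, y1, x2, y2) box format and the variable names are assumptions, not part of the asker's code:

```python
def center_in_region(box, region):
    """box and region are (x1, y1, x2, y2) in pixel coordinates."""
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    rx1, ry1, rx2, ry2 = region
    return rx1 <= cx <= rx2 and ry1 <= cy <= ry2

upward_lane = (0, 0, 640, 720)                           # hypothetical region of interest
detections = [(100, 50, 200, 150), (700, 50, 800, 150)]  # hypothetical YOLO boxes
kept = [b for b in detections if center_in_region(b, upward_lane)]
print(kept)
```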
0
71,621,543
0
0
0
0
2
false
0
2019-07-23T12:23:00.000
0
2
0
detect objects in only specific region of the frame-yolo-opencv
57,164,149
0
python,opencv,image-processing,yolo
I'm doing a similar thing. If your device is going to be fixed, for example on a light pole, then you can either detect the road and zebra crossing by training a model, or manually enter these values. Later, run your object detection and object tracking on only these parts of the frames, i.e. use frame[ymin:ymax, xmin:xmax]. This reduces the image size, so your processing speed increases. But why do you need the full image again after your work? If you do need it, you just have to add the xmin and ymin values of your detection region on the road to the bounding box of the vehicle detected in that region to get its bounding box values in the uncropped image.
I am counting the total number of vehicles in a video, but I want to detect only the vehicles which are travelling up (the roads have a divider). So my point is: can I use YOLO only on a rectangle where vehicles are moving up? I don't want to detect vehicles that are on the other side of the road. Is there a way I can draw a rectangle and only detect objects in that specific rectangle? The best I can think of is that for every frame I'll have to crop the frame, perform all the operations and stitch it back to the original frame. I am expecting an easier alternative for the same. Any help is appreciated. Thanks
0
1
955
0
57,170,266
0
0
0
0
1
false
0
2019-07-23T18:09:00.000
0
1
0
Is ColumnDataSource() the only way to get plots updated in a bokeh web app?
57,169,954
0
python-3.x,dataframe,callback,bokeh
The short answer to the question in the title is "Yes". The ColumnDataSource is the special, central data structure of Bokeh. It provides the data for all the glyphs in a plot, or content in data tables, and automatically keeps that data synchronized on the Python and JavaScript sides, so that you don't have to, e.g., write a bunch of low-level websocket code yourself. To update things like glyphs in a plot, you update the CDS that drives them. It's possible there are improvements that could be made in your approach to updating the CDS, but it is impossible to speculate without seeing actual code for what you have tried.
My data is in a large multi-indexed pandas DataFrame. I re-index to flatten the DataFrame and then feed it through ColumnDataSource, but I need to group my data row wise in order to plot it correctly (think bunch of torque curves corresponding to a bunch of gears for a car). If I just plot the dictionary output of ColumnDataSource, it's a mess. I've tried converting the ColumnDataSource output back to DataFrame, but then I lose the update functionality, the callback won't touch the DataFrame, and the plots won't change. Anyone have any ideas?
0
1
109
0
61,889,824
0
0
0
1
1
false
0
2019-07-23T23:56:00.000
-1
2
0
How to best(most efficiently) read the first sheet in Excel file into Pandas Dataframe?
57,173,573
-0.099668
python,pandas,python-2.7
The method read_excel() reads the data into a Pandas Data Frame, where the first parameter is the filename and the second parameter is the sheet. df = pd.read_excel('File.xlsx', sheetname='Sheet1')
Loading the excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially thousands of files and pick and combine the necessary data together to analyze. If somebody knows a way to efficiently load in the file directly or somehow quickly make a copy of the Excel data as text that would be helpful!
0
1
768
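Since only the first sheet is needed, it may help to ask pandas for that sheet alone; a small sketch (newer pandas uses the sheet_name keyword, while older versions used sheetname as in the answer above):

```python
import pandas as pd

# sheet_name=0 loads only the first sheet, by position
df = pd.read_excel('File.xlsx', sheet_name=0)
```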
0
57,176,551
0
0
0
0
1
false
0
2019-07-24T03:29:00.000
0
2
0
When training neural networks, does Tensorflow automatically revert back to the best epoch after finishing?
57,174,825
0
python,tensorflow,machine-learning,keras,epoch
No, not automatically; use a ModelCheckpoint callback with save_best_only=True so the best epoch is kept on disk: keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='auto', period=1)
If not, why not? Sometimes I will have an epoch that gets 95ish % and then finish with an epoch that has 10% or so less accuracy. I just never can tell whether it reverts back to that best epoch.
0
1
98
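A sketch showing how the callback from the answer above is typically wired in so the best epoch can be recovered afterwards; the toy data, model and file name exist only to make the example runnable:

```python
import numpy as np
from tensorflow import keras

# toy data and model purely for illustration
x = np.random.rand(200, 8)
y = np.random.randint(0, 2, size=(200, 1))
model = keras.Sequential([keras.layers.Dense(16, activation="relu", input_shape=(8,)),
                          keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_loss", save_best_only=True)

model.fit(x, y, validation_split=0.2, epochs=10, callbacks=[checkpoint])

# the file on disk holds the weights of the best epoch, not the last one
best_model = keras.models.load_model("best_model.h5")
```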
0
57,181,431
0
0
0
0
1
false
2
2019-07-24T09:11:00.000
0
2
0
Does pandas read the full data file and stores it in a data frame? Is it efficient to load a 100mb file in pandas?
57,179,262
0
python,pandas,csv,data-science
Yes, the performance is affected and sometimes the system gets slow. You can try to read the data in the form of a table, or you can use chunksize, which will improve the efficiency.
I want to load a file of around 100 MB using pandas. I know we can load it, but I want to know whether the file size affects the efficiency of the program, and whether there is any way to load the file efficiently.
0
1
96
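A sketch of the chunksize suggestion from the answer above; the file name and chunk size are arbitrary:

```python
import pandas as pd

chunks = []
for chunk in pd.read_csv("big_file.csv", chunksize=100_000):
    # filter or aggregate each chunk here instead of keeping everything in memory
    chunks.append(chunk)

df = pd.concat(chunks, ignore_index=True)
```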
0
57,180,683
0
0
0
0
1
false
0
2019-07-24T10:10:00.000
0
1
0
Minimization of known cost function without knowing original function
57,180,431
0
python,scipy,minimization,scipy-optimize
A function approximation algorithm needs you to make a few assumptions about how your mathematical model behaves. Seen from a black-box point of view (X -> MODEL -> Y), three scenarios can occur: you have X and the MODEL but not Y, which is simulation; you have the MODEL and Y but not X, which is optimization; you have X and Y but not the MODEL, which is mathematical modelling. However, there is a catch: you can never do the third one directly. Instead, you use a trick to reframe it as an optimization problem. The trick is to assume your model is something like y = mx + c, and then instead of finding the model you find the new inputs m and c. Thus we can say: you have the (MODEL, X) and Y, but you don't have the new inputs (m, c); this is optimization as well: (m, c) -> (MODEL + X) -> Y. This means that even if you don't know the input function, you have to assume some model and then estimate the parameters which, when tuned, let the model behave as close to the input function as possible. Basically, what you need is machine learning: you have the inputs, you have the outputs (or you can get them by running your first function on a large sample of inputs), and you have the cost function. Assume a model and train it to approximate your input function. If you are not sure what to use, use a generalized function approximator, a.k.a. neural networks. But beware: they need a lot more data to train.
I am trying to fit one function to another function by adjusting two parameters, but I don't know the form of this function. I only have the cost function, because LAMMPS (molecular dynamics) is used to compute the function itself. I need some tool to which I can give only the cost function and my guess, and it would then minimize it. I was looking at SciPy optimization, but it looks like it needs the original function, which I don't have.
0
1
54
0
58,828,414
0
0
0
0
2
false
4
2019-07-25T02:01:00.000
1
8
0
Pandas-profiling error AttributeError: 'DataFrame' object has no attribute 'profile_report'
57,193,292
0.024995
python-3.x,pandas,pandas-profiling
This should work for those who want to use the latest version: Run pip uninstall pandas_profiling from anaconda prompt (given you're using Spyder, I'd guess this would be your case) / or command prompt Run pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip If you're using something like a Jupyter Notebook/Jupyter Lab, be sure to restart your kernel and re-import your packages. I hope this helps.
I wanted to use pandas-profiling to do some eda on a dataset but I'm getting an error : AttributeError: 'DataFrame' object has no attribute 'profile_report' I have created a python script on spyder with the following code : import pandas as pd import pandas_profiling data_abc = pd.read_csv('abc.csv') profile = data_abc.profile_report(title='Pandas Profiling Report') profile.to_file(output_file="abc_pandas_profiling.html") AttributeError: 'DataFrame' object has no attribute 'profile_report'
0
1
14,039
0
57,193,306
0
0
0
0
2
false
4
2019-07-25T02:01:00.000
0
8
0
Pandas-profiling error AttributeError: 'DataFrame' object has no attribute 'profile_report'
57,193,292
0
python-3.x,pandas,pandas-profiling
The only workaround I found was to execute the Python script I made from the command prompt, where it gives the correct output; the code still gives an error in Spyder.
I wanted to use pandas-profiling to do some eda on a dataset but I'm getting an error : AttributeError: 'DataFrame' object has no attribute 'profile_report' I have created a python script on spyder with the following code : import pandas as pd import pandas_profiling data_abc = pd.read_csv('abc.csv') profile = data_abc.profile_report(title='Pandas Profiling Report') profile.to_file(output_file="abc_pandas_profiling.html") AttributeError: 'DataFrame' object has no attribute 'profile_report'
0
1
14,039
0
57,195,148
0
0
0
0
1
true
0
2019-07-25T05:01:00.000
0
1
0
While train the model always getting "Saving checkpoint to path training/model.ckpt"
57,194,634
1.2
python,opencv,tensorflow,deep-learning,artificial-intelligence
This message is informative. It's just telling you, that the checkpoint of the model you're training was saved. In case you find the model "getting worse" from that checkpoint, you can cancel the training with the latest best version of the model saved. Look for some tutorials for how to find out when the model is not getting better. Get familiar with terms like overfitting/overtraining, etc. to understand the process itself.
I am working on a project in Python using TensorFlow, but I am a complete beginner in TensorFlow and OpenCV. Recently I tried to train custom objects, but while training I keep getting one status message: "I0725 10:26:31.453798 5176 supervisor.py:1117] Saving checkpoint to path training/model.ckpt". I don't know what exactly is happening; is this an error or not? I have already waited around 10 hours, but I am still getting this same status.
0
1
352
0
57,204,226
0
0
0
0
1
false
0
2019-07-25T14:04:00.000
0
1
0
Grouping low frequency levels of categorical variables to improve machine learning performance
57,203,915
0
python,machine-learning
This isn't a programming question... By having fewer classes, you inherently increase the chance of randomly predicting the correct class. Consider a stacked model (two models) where you have a primary model to classify between the overrepresented classes and the 'other' class, and then have a secondary model to classify between the classes within the 'other' class if the primary model predicts the 'other' class.
I'm trying to find ways to improve the performance of machine learning models, whether binary classification, regression or multinomial classification. I'm now looking at the topic of categorical variables and trying to combine low-occurring levels together. Let's say a categorical variable has 10 levels, where 5 levels account for 85% of the total frequency count and the 5 remaining levels account for the remaining 15%. I'm currently trying different thresholds (30%, 20%, 10%) to combine levels together. This means I combine together the levels which represent either 30%, 20% or 10% of the remaining counts. I was wondering if grouping these "low frequency groups" into a new level called "others" would have any benefit in improving performance. I further use a random forest for feature selection, and I know that having fewer levels than originally may cause a loss of information and therefore not improve my performance. Also, I tried discretizing numeric variables but noticed that my performance was weaker, because random forests benefit from having the ability to split on their preferred split point rather than being forced to split on an engineered split point that I would have created by discretizing. In your experience, would grouping low-occurring levels together have a positive impact on performance? If yes, would you recommend any techniques? Thank you for your help!
0
1
392
0
57,207,926
0
0
0
0
2
false
0
2019-07-25T15:37:00.000
1
3
0
How can we be sure of the efficiency of a neural network
57,205,718
0.066568
python,machine-learning,neural-network,dataset,normalization
OP: Your questions are very good for someone that's just getting started in machine learning. Have you ensured that the distribution of your training and test dataset are similar? I would try to keep the number of samples per class (label) about equal if possible. For instance, if your training set is severely imbalanced then your prediction algorithm might tend to favor the label that shows up more often. I think you are on the right track to overfit your model to ensure your neural net architecture, training and whatever else is setup correctly. Are you using regularization? If so, I think you might want to remove that to see if your model can fit to your training dataset. I understand that this goes against what the accepted answer's #2 suggests but this is a useful way to debug your setup How good are the labels for your dataset? If you have any noise in your labels then that would affect the accuracy of your classifier You could also try transfer learning if you cannot get more training data
I trained a feed-forward neural net for binary classification and got an accuracy of 83%, which (I hope) I'm going to improve later by changing the input parameters. But some tests make me feel confused: My dataset length is 671, so I divide it into a 513-sample train set, a 58-sample validation set and a 100-sample test set. When I change the size of my sets (train, validation, test), the accuracy score can decrease to some very low scores like 40%. The neural net is supposed to learn from the train set, but when I test after training on the same train set rather than the test set, I thought the model should score 100%, because it just learned from it; yet it only reaches about 87%. I'm a beginner in ML so I don't know if this is normal or not; I'm just curious and want to catch all the small things, to understand perfectly what I'm doing. I guess it's maybe the normalization of my vector sets, but I don't know much about it. I can share my full code if you want; as with every neural net, it's quite long but easy to read.
0
1
380
0
57,211,888
0
0
0
0
2
true
0
2019-07-25T15:37:00.000
1
3
0
How can we be sure of the efficiency of a neural network
57,205,718
1.2
python,machine-learning,neural-network,dataset,normalization
As suggested by many people, a 3:1:1 (60:20:20 = train:validate:test) ratio is a rule of thumb for splitting data; if you are playing with a small data set, it is better to stick with a plain 80:20 or 70:30 train:test split. I usually go for a 90:10 ratio for better results. Before you start with classification, first check whether your data set is balanced or imbalanced (there should not be far fewer examples belonging to one class compared to the other), because even though it may give you good accuracy it will mislead the results. If the data set is imbalanced, pre-process it with a sampling algorithm (e.g. SMOTE) and re-sample it; this will create an equal set of examples for each class based on neighbors. As correctly mentioned in the other answer, use cross-validation such as K-fold. The idea of cross-validation is to tweak the parameters used for training in order to optimize accuracy and to nullify the effect of over-fitting on the training data; it also reduces the noise in the data set. I usually go for 10-fold cross-validation, where the data set is divided into 10 partitions and in each iteration one partition is used as the test set and the rest as training. Take the average of the 10 computations to get a good estimate of the performance of your classifier.
I trained a feed-forward neural net for binary classification and got an accuracy of 83%, which (I hope) I'm going to improve later by changing the input parameters. But some tests make me feel confused: My dataset length is 671, so I divide it into a 513-sample train set, a 58-sample validation set and a 100-sample test set. When I change the size of my sets (train, validation, test), the accuracy score can decrease to some very low scores like 40%. The neural net is supposed to learn from the train set, but when I test after training on the same train set rather than the test set, I thought the model should score 100%, because it just learned from it; yet it only reaches about 87%. I'm a beginner in ML so I don't know if this is normal or not; I'm just curious and want to catch all the small things, to understand perfectly what I'm doing. I guess it's maybe the normalization of my vector sets, but I don't know much about it. I can share my full code if you want; as with every neural net, it's quite long but easy to read.
0
1
380
0
57,208,228
0
0
0
0
1
true
0
2019-07-25T17:54:00.000
2
1
0
How to prevent printing loss at each step while using the tensorflow object detection api?
57,207,729
1.2
python,tensorflow,deep-learning,google-colaboratory,object-detection-api
Add a ; at the end of the statement if it is the last expression in the cell. Add %%capture as the first line of the cell to suppress all output for that cell. For a specific function, use: from IPython.utils import io, then with io.capture_output() as captured: function()
I am training using the Tensorflow Object Detection API in Google Colab. I want to suppress printing the loss at each step as the web page crashes after 30 minutes due to a large amount of text being printed as the output of the cell. I have to manually clear the output of the cell every 30 minutes or so to avoid this issue. Is there any way to modify the train.py code so that Tensorflow stops printing the loss at every step. I have tried changing the code in line 57 of the research/object_detection/legacy/train.py from tf.logging.set_verbosity(tf.logging.INFO) to tf.logging.set_verbosity(tf.logging.WARN) but it did not seem to work. Any suggestions/workarounds?
0
1
290
0
57,230,509
0
0
0
0
1
false
1
2019-07-27T06:34:00.000
0
3
0
How can I properly split imbalanced dataset to train and test set?
57,229,775
0
python,machine-learning,train-test-split,imbalanced-data
Start from 50/50 and go on changing the splits to 60/40, 70/30, 80/20, 90/10. Record all the results and come to a conclusion. In one of my projects on flight delay prediction, I used a 60/40 split and got 86.8% accuracy using an MLP neural network.
I have a flight delay dataset and try to split the set to train and test set before sampling. On-time cases are about 80% of total data and delayed cases are about 20% of that. Normally in machine learning ratio of train and test set size is 8:2. But the data is too imbalanced. So considering extreme case, most of train data are on-time cases and most of test data are delayed cases and accuracy will be poor. So my question is How can I properly split imbalanced dataset to train and test set??
0
1
1,879
0
57,231,506
0
0
0
0
1
false
1
2019-07-27T08:53:00.000
1
1
0
When the weight of model get updated if I am using adam optimization?
57,230,643
0.197375
python-3.x,optimization,keras,deep-learning
Adam just changes how the gradient update is performed in gradient descent; it does not change when that happens, so it is literally the same as normal gradient descent. When using mini-batch gradient descent (the current standard), weight updates happen after every batch.
I know when the weights of a model are updated while using gradient descent (in all three types of GD), but in my case I am using Adam optimization with a custom loss (triplet loss). When do the weights get updated in this case? Is it after every sample, every batch, or every epoch? Thanks in advance.
0
1
140
0
57,388,681
0
0
0
0
1
false
1
2019-07-27T18:39:00.000
0
2
0
Detect forehead points using Dlib/python
57,235,066
0
python,opencv,image-processing,deep-learning,dlib
You can use the tool provided with dlib called "imglab" to train your own shape detector by performing landmark annotations.
Do we have any way to get points on the forehead of a face image? I am using 68 points landmarks shape_predictor to get other points on the face but for this particular problem I need points that are from the hairline to the center of the forehead. Any suggestions would be helpful.
0
1
922
0
57,244,533
0
0
0
0
1
false
0
2019-07-28T19:32:00.000
0
3
0
How come multiple regression has so many assumptions, while advanced machine learning algorithms have next to none?
57,244,326
0
python,r
There are a number of reasons for this, in my opinion. Linear regression assumes your y to be linearly related to the variables, whereas tree-based models are considered non-linear models (so the linearity assumption goes out the window). Breaking some of the linear regression assumptions may not inherently decrease the predictive ability of your model, but it WILL bias the coefficients. Often, when building a regression model you are trying to determine the effect of an X variable on the Y. In that case you mainly care about this weight and don't care as much about how well your model predicts. If you are aiming only for predictive ability, you can break some assumptions. These are the main two thoughts that come to mind; I would be interested to hear other people's take as well.
I am analyzing a real-estate dataset. While all regression assumptions fail, my XGBoosting model thrives. Am I missing something? Is XGBoost just the superior model in this case? The dataset is around 67.000 observations and 30 variables.
0
1
947
0
57,256,900
0
0
0
0
1
false
2
2019-07-29T14:42:00.000
1
2
0
How extract contrast level of a photo - opencv
57,256,159
0.099668
python,opencv,image-processing,colors,contrast
Contrast is usually understood as intensity contrast and can be computed on the Luminance component (Y). It is a measure of spread of the histogram, such as the standard deviation.
I need to return the (average) contrast value of an image. Is this possible and if so, please show how? More than just the code, (obviously the code is welcome), in what color space should I work? Is HSV appropriate or best? Can the contrast value of a single pixel be computed?
0
1
3,574
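A minimal OpenCV sketch of the answer's suggestion above (contrast as the standard deviation of the luminance channel); the file name is a placeholder:

```python
import cv2

img = cv2.imread("photo.jpg")                 # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single luminance-like channel
contrast = gray.std()                         # spread of the intensity histogram
print(contrast)
```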
0
57,549,978
0
0
0
0
2
false
2
2019-07-29T14:50:00.000
0
2
0
Tensorflow Serving number of requests in queue
57,256,298
0
python-3.x,tensorflow,prometheus,tensorflow-serving
What's more, you can set the number of threads with the --rest_api_num_threads flag, or leave it unset and have it configured automatically by TF Serving.
I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.
0
1
1,082
0
57,549,954
0
0
0
0
2
true
2
2019-07-29T14:50:00.000
1
2
0
Tensorflow Serving number of requests in queue
57,256,298
1.2
python-3.x,tensorflow,prometheus,tensorflow-serving
Actually, TF Serving doesn't have a request queue, which means that it won't queue up requests if there are too many of them. The only thing TF Serving does is allocate a thread pool when the server is initialized. When a request comes in, TF Serving uses an unused thread to handle it; if there are no free threads, it returns an unavailable error and the client should retry later. You can find this information in the comments of tensorflow_serving/batching/streaming_batch_scheduler.h.
I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.
0
1
1,082
0
57,268,592
0
0
0
0
1
false
0
2019-07-30T08:55:00.000
0
1
0
Can I tell a machine learning model that the dependent variable is normally distributed?
57,267,855
0
python,machine-learning,scikit-learn,regression,supervised-learning
One way you could do this is by creating a custom objective function that penalizes predictions that are not normally distributed.
I am trying to set up a machine learning model predicting a continuous variable y on the basis of a feature vector (x1, x2, ..., xn). I know from elsewhere that y follows a normal distribution. Can I somehow specify this to the model and enhance its predictions this way? Is there a specific model that allows me to do this? I have used linear models, k-nearest neighbour models and random forest models (in python). All of them give some predictions but I was wondering whether they can be outperformed by some model that would know the distribution of the predicted variable.
0
1
65
0
62,347,635
0
0
0
0
1
true
1
2019-07-30T10:01:00.000
1
1
0
Is there a limit about image size when we train custom object with already trained models?
57,269,090
1.2
python,tensorflow,detection,yolo
Providing the solution here (answer section), even though it is present in the comment section (thanks dasmehdix for the update), for the benefit of the community: No, there is no limit on the image file size used to train a model.
I already trained ssd_mobilenet_v2_coco with my custom data set on TensorFlow. I also trained YOLO with my data set. I solved all problems and they work. However, I encounter a problem with both models: when my data set includes images larger than 400 KB, the trained models do not work. Sometimes an "allocation of memory" problem occurs. I solved those with parameter changes (batch size etc.). But I still don't know whether there is a limit on image size when preparing a data set. Why are images larger than 400 KB a problem for my system? My question is not about pixel size, it's about image file size. Thanks... System info: Nvidia RTX 2060 6G, AMD Ryzen 7, 16GB DDR4 2600MHz RAM, CUDA 10.0, cuDNN 7.4.2 (I also tried different versions with the same results), TensorFlow 2
0
1
84
0
57,325,381
0
1
0
0
1
false
0
2019-07-30T11:27:00.000
0
1
0
Kernel Restarts upon failing(?) to import tensorflow
57,270,586
0
python,tensorflow
Ok so I did some black mathmagic and fixed it. What I did was reducing the tensorflow version via pip (to 1.14, but I don't think that matters) and then upgrading my entire conda set up again with conda update --all. I have no idea why, and the anaconda console screamed like it was tortured during the entire update, but now it works and I don't think I'll touch it again. If I see better fix, or if I encounter problems with this one, I'll update this post again.
So I've recently updated my Anaconda environment, and now I can't import tensorflow anymore. Every time I run a script containing it, the Spyder console runs for a while, then just stops and resets to ln[1]. I tried to see how far the script compiles, and it does everything fine until the import statement. Weirdly enough, the autocomplete still works for tf, which means that my installation should be fine. Reinstalling tensorflow also didn't do anything. There are no error messages, because the interpreter dies a silent death every time I run the script. I've seen others describe a similar problem on Jupyter, but their fixes didn't work. (Running the script without Spyder just freezes Python.) I'd greatly appreciate help.
0
1
316
0
60,646,118
0
0
0
0
1
true
0
2019-07-31T03:16:00.000
1
1
0
Word/Sentence similarity. What is the best approach?
57,282,671
1.2
python,nlp
I have found two great solutions, using cosine similarity and Levenshtein distance. In my case, cosine similarity worked better, because I easily found part of the brand name in the text, getting a score of 100% accuracy. Matrix replacing (Levenshtein) was also good, but I got some errors due to very similar words in the dataset.
I need to build an algorithm for product master data purposes and I'm not sure about the best NLP approach for this. The scenario is: I have product golden records; I have many other product catalogs that need to be harmonized. Example: product golden records: Coke and Coke Zero; product descriptions that need to be harmonized: Coke 300ml, Coke Zero 300ml, Cke zero. I need an algorithm that harmonizes by similarity, since I have to consider typos and, sometimes, a piece of a product name in a sentence. Example: Coke zero JS MKT (JS and MKT are garbage, but the sentence is most similar to Coke Zero). I've been testing some NLP approaches for sentence similarity such as bag of words, as well as reading about other approaches such as cosine similarity and Levenshtein distance. However, I don't know which is the best option for my case. Could you please help me understand the best way to achieve what I need?
0
1
104
0
57,290,376
0
0
0
0
1
true
1
2019-07-31T10:20:00.000
1
1
0
speed up pandas search for a certain value not in the whole df
57,288,507
1.2
python-3.x,pandas
Just to make a full answer out of my comment: with -1 not in test1.values you can check if -1 is in your DataFrame. Regarding performance, this still needs to check every single value, which in your case is 10^5 * 10^2 = 10^7. You only save the cost of the summation and the additional comparison of those results.
I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content. I need to assert that it does not contain a certain value, let's say -1. Using assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds. Any idea how to speed it up?
0
1
43
0
57,290,466
0
0
0
1
1
false
0
2019-07-31T12:01:00.000
0
1
0
Is there any way to change columns datatype that should be int became a float while using read_sql from table
57,290,281
0
python,pandas
Since your column contains NaN values, which are floating point numbers, I don't think you can avoid this 'issue' when loading from the database without changing the query. If you wish to change the query, you can insert a WHERE clause that excludes None values, or check whether the row contains such a column value. What I suggest is to use .fillna() and then cast to integers using .astype('int'). Edit: strictly speaking, the premise of the question is off; you ask how to keep a column that "should be int" from becoming float while using read_sql, but since it includes NaN it cannot be an int and is expected to be a float.
I am using the read_sql function to pull data from a PostgreSQL table. As I store that data in a dataframe, I find that some integer-dtype columns are automatically converted to float. Is there any way to prevent that while using the read_sql function only?
0
1
45
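A small sketch of the suggested fix; the column name and stand-in data are hypothetical, not taken from the asker's table:

```python
import pandas as pd

df = pd.DataFrame({"amount": [1.0, 2.0, None]})  # stand-in for the read_sql result
df["amount"] = df["amount"].fillna(0).astype("int64")
print(df.dtypes)
```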
0
57,293,794
0
0
0
0
1
false
1
2019-07-31T14:56:00.000
0
1
0
what's the difference between tf.gfile.GFile().read() and cv2.imread() when it is about reading images
57,293,678
0
python,opencv,tensorflow
tf.gfile is basically there to support less-conventional filesystems such as HDFS, etc. cv2.imread(image) is for local filesystem usage.
When working with TensorFlow (image classification for example), sometimes images are loaded using cv2.imread(image) and other times they are loaded using tf.gfile.GFile(image, 'rb').read(). Are there any differences between cv2.imread(image) and tf.gfile.GFile(image, 'rb').read() when using them with TensorFlow? Edit: My question here is about performances and maintaining image accuracy (since both of them do the job).
0
1
452
0
57,294,007
0
0
0
0
1
false
7
2019-07-31T14:58:00.000
2
3
0
Sklearn PCA explained variance and explained variance ratio difference
57,293,716
0.132549
python,scikit-learn,pca,covariance
It's just a normalization that shows how important each principal component is. You can say: explained_variance_ratio_ = explained_variance_/np.sum(explained_variance_)
I'm trying to get the variances from the eigen vectors. What is the difference between explained_variance_ratio_ and explained_variance_ in PCA?
0
1
10,300
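A short scikit-learn sketch verifying the relationship stated in the answer above, on random data:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 5)
pca = PCA().fit(X)

print(pca.explained_variance_)        # variance captured by each component
print(pca.explained_variance_ratio_)  # same values, normalized to sum to 1
print(pca.explained_variance_ / pca.explained_variance_.sum())  # matches ratio_
```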
0
57,297,560
0
0
0
0
1
false
1
2019-07-31T19:03:00.000
1
3
0
I want to extract a particular pattern from a content string: "Twitter for iPhone"
57,297,369
0.066568
python,regex,pandas,dataframe
Try df.col.str.extract(pat = '(Twitter for (iPhone|Samsung|others))')
I want to extract the "Twitter for iPhone" part from this string. But I have different values in place of "Twitter for iPhone" across thousands of entries in a dataframe. I only need the values after ">" and before "<" from the following set of strings. I tried df.col.str.extract('(Twitter for iPhone|Twitter for Samsung|Twitter for others)') which extracts only those 'Twitter for iPhone' values but not the others, and the rest are filled with NaNs.
1
1
259
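Since the question really asks for whatever sits between ">" and "<", a non-greedy capture group may be more general than listing every source; a sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"col": ['<a href="...">Twitter for iPhone</a>',
                           '<a href="...">Twitter Web App</a>']})
df["source"] = df["col"].str.extract(r'>(.*?)<', expand=False)
print(df["source"].tolist())  # ['Twitter for iPhone', 'Twitter Web App']
```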
0
57,310,189
0
0
0
0
1
false
2
2019-07-31T23:21:00.000
1
2
0
Finding the area under the curve of a probability distribution function in SciPy
57,299,971
0.099668
python,scipy,statsmodels,calculus,probability-density
Eureka! It's the .interval() method for a rv_continuous object in scipy.stats - just pass in your parameters and it will give you end points that contain that percentage of the distribution.
I'm working on a univariate problem which involves aggregating payment data on a customer level - so that I have one row per customer, and the total amount they've spent with us. Using this distribution of payment data, I fit an appropriate probability distribution and calculated the maximum likelihood estimates for the parameters of the pdf. Now I want to find the 90th percentile of the distribution. If I was to do this by hand I would set .10 equal to the integral from x to infinity of my pdf and then solve for x. Is there a package in python/scipy/statsmodels that allows me to do this? Thanks in advance! Cheers
0
1
666
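A sketch of the .interval() approach from the answer above for a fitted distribution; a normal distribution with made-up parameters stands in for whatever distribution was actually fitted, and .ppf() is shown as well since a one-sided 90th percentile is what the question describes:

```python
from scipy import stats

dist = stats.norm(loc=100.0, scale=15.0)  # placeholder for the fitted distribution

print(dist.interval(0.90))  # end points containing the central 90% of the mass
print(dist.ppf(0.90))       # one-sided 90th percentile (10% of the mass above it)
```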
0
57,308,378
0
0
0
0
1
true
0
2019-08-01T10:01:00.000
2
1
0
Should CountVectorizer be fit on both the train and test sets?
57,306,519
1.2
python,python-3.x,scikit-learn,countvectorizer
Generally the test set should be kept unobserved, so the CountVectorizer should only be fitted on the train set.
I have come across various articles online, some of which suggest that CountVectorizer should be fit on both the train and test sets, and some suggest that it should be fit only on the train set. Which approach is generally better for text classification?
0
1
96
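A minimal sketch of fitting the vectorizer on the training split only, then reusing the fitted vocabulary on the test split:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_texts = ["good movie", "bad movie", "great plot"]  # toy data
test_texts = ["good plot"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)  # learn the vocabulary on train only
X_test = vectorizer.transform(test_texts)        # test set is only transformed
```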
0
57,445,839
0
0
0
0
1
false
8
2019-08-01T13:37:00.000
6
1
1
Can we disable h5py file locking for python file-like object?
57,310,333
1
python,python-3.6,hdf5,h5py
You just need to set the value to FALSE for the environment variable HDF5_USE_FILE_LOCKING. Examples are as follows: In Linux or MacOS via Terminal: export HDF5_USE_FILE_LOCKING=FALSE In Windows via Command Prompts (CMD): set HDF5_USE_FILE_LOCKING=FALSE
When opening an HDF5 file with h5py you can pass in a python file-like object. I have done so, where the file-like object is a custom implementation of my own network-based transport layer. This works great, I can slice large HDF5 files over a high latency transport layer. However HDF5 appears to provide its own file locking functionality, so that if you open multiple files for read-only within the same process (threading model) it will still only run the operations, effectively, in series. There are drivers in HDF5 that support parallel operations, such as h5py.File(f, driver='mpio'), but this doesn't appear to apply to python file-like objects which use h5py.File(f, driver='fileobj'). The only solution I see is to use multiprocessing. However the scalability is very limited, you can only realistically open 10's of processes because of overhead. My transport layer uses asyncio and is capable of parallel operations on the scale of 1,000's or 10,000's, allowing me to build a longer queue of slow file-read operations which boost my total throughput. I can achieve 1.5 GB/sec of large-file, random-seek, binary reads with my transport layer against a local S3 interface when I queue 10k IO ops in parallel (requiring 50GB of RAM to service the requests, an acceptable trade-off for the throughput). Is there any way I can disable the h5py file locking when using driver='fileobj'?
0
1
4,420
0
57,314,963
0
0
0
0
1
false
1
2019-08-01T17:53:00.000
0
1
0
Can I remove whiskers and outliers from Boxplot?
57,314,544
0
python,seaborn,boxplot
You can remove the outliers by setting showfliers=False and remove whiskers by setting whis=0.
Some people find it confusing with whiskers and outliers in a Boxplot. Is it possible to remove those from the Boxplot in Seaborn?
0
1
1,271
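A small sketch of the two keyword arguments from the answer above (they are forwarded to matplotlib's boxplot), on toy data:

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

data = np.random.normal(size=200)                 # toy data
sns.boxplot(data=data, showfliers=False, whis=0)  # no outlier markers, no whiskers
plt.show()
```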
0
57,368,538
0
0
0
0
1
false
1
2019-08-01T19:24:00.000
0
1
0
feeding annotations as ground truth along with the images to the model
57,315,783
0
python,tensorflow,keras,computer-vision,object-detection
We need to feed the bounding boxes to the loss function: design a custom loss function, preprocess the bounding boxes, and feed them back during back-propagation.
I am working on an object detection model. I have annotated images whose values are stored in a data frame with columns (filename,x,y,w,h, class). I have my images inside /drive/mydrive/images/ directory. I have saved the data frame into a CSV file in the same directory. So, now I have annotations in a CSV file and images in the images/ directory. I want to feed this CSV file as the ground truth along with the image so that when the bounding boxes are recognized by the model and it learns contents of the bounding box. How do I feed this CSV file with the images to the model so that I can train my model to detect and later on use the same to predict bounding boxes of similar images? I have no idea how to proceed. I do not get an error. I just want to know how to feed the images with bounding boxes so that the network can learn those bounding boxes.
0
1
75
0
57,335,390
0
0
0
0
2
false
0
2019-08-02T14:20:00.000
0
3
0
Detecting which words are the same between two pieces of text
57,328,345
0
python,algorithm
You can use a dictionary to first store the words from the first text and then simply look them up while iterating over the second text. But this will take space, so the best way is to use regular expressions.
I need some python advice to implement an algorithm. What I need is to detect which words from text 1 are in text 2: Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down the meadow, enjoying the flower's scent." Text 2: "Mary had a cat. The cat's name was Coco. He used to run down the street, enjoying the blue sky." I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. Any ideas on how to implement this would be very helpful. Thank you very much in advance.
0
1
77
0
57,328,491
0
0
0
0
2
false
0
2019-08-02T14:20:00.000
0
3
0
Detecting which words are the same between two pieces of text
57,328,345
0
python,algorithm
Since you do not show any work of your own, I'll just give an overall algorithm. First, split each text into its words. This can be done in several ways. You could remove any punctuation then split on spaces. You need to decide if an apostrophe as in dog's is part of the word--you probably want to leave apostrophes in. But remove periods, commas, and so forth. Second, place the words for each text into a set. Third, use the built-in set operations to find which words are in both sets. This will answer your actual question. If you want a different question that involves the counts or positions of the words, you should make that clear.
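A minimal sketch of the set-based approach described above, using the two example texts from the question (the splitting regex is an assumption):

```python
import re

text1 = ("Mary had a dog. The dog's name was Ethan. He used to run down "
         "the meadow, enjoying the flower's scent.")
text2 = ("Mary had a cat. The cat's name was Coco. He used to run down "
         "the street, enjoying the blue sky.")

def words(text):
    # keep apostrophes inside words, drop other punctuation, lowercase everything
    return set(re.findall(r"[A-Za-z']+", text.lower()))

common = words(text1) & words(text2)   # built-in set intersection
print(common)
```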
I need some python advice to implement an algorithm. What I need is to detect which words from text 1 are in text 2: Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down the meadow, enjoying the flower's scent." Text 2: "Mary had a cat. The cat's name was Coco. He used to run down the street, enjoying the blue sky." I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. Any ideas on how to implement this would be very helpful. Thank you very much in advance.
0
1
77
0
57,339,152
0
0
0
0
1
true
2
2019-08-03T14:13:00.000
2
1
0
Is normalization necessary for RandomForest?
57,339,104
1.2
python,data-science,normalization,preprocessor,feature-engineering
1) No! Feature normalization isn't necessary for any tree-based classifier. 2) Generally speaking, normalization should be done on all features, not just numerical ones. 3) In practice it doesn't make much difference. However, the correct practice is to identify the min and max values of each feature from the training set and then normalize the features of both sets according to those values. 4) Yes, afterwards any sample that needs to be classified should be processed in exactly the same way as during training.
1) Is normalization necessary for Random Forests? 2) Should all the features be normalized or only numerical ones? 3) Does it matter whether I normalize before or after splitting into train and test data? 4) Do I need to pre-process features of the future object that will be classified as well? (after accepting the model, not during the testing)
0
1
1,234
0
57,507,773
0
0
0
0
2
true
1
2019-08-03T16:40:00.000
0
2
0
Keras: Is there a need to reload the model if I train for 10 epochs multiple times?
57,340,285
1.2
python,tensorflow,keras,metrics
For anyone who might have the same issue: it seems that in TensorFlow 1.14, the Keras implementation keeps the model weights but restarts the optimizer, which leads to bad results over many repetitions of the .fit() function. My loss is about 800 when using .fit() once and about 2800 when fitting for 5 epochs at a time.
I am training a model and want to use the mAP metric. For some reason the tensorflow mean_average_precision_at_k does not work for me, but the sklearn average_precision_score works. How can I have access to the keras's model outputs to perform the sklearn metrics? Can I compile the model one time and fit for 10 epochs, perform the metric and fit again for 10 epochs? Or do I need to save the model and reload it every time? Thank you
0
1
258
0
57,344,741
0
0
0
0
2
false
1
2019-08-03T16:40:00.000
1
2
0
Keras: Is there a need to reload the model if I train for 10 epochs multiple times?
57,340,285
0.099668
python,tensorflow,keras,metrics
Can I compile the model one time and fit for 10 epochs, perform the metric and fit again for 10 epochs Yes, absolutely. The model will keep the training weights between calls to fit(). You can call this as many times as you please.
I am training a model and want to use the mAP metric. For some reason the tensorflow mean_average_precision_at_k does not work for me, but the sklearn average_precision_score works. How can I have access to the keras's model outputs to perform the sklearn metrics? Can I compile the model one time and fit for 10 epochs, perform the metric and fit again for 10 epochs? Or do I need to save the model and reload it every time? Thank you
0
1
258
0
57,345,858
0
0
0
0
1
false
3
2019-08-04T10:18:00.000
0
1
0
How much should batch size and number of epochs be when fitting a model in Keras?
57,345,714
0
python,machine-learning,keras,neural-network
No! There is no rule of thumb for selecting the batch size. It's a trade-off between accuracy and time, so we have to pick a batch size that processes the data quickly and still gives good accuracy. What happens when you take a very large batch size? The model updates all of its weights after every batch: with a large batch the error is accumulated over more samples, and the model adjusts the weights according to that error. Processing a few large batches and updating the weights takes less time than processing many small batches and updating the weights after each one. But when you take a small batch size (e.g. 16, 32, 64), the model updates its weights after every batch, so it can fit your data more closely, at the cost of more time spent updating the weights. According to the research papers, most researchers use batch sizes of 16, 32 or 64; larger batch sizes may be used, but I haven't seen it yet. I hope this answer is helpful. And if you want the number of epochs to be optimized, use callbacks for your neural network: it will automatically stop training if the model has not improved for more than 4 or 5 epochs, depending on your setting.
I am training a model with 107850 samples and validating on 26963 samples. What should the batch size and number of epochs be when fitting a model in Keras to optimize the validation accuracy? Is there any sort of rule of thumb based on the input data size? Does an increased number of epochs overfit the model? Thank you.
0
1
456
0
57,347,119
0
0
0
0
1
false
0
2019-08-04T13:31:00.000
1
2
0
How do I bulk download images (70k) from urls with a restriction on the simultaneous downloads?
57,346,966
0.099668
python,image,url,download,bulk
Try downloading in batches of around 500 images, then sleep for a second or so and loop. It is quite time-consuming, but a sure-fire method. For the code, you can explore packages like urllib (for downloading), and as soon as you download each file use os.rename() to change its name. As you already know, use pandas for the CSV file.
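A minimal sketch of the batched-download idea above; the CSV path, column names and output folder are assumptions:

```python
import os
import time
import pandas as pd
from urllib.request import urlretrieve

df = pd.read_csv("images.csv")          # assumed columns: name, url
os.makedirs("images", exist_ok=True)

for i, (name, url) in enumerate(zip(df["name"], df["url"]), start=1):
    ext = url.rsplit(".", 1)[-1].split("?")[0]   # jpg or png taken from the URL
    urlretrieve(url, f"images/{name}_{i}.{ext}")
    if i % 500 == 0:                             # pause between batches
        time.sleep(1)
```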
I'm a bit clueless. I have a csv file with these columns: name - picture url I would like to bulk download the 70k images into a folder, rename the images with the name in the first column and number them if there is more than one per name. Some are jpegs some are pngs. I'm guessing I need to use pandas to get the data from the csv but I don't know how to make the downloading/renaming part without starting all the downloads at the same time, which will for sure crash my computer (It did, I wasn't even mad). Thanks in advance for any light you can shed on this.
0
1
642
0
57,351,808
0
0
0
0
1
true
0
2019-08-05T03:07:00.000
1
1
0
Standard Deviation of every pixel in an image in Python
57,351,759
1.2
python,arrays,numpy,standard-deviation
Use slicing, given images[num, width, height] you may calculate std. deviation of a single image using images[n].std() or for a single pixel: images[:, x, y].std()
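A minimal sketch of the slicing idea above, assuming a stack of images with shape (num, width, height):

```python
import numpy as np

images = np.random.rand(20, 64, 64)          # 20 toy images (assumption)

single_image_std = images[0].std()           # std. deviation of one whole image
single_pixel_std = images[:, 10, 10].std()   # std. deviation of one pixel across images
std_map = images.std(axis=0)                 # per-pixel std. deviation map, shape (64, 64)
print(std_map.shape)
```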
I have an image stored in a 2D array called data. I know how to calculate the standard deviation of the entire array using numpy that outputs one number quantifying how much the data is spread. However, how can I made a standard deviation map (of the same size as my image array) and each element in this array is the standard deviation of the corresponding pixel in the image array (i.e, data).
0
1
5,088
0
57,353,337
0
0
0
0
1
false
0
2019-08-05T06:28:00.000
0
1
0
Convert each row in a PySpark DataFrame to a file in s3
57,353,211
0
python,apache-spark,amazon-s3,pyspark,pyspark-sql
I don't think we can directly store each row as a JSON-based file. Instead, we can iterate over each partition of the DataFrame and connect to S3 using an AWS S3 library (so the connection happens at the partition level). Then, within each partition, we can use the iterator to convert each row into a JSON file and push it to S3.
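A rough sketch of the per-partition approach described above, assuming boto3 and hypothetical bucket, path and column names:

```python
import json
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.json("input.json")          # source data (assumption)

def write_partition(rows):
    s3 = boto3.client("s3")                 # one client per partition
    for row in rows:
        record = row.asDict()
        key = f"output/{record['id']}.json" # file named after an assumed 'id' column
        s3.put_object(Bucket="my-bucket", Key=key, Body=json.dumps(record))

df.foreachPartition(write_partition)
```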
I'm using PySpark and I need to convert each row in a DataFrame to a JSON file (in s3), preferably naming the file using the value of a selected column. Couldn't find how to do that. Any help will be very appreciated.
0
1
166
0
57,369,728
0
0
0
1
1
false
0
2019-08-06T01:50:00.000
2
1
0
Import XLS file from GCS to BigQuery
57,367,921
0.379949
excel,google-cloud-storage,airflow,xls,python-bigquery
BigQuery doesn't support the XLS format. The easiest way is to transform the file into CSV and load that into BigQuery. However, I don't know your XLS format; if it has multiple sheets you will have to work on the file first.
I have some .xls data in my Google Cloud Storage and want to use Airflow to store it to GCP. Can I export it directly to BigQuery, or can I use an additional library (such as pandas and xlrd) to convert the files and store them into BigQuery? Thanks
0
1
542
0
57,369,487
0
0
0
0
3
false
0
2019-08-06T04:52:00.000
0
3
0
what do hidden layers mean in a neural network?
57,369,148
0
python,tensorflow,neural-network
AFAIK, for this digit recognition case, one way to think about it is that each hidden layer represents a level of abstraction. For now, imagine the neural network for digit recognition has only 3 layers: 1 input layer, 1 hidden layer and 1 output layer. Let's take a look at a number. To recognise that it is a number, we can break the picture of the number into a few more abstract concepts such as lines, circles and arcs. If we want to recognise 6, we can first recognise the more abstract concepts that exist in the picture: for 6 it would be an arc and a circle in this example. For 8 it would be 2 circles. For 1 it would be a line. It is the same for a neural network. We can think of layer 1 as the pixels, layer 2 as recognising the abstract concepts we talked about earlier such as lines, circles and arcs, and finally in layer 3 we determine which number it is. Here we can see that the input goes through a series of layers from the least abstract to the most abstract (pixels -> lines, circles, arcs -> number). In this example we only have 1 hidden layer, but in a real implementation it would be better to have more than 1 hidden layer, depending on your interpretation of the neural network. Sometimes we don't even have to think about what each layer represents and instead let the training work it out for us. That is the purpose of the training anyway.
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
0
1
395
0
57,369,294
0
0
0
0
3
false
0
2019-08-06T04:52:00.000
0
3
0
what do hidden layers mean in a neural network?
57,369,148
0
python,tensorflow,neural-network
Consider a very basic example of the AND, OR, NOT and XOR functions. You may already know that a single neuron is only suitable when the problem is linearly separable. In this case, the AND, OR and NOT functions are linearly separable, so they can be easily handled using a single neuron. But consider the XOR function: it is not linearly separable, so a single neuron will not be able to predict the value of the XOR function. Now, the XOR function is a combination of AND, OR and NOT. The equation below is the relation between them: a XOR b = (a AND (NOT b)) OR ((NOT a) AND b). So, for XOR, we can use a network which contains three layers. The first layer will act as the NOT function, the second layer will act as the AND of the output of the first layer, and finally the output layer will act as the OR of the 2nd hidden layer. Note: this is just an example to explain why it is needed; XOR can be implemented with various other combinations of neurons.
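A tiny sketch checking the identity quoted above on all four input pairs:

```python
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # a XOR b = (a AND (NOT b)) OR ((NOT a) AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints 0, 1, 1, 0
```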
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
0
1
395
0
57,369,365
0
0
0
0
3
false
0
2019-08-06T04:52:00.000
0
3
0
what do hidden layers mean in a neural network?
57,369,148
0
python,tensorflow,neural-network
A hidden layer in a neural network may be understood as a layer that is neither an input nor an output, but instead is an intermediate step in the network's computation. In your MNIST case, the network's state in the hidden layer is a processed version of the inputs, a reduction from full digits to abstract information about those digits. This idea extends to all other hidden layer cases you'll encounter in machine learning -- a second hidden layer is an even more abstract version of the input data, a recurrent neural network's hidden layer is an interpretation of the inputs that happens to collect information over time, or the hidden state in a convolutional neural network is an interpreted version of the input with certain features isolated through the process of convolution. To reiterate, a hidden layer is an intermediate step in your neural network's process. The information in that layer is an abstraction of the input, and holds information required to solve the problem at the output.
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
0
1
395
0
57,383,549
0
0
0
0
1
true
2
2019-08-06T13:48:00.000
2
3
0
How to evaluate HDBSCAN text clusters?
57,377,594
1.2
python,cluster-analysis,evaluation,hdbscan
It's the same problem everywhere in unsupervised learning. It is unsupervised; you are trying to discover something new and interesting. There is no way for the computer to decide whether something is actually interesting or new. It can only decide trivial cases where the prior knowledge is already coded in a machine-processable form, and you can compute some heuristic values as a proxy for interestingness. But such measures (including density-based measures such as DBCV) are in no way better at judging this than the clustering algorithm itself is at choosing the "best" solution. In the end, there is no way around manually looking at the data and doing the next steps - trying to put what you learned about the data to use. Presumably you are not an ivory-tower academic doing this just to make up yet another useless method... So use it; don't fake using it.
I'm currently trying to use HDBSCAN to cluster movie data. The goal is to cluster similar movies together (based on movie info like keywords, genres, actor names, etc) and then apply LDA to each cluster and get the representative topics. However, I'm having a hard time evaluating the results (apart from visual analysis, which is not great as the data grows). With LDA, although it's hard to evaluate it, i've been using the coherence measure. However, does anyone have any idea on how to evaluate the clusters made by HDBSCAN? I haven't been able to find much info on it, so if anyone has any idea, I'd very much appreciate!
0
1
2,279
0
57,378,682
0
0
0
0
1
false
0
2019-08-06T14:09:00.000
1
1
0
Apache Nifi : I want to ingest my Data CSV to Elasticsearch without streaming it to some other processor using apache nifi
57,378,012
0.197375
python,csv,elasticsearch,apache-nifi
The output of the command executed by ExecuteStreamCommand will be written to a flow file that is transferred to the "output stream" relationship. You should be able to connect ExecuteStreamCommand "output stream" directly to PutElasticSearch.
I am trying to set up a simple process to modify my CSV file and ingest it into the Elasticsearch DB using Apache Nifi. I don't want to stream my CSV file to stdout while passing the file from one processor to another. I've already made two flows. My first flow gets my CSV file using the GetFile processor and customizes it using ExecuteStreamCommand, in which I run my Python script to read, modify and save my CSV file locally. My second flow again reads that modified CSV file using the GetFile processor and ingests it directly into my Elasticsearch DB. Currently, to get this task accomplished, I run these two flows separately. Can I connect the ExecuteStreamCommand of my first flow and the GetFile of my second flow together, so that I can run them as one single flow? Is there any other option to read a file written by ExecuteStreamCommand locally without streaming?
0
1
312
0
57,393,865
0
0
0
0
2
true
2
2019-08-07T11:48:00.000
2
4
0
why is numpy where keep raising divide by zero encountered?
57,393,768
1.2
python,numpy,where-clause
Since it takes advantage of vectorization, it will execute 100/np.sqrt(w) for every element of w, so the division by 0 will happen, but then you are not keeping the results associated with those entries. So basically, with your trick you are still dividing by 0 but not using the entries where you divided by 0.
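A small sketch of two ways to avoid the warning entirely, using the array from the question (the masking variant is an assumption, not part of the original answer):

```python
import numpy as np

w = np.array([0.854, 0, 0.66, 0.245, 0, 0, 0, 0])

# Option 1: silence the divide warning for this expression only
with np.errstate(divide="ignore"):
    out1 = np.where(w == 0, 0, 100 / np.sqrt(w))

# Option 2: only divide where w is non-zero, so no warning is ever raised
out2 = np.zeros_like(w, dtype=float)
nz = w != 0
out2[nz] = 100 / np.sqrt(w[nz])
```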
I have an array like w=[0.854,0,0.66,0.245,0,0,0,0] and want to apply 100/sqrt(x) on each value. As I can't divide by 0, I'm using this trick : w=np.where(w==0,0,100/np.sqrt(w)) As I'm not dividing by 0, I shouldn't have any warning. So why does numpy keep raising RuntimeWarning: divide by zero encountered in true_divide ?
0
1
615
0
57,393,853
0
0
0
0
2
false
2
2019-08-07T11:48:00.000
2
4
0
why is numpy where keep raising divide by zero encountered?
57,393,768
0.099668
python,numpy,where-clause
100/np.sqrt(w) still uses the w with zeros because function arguments are evaluated before executing a function. The square root of zero is zero, so you end up dividing 100 by an array that contains zeros, which in turn attempts to divide 100 by each element of this array and at some point tries to divide it by an element that's equal to zero.
I have an array like w=[0.854,0,0.66,0.245,0,0,0,0] and want to apply 100/sqrt(x) on each value. As I can't divide by 0, I'm using this trick : w=np.where(w==0,0,100/np.sqrt(w)) As I'm not dividing by 0, I shouldn't have any warning. So why does numpy keep raising RuntimeWarning: divide by zero encountered in true_divide ?
0
1
615
0
57,400,930
0
1
0
0
1
true
0
2019-08-07T19:10:00.000
4
1
0
Classification List Comprehension Not Behaving as Expected
57,400,902
1.2
python
-1 if x < .1 else 0 should be -1 if x < -.1 else 0
I have built a list comprehension that takes in a list of lists [actual, predicted] and then classifies the contained lists. I want to produce output lists that have 1 if the original element is > .1, -1 if the original output is < -.1 and 0 if the original element is < .1 and > -.1. For example [[2, 0, -2],[0, 0, 0]] would be mapped to [[1,0,-1], [0,0,0]]. I am using this code to perform this: classified = [list(map(lambda x: 1 if x > .1 else (-1 if x < .1 else 0), i)) for i in inputs]. However, when my code should classify an element as 0, it classifies it as -1. For example [-.15, -.05, 0] is mapped to [-1, -1, -1] instead of [-1, 0, 0]. It does classify points that map to 1 and -1 correctly.
0
1
42
0
57,402,233
0
0
0
0
1
true
5
2019-08-07T20:42:00.000
2
1
0
Plotly Express hover options
57,402,050
1.2
python,hover,plotly,plotly-dash,plotly-express
1 and 2 are not possible yet. For 3 there is a hovermode attribute in layout that you can set to show one hover label per trace per y-value.
In plotly_express.line the only options I see to modify hover settings are hover_name and hover_data. A few issues I'm facing with modifying hover are: It seems that even if I set hover_data=None it still shows the values for name,x, and y. How can I set it to only show the hover info I select without adding defaults? I can't find a setting to modify the opacity for hover boxes. I'm displaying a lot of hover info so my hover box is large, which makes it difficult to know where I am on the plot behind the box. How can I make it so hovering on one line displays hover info corresponding to a linked data column value on all other lines?
0
1
2,985
0
57,410,794
0
0
0
0
1
true
0
2019-08-08T10:39:00.000
0
1
0
Create an ID segmented images
57,410,686
1.2
python,machine-learning,computer-vision,dataset,image-segmentation
From what I understand, your segmented images come with 3 channels, where the color of each pixel corresponds to its GT label. When you train your image segmentation model, there is no need for an output of 3 channels (it's redundant), so they recommend that you create a new annotated image, where you replace each color with the provided ID. This recommendation is simply there to make the job of the model a little bit easier.
I have a dataset containing both RGB and Segmented images as ground truth , the readme.txt included in the annotated dataset stated this : GT_color :folder containing the groundtruth masks for semantic segmentation Annotations are given using a color representation, where each corresponds to a specific class. This is primairly provided for visualization. For training, create a corresponding ID image by assigning the colors to a specific class ID as given below. Class R G B ID Void - - - 0 Road 170 170 170 1 Grass 0 255 0 2 Vegetation 102 102 51 3 Tree 0 60 0 3 Sky 0 120 255 4 Obstacle 0 0 0 5 I don't understand what's meant by "creating corresponding ID image" , aren't the segmented images labelled already by each area's color ? meaning that the rgb are the labels ?
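A minimal sketch of the color-to-ID conversion recommended in the answer above, using the class colors listed in this question; the file paths are assumptions:

```python
import numpy as np
from PIL import Image

COLOR_TO_ID = {
    (170, 170, 170): 1,  # Road
    (0, 255, 0): 2,      # Grass
    (102, 102, 51): 3,   # Vegetation
    (0, 60, 0): 3,       # Tree
    (0, 120, 255): 4,    # Sky
    (0, 0, 0): 5,        # Obstacle
}

mask_rgb = np.array(Image.open("GT_color/example.png").convert("RGB"))
ids = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)   # unmatched pixels stay 0 (Void)
for color, class_id in COLOR_TO_ID.items():
    ids[np.all(mask_rgb == color, axis=-1)] = class_id
Image.fromarray(ids).save("GT_id/example.png")
```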
0
1
83
0
57,426,426
0
0
0
0
1
true
0
2019-08-09T06:13:00.000
0
2
0
How can I identify the records inside each cluster in a KNN model in SciKit-Learn Python?
57,424,338
1.2
python,scikit-learn,label,knn
KNN is not clustering, but classification. The parameter k is not the k of k-means; it is the number of neighbors, not the number of clusters... Hence, setting k to 5 does not suddenly produce 5 labels. Your training data has 2 labels, hence you get 2 labels. KNN = k-nearest neighbors classification. For k=5 this means 5 nearest neighbors. K-means clustering = approximate the data with k center vectors. An entirely different k.
I am making a KNN model. The target variable is divided in 2 categories, and the features are 3 categorical variables (country, language and company). The model says the optimal is 5 clusters, so I did it with 5. I need to know how can I see the records in each of the 5 clusters (I mean, the countries, languages and companies that the model is grouping in each of them). Is there a way to add the labels of the clusters to the dataframe? I tried: predictions = knn.predict(features) But that is only returning the estimations for the 2 labels of the target variable I did some research and found: km.labels_ But that only applies for KMeans, and I am using KNN I hope somebody can tell me the equivalent for that or how to solve the problem for KNN Model please
0
1
112
0
57,427,988
0
0
0
1
1
false
0
2019-08-09T09:16:00.000
0
1
0
How to run python script using S3 data in AWS
57,426,946
0
python-3.x,amazon-s3,aws-lambda,job-scheduling
You have several options. You could use AWS Lambda, but Lambda has limited local storage (500 MB) and memory (3 GB), with a 15-minute run time limit. Since you mentioned pandas, I recommend using AWS Glue, which can: detect new files; run with large memory and CPU; provide a visual data flow; support Spark DataFrames; query data from your CSV files; and connect to different database engines. We currently use AWS Glue for our data parser processes.
I have a CSV file in S3. I want to run a python script using data present in S3. The S3 file will change once in a week. I need to pass an input argument to my python script which loads my S3 file into Pandas and do some calculation to return the result. Currently I am loading this S3 file using Boto3 in my server for each input argument. This process takes more time to return the result, and my nginx returns with 504 Gateway timeout. I am expecting some AWS service to do it in cloud. Can anyone point me in a right direction which AWS service is suitable to use here
0
1
389
0
57,456,535
0
0
0
0
1
false
2
2019-08-09T11:49:00.000
3
1
0
Difference between DISTRIBUTE BY and Shuffle in Spark-SQL
57,429,479
0.53705
python,apache-spark,apache-spark-sql,pyspark-sql
Let me try to answer each part of your question: As per my understanding, the Spark Sql optimizer will distribute the datasets of both the participating tables (of the join) based on the join keys (shuffle phase) to co-locate the same keys in the same partition. If that is the case, then if we use the distribute by in the sql, then also we are doing the same thing. Yes that is correct. So in what way can distribute by could be used ameliorate join performance ? Sometimes one of your tables is already distributed, for example the table was bucketed or the data was aggregated before the join by the same key. In this case if you explicitly repartition also the second table (distribute by) you will achieve the same partitioning in both branches of the join and Spark will not induce any more shuffle in the first branch (sometimes this is referenced to as one-side shuffle-free join because the shuffle will occur only in one branch of the join - the one in which you call repartition / distribute by). On the other hand if you don't repartition explicitly the other table, Spark will see that each branch of the join has different partitioning and thus it will shuffle both branches. So in some special cases calling repartition (distribute by) can save you one shuffle. Notice that to make this work you need to achieve also the same number of partitions in both branches. So if you have two tables that you want to join on the key user_id and if the first table is bucketed into 10 buckets with this key then you need to repartition the other table also into 10 partitions by the same key and then the join will have only one shuffle (in the physical plan you can see that there will be Exchange operator only in one brach of the join). Or is it that it is better to use distribute by while writing the data to disk by the load process, so that subsequent queries using this data will benefit from it by not having to shuffle it ? Well, this is actually called bucketing (cluster by) and it will allow you to pre-shuffle the data once and then each time you read the data and join it (or aggregate) by the same key by which you bucketed, it will be free of shuffle. So yes, this is very common technique that you pay the cost only once when saving the data and then leverage that each time you read it.
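A rough PySpark sketch of the one-side shuffle-free join described above; the table names, key and partition count are assumptions:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# table_a was written bucketed by user_id into 10 buckets (assumption)
table_a = spark.table("table_a")
table_b = spark.table("table_b")

# Explicitly repartition the other side by the same key and the same number
# of partitions, so only one branch of the join needs an extra shuffle.
table_b_repart = table_b.repartition(10, "user_id")

joined = table_a.join(table_b_repart, on="user_id")
joined.explain()   # inspect where the Exchange operators appear
```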
I am trying to understand Distribute by clause and how it could be used in Spark-SQL to optimize Sort-Merge Joins. As per my understanding, the Spark Sql optimizer will distribute the datasets of both the participating tables (of the join) based on the join keys (shuffle phase) to co-locate the same keys in the same partition. If that is the case, then if we use the distribute by in the sql, then also we are doing the same thing. So in what way can distribute by could be used ameliorate join performance ? Or is it that it is better to use distribute by while writing the data to disk by the load process, so that subsequent queries using this data will benefit from it by not having to shuffle it ? Can you please explain with a real-world example to tune join using distribute by/cluster by in Spark-SQL ?
0
1
2,866
0
57,441,200
0
0
0
0
1
false
0
2019-08-10T08:49:00.000
0
1
0
How to use directional data as feature using scikit-learn knn?
57,440,714
0
python,scikit-learn,knn,gridsearchcv
You can break the direction column, theta, into 2 columns, sin(theta) and cos(theta), both of which vary continuously with direction.
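A small sketch of the sin/cos split suggested above, assuming the direction is stored in degrees in a column named "direction":

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"direction": [0, 90, 180, 359]})   # toy data (assumption)

radians = np.deg2rad(df["direction"])
df["dir_sin"] = np.sin(radians)
df["dir_cos"] = np.cos(radians)
# 359 deg and 0 deg now map to nearly identical (sin, cos) pairs
print(df)
```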
I'm new to using KNN and in my train set I have velocity vector. Since the directions 359° and 0° are completely different I was thinking of transforming the direction so that the vector in test data it points to 180°. I could make this transformation before using KNeighborsClassifier if I predict from one data point, but when I tune the hyperparameters with GridSearchCV the transformation should be done between every comparison. Is there some way to do that? Or some alternative way that I'm missing?
0
1
68
0
57,444,559
0
1
0
0
1
false
0
2019-08-10T17:59:00.000
1
2
0
How to save a figure in matplotlib by its variable name?
57,444,490
0.099668
python,matplotlib
You can save an individual figure by calling its own method: figX.savefig('figX_file.png').
In matplotlib, matplotlib.pyplot.savefig saves the current figure. If I have multiple figures as variables in my workspace, e.g. fig1, fig2, and fig3, is it possible to save any of these figures based on their variable name, without first bringing them up as the current figure? E.g., I'd like to do something like: save('fig2', 'fig2_file.png')
0
1
209
0
57,450,598
0
0
0
0
1
false
0
2019-08-11T06:13:00.000
0
1
0
Why does not using retain_graph=True result in error?
57,447,736
0
python,neural-network,deep-learning,pytorch
By default, PyTorch doesn't store intermediate gradients, because PyTorch's main feature is dynamic computational graphs, so after backpropagation the graph will be freed and all the intermediate buffers will be destroyed.
If I need to backpropagate through a neural network twice and I don't use retain_graph=True, I get an error. Why? I realize it is nice to keep the intermediate variables used for the first backpropagation to be reused for the second backpropagation. However, why aren't they simply recalculated, like they were originally calculated in the first backpropagation?
0
1
55
0
57,454,549
0
0
0
0
1
true
0
2019-08-11T22:56:00.000
0
1
0
How to get the rand() function in excel to rerun when accessing an excel file through python
57,454,186
1.2
excel,python-3.x,pandas,xlrd,xlwt
Excel formulas like RAND(), or any other formula, will only refresh when Excel is actually running and recalculating the worksheet. So, even though you may be able to access the data in an Excel workbook with Python, you won't be able to run Excel calculations that way. You will need to find a different approach.
I am trying to access an Excel file using Python for my physics class. I have to generate data that follows a function but creates variance so it doesn't line up perfectly with the function (simulating the error experienced in experiments). I did this by using the RAND() function. We need to generate a lot of data sets so that we can average them together and eliminate the error/noise created by the RAND() function. I tried to do this by loading the Excel file and recording the data I need, but then I can't figure out how to get the RAND() function to rerun and create a new data set. In Excel it reruns when I change the value of any cell on the Excel sheet, but I don't know how to do this when I'm accessing the file with Python. Can someone help me figure out how to do this? Thank you.
0
1
193
0
57,469,658
0
0
0
0
1
false
1
2019-08-12T10:25:00.000
0
1
0
SSd_mobilenet loss cannot go down
57,459,492
0
python,tensorflow,object-detection-api
I would first try with images of size 300 x 300, or use a library which will downscale your images and the bounding boxes.
I am trying to train ssd-mobilenet on my own dataset: 3400 training images of size 1600*1200, 800 test images of size 1600*1200, tensorflow-gpu 1.13.1, 4 GB GPU, CUDA 10.0, cuDNN 7. The objects are road damage, such as alligator cracks. But after 197000 steps my training loss cannot go below 2. I need help. Thanks in advance.
0
1
274
0
57,460,928
0
0
0
0
1
true
0
2019-08-12T12:04:00.000
2
1
0
Detect text only inside detected objects
57,460,861
1.2
python,opencv,yolo
Well, the first one that pops into my mind would be to crop the objects detected with YOLO and then run the OCR on that image. After running OCR, you'll have to do some postprocessing to classify each line of text to a specific category (price, name etc.)
I'm very new to Computer Vision. I'm trying to build a CV model which will detect and recognize price tags and extract info from them. I've already trained a model that can detect price tags using YOLO. But I also want to teach my system to detect and recognize text that is written only inside these price tags, and then parse this info into different parts, for example: price, product name, product description. Or maybe I first need to parse the detected blocks (price block on the left side of the price tag, product name on the right side, etc.) and then read them. Any ideas would be appreciated.
0
1
71
0
57,510,905
0
0
0
0
1
true
0
2019-08-12T16:37:00.000
0
1
0
Rasa NLU model to old
57,465,038
1.2
python,anaconda,rasa-nlu,rasa-core
I believe you trained this model on the previous version of Rasa NLU and updated Rasa NLU to a new version (Rasa NLU is a dependency for Rasa Core, so changes were made in the requirements.txt file). If this is the case, there are 2 ways to fix it: Recommended solution: if you have the data and parameters, train your NLU model again using the current dependencies (the ones that you have running now), so you have a new model which is compatible with your current version of Rasa. If you don't have the data or cannot retrain the model for some reason, then downgrade Rasa NLU to version 0.14.6. I'm not sure if your current Rasa Core is compatible with NLU 0.14.6, so you might also need to downgrade Rasa Core if you see errors. Good luck!
I have a problem. I am trying to use my model with Rasa core, but it gives me this error: rasa_nlu.model.UnsupportedModelError: The model version is to old to be loaded by this Rasa NLU instance. Either retrain the model, or run withan older version. Model version: 0.14.6 Instance version: 0.15.1 Does someone know which version I need to use then and how I can install that version?
1
1
879
0
57,467,836
0
0
0
0
1
true
2
2019-08-12T20:13:00.000
6
2
0
Reshaping Image for PyTorch
57,467,707
1.2
python,image,opencv,keras,pytorch
First, the Keras format is (samples, height, width, channels). All you need to do is moved = numpy.moveaxis(data, -1, 1). If by luck you were using the non-default config "channels_first", then the config is identical to that of PyTorch, which is (samples, channels, height, width). And when transforming to torch: data = torch.from_numpy(moved).
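A minimal sketch of the conversion above, assuming the arrays are in the Keras default (samples, height, width, channels) layout as the answer does:

```python
import numpy as np
import torch

data = np.random.rand(8, 64, 64, 3).astype(np.float32)   # (samples, H, W, C)
moved = np.moveaxis(data, -1, 1)                          # -> (samples, C, H, W)
tensor = torch.from_numpy(moved)
print(tensor.shape)   # torch.Size([8, 3, 64, 64])
```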
I used to use Keras, and the image format it followed is [Height x Width x Channels x Samples]. I decided to switch to PyTorch, but I didn't switch out my data loading schemes. So now I have numpy arrays of shape HxWxCxS, instead of SxCxHxW which is required for PyTorch. Does anyone have any idea how to convert this?
0
1
2,309
0
62,476,273
0
0
0
0
1
false
1
2019-08-12T20:45:00.000
2
1
0
Corrected P-value returned as NaN
57,468,030
0.379949
python,statsmodels
Check for NaNs in your p-value array. Using multipletests on an array with any NaNs will return all NaNs, as you describe.
I have attempted to run a FDR correction on an array of p-values using both statsmodels.stats.multitest's multipletests(method='fdr_bh') and fdrcorrection. In both instances I receive an array of NaN's as the corrected p-values. I do not understand why the corrected p-value is returning as NaN. Could someone please help explain?
0
1
694
0
57,482,532
0
0
0
0
1
false
0
2019-08-13T17:03:00.000
1
1
0
How to compare frequency of unigrams with frequencies of bigrams, trigrams, etc?
57,482,331
0.197375
python,text,nlp,nltk
As you've asked this, there is no simple linear multiplier. You can make a general estimate based on the size of your set of units. Consider the English alphabet of 26 letters: you have 26 possible unigrams, 26^2 digrams, 26^3 trigrams, ... A simple treatment suggests that you would multiply a digram's frequency by 26 to compare it with unigrams; trigram frequencies would get a 26^2 boost. I don't know whether that achieves the comparison you want, as the actual distribution of n-grams does not follow any mathematically tractable function. For instance, letter-trigram distribution is a good way to differentiate the language in use: English, French, Spanish, German, Romanian, etc. have readily distinguishable distributions. Another possibility is to normalize the data: convert each value into a z-score, the number of standard deviations above/below the mean of the distribution. The resulting list of z-scores has a mean of 0 and a standard deviation of 1.0. Does either of those get you the results you need?
I want to build a word cloud containing multiple word structures (not just one word). In any given text we will have bigger frequencies for unigrams than bigrams. Actually, the n-gram frequency decreases when n increases for the same text. I want to find a magic number or a method to obtain comparative results between unigrams and bigrams, trigrams, n-grams. There is any magic number as a multiplier for n-gram frequency in order to be comparable with a unigram? A solution that I have now in mind is to make a top for any n-gram (1, 2, 3, ...) and use the first z positions for any category of n-grams.
0
1
442
0
57,485,755
0
0
0
0
1
true
0
2019-08-13T21:41:00.000
1
1
0
How can i generate Data from .gdf files using Jupyter notebook?
57,485,659
1.2
python
I recommend using the gdflib library. It'll allow you to process your .gdf files by organizing your data into nodes for further processing. It would also help if you could please provide a minimal reproducible example of what you have tried.
I'm preparing my dataset to be preprocessed before training with a CNN model, but I couldn't generate data from this type of file, which contains several signals.
0
1
198
0
57,488,153
0
0
0
0
1
false
0
2019-08-14T01:33:00.000
1
1
0
Does it make sense to use a part of the dataset to train my model?
57,487,124
0.197375
python,machine-learning,xgbclassifier
To answer your first question, deleting the part of the dataset that doesn't work is not a good idea because then your model will overfit on the data that gives better numbers. This means that the accuracy will be higher, but when presented with something that is slightly different from the dataset, the probability of the network adapting is lower. To answer the second question, it seems like that's a good approach, but again I'd recommend keeping the full dataset.
The dataset I have is a set of quotations that were presented to various customers in order to sell a commodity. Prices of commodities are sensitive and standardized on a daily basis and therefore negotiations are pretty tricky around their prices. I'm trying to build a classification model that had to understand if a given quotation will be accepted by a customer or rejected by a customer. I made use of most classifiers I knew about and XGBClassifier was performing the best with ~95% accuracy. Basically, when I fed an unseen dataset it was able to perform well. I wanted to test how sensitive is the model to variation in prices, in order to do that, I synthetically recreated quotations with various prices, for example, if a quote was being presented for $30, I presented the same quote at $5, $10, $15, $20, $25, $35, $40, $45,.. I expected the classifier to give high probabilities of success as the prices were lower and low probabilities of success as the prices were higher, but this did not happen. Upon further investigation, I found out that some of the features were overshadowing the importance of price in the model and thus had to be dealt with. Even though I dealt with most features by either removing them or feature engineering them to better represent them I was still stuck with a few features that I just cannot remove (client-side requirements) When I checked the results, it turned out the model was sensitive to 30% of the test data and was showing promising results, but for the rest of the 70% it wasn't sensitive at all. This is when the idea struck my mind to feed only that segment of the training data where price sensitivity can be clearly captured or where the success of the quote is inversely related to the price being quoted. This created a loss of about 85% of the data, however the relationship that I wanted the model to learn was being captured perfectly well. This is going to be an incremental learning process for the model, so each time a new dataset comes, I'm thinking of first evaluating it for the price sensitivity and then feeding in only that segment of the data for training which is price sensitive. Having given some context to the problem, some of the questions I had were: Does it make sense to filter out the dataset for segments where the kind of relationship I'm looking for is being exhibited? Post training the model on the smaller segment of the data and reducing the number of features from 21 to 8, the model accuracy went down to ~87%, however it seems to have captured the price sensitivity bit perfectly. The way I evaluated price sensitivity is by taking the test dataset and artificially adding 10 rows for each quotation with varying prices to see how the success probability changes in the model. Is this a viable approach to such a problem?
0
1
53
0
57,491,074
0
0
0
0
1
true
0
2019-08-14T08:34:00.000
1
1
0
Mask-RCNN project
57,491,013
1.2
python,python-3.x,deep-learning
A pre-trained model is a model that has been trained, usually on a big dataset using big computers, and can be fine-tuned for a given problem using small amounts of computation. This is possible to do with Deep Learning, where training a model consists of adjusting some matrices of weights. When we refer to pre-trained weights, we mean that we train a model on big datasets and then store the weights so they can be used for other tasks.
I am currently doing a group project about Semantic Segmentation and need to train the model with our own data set. The problem is that the data set is not available in any pre-trained model, since the objective is to detect each part of sneakers (e.g. lace, outsole, front patch, logos etc). None of our team members has studied deep learning before, though we have all studied computer science. Also, there's another question about Mask-RCNN: what is the exact meaning of the weights of a pre-trained model? Are they the weights calculated by the DL model?
0
1
103
0
57,493,372
0
0
0
0
1
true
1
2019-08-14T10:55:00.000
1
1
0
df.set_index=("Neighbourhood",inplace=True) giving me SyntaxError: invalid syntax
57,493,320
1.2
python,python-3.x,indexing,methods
set_index is a function; you need to call it. Try df.set_index("Neighbourhood", inplace=True) (without the =).
All of my previous code runs well. It is only when I try to set the index to a particular column as the code below shows, that I run into an error. Honestly - this same method has worked before and I have not been able to find any other method to do the same thing. df.set_index=("Neighbourhood",inplace=True) ​ Error message: File "", line 1 df.set_index=("Neighbourhood",inplace=True) ^ SyntaxError: invalid syntax
0
1
499
0
57,539,079
0
0
0
0
1
true
7
2019-08-15T00:11:00.000
6
1
0
ModuleNotFoundError: no module named efficientnet.tfkeras
57,503,473
1.2
python,keras
To install segmentation-models use the following command: pip install git+https://github.com/qubvel/segmentation_models
I attempted to do import segmentation_models as sm, but I got an error saying efficientnet was not found. So I then did pip install efficientnet and tried it again. I now get ModuleNotFoundError: no module named efficientnet.tfkeras, even though Keras is installed as I'm able to do from keras.models import * or anything else with Keras how can I get rid of this error?
0
1
10,143
0
57,529,064
0
0
0
0
1
true
1
2019-08-15T04:06:00.000
0
1
0
Setting up keras and tensoflow to operate with AMD GPU
57,504,746
1.2
python-3.x,tensorflow,keras,gpu,amd
To the best of my knowledge, PlaidML was not working because I did not have the required prerequisites such as OpenCL. I downloaded the Visual Studio C++ build tools in order to install PyOpenCL from a .whl file, and this seemed to resolve the issue.
I am trying to set up Keras in order to run models using my GPU. I have a Radeon RX580 and am running Windows 10. I saw realized that CUDA only supports NVIDIA GPUs and was having difficulty finding a way to get my code to run on the GPU. I tried downloading and setting up plaidml but afterwards from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) only printed that I was running on a CPU and there was not a GPU available even though the plaidml setup was a success. I have read that PyOpenCl is needed but have not gotten a clear answer as to why or to what capacity. Does anyone know how to set up this AMD GPU to work properly? any help would be much appreciated. Thank you!
0
1
435
0
57,507,841
0
0
0
0
1
false
0
2019-08-15T09:40:00.000
0
1
0
What is the suitable technique for discrete classification and variable optimization problem?
57,507,740
0
python,optimization,classification
What I would do in this case is use a finite-differences gradient approach to solve the problem. For that, you can follow these steps: 1) Select a customer with a "Bad" prediction, increase and decrease their variables one by one a little bit, and check the prediction. 2) That way you'll see which direction of change in each variable causes the probability of "Good" to go up. 3) Once you have it, move all the variables a bit in the direction of improvement. 4) Repeat from 1) until convergence. BTW, this is the way Excel Solver works.
I get a simple task request to perform a classification to predict classes i.e between 'Good' and 'Bad' customers. The problem is, recommendation is needed on suitable variables' values for those customers with 'Bad' prediction, so that they can take action to improve their profile. Examples of variable are 'Purchase Score' and 'Purchase Frequency'. Means that, these customers need to improve on these scores so that the prediction output can be obtained as 'Good' customers. In this problem, when the optimized variables are input back into the classification problem, it needs to output 'Good' label instead of 'Bad' label. I've searched through the optimization methods such as scipy.optimize and Genetic Algorithms, but from my understanding, the optimized variables are for a set continuous value target instead of a class target. What technique can I use to achieve the optimization and recommendation part for a class prediction?
0
1
28
0
57,524,523
0
0
1
0
1
true
1
2019-08-16T12:02:00.000
3
1
0
Using CP-SAT Solver for non-linear objective function
57,524,316
1.2
python,or-tools,cp-sat-solver
You have to create an intermediate variable using AddMultiplicationEquality(x2, [x, x])
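A minimal sketch of the intermediate-variable trick above; the variable bounds and the extra constraint are assumptions:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 100, "x")
y = model.NewIntVar(0, 100, "y")

x2 = model.NewIntVar(0, 10000, "x2")      # intermediate variable holding x*x
xy = model.NewIntVar(0, 10000, "xy")      # intermediate variable holding y*x
model.AddMultiplicationEquality(x2, [x, x])
model.AddMultiplicationEquality(xy, [y, x])

model.Add(x + y <= 80)                    # example constraint (assumption)
model.Maximize(x2 - xy)                   # objective x**2 - y*x

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.ObjectiveValue())
```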
I'm trying to use CP-SAT solver with some variables: x,y. I want to maximise an objective function of the form x**2-y*x with some constraints. I'm getting TypeError: unsupported operand type(s) for ** or pow(): 'IntVar' and 'int' error messages. Am I correct in assuming I cannot use nonlinear objective function for CP-SAT, as I couldn't find any documentation or examples that did employ nonlinear objectives? Or is there some way to do this?
0
1
671
0
57,528,483
0
0
0
0
1
true
0
2019-08-16T13:58:00.000
0
1
0
Plotly Express hovermode with arbitrary column
57,525,950
1.2
python,hover,plotly,plotly-python,plotly-express
No, there’s no feature for that at the moment. You could consider a second set of “iso-threshold” curves to see this kind of thing perhaps?
I see that the hovermode attribute in layout has options for x or y, but is it possible to use an arbitrary dataframe column? instead? For example, I'm plotting precision-recall curves. The x-axis is recall, and the y-axis is precision. The independent variable is a detection threshold value (with range of np.linspace(0,1.0,101)) with a column in my dataframe called threshold. When I hover over a precision-recall point, what I'm interested in is points on other curves with the same detection threshold value. So can I instead hover on this column?
0
1
544
0
57,539,279
0
0
0
0
1
false
2
2019-08-17T18:15:00.000
1
3
0
What input shape should I take in first layer of Sequential model when the dimensions of the images are (2048*1536)
57,538,810
0.066568
python,keras,deep-learning,dataset
I would first resize the images with cv2.resize(). Do you really need all the information from such a big image? For a Sequential model it follows, for example: model = models.Sequential() model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=(height, width, ndim))) ..., where height and width denote your input image dimensions and ndim = 1 for greyscale and ndim = 3 for colored images.
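A small sketch of the idea above — downscale first and feed the smaller shape to the first layer; the 224x224 target size and the rest of the architecture are assumptions:

```python
from tensorflow.keras import layers, models

height, width, ndim = 224, 224, 3     # downscaled colour images (assumption)

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(height, width, ndim)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.summary()
```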
I have an image dataset where each image has dimensions (2048, 1536). In ImageDataGenerator, to fetch data from the directory, I have used the same target size, i.e. (2048, 1536), but when making the first layer of a Sequential model, what input shape should I use? Will it be the same as (2048, 1536), or can I take an arbitrary shape like (224, 224)?
0
1
347
0
57,784,170
0
0
0
0
1
false
6
2019-08-18T04:05:00.000
-1
2
0
Tensorflow shows only "successfully opened CUDA library libcublas.so.10.0 locally" and nothing about cudnn
57,541,567
-0.099668
python,tensorflow
Actually I always rely on a stable setup, and I have tried most of the tf - CUDA - cuDNN version combinations. The most stable for me was tf 1.9.0, CUDA 9.0, cuDNN 7. I used it for a long time without a problem. You should give it a try if it suits you.
My tensorflow only prints out the line: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally when running. Tensorflow logs on the net has lots of other libraries being loaded like libcudnn. As I think my installation performance is not optimal, I am trying to find out if it is because of this. Any help will be appreciated! my tf is 1.13.1 NVIDIA Driver Version: 418.67 CUDA Version: 10.1 (I have also 10.0 installed. can this be the problem?)
0
1
3,774
0
57,588,916
0
0
0
0
1
false
2
2019-08-21T09:05:00.000
3
1
0
What is the difference between math.isnan ,numpy.isnan and pandas.isnull in python 3?
57,588,107
0.53705
python-3.x,pandas,numpy
The only difference between math.isnan and numpy.isnan is that numpy.isnan can handle lists, arrays and tuples, whereas math.isnan can ONLY handle single integers or floats. However, I suggest using math.isnan when you just want to check if a number is NaN, because numpy takes approximately 15 MB of memory when importing it, while math takes only about 0.2 MB. As for pandas.isnull, it returns True not only for NaN but also for Python's None, and like numpy it can handle every structure of numbers. However, it is even "heavier" than numpy.
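A small sketch reproducing the behaviour described in the question and above, assuming a decimal.Decimal NaN:

```python
import math
from decimal import Decimal

import numpy as np
import pandas as pd

x = Decimal("nan")
print(math.isnan(x))        # True
print(pd.isnull(x))         # False, as reported in the question
try:
    np.isnan(x)
except TypeError as err:    # numpy cannot coerce a Decimal NaN
    print("numpy raised:", err)

print(np.isnan([1.0, float("nan")]))   # numpy handles sequences of floats
```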
A NaN of type decimal.Decimal causes: math.isnan to return True numpy.isnan to throw a TypeError exception. pandas.isnull to return False What is the difference between math.isnan, numpy.isnan and pandas.isnull?
0
1
1,748
0
57,593,244
0
0
0
0
1
true
5
2019-08-21T12:09:00.000
7
1
0
Combination of GridSearchCV's refit and scorer unclear
57,591,311
1.2
python,scikit-learn,grid-search
When refit=True, sklearn uses entire training set to refit the model. So, there is no test data left to estimate the performance using any scorer function. If you use multiple scorer in GridSearchCV, maybe f1_score or precision along with your balanced_accuracy, sklearn needs to know which one of those scorer to use to find the "inner winner" as you say. For example with KNN, f1_score might have best result with K=5, but accuracy might be highest for K=10. There is no way for sklearn to know which value of hyper-parameter K is the best. To resolve that, you can pass one string scorer to refit to specify which of those scorer should ultimately decide best hyper-parameter. This best value will then be used to retrain or refit the model using full dataset. So, when you've got just one scorer, as your case seems to be, you don't have to worry about this. Simply refit=True will suffice.
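A minimal sketch of the multi-scorer case discussed above, where refit picks which scorer decides the final model; the estimator and parameter grid are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [3, 5, 10]},
    scoring={"bal_acc": "balanced_accuracy", "f1": "f1"},
    refit="bal_acc",    # this scorer picks the winner that gets refit on all data
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```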
I use GridSearchCV to find the best parameters in the inner loop of my nested cross-validation. The 'inner winner' is found using GridSearchCV(scorer='balanced_accuracy'), so as I understand the documentation the model with the highest balanced accuracy on average in the inner folds is the 'best_estimator'. I don't understand what the different arguments for refit in GridSearchCV do in combination with the scorer argument. If refit is True, what scoring function will be used to estimate the performance of that 'inner winner' when refitted to the dataset? The same scoring function that was passed to scorer (so in my case 'balanced_accuracy')? Why can you pass also a string to refit? Does that mean that you can use different functions for 1.) finding the 'inner winner' and 2.) to estimate the performance of that 'inner winner' on the whole dataset?
0
1
3,919
0
57,599,130
0
0
0
0
1
false
0
2019-08-21T12:38:00.000
0
1
0
Logic for creating mutually exclusive groups of individuals (clusters)
57,591,831
0
python-3.x,logic,cluster-analysis
Solve it the opposite way. Rather than trying all combinations and then checking which conflict, first find all conflicts. So if a record is in groups A, B, and O, then mark AB, AO, and BO as incompatible. When going through combinations, you can easily check that adding B is impossible if you chose to use A etc.
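A small sketch of the conflict-first approach above; the membership mapping is toy data that would be built from the survey columns:

```python
from itertools import combinations

# which groups each individual belongs to (toy data; build this from the survey)
memberships = {
    "person_1": {"A", "B"},
    "person_2": {"B"},
    "person_3": {"A", "O"},
}

conflicts = set()
for groups in memberships.values():
    conflicts.update(combinations(sorted(groups), 2))   # e.g. ("A", "B")

if conflicts:
    print("The created groups are not mutually exclusive:", sorted(conflicts))
else:
    print("The created groups are mutually exclusive.")
```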
I have survey data with about 100 columns for every individual. Based on certain criteria — e.g. one column contains info on whether a person reads comics and another column contains info on whether a person reads the newspaper — I want to validate whether the user has created clusters/groups that are mutually exclusive. E.g. Group 1: males aged 0-25 reading comics; Group 2: males aged 20-25 reading comics as well as the newspaper. In this case, I want to generate a warning that the groups are not mutually exclusive. One (inefficient) way of doing this is creating a list of individuals for every group and then finding the intersection for every combination of groups. If there is an intersection, the groups are not mutually exclusive and hence incorrect. What is an efficient way of doing this? Expected result: "The created groups are mutually exclusive." or "The created groups are not mutually exclusive."
0
1
44