GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 55,716,394 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-04-16T14:48:00.000 | 2 | 2 | 0 | Dealing with new words in gensim not found in model | 55,710,967 | 1.2 | python,nlp,gensim | Depending on the context, Gensim will usually either ignore unknown words, or throw an error like KeyError when an exact-word lookup fails. (Also, some word-vector models, like FastText, can synthesize better-than-nothing guesswork vectors for unknown words based on word-fragments observed during training.)
You should try your desired operations with the specific models/method of interest to observe the results.
If operation-interrupting errors are thrown and are a problem for your code, you could pre-filter your lists-of-words to remove those not also present in the model. | Let's say I am trying to compute the average distance between a word and a document using distances(), or compute cosine similarity between two documents using n_similarity(). However, let's say these new documents contain words that the original model did not. How does gensim deal with that?
I have been reading through the documentation and cannot find what gensim does with unfound words.
I would prefer gensim not to count those towards the average. So, in the case of distances(), it should simply not return anything, or return something I can easily delete later before I compute the mean using numpy. In the case of n_similarity, gensim of course has to do it by itself.
I am asking because the documents and words that my program will have to classify will in some instances contain unknown words, names, brands etc that I do not want to be taken into consideration during classification. So, I want to know if I'll have to preprocess every document that I am trying to classify. | 0 | 1 | 179 |
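A minimal sketch of the pre-filtering idea mentioned in the answer above; `kv` and `docs` are hypothetical placeholders, and `key_to_index` assumes a Gensim 4-style KeyedVectors object (older versions expose `vocab` instead):

```python
# Hypothetical: kv is a loaded KeyedVectors (e.g. model.wv), docs is a list of token lists.
known = set(kv.key_to_index)          # vocabulary seen during training
filtered_docs = [[w for w in doc if w in known] for doc in docs]
# distances()/n_similarity() can now be called without hitting unknown-word KeyErrors.
```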
0 | 55,712,434 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-16T15:57:00.000 | 0 | 1 | 0 | Linear algebra with large, sparse matrices | 55,712,254 | 0 | python,scipy,regression,sparse-matrix,least-squares | You could use numpy.linalg.pinv to find the "x" values | I want to solve the linear equation Ax = b, for the unknown matrix x. A and b are both large and sparse, and have shapes (when converted to dense) of 30,000 x 25 and 30,000 x 100,000, respectively.
I have tried using both scipy.sparse.linalg.lsqr and scipy.sparse.linalg.lsmr, but they both require that b be dense, which is computationally very expensive and prohibitive.
How can I do this? | 0 | 1 | 222 |
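One way around the dense-b requirement mentioned in the question is to solve for x one column at a time, densifying only a single 30,000-element column of b at each step. A rough sketch, with variable names that are assumptions rather than from the original post:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import lsqr

# Assumed: A is a (30000, 25) sparse matrix, B is a (30000, 100000) sparse matrix.
def solve_columnwise(A, B):
    B = csc_matrix(B)                      # cheap column slicing
    X = np.zeros((A.shape[1], B.shape[1]))
    for j in range(B.shape[1]):
        b_col = B[:, j].toarray().ravel()  # densify one column only
        X[:, j] = lsqr(A, b_col)[0]        # least-squares solution for this column
    return X
```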
0 | 55,725,242 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-16T16:04:00.000 | 0 | 1 | 0 | Operating on large number of dataframes | 55,712,366 | 0 | python,pandas,bigdata,data-science | If all of your data has the same shape, then I don't see the point of using lists of pandas DataFrames for this.
The most performance you could get out of Python with the least work is just stacking the dataframes into a 3D NumPy array of dimensions (3000, 3000, 5000) and then doing a sum over the last axis.
As this requires > 360 GB of RAM (at least 180 GB for the loaded dataframes, 180 GB for the stacked Numpy array), this is probably beyond a usual Desktop workload, and you may want to check out big data tools as mentioned in the comments. | I have a large number of pandas dataframe > 5000 of shape 3000x3000 float values with density of 60% (i.e. 40% values are NaNs). These frames have identical index and columns.
I'd like to operate on these frames e.g. addition of all of them. If I do this sequentially, it takes more than 20 mins. Is there efficient way I could operate on them (e.g. sum them)?
How can I make this process memory efficient knowing that these dataframes are not dense? | 0 | 1 | 88 |
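A sketch of a memory-lighter alternative to stacking everything at once: accumulate the sum frame by frame, treating NaNs as zero. Names are placeholders:

```python
import numpy as np
import pandas as pd

# Assumed: frames is a list of the ~5000 aligned (3000 x 3000) DataFrames.
def running_sum(frames):
    total = np.zeros(frames[0].shape)
    for df in frames:
        total += np.nan_to_num(df.to_numpy(), nan=0.0)  # treat missing cells as 0
    # only one frame plus the accumulator is held in memory at any time
    return pd.DataFrame(total, index=frames[0].index, columns=frames[0].columns)
```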
0 | 70,164,527 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-17T15:01:00.000 | 0 | 1 | 0 | pandas df to_csv freezes when using pyinstaller to save the df in the exe directory | 55,730,679 | 0 | python,pyinstaller | Look, if you use the Innodb compiler then you will face a permission-denied error in the setup. I have tried to solve that by using a temporary file, but it gets deleted after generation. If you really want to solve this problem, then use xlsxwriter and save it to a specific file location. | I am trying to save the output dataframe to a csv file while using pyinstaller to create an exe, but my code freezes and generates a "[Errno 13] Permission denied: '.\Output.csv'" error. My question is: what is wrong with using df.to_csv to save the output file in the same exe directory?
Thanks in advance | 0 | 1 | 281 |
0 | 55,731,210 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-17T15:10:00.000 | 1 | 1 | 0 | 2D RGB image construction from 3D array in SimpleITK | 55,730,852 | 1.2 | python,image,simpleitk | The documentation reads:
Signature: sitk.GetImageFromArray(arr, isVector=None)
Docstring: Get a SimpleITK Image from a numpy array. If isVector is True, then the Image will have a Vector pixel type, and the last dimension of the array will be considered the component index. By default when isVector is None, 4D images are automatically considered 3D vector images.
Have you tried passing isVector=True? | I have an RGB image in the format of a 3D array with the shape of (m, n, 3). I would like to create a SimpleITK image. Using the GetImageFromArray() function results in creation of an image in 3D which is not what I am looking for. How can I create a 2D RGB image instead? | 0 | 1 | 443 |
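A minimal sketch of the suggestion above, assuming a NumPy array `arr` of shape (m, n, 3):

```python
import SimpleITK as sitk
import numpy as np

arr = np.zeros((256, 256, 3), dtype=np.uint8)      # placeholder RGB array of shape (m, n, 3)
img = sitk.GetImageFromArray(arr, isVector=True)   # 2D image with a 3-component vector pixel type
print(img.GetDimension(), img.GetNumberOfComponentsPerPixel())  # expected: 2 3
```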
0 | 56,049,754 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-04-18T06:31:00.000 | 0 | 2 | 0 | How to triangulate a point in 3D space, given coordinate points in 2 image and extrinsic values of the camera | 55,740,284 | 0 | python,numpy,triangulation,vision | Assume you have two cameras -- camera 1 and camera 2.
For each camera j = 1, 2 you are given:
The distance hj between its center Oj (is "focal point" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj, the Oj--->x and Oj--->y axes are parallel to the screen, while the Oj--->z axis is perpendicular to the screen.
The 3 x 3 rotation matrix Uj and the 3 x 1 translation vector Tj which transforms the Cartesian 3D coordinates with respect to the system of camera j (see point 1) to the world-coordinates, i.e. the coordinates with respect to a third coordinate system from which all points in the 3D world are described.
On the screen of camera j, which is the plane parallel to the plane Oj-x-y and at a distance hj from the origin Oj, you have the 2D coordinates (let's say the x,y coordinates only) of point pj, where the two points p1 and p2 are in fact the projected images of the same point P, somewhere in 3D, onto the screens of camera 1 and 2 respectively. The projection is obtained by drawing the 3D line between point Oj and point P and defining point pj as the unique intersection point of this line with with the screen of camera j. The equation of the screen in camera j's 3D coordinate system is z = hj , so the coordinates of point pj with respect to the 3D coordinate system of camera j look like pj = (xj, yj, hj) and so the 2D screen coordinates are simply pj = (xj, yj) .
Input: You are given the 2D points p1 = (x1, y1), p2 = (x2, y2) , the twp cameras' focal distances h1, h2 , two 3 x 3 rotation matrices U1 and U2, two translation 3 x 1 vector columns T1 and T2 .
Output: The coordinates P = (x0, y0, z0) of point P in the world coordinate system.
One somewhat simple way to do this, avoiding homogeneous coordinates and projection matrices (which is fine too and more or less equivalent), is the following algorithm:
Form Q1 = [x1; y1; h1] and Q2 = [x2; y2; h2] , where they are interpreted as 3 x 1 vector columns;
Transform P1 = U1*Q1 + T1 and P2 = U2*Q2 + T2 , where * is matrix multiplication; here it is a 3 x 3 matrix multiplied by a 3 x 1 column, giving a 3 x 1 column;
Form the lines X = T1 + t1*(P1 - T1) and X = T2 + t2*(P2 - T2) ;
The two lines from the preceding step 3 either intersect at a common point, which is the point P or they are skew lines, i.e. they do not intersect but are not parallel (not coplanar).
If the lines are skew lines, find the unique point X1 on the first line and the unique point X2 on the second line such that the vector X2 - X1 is perpendicular to both lines, i.e. X2 - X1 is perpendicular to both vectors P1 - T1 and P2 - T2. These two points X1 and X2 are the closest points on the two lines. Then point P = (X1 + X2)/2 can be taken as the midpoint of the segment X1 X2.
In general, the two lines should pass very close to each other, so the two points X1 and X2 should be very close to each other. | I'm trying to write a function that when given two cameras, their rotation, translation matrices, focal point, and the coordinates of a point for each camera, will be able to triangulate the point into 3D space. Basically, given all the extrinsic/intrinsic values needed
I'm familiar with the general idea: to somehow create two rays and find the closest point that satisfies the least squares problem, however, I don't know exactly how to translate the given information to a series of equations to the coordinate point in 3D. | 0 | 1 | 1,933 |
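A compact NumPy sketch of the algorithm outlined in the answer above (steps 1 to 5). All names are placeholders, and a least-squares solve stands in for the explicit perpendicularity conditions of step 5, which it satisfies at the minimum:

```python
import numpy as np

# Assumed inputs: U1, U2 are 3x3 rotations; T1, T2 are translations as shape-(3,) arrays;
# h1, h2 are focal distances; p1, p2 are the 2-D screen points (x, y) on each camera.
def triangulate(p1, h1, U1, T1, p2, h2, U2, T2):
    Q1 = np.array([p1[0], p1[1], h1])           # step 1: screen point in camera-1 coordinates
    Q2 = np.array([p2[0], p2[1], h2])
    P1 = U1 @ Q1 + T1                           # step 2: into world coordinates
    P2 = U2 @ Q2 + T2
    d1, d2 = P1 - T1, P2 - T2                   # step 3: direction vectors of the two rays
    # Find t1, t2 minimizing |(T1 + t1*d1) - (T2 + t2*d2)|, i.e. the closest points on the rays.
    A = np.stack([d1, -d2], axis=1)             # 3x2 system
    t, *_ = np.linalg.lstsq(A, T2 - T1, rcond=None)
    X1 = T1 + t[0] * d1                         # closest point on ray 1
    X2 = T2 + t[1] * d2                         # closest point on ray 2
    return (X1 + X2) / 2                        # step 5: midpoint estimate of P
```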
0 | 55,767,754 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-18T18:25:00.000 | 0 | 3 | 0 | How to train a trained model with new examples in scikit-learn? | 55,751,844 | 0 | python,python-3.x,machine-learning,scikit-learn | Append the new data to your existing dataset, and train over the whole thing. Might want to reserve some of the new data for your testset. | I'm working on a machine learning classification task in which I have trained many models with different algorithms in scikit-learn and Random Forest Classifier performed the best. Now I want to train the model further with new examples but if I train the same model by calling the fit method on new examples then it will start training the model from beginning by erasing the old parameters.
So, how can I train the trained model by training it with new examples in scikit-learn?
I got some idea by reading online to pickle and unpickle the model but how would it help I don't know. | 0 | 1 | 845 |
0 | 55,795,347 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-18T20:43:00.000 | 1 | 2 | 0 | How to specify gradient in Pyomo with IPOPT | 55,753,497 | 1.2 | python-3.x,pyomo,ipopt | Pyomo provides first and second derivative information using the automatic differentiation features in the Ampl Solver Library (ASL). When calling IPOPT, Pyomo outputs your model using the '.nl' file format which is read by the ASL and linked to IPOPT. So you don't have to do anything to provide gradient information, this is done automatically. | Primary Question
When solving a NLP in Pyomo, using IPOPT as the solver, how can I tell IPOPT what the gradient of the objective function and/or constraints are? I have to pass a callable function that returns objective values--can I likewise pass a callable function that evaluates the gradient as well?
Secondary Question
How does Pyomo+IPOPT handle this by default? When I solve a simple NLP with Pyomo+IPOPT, part of the IPOPT output includes "number of objective gradient evaluations"...but how is it evaluating the gradient? Numerically with finite differences, or something?
I'm using Pyomo 5.6 with Python 3.6 and IPOPT 3.7. | 0 | 1 | 561 |
0 | 55,779,081 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-19T05:02:00.000 | 1 | 1 | 0 | Where to find a pretrained doc2vec model on Wikipedia or large article dataset like Google news? | 55,756,841 | 1.2 | python,nlp,gensim,word2vec,doc2vec | I'm not aware of any publicly-available standard gensim Doc2Vec models trained on Wikipedia. | Am struggling with training wikipedia dump on doc2vec model, not experienced in setting up a server as a local machine is out of question due to the ram it requires to do the training. I couldnt find a pre trained model except outdated copies for python 2. | 0 | 1 | 200 |
0 | 55,763,023 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-19T13:46:00.000 | 0 | 2 | 0 | input shape of convolutional neural network in keras | 55,762,873 | 0 | python,keras,classification,conv-neural-network | Make sure your image size is same as the size your Input layer is expecting. Classification architectures, in general, are not flexible to the spatial dimensions of your input. So, that is important. Otherwise you will get a shape mismatch error.
In case you want to change the input shape of your model, that is possible to do. It's hard to say exactly how it will affect your classification. You have to, probably, also tune your CNN filters so that the filters are not bigger than your feature maps. Otherwise that might downgrade your performance. But you can try that out and see what happens. | I am trying to build a image classifier using cnn. My images are of (256,256) pixel size.
What will happen if i train the cnn by setting the input shape as (64,64) or (128,128), since (256,256) will take a lot of time to process? | 0 | 1 | 443 |
0 | 55,762,953 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-19T13:46:00.000 | 0 | 2 | 0 | input shape of convolutional neural network in keras | 55,762,873 | 0 | python,keras,classification,conv-neural-network | It will throw an error. You can resize you images with cv2.resize() or you can put the right input shape in your cnn layer and then put a maxpooling layer to reduce the number of parameters. | I am trying to build a image classifier using cnn. My images are of (256,256) pixel size.
What will happen if i train the cnn by setting the input shape as (64,64) or (128,128), since (256,256) will take a lot of time to process? | 0 | 1 | 443 |
0 | 55,763,943 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-19T14:59:00.000 | 0 | 2 | 0 | How to round number to constant decimal | 55,763,795 | 0 | python | Try this if you really need unnecessary zeros:
def format_floats(reference, values):
formatted_values = []
for i in range(len(values)):
length = len(str(reference)[str(reference).find("."):])-1
new_float = str(round(values[i], length))
new_float += "0"*(len(str(reference))-len(new_float))
formatted_values.append(new_float)
return formatted_values
if __name__ == '__main__':
reference = 0.12345
values = [1.04, 2.045, 2.0]
print(format_floats(reference, values))
output: ['1.04000', '2.04500', '2.00000'] | I want to use "math.sqrt" and my output should have 4 decimal after point even for numbers like "4". is there any func or way?!
I used "round(num_sqrt, 4)" but it didn't work.
my input is like:
1
2
3
19
output must be:
1.0000
1.4142
1.7320
4.3588
and my output is:
1.0
1.4142
1.7320
4.3588 | 0 | 1 | 65 |
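For the fixed-number-of-decimals requirement in this question, Python's format specifications are a simpler route than manual zero padding; a small sketch:

```python
import math

for n in [1, 2, 3, 19]:
    print(f"{math.sqrt(n):.4f}")   # always prints 4 digits after the point, e.g. 1.0000, 1.4142
```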
0 | 55,764,936 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-04-19T16:25:00.000 | 0 | 1 | 1 | How can i install opencv in python3.7 on ubuntu? | 55,764,829 | 0 | python,opencv,ubuntu | Does python-3.7 -m pip install opencv-python work? You may have to change the python-3.7 to whatever path/alias you use to open your own python 3.7. | I have a Nvidia Jetson tx2 with the orbitty shield on it.
I got it from a friend who worked on it last year. It came with ubuntu 16.04. I updated everything on it and i installed the latest python3.7 and pip.
I tried checking the version of opencv to see what i have but when i do import cv2 it gives me :
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'cv2'
Somehow besides python3.7 i have python2.7 and python3.5 installed. If i try to import cv2 on python2.7 and 3.5 it works, but in 3.7 it doesn't.
Can u tell me how can i install opencv in python3.7 and the latest version? | 0 | 1 | 1,498 |
0 | 56,173,568 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-20T21:40:00.000 | 0 | 1 | 0 | Negative Feature Importance Value in CatBoost LossFunctionChange | 55,777,986 | 1.2 | python,machine-learning,catboost | Negative feature importance value means that feature makes the loss go up. This means that your model is not getting good use of this feature. This might mean that your model is underfit (not enough iteration and it has not used the feature enough) or that the feature is not good and you can try removing it to improve final quality. | I am using CatBoost for ranking task. I am using QueryRMSE as my loss function. I notice for some features, the feature importance values are negative and I don't know how to interpret them.
It says in the documentation, the i-th feature importance is calculated as the difference between loss(model with i-th feature excluded) - loss(model).
So a negative feature importance value means that feature makes my loss go up?
What does that suggest then? | 0 | 1 | 2,295 |
0 | 60,627,318 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-04-21T19:14:00.000 | 0 | 1 | 0 | Storing and fetching multiple stocks in Arctic Library | 55,785,898 | 0 | python,pandas,finance | Arctic supports a few different storage engines. The only one that will do what you're looking for is VersionStore. It keeps versions of data, so any update you make to the data will be versioned, and you can retrieve data by timestamp ranges and by version.
However it does not let you do a subsetting of stock like you want to do. I'd recommend subsetting your universe (say into US, EMEA, EUR, etc) or into whatever other organization makes sense for your use case. | Looking for suggestions on how to store Price data using MAN AHL's Arctic Library for 5000 stocks EOD data as well as 1 minute data. Separate solutions for EOD and 1-minute data are also welcome. Once the data is stored, I want to perform the following operations:
Fetch data for a subset of stocks (lets say around 500-1000 out of the entire universe of 5000 stocks) between a given datetime range.
Any update to historical data (data once stored in database) should have versioning. Data prior to the update should not be discarded. I should be able to fetch data as of a particular version/timestamp.
Example format of data:
Date Stock Price
0 d1 s1 100
1 d2 s1 110
2 d3 s1 105
3 d1 s2 50
4 d2 s2 45
5 d3 s2 40 | 1 | 1 | 105 |
0 | 57,091,975 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2019-04-21T19:40:00.000 | 6 | 2 | 0 | What is the negative mean absolute error in scikit-learn? | 55,786,121 | 1 | python,machine-learning,scikit-learn,regression | I would like to add here, that this negative error is also helpful in finding best algorithm when you are comparing multiple algorithms through GridSearchCV().
This is because after training, GridSearchCV() ranks all the algorithms(estimators) and tells you which one is the best. Now when you use an error function, estimator with higher score will be ranked higher by sklearn, which is not true in the case of MAE (along with MSE and a few others).
To deal with this, the library flips the sign of error, so the highest MAE will be ranked lowest and vice versa.
So to answer your question: -2.6 is better than -3.0 because the actual MAE is 2.6 and 3.0 respectively. | I am trying to train a model using SciKit Learn's SVM module. For the scoring, I could not find the mean_absolute_error(MAE), however, negative_mean_absolute_error(NMAE) does exist. What is the difference between these 2 metrics? Lets say I get the following results for 2 models:
model 1 (NMAE = -2.6), model 2(NMAE = -3.0)
Which model is better? Is it model 1?
Moreover, how does the negative compare to the positive? Say the following:
model 1 (NMAE = -1.7), model 2(MAE = 1.4)
Here, which model is better? | 0 | 1 | 9,738 |
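A small illustration of the sign convention discussed above, using scikit-learn's built-in scorer (the model and data here are placeholders):

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
scores = cross_val_score(SVR(), X, y, scoring="neg_mean_absolute_error", cv=5)
print(scores.mean())        # e.g. -2.6 means an actual MAE of 2.6; higher (closer to 0) is better
print(-scores.mean())       # flip the sign back to get the usual positive MAE
```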
0 | 55,799,292 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-22T18:28:00.000 | 0 | 1 | 0 | Interpolating missing data in cumulative timeseries data | 55,799,207 | 0 | python,pandas,interpolation,nan | Answer taken from @Quang Hoang above wiht ffill() | I have a time-series dataframe with a cumulative data column. Data drops at night-time leaving me with NaN values, and picks up with first data read in the morning.
I would like to interpolate the data so that all NaN values take on the value of the last known float/valid number. Is this readily possible with .interpolate()? | 0 | 1 | 40 |
0 | 55,799,321 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-22T18:29:00.000 | 3 | 2 | 0 | Do we have to remove target variable from data in Scikit-learn's linearmodel.fit()? | 55,799,226 | 1.2 | python,scikit-learn,linear-regression | The X should not contain the target as one of the columns. If you include it your linear model will produce no coding errors, but to predict the target y it will just use the feature y. | Scikit-learn's documentation says there are two arguments to the function: X(data) and y(Target Values). Do we remove the target variable from our data and provide it separately as y? Or do we keep target variable in X and also provide it separately as y? I have come across both approaches and was wondering which was correct. | 0 | 1 | 443 |
0 | 55,801,233 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-22T18:29:00.000 | 1 | 2 | 0 | Do we have to remove target variable from data in Scikit-learn's linearmodel.fit()? | 55,799,226 | 0.099668 | python,scikit-learn,linear-regression | To my understand, you shouldn't predict tomorrow's weather by tomorrow's weather. If you already know what's the correct value, it is pointless to predict one.
However, you don't need to remove target variable in your dataset either, just don't include it in your X-axis.
What we are trying to do with a predictive model? Based on past records(both x and y), we trained our model to find their relationships. In future, we may no longer have y, but we still have x in our hands, assuming their relationship doesn't change, we predict what is the y for the future. | Scikit-learn's documentation says there are two arguments to the function: X(data) and y(Target Values). Do we remove the target variable from our data and provide it separately as y? Or do we keep target variable in X and also provide it separately as y? I have come across both approaches and was wondering which was correct. | 0 | 1 | 443 |
0 | 55,821,709 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-23T04:00:00.000 | 0 | 1 | 0 | cv2 - multi-user image display | 55,804,120 | 0 | python,cv2,multi-user | I was able to display the images on another user/host by setting the DISPLAY environment variable of the X server to match the desired user's DISPLAY. | Using python and OpenCV, is it possible to display the same image on multiple users?
I am using cv2.imshow but it only displays the image for the user that runs the code.
Thanks | 0 | 1 | 46 |
0 | 55,825,408 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-24T04:04:00.000 | 0 | 1 | 1 | How to run any Transformation Logic on HDFS data which is at Remote PC | 55,822,254 | 0 | python,apache-spark,hdfs,remote-access | I would suggest setting up a Spark Cluster in the same local network where you have the data and running spark transformations in the cluster remotely (SSH or Remote Desktop). The advantages of the setup are:
Network Latency will be minimised as the data is transferred in the
same network locally.
Running the transformations with distributed and in-memory processing engines like Apache Spark is fast.
Note: Please ignore if the response is in line with your second approach | I have huge size data (in TBs or PBs) in my HDFS which is located at remote PC. Now instead of taking Data to the Transformation Logic (which is not correct and efficient), I want to run my python Transformation Logic itself on the location where my Data is stored.
Seeking some useful ideas about the technologies which can be used to fulfill this requirement.
Things which I tried till now:
1) Approach 1
Took SSH Connection of Remote PC (where HDFS data is available), Copied my python Transformation Logic there and executed after fetching the data from HDFS.
2) Approach 2
Loaded HDFS data to Apache Spark RDDs which is on Remote PC where HDFS data is available and executed Spark Job from another PC.
Please suggest other technologies which can be used for Logic Execution remotely. | 0 | 1 | 42 |
0 | 55,839,963 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 0 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0 | python-3.x,deep-learning,conv-neural-network | There is no such code because the quantity of FLOPs is dependent on the hardware and software implementations. You can certainly derive a typical quantity from expanding the layer-by-layer operations for each parameter and weight. and making reasonable implementation assumptions for each activation function.
Input dimensions will proportionately affect the computations for the first layer.
I'm not sure what you intend for "generalized code in Python"; do you envision using a form of the Keras model as the input? This is possible, but you need to write modules that will extract the kernel characteristics and connection logic from the Keras representation.
Your quantity of operations will vary from one implementation to another. Hardware architectures now directly support parallel operations and short-cuts for sparse matrices. Some have extra functionality for adjusting floating-point representations for greater training speed. Software platforms include control and data flow analysis to optimize the functional flow. Any of these will change the FLOPs computation. | I have a neural network(ALEXnet or VGG16) written with Keras for Image Classification and I would like to calculate the Number of floating point operations for a network. The size of the images in the dataset could varry.
Can generalized code be written in Python which could calculate FLOPs automatically? Or is there any library available?
I am working with Spyder (Anaconda) and the defined network is a sequential model.
Thank you. | 0 | 1 | 3,688 |
0 | 64,795,470 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 2 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0.132549 | python-3.x,deep-learning,conv-neural-network | FLOPs are the floating-point operations performed by a model. It is usually calculated using the number of multiply-add operations that a model performs. Multiply-add operations, as the name suggests, are operations involving multiplication and addition of 2 or more variables. For example, the expression, a * b + c * d, has 2 flops while a * b + c * d + e * f + f * h has 4 flops.
Let's now take a simple linear regression model for example. Assume this model has 4 parameters w1, w2, w3, and w4 and a bias b0. Inference on an input data, X = [x1, x2, x3, x4] results in output = x1 * h1 + x2 * h2 + x3 * h3 + x4 * h4 + b0. This operation has 4 flops
The FLOPs measurement in CNNs involves knowing the size of the input tensor, filters and output tensor for each layer. Using this information, flops are calculated for each layer and added together to obtain the total flops. Let's look at the first layer in VGG16 with input tensor of size 224x224x3, 64 filters of size 3x3x3 and output size of 224x224x64. Each element in the output results from an operation involving (3x3x3) multiply-add between the filter and the input tensor. Hence the number of flops for the first layer of VGG16 is (3x3x3)x(224x224x64)= 86,704,128 | I have a neural network(ALEXnet or VGG16) written with Keras for Image Classification and I would like to calculate the Number of floating point operations for a network. The size of the images in the dataset could varry.
Can generalized code be written in Python which could calculate FLOPs automatically? Or is there any library available?
I am working with Spyder (Anaconda) and the defined network is a sequential model.
Thank you. | 0 | 1 | 3,688 |
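The per-layer arithmetic in the preceding answer can be wrapped in a few lines; a rough sketch that counts multiply-adds for a single convolution layer (it ignores biases, strides other than 1, and padding details):

```python
def conv_layer_flops(out_h, out_w, out_c, k_h, k_w, in_c):
    """Multiply-add count for one conv layer: one (k_h*k_w*in_c) dot product per output element."""
    return (k_h * k_w * in_c) * (out_h * out_w * out_c)

# First layer of VGG16: 224x224x64 output, 3x3x3 kernels
print(conv_layer_flops(224, 224, 64, 3, 3, 3))   # 86704128, matching the figure above
```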
0 | 66,452,856 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 0 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0 | python-3.x,deep-learning,conv-neural-network | many papers using their own flops counting code.
it is made by entering the input size of certain operation's tensor.
so they manually calculate it's flops
you can find it with keyword like 'flops constraint' or 'flops counter' in github.
or there are 'torchstat' tool which counts the flops and memory usage etc. | I have a neural network(ALEXnet or VGG16) written with Keras for Image Classification and I would like to calculate the Number of floating point operations for a network. The size of the images in the dataset could varry.
Can generalized code be written in Python which could calculate FLOPs automatically? Or is there any library available?
I am working with Spyder (Anaconda) and the defined network is a sequential model.
Thank you. | 0 | 1 | 3,688 |
0 | 55,832,347 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-24T14:00:00.000 | 1 | 2 | 0 | Multiply each element of a column by each element of a different dataframe | 55,831,908 | 1.2 | python,data-manipulation | This will work. Here we are manipulating numpy array inside the DataFrame.
pd.DataFrame(df1.values*df2.values, columns=df1.columns, index=df1.index) | I have two data frame both having same number of columns but the first data frame has multiple rows and the second one has only one row but same number of columns as the first one. I need to multiply the entries of the first data frame with the second by column name.
DF:1
A B C
0 34 54 56
1 12 87 78
2 78 35 0
3 84 25 14
4 26 82 13
DF:2
A B C
0 2 3 1
Result
A B C
68 162 56
24 261 78
156 105 0
168 75 14
52 246 13 | 0 | 1 | 45 |
0 | 55,847,433 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-25T10:13:00.000 | 2 | 2 | 0 | neural network find best hyperameters or architecture first | 55,846,870 | 0.197375 | python,tensorflow,neural-network | First you should decide for an architecture and then play around with the hyperparameters. To compare different hyperparameters it is important to have the same base (architecture).
Of course you can also play around with the architecture (layers, nodes,...).But I think here it is easier to search for an architecture online, because often the same or a similar problem yet have been solved or described in a tutorial/blog.
The dropout is also a (training-)hyperparameter and not part of the architecture! | I'm implementing my first neural network for images classification.
I would like to know if i should start to find best hyperparameters first and then try to modify my neural network architecture (e.g number of layer, dropout...) or architecture then hyperameters? | 0 | 1 | 80 |
0 | 55,852,346 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-25T10:13:00.000 | 1 | 2 | 0 | neural network find best hyperameters or architecture first | 55,846,870 | 1.2 | python,tensorflow,neural-network | The answer is as always : it depends
What are you trying to achieve?
If you're hoping to make the worlds best image classifier by trial and error then you might want to ask yourself if you think you have more compute available than the people who have already done this. For a really good classifier there are several ones that come with tensorflow/keras and can be easily implemented. If you're goofing around and learning the coding then I'd recommend different architectures because that's going to teach you more functions. If you have a dataset you don't think existing solutions will be good at analysing and genuinely need the best network to solve classify them then unfortunately it still depends...
How to decide:
Firstly decide on the rough order of magnitude for your overall parameter count (the literal number of parameters your model has). For a given number of parameters, architecture is likely to produce the biggest difference in results between representative hyperparameter choices (don't choke your network down to a single neuron in the middle and expect it to be representative of that architecture).
Its important to compare the rough performance per parameter so you're not giving an edge to the networks with greater overfitting capacity. You don't need to use all your training data or even train to completion, mostly you'll find the better networks learn faster and finish better (mostly). In the past I've done grid searches with multiple trials at each point using significantly reduced data then optimised the architecture with the most potential by considering the gradients of the grid search. Fun fact: with sufficient time you can use gradient descent methods on hyperparameters to find local minima. You might well find that there are many similarly top performing models, all of which should you can tune until a clear winner emerges. | I'm implementing my first neural network for images classification.
I would like to know if i should start to find best hyperparameters first and then try to modify my neural network architecture (e.g number of layer, dropout...) or architecture then hyperameters? | 0 | 1 | 80 |
0 | 55,848,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T11:27:00.000 | 0 | 1 | 0 | How to reshape spatiotemporal data as lstm input? | 55,848,195 | 1.2 | python,pandas,numpy,lstm | You cannot easily use reshape when you have a different number of temporal steps for each example. What you typically do with LSTMs is that you have batches of examples and each batch is padded to the same length, usually with zeros. Use np.zeros(shape) and then iteratively assign to respective rows. | I have a dataset with columns like ['station_id', 'feature1', 'feature2',...]
Each row is a time step. And it is sorted by station_id.
The main problem is that station_ids have different number of timesteps ...
I want to shape it for an LSTM layer, like (NumberOfExamples, TimeSteps, FeaturesPerStep).
Can someone help me to use np.reshape() in this case please ? | 0 | 1 | 35 |
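A minimal sketch of the zero-padding idea described in the answer, grouping a station-sorted DataFrame into an (examples, timesteps, features) array; column names are assumptions:

```python
import numpy as np

# Assumed: df has a 'station_id' column plus feature columns, and each row is one time step.
def to_padded_batch(df, feature_cols):
    groups = [g[feature_cols].to_numpy() for _, g in df.groupby("station_id")]
    max_len = max(len(g) for g in groups)
    batch = np.zeros((len(groups), max_len, len(feature_cols)))
    for i, g in enumerate(groups):
        batch[i, :len(g), :] = g           # stations with fewer steps stay zero-padded at the end
    return batch                            # shape: (NumberOfExamples, TimeSteps, FeaturesPerStep)
```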
0 | 70,753,506 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2019-04-25T12:39:00.000 | 1 | 2 | 0 | Gridsearchcv vs Bayesian optimization | 55,849,512 | 0.099668 | python-3.x,machine-learning,gridsearchcv | Grid search is known to be worse than random search for optimizing hyperparameters [1], both in theory and in practice. Never use grid search unless you are optimizing one parameter only.
On the other hand, Bayesian optimization is stated to outperform random search on various problems, also for optimizing hyperparameters [2]. However, this does not take into account several things: the generalization capabilities of models that use those hyperparameters, the effort to use Bayesian optimization compared to the much simpler random search, and the possibility to use random search in parallel.
So in conclusion, my recommendation is: never use grid search, use random search if you just want to try a few hyperparameters and can try them in parallel (or if you want the hyperparameters to generalize to different problems), and use Bayesian optimization if you want the best results and are willing to use a more advanced method.
[1] Random Search for Hyper-Parameter Optimization, Bergstra & Bengio 2012.
[2] Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020, Turner et al. 2021. | Which one among Gridsearchcv and Bayesian optimization works better for optimizing hyper parameters? | 0 | 1 | 2,664 |
0 | 55,850,059 | 0 | 0 | 0 | 0 | 2 | true | 8 | 2019-04-25T12:39:00.000 | 15 | 2 | 0 | Gridsearchcv vs Bayesian optimization | 55,849,512 | 1.2 | python-3.x,machine-learning,gridsearchcv | There is no better here, they are different approaches.
In Grid Search you try all the possible hyperparameters combinations within some ranges.
In Bayesian you don't try all the combinations, you search along the space of hyperparameters learning as you try them. This enables to avoid trying ALL the combinations.
So the pro of Grid Search is that you are exhaustive and the pro of Bayesian is that you don't need to be, basically if you can in terms of computing power go for Grid Search but if the space to search is too big go for Bayesian. | Which one among Gridsearchcv and Bayesian optimization works better for optimizing hyper parameters? | 0 | 1 | 2,664 |
0 | 55,851,082 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T13:56:00.000 | 2 | 2 | 0 | Should I scale a percentage variable? | 55,851,021 | 1.2 | python,machine-learning,neural-network | I think is not necessary. If the variables that are in percentage is between 0 and 1, you don't need scaled them because they are scaled already. | I have a dataframe containing variables of different scales (age, income, days as customer, percentage spent in each kind of product sold (values from 0 to 1), etc). I believe it's necessary to scale these variables for using in a neural network algorithm, for example.
My question is: The variables that are in percentage, are somehow already scaled, can I apply MinMax in my whole dataset or should I not consider these percentage variables in Min Max scaling and keep them with original values? | 0 | 1 | 1,082 |
0 | 56,388,008 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-25T14:54:00.000 | 0 | 1 | 0 | How to ensure static shapes in Tensorflow model for easy OpenVINO conversion? | 55,852,212 | 0 | python,tensorflow,speech,openvino | You can have dynamic shapes in TF model and provide static shape while cnverting model with ModelOptimizer. Example for input data of size 256x256 with 3 channels.
python mo_tf.py --input_shape [1,256,256,3] --input_model model.pb | I'm trying to optimize and convert a tensorflow model to OpenVINO IR. It hasn't been very successful because of the problems I'm facing with input shapes. So I'm planning to remodel the whole model with static shapes. The model I'm trying to work on is Tacotron by keithito.
How do I ensure all the nodes in my model will have static shapes?
Will just setting the input placeholder nodes to a fixed shape allow tensorflow to infer and fix the shapes of all other nodes? | 0 | 1 | 342 |
0 | 55,852,914 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-04-25T15:12:00.000 | 0 | 1 | 0 | How to improve the write speed to sql database using python | 55,852,550 | 0 | python,python-3.x,pandas,sqlalchemy,pyodbc | If you are trying to insert the csv as is into the database (i.e. without doing any processing in pandas), you could use sqlalchemy in python to execute a "BULK INSERT [params, file, etc.]". Alternatively, I've found that reading the csvs, processing, writing to csv, and then bulk inserting can be an option.
Otherwise, feel free to specify a bit more what you want to accomplish, how you need to process the data before inserting to the db, etc. | I'm trying to find a better way to push data to sql db using python. I have tried
dataframe.to_sql() method and cursor.fast_executemany()
but they don't seem to increase the speed with that data(the data is in csv files) i'm working with right now. Someone suggested that i could use named tuples and generators to load data much faster than pandas can do.
[Generally the csv files are atleast 1GB in size and it takes around 10-17 minutes to push one file]
I'm fairly new to much of concepts of python,so please suggest some method or atleast a reference any article that shows any info. Thanks in advance | 0 | 1 | 678 |
0 | 56,177,448 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T22:00:00.000 | 0 | 2 | 0 | Efficient way of flashing images in python | 55,858,199 | 1.2 | python,image,matplotlib,timer,frequency | Using a library called PsychoPy. It can guarantee that everything is drawn and allows you to control when a window is drawn (a frame) with the window.frame() function. | What would the most efficient way of flashing images be in python?
Currently I have an infinite while loop that calls sleep at the end, then uses matplotlib to display an image. However I can't get matplotlib to replace the current image, I instead have to close then show again which is slow. I'd like to flash sequences of images as precisely as possible at a set frequency.
I'm opens to solutions that use other libraries to do the replacement in place. | 0 | 1 | 571 |
0 | 58,901,069 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-26T07:18:00.000 | 0 | 1 | 0 | numerical entity extraction from unstructured texts using python | 55,862,614 | 0 | python-3.x,nlp,named-entity-recognition | So far my research shows that you can treat numbers as words.
This raises an issue: learning 5 will be OK, but 19684 will be too rare to be learned.
One proposal is to convert the number into words, "nineteen thousand six hundred eighty four", and embed each word. The inconvenience is that you are now learning a (minimum) 6-dimensional vector (one dimension per word).
Based on your usage, you can also embed 0 to 3000 with distinct ids, and say that 3001 to 10000 will map to id 3001 in your dictionary, and then add one id to your dictionary for each 10x.
Input: 'For 5 minutes there, I felt like baking in an oven at 350 degrees F'
Output: temperature: 350
duration: 5 minutes | 0 | 1 | 110 |
0 | 55,874,689 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-26T20:21:00.000 | 1 | 1 | 0 | Does Google-Colab continue running the script when "Runtime disconnected"? | 55,874,473 | 1.2 | python,neural-network,pytorch,google-colaboratory | Yes, for ~1.5 hours after you close the browser window.
To keep things running longer, you'll need an active tab. | I am training a neural network for Neural Machine Traslation on Google Colaboratory. I know that the limit before disconnection is 12 hrs, but I am frequently disconnected before (4 or 6 hrs). The amount of time required for the training is more then 12 hrs, so I add some savings each 5000 epochs.
I don't understand if when I am disconnected from Runtime (GPU is used) the code is still execute by Google on the VM? I ask it because I can easily save the intermediate models on Drive, and so continue the train also if I am disconnected.
Does anyone know it? | 0 | 1 | 6,010 |
0 | 55,881,586 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-27T14:24:00.000 | -1 | 2 | 0 | select costume column by position pandas | 55,881,531 | -0.099668 | python,pandas | Use this syntax: data.iloc[:, [0,1,20,22]]
Where 0, 1, 20 and 22 are the column indices. | I have a dataset with a number of columns. I need to select some columns by their position. For example, I want to select columns 0,3,6,7,15 (by position) from the dataset. I tried using iloc but it seems it is applicable to a range of positions (I may be wrong?). Are there any better ideas? | 0 | 1 | 54 |
0 | 65,195,428 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-04-27T17:00:00.000 | 1 | 2 | 0 | How predict_proba in sklearn produces two columns? what are their significance? | 55,882,873 | 0.099668 | python,machine-learning,scikit-learn,precision-recall | We can distinguish between the classifiers using the classifier classes. if the classifier name is model then model.classes_ will give the distinct classes. | I was using simple logistic regression to predict a problem and trying to plot the precision_recall_curve and the roc_curve with predict_proba(X_test). I checked the docstring of predict_proba but hadn't had much details on how it works. I was having bad input every time and checked that y_test, predict_proba(X_test) doesn't match. Finally discovered predict_proba() produces 2 columns and people use the second.
It would be really helpful if someone can give an explanation how it produces two columns and their significance. TIA. | 0 | 1 | 4,584 |
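Tying the two points together: the columns returned by predict_proba() follow the order of model.classes_, so for a binary problem the positive-class column can be selected explicitly. A short self-contained sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

X, y = make_classification(n_samples=300, random_state=0)
model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)                # one column per class, ordered like model.classes_
pos_col = list(model.classes_).index(1)       # column holding P(y = 1), usually index 1
fpr, tpr, _ = roc_curve(y, probs[:, pos_col]) # feed only the positive-class column to the curves
```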
0 | 55,891,000 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2019-04-28T13:41:00.000 | 7 | 1 | 0 | Setting a random seed on TF 2.0 | 55,890,834 | 1.2 | python,tensorflow | Found it: tf.random.set_seed is what I was looking for | I have just upgraded from TF 1.13 to TF 2.0, and my interpreter is complaining because tf.set_random_seed does no longer exist.
What is the equivalent functionality in TF 2.0 ? | 0 | 1 | 1,162 |
0 | 55,892,936 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-28T17:24:00.000 | 1 | 1 | 0 | How exactly does matrix multiplication of 3d kernel and 3d image ( Say RGB) takes place to give 2d output? | 55,892,792 | 0.197375 | python,arrays,matrix,conv-neural-network,convolution | You have multiple questions in one. I will answer the about the "how the convolution takes place". Short answer: it is not a matrix multiplication.
Step 1) You slide a window of size (5,5,3) over your RGB image carving out subimages of that size. Incidentally these subimages have exactly the same dimension as that of the kernel.
Step 2) You multiply each subimage's values with the kernel's values component-wise. The output of that is again a (5,5,3) subimage "scaled" by the values of the kernel.
Step 3) You add all the values of the "scaled" (5,5,3) subimage together (effectively squishing the dimensions) into a single value -- that is our final output. | I have been studying convolution neural network architecture. I am horrendously confused on the part, where, a 3d kernel acts upon the 3d input image (well, it's 4d given we have stack of those images, but just to make explanation a bit easier). I know internet is full of stuffs like this. but i can't find exact answer to that matrix multiplication part.
To be easier for everyone to understand, Can someone show me an actual multiplication on how convolution of (5,5,3) matrix (our kernel) over (28,28,3) matrix (our RGB image ) takes place, outputting a 2d array.
Also, please also show, (with a detailed picture) , how those numerous 2d arrays gets flattened and gets connected to a single fully connected layer.
i know that, final layer of pooled 2d arrays are flattened. but, since there are like 64 2d arrays (just consider), .. so even if we flatten each one, we will have 64 flattened 1D array. so, how does this end up connecting to next fully connected layer ? (Picture please) | 0 | 1 | 574 |
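A tiny, unoptimized NumPy version of the three steps above, producing a single 2-D feature map from a (28, 28, 3) image and a (5, 5, 3) kernel (random data, just to show the shapes):

```python
import numpy as np

img = np.random.rand(28, 28, 3)       # RGB input
kernel = np.random.rand(5, 5, 3)      # one 3-D kernel
out = np.zeros((24, 24))              # valid convolution: 28 - 5 + 1 = 24

for i in range(24):
    for j in range(24):
        patch = img[i:i+5, j:j+5, :]          # step 1: carve out a (5,5,3) subimage
        out[i, j] = np.sum(patch * kernel)    # steps 2-3: scale element-wise, then sum to one value

print(out.shape)   # (24, 24) -- one 2-D map per kernel; 64 kernels would give (24, 24, 64)
```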
0 | 55,895,064 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-28T21:46:00.000 | 0 | 3 | 0 | Trouble importing numpy | 55,894,852 | 0 | python,numpy,import | The easiest way to download and manage modules is using python's built-in pip command.
pip install numpy will install numpy in your site-packages directory, where it needs to be to use it with Python 2
pip3 install numpy will do the same thing for Python 3 | I want to import numpy. I do not have it as a module so I attempted to download it as a .whl file. I successfully downloaded it to my computer but am having trouble with installing into python 3.7.
I know I have to install numpy onto my computer and then Python. I downloaded the .whl file but am having trouble transferring it into my cmd prompt.
import numpy as np
Traceback (most recent call last):
File "stdin", line 1, in module
ModuleNotFoundError: No module named 'numpy'
I want to "import numpy as np" without errors | 0 | 1 | 1,651 |
0 | 55,904,338 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-04-29T12:55:00.000 | 6 | 1 | 0 | Keras / NN - Handling NaN, missing input | 55,903,882 | 1.2 | python,machine-learning,keras,neural-network | You need to have the same input size during training and inference. If you have a few missing values (a few %), you can always choose to replace the missing values by a 0 or by the average of the column. If you have more missing values (more than 50%) you are probably better off ignoring the column completely. Note that this theoretical, the best way to make it work is to try different strategies on your data. | These days i'm trying to teach myself machine learning and i'm going though some issues with my dataset.
Some of my rows (I work with CSV files that I create with some JS script; I feel more confident doing that in JS) are empty, which is normal as I'm trying to build a guessing model, but the issue is that it results in having NaN values in my training set.
My NN was not training, so I added a piece of code to remove them from my set, but now I have some issues where my model can't work with inputs of different sizes.
So my question is: how do I handle missing data? (I basically have 2 rows and can only have the value from 1, and can't merge them as it will not give good results)
I can remove it from my set, which would reduce the accuracy of my model in the end.
PS: if needed i'll post some code when i come back home. | 0 | 1 | 3,144 |
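A one-line sketch of the "replace by 0 or by the column average" suggestion above, assuming a numeric feature DataFrame:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 2.0, 4.0]})  # toy data with gaps
filled_zero = df.fillna(0)          # option 1: missing values become 0
filled_mean = df.fillna(df.mean())  # option 2: missing values become the column average
```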
0 | 55,906,967 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-04-29T15:58:00.000 | 2 | 2 | 0 | Trouble installing 'matplotlib' | 55,906,943 | 1.2 | python,python-3.x | Don't use pip install matplotlib.pyplot, use pip install matplotlib
matplotlib.pyplot is calling pyplot from the module matplotlib. What you want is the module, matplotlib. Then from IDLE or wherever you are running this, you can call matplotlib.pyplot | Can't install 'matplotlib.pyplot' on Windows 10, Python 3.7
I tried 'pip install matplotlib.pyplot' and received an error
Here's the exact error code:
Could not find a version that satisfies the requirement matplotlib.pyplot (from versions: )
No matching distribution found for matplotlib.pyplot | 0 | 1 | 699 |
0 | 55,906,978 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-04-29T15:58:00.000 | 0 | 2 | 0 | Trouble installing 'matplotlib' | 55,906,943 | 0 | python,python-3.x | Try just using pip install on your command line (or terminal):
'pip install matplotlib'
I hope it helps.
BR | Can't install 'matplotlib.pyplot' on Windows 10, Python 3.7
I tried 'pip install matplotlib.pyplot' and received an error
Here's the exact error code:
Could not find a version that satisfies the requirement matplotlib.pyplot (from versions: )
No matching distribution found for matplotlib.pyplot | 0 | 1 | 699 |
0 | 55,912,036 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-29T23:23:00.000 | 0 | 2 | 0 | How can I find the sum of multiple groups within a series? | 55,912,004 | 0 | python,python-3.x,pandas,dataframe,series | You can use the rolling method on a serie :
serie.rolling(24).sum()
To get directly the max
max_idx = serie.rolling(24).sum().idxmax()
You range of interest is [max_idx-24+1:max_idx] (from index max_idx - 24 + 1 to index max_idx (both included) , so be careful if you want to retrieve these elements, with .loc should be fine but if going back to the original list or if using iloc then you'll have to go to max_idx + 1. | I have a short series of ~60 values. What I need to do is find the largest sum of 24 consecutive values in the series.
e.g. I would need to be able to find the sums of the groups [0:23],[1:24],[2:25],[3:26], ... , [37:60] and determine which group has the largest sum. | 0 | 1 | 38 |
0 | 55,927,699 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-30T15:36:00.000 | 1 | 2 | 0 | Doc2Vec - Finding document similarity in test data | 55,924,378 | 1.2 | python,machine-learning,gensim,doc2vec | The act of training-up a Doc2Vec model leaves it with a record of the doc-vectors learned from the training data, and yes, most_similar() just looks among those vectors.
Generally, doing any operations on new documents that weren't part of training will require the use of infer_vector(). Note that such inference:
ignores any unknown words in the new document
may benefit from parameter tuning, especially for short documents
is currently done just one document at time in a single thread – so, acquiring inferred-vectors for a large batch of N-thousand docs can actually be slower than training a fresh model on the same N-thousand docs
isn't necessarily deterministic, unless you take extra steps, because the underlying algorithms use random initialization and randomized selection processes during training/inference
just gives you the vector, without loading it into any convenient storage-object for performing further most_similar()-like comparisons
On the other hand, such inference from a "frozen" model can be parallelized across processes or machines.
The n_similarity() method you mention isn't really appropriate for your needs: it's expecting lists of lookup-keys ('tags') for existing doc-vectors, not raw vectors like you're supplying.
The similarity_unseen_docs() method you mention in your answer is somewhat appropriate, but just takes a pair of docs, re-calculating their vectors each time – somewhat wasteful if a single new document's doc-vector needs to be compared against many other new documents' doc-vectors.
You may just want to train an all-new model, with both your "training documents" and your "test documents". Then all the "test documents" get their doc-vectors calculated, and stored inside the model, as part of the bulk training. This is an appropriate choice for many possible applications, and indeed could learn interesting relationships based on words that only appear in the "test docs" in a totally unsupervised way. And there's not yet any part of your question that gives reasons why it couldn't be considered here.
Alternatively, you'd want to infer_vector() all the new "test docs", and put them into a structure like the various KeyedVectors utility classes in gensim - remembering all the vectors in one array, remembering the mapping from doc-key to vector-index, and providing an efficient bulk most_similar() over the set of vectors. | I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data for a specific document in the test data.
I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: "tag '-0.3502606451511383' not seen in training corpus/invalid" as there are vectors not in the dictionary. | 0 | 1 | 1,786 |
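A sketch of the inference-based route described in the first answer: infer a vector for every test document once, then compare them with plain cosine similarity instead of the tag-based most_similar() lookup. Here `model` and `test_docs` are placeholders, and infer_vector expects a list of word tokens per document:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Assumed: model is a trained gensim Doc2Vec, test_docs is a list of token lists.
test_vecs = np.vstack([model.infer_vector(doc) for doc in test_docs])
sims = cosine_similarity(test_vecs)        # sims[i, j]: similarity of test doc i to test doc j
most_like_0 = np.argsort(-sims[0])[1:6]    # indices of the 5 test docs closest to test doc 0
```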
0 | 55,924,682 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-30T15:36:00.000 | 0 | 2 | 0 | Doc2Vec - Finding document similarity in test data | 55,924,378 | 0 | python,machine-learning,gensim,doc2vec | It turns out there is a function called similarity_unseen_docs(...) which can be used to find the similarity of 2 documents in the test data.
However, I will leave the question unsolved for now as it is not very optimal since I would need manually compare the specific document with every other document in the test data. Also, it compares the words in the documents instead of the vectors which could affect accuracy. | I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data for a specific document in the test data.
I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: "tag '-0.3502606451511383' not seen in training corpus/invalid" as there are vectors not in the dictionary. | 0 | 1 | 1,786 |
0 | 55,932,064 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-01T04:38:00.000 | 0 | 1 | 0 | Text sequence to integer with many integer classes in Keras | 55,931,671 | 1.2 | python,tensorflow,machine-learning,keras,neural-network | Maybe you can formulate it as sequence prediction problem using RNN or regression problem with N digit output nodes. | I am getting strings ex. "one hundred twenty three", or "nine hundred ninety nine", and encoding it into a sequence of word tokens of length 4 using the Keras text preprocessing tokenizer and using it as my input with 4 nodes, and having many integer classes as my output ex. 0 1 2 ... 1000 with 1001 output nodes with a tensorflow backend.
I'm using an embedding input layer and then a flatten layer and then a dense output layer with softmax activation to classify the input sequence to a number.
This approach works well for numbers from 0-1000 etc. but scaling up to 100,000 numbers with strings like "eighty seven thousand four hundred twenty three" proves to be a problem with very long training times as there's 100,000 output neurons.
Is there a better way to structure the NN for possibly millions of numbers without sacrificing efficiency? | 0 | 1 | 91 |
0 | 55,941,809 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-01T19:02:00.000 | -2 | 2 | 0 | Scipy ImportError: No module named transform | 55,941,290 | 1.2 | python,scipy,scipy-spatial | It works on Python 3.7.3 with a recent SciPy. scipy.spatial.transform was only added in SciPy 1.2, so it does not exist in SciPy 0.19.1; upgrade SciPy and make sure the version available for your Python 2.7 environment is new enough. | I'm using a python script within ROS. Ros uses python 2.7 and the version of scipy that I'm using is 0.19.1.
The following error is reported:
from scipy.spatial.transform import Rotation as R
ImportError: No module named transform | 0 | 1 | 11,819 |
0 | 55,953,034 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-02T10:56:00.000 | 0 | 1 | 0 | Convert Keras model output into sparse matrix without forloop | 55,950,909 | 0 | python,tensorflow,machine-learning,keras | Isn't there a batch_size parameter in the predict()?
If I understand it correctly, the n means the number of samples, right?
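A minimal sketch of predicting in chunks and keeping only a sparse copy in RAM; X_test, the chunk size and the probability cut-off are assumptions, not values from the question:
from scipy import sparse
chunks = []
for start in range(0, X_test.shape[0], 1000):
    dense = model.predict(X_test[start:start + 1000], batch_size=256)
    dense[dense < 1e-3] = 0.0                      # zero out tiny probabilities so the matrix is actually sparse
    chunks.append(sparse.csr_matrix(dense))
predictions = sparse.vstack(chunks)                # scipy sparse matrix of shape (n_samples, 4000)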
Assume that your system RAM is enough to hold the entire data but the VRAM is not. | I have a pretrained keras model that has output with dimension of [n, 4000] (it makes the classification on 4000 classes).
I need to make a prediction on the test data (300K observations).
But when I call the method model.predict(X_train), I get an out-of-memory error, because I don't have enough RAM to store a matrix with shape (300K, 4000).
Therefore, it would be logical to convert the model output to a sparse matrix.
But wrapping the predict method into scipy function sparse.csr_matrix does not work (sparse.csr_matrix(model.predict(X_train))), because it first allocates space in the RAM for the prediction, and only then it converts into the sparse matrix.
I can also make a prediction on a specific batch of test data, and then convert it using a for loop.
But it seems to me that this is not optimal and is a very resource-consuming way.
Please give me advice, can there be any other methods for converting the model output into a sparse matrix? | 0 | 1 | 261 |
0 | 55,957,245 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-02T16:32:00.000 | 2 | 2 | 0 | Can you use Python with MS Machine Learning in SQL SERVER 2016 | 55,956,728 | 0.197375 | python,machine-learning,sql-server-2016 | To add to @DMellons answer; Java is supported in SQL 2019 and up. So:
SQL 2016: R
SQL 2017: R, Python
SQL 2019: R, Python, Java, more languages may come. | I want to use Microsoft's Machine Learning Services in SQL SERVER 2016, specifically to leverage Python, NOT R.
Is it possible? | 0 | 1 | 73 |
0 | 55,963,728 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-03T03:22:00.000 | 0 | 2 | 0 | $ in R... What is the equivalent in Python? | 55,962,891 | 0 | r,python-3.x | You can generally use pandas to mimic R. You can use [] as below.
my_column = df['columnName'] | is there a way to refer to a specific column relative to a specific data frame in python like there is in R (data.frame$data)? | 0 | 1 | 74 |
0 | 55,962,898 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-05-03T03:22:00.000 | 1 | 2 | 0 | $ in R... What is the equivalent in Python? | 55,962,891 | 1.2 | r,python-3.x | Usually with [] => data.frame["data"]
Or for object like with . => data.frame.data | is there a way to refer to a specific column relative to a specific data frame in python like there is in R (data.frame$data)? | 0 | 1 | 74 |
0 | 55,970,210 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-03T12:05:00.000 | 0 | 1 | 0 | Best way to scale across different datasets | 55,969,460 | 0 | python,scikit-learn,neural-network,preprocessor,feature-scaling | One possible solution could be like this.
Normalize (pre-process) the dataset A such that the range of each features is within a fixed interval, e.g., between [-1, 1].
Train your model on the normalized set A.
Whenever you are given a new dataset like B:
(3.1.) Normalize the new dataset such that the feature have the same range as they have in A ([-1, 1]).
(3.2) Apply your trained model (step 2) on the normalized new set (3.1).
As you have a one-to-one mapping between set B and its normalized version, then you can see what is the prediction on set B, based on predictions on normalized set B.
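A minimal scikit-learn sketch of steps 1-3, assuming X_A_train, X_A_test and X_B are your feature arrays (placeholder names):
from sklearn.preprocessing import MinMaxScaler
scaler_a = MinMaxScaler(feature_range=(-1, 1)).fit(X_A_train)
X_A_train_n = scaler_a.transform(X_A_train)
X_A_test_n = scaler_a.transform(X_A_test)            # evaluate on A as usual
scaler_b = MinMaxScaler(feature_range=(-1, 1)).fit(X_B)
X_B_n = scaler_b.transform(X_B)                       # B gets its own scaler so its features also land in [-1, 1]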
Note you do not need to have access to set B in advance (or such sets if they are hundreds of them). You normalize them, as soon as you are given one and you want to test your trained model on it. | I have come across a peculiar situation when preprocessing data.
Let's say I have a dataset A. I split the dataset into A_train and A_test. I fit the A_train using any of the given scalers (sci-kit learn) and transform A_test with that scaler. Now training the neural network with A_train and validating on A_test works well. No overfitting and performance is good.
Let's say I have dataset B with the same features as in A, but with different ranges of values for the features. A simple example of A and B could be Boston and Paris housing datasets respectively (This is just an analogy to say that features ranges like the cost, crime rate, etc vary significantly ). To test the performance of the above trained model on B, we transform B according to scaling attributes of A_train and then validate. This usually degrades performance, as this model is never shown the data from B.
The peculiar thing is if I fit and transform on B directly instead of using scaling attributes of A_train, the performance is a lot better. Usually, this reduces performance if I test this on A_test. In this scenario, it seems to work, although it's not right.
Since I work mostly on climate datasets, training on every dataset is not feasible. Therefore I would like to know the best way to scale such different datasets with the same features to get better performance.
Any ideas, please.
PS: I know training my model with more data can improve performance, but I am more interested in the right way of scaling. I tried removing outliers from datasets and applied QuantileTransformer, it improved performance but could be better. | 0 | 1 | 1,123 |
0 | 61,309,357 | 0 | 0 | 0 | 0 | 2 | false | 35 | 2019-05-03T13:20:00.000 | 0 | 5 | 0 | Tensorboard not found as magic function in jupyter | 55,970,686 | 0 | python,tensorflow,tensorflow2.0,tensorboard,tensorflow2.x | extension loading is required before. You can try -> %load_ext tensorboard
It worked for me. I am using TensorFlow 1.x. | I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0.
With the tensorboard version 1.13.1, and python 3.6.
using
...
%tensorboard --logdir {logs_base_dir}
I get the error :
UsageError: Line magic function %tensorboard not found
Do you have an idea what the problem could be? It seems that all versions are up to date and the command seems correct too.
Thanks | 0 | 1 | 25,713 |
0 | 72,496,748 | 0 | 0 | 0 | 0 | 2 | false | 35 | 2019-05-03T13:20:00.000 | 0 | 5 | 0 | Tensorboard not found as magic function in jupyter | 55,970,686 | 0 | python,tensorflow,tensorflow2.0,tensorboard,tensorflow2.x | That's how I solved it
%load_ext tensorboard
%tensorboard --logdir /content/drive/MyDrive/Dog\ Vision/logs
After --logdir, this is my path directory /content/drive/MyDrive/Dog\ Vision/logs. It should be different for you. | I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0.
With the tensorboard version 1.13.1, and python 3.6.
using
...
%tensorboard --logdir {logs_base_dir}
I get the error :
UsageError: Line magic function %tensorboard not found
Do you have an idea what the problem could be? It seems that all versions are up to date and the command seems correct too.
Thanks | 0 | 1 | 25,713 |
0 | 55,977,698 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-03T22:13:00.000 | 0 | 1 | 0 | Visualizing a frozen graph_def.pb | 55,977,680 | 0 | python,tensorflow,tensorboard | You can try to use TensorBoard. It is on the Tensorflow website... | I am wondering how to go about visualization of my frozen graph def. I need it to figure out my tensorflow networks input and output nodes. I have already tried several methods to no avail, like the summarize graph tool. Does anyone have an answer for some things that I can try? I am open to clarifying questions, thanks in advance. | 0 | 1 | 221 |
0 | 55,978,184 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-03T22:33:00.000 | 0 | 1 | 0 | Need setup recommendation for parallel processing many contracts through many scenarios | 55,977,829 | 0 | python,pandas,parallel-processing,dask | From a simple conceptual perspective:
Write yourself a function that takes a contract and a scenario as parameters and performs the desired calculation
Use Python's multiprocessing to set up a worker pool
Create a Queue (from multiprocessing package) that is to be shared across workers
Fill the queue with all combinations (it might be a good idea to use fixed indices and only push a tuple of the contract/scenario indices (C, S) to the queue, to reduce the required space)
Map your function to the worker pool given the queue
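A minimal local sketch of those steps; run_projection stands in for your own per-pair calculation and is not a real library call:
from itertools import product
from multiprocessing import Pool

def project(pair):
    scenario, contract = pair
    return scenario, contract, run_projection(scenario, contract)   # run_projection is your own function

if __name__ == "__main__":
    pairs = list(product(range(1000), range(1000)))   # all (scenario, contract) index combinations
    with Pool() as pool:                              # one worker per CPU core by default
        results = pool.map(project, pairs, chunksize=500)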
There are more elaborate ways to do this (including amqp/celery/...) depending whether you want to distribute tasks across multiple machines or just to all of your locally available cores. This simple concept should contain all required keywords to build your first local multiprocessing on your own! | I need a recommendation from gurus out there on how to go about setting up a modeling application. I have thousands of scenarios to run on thousands for contracts for cash flow projections. Assuming I have 1000 scenarios and 1000 contracts I would need to run 1,000,000 projections (1000x1000). I'd like to do this in parallel using dask, ray or some other method. My data are in dataframes but I'm open to better suggestions. I can create 2 loops (scenario,contract) for each run but this would be sequential.
Scenario1 w Contract1
Scenario1 w Contract2
Scenario1 w Contract3
.
.
.
Scenario1000 w Contract1000
I'd like to distribute compute to multiple processor and multiple servers.
I'll save my question on the inner loop projections where I have to run 100 scenario projections at each time step of the 1,000,000 runs for next time.
Any suggestion to point me in the right direction would help. | 0 | 1 | 37 |
0 | 58,553,910 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 0 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | I've faced this issue for a long time when I was running W2V Models which requires 'gensim'.
First of all I've installed Anaconda Navigator and then installed required packages using pip.
I've installed gensim manually using pip in cmd. When I ran the W2V model, it took 40 minutes to train and give the result, which was annoying and wasted a lot of time.
This problem got solved now. I just did what the warning showed. I've uninstalled gensim from my computer. Prior to that I've already created a system path of mingw-w64 in the environment variable which is an environment for c,c++ etc., programs. Later, I've reinstalled gensim using 'pip install gensim'.
Now the program is running within seconds which made a drastic change in the execution time.
Hope this helps... | I'm using the library node2vec, which is based on gensim word2vec model to encode nodes in an embedding space, but when i want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Install a
C compiler and reinstall gensim for fast training.
Can any one help me to fix this issue please ? | 0 | 1 | 8,850 |
0 | 56,800,565 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 1 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0.049958 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | anaconda prompt
conda update conda-build
==
windows 7 (32bit)
python 3.7.3
conda-build 3.18.5
gensim 3.4.0 | I'm using the library node2vec, which is based on gensim word2vec model to encode nodes in an embedding space, but when i want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Install a
C compiler and reinstall gensim for fast training.
Can any one help me to fix this issue please ? | 0 | 1 | 8,850 |
0 | 56,666,561 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 1 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0.049958 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | For me, degrading back to Gensim version 3.7.1 from 3.7.3 worked. | I'm using the library node2vec, which is based on gensim word2vec model to encode nodes in an embedding space, but when i want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Install a
C compiler and reinstall gensim for fast training.
Can any one help me to fix this issue please ? | 0 | 1 | 8,850 |
0 | 61,169,238 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-05-04T07:45:00.000 | 7 | 1 | 0 | Does Google Colab use my internet traffic while downloading a dataset or importing a new package into colab notebook? | 55,980,568 | 1.2 | python,python-3.x,google-colaboratory,python-module | It happens on Google cloud servers and your internet connection is used only to run the code.
I tried downloading some huge dataset using wget and my internet data wasn't affected by it. | For example when importing CIFAR-10 from Keras (using from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data())
or temporarily installing a package like HAZM (Persian form of NLTK) using !pip install hazm which is not pre-installed on Google Colab, the cell containing the import statement starts to download the material it needs. I want to know if my internet traffic is used in the downloading process, or it happens on Google cloud servers and my internet connection is used only to run the code?
Thanks. | 0 | 1 | 2,643 |
0 | 55,984,404 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-05-04T12:35:00.000 | 0 | 1 | 0 | previous steps before to calculate disparity? Is rectification needed? | 55,982,564 | 1.2 | python-3.x,opencv,stereo-3d,disparity-mapping | Yes, disparity needs rectified images. Since the stereo matching is done with epipolar lines, rectified images ensure that all the distortions are rectified and hence the algorithm can search blocks in a straight line. For a basic level you can try out StereoBM provided by openCV using the recitified stereo image pair.
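A minimal OpenCV sketch, assuming the rectified pair is already on disk (the file names and tuning values are placeholders):
import cv2
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)   # numDisparities must be a multiple of 16
disparity = stereo.compute(left, right)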
Raw frames from camera -> Rectification -> Disparity map -> Depth perception.
This will be the pipeline for any passive stereo camera. | I want to do stereo vision and finally find the real distance to the objects from cameras. I have done image rectification.Now I want to calculate disparity. My question is, to do disparity, do I need to rectify images first? Thank you! | 0 | 1 | 165 |
0 | 55,993,149 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-05T00:30:00.000 | 0 | 2 | 0 | Pandas read_csv method can't get 'œ' character properly while using encoding ISO 8859-15 | 55,987,923 | 0 | python-3.x,pandas,encoding | Anyone have a clue ? I've manage the problem by manually rewrite this special character before reading my csv with pandas but that doesn't answer my question :( | I have some trubble reading with pandas a csv file which include the special character 'œ'.
I've done some research and it appears that this character has been added to the ISO 8859-15 encoding standard.
I've tried to specify this encoding standard to the pandas read_csv methods but it doesn't properly get this special character (I got instead a '☐') in the result dataframe :
df= pd.read_csv(my_csv_path, ";", header=None, encoding="ISO-8859-15")
Does someone know how I could get the right 'œ' character (or even better, the string 'oe') instead of this?
Thank's a lot :) | 0 | 1 | 167 |
0 | 56,062,610 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-05T08:46:00.000 | 1 | 2 | 0 | additional of features decrease the accuracy- random forest | 55,990,255 | 0.099668 | python,machine-learning,random-forest | Basically, you may be "confusing" your model with useless features. MORE FEATURES or MORE DATA WILL NOT ALWAYS MAKE YOUR MODEL BETTER. The new features will also not get weight zero because the model will try hard to use them! Because there are so many (175!), RF is just not able to come back to the previous "pristine" model with better accuracy and recall (maybe these 9 features are really not adding anything useful).
Think about how a decision tree essentially works. These new features will cause some new splits that can worsen the results. Try to work it out from the basics and slowly adding new information always checking the performance. In addition, pay attention to for example the number of features used per split (mtry). For so many features, you would need to have a very high mtry (to allow for a big sample to be considered for every split). Have you considered adding 1 or 2 more and checking how the accuracy responds? Also, don't forget mtry! | I am using sklearn's random forests module to predict a binary target variable based on 166 features.
When I increase the number of dimensions to 175 the accuracy of the model decreases (from accuracy = 0.86 to 0.81 and from recall = 0.37 to 0.32) .
I would expect more data to only make the model more accurate, especially when the added features were with business value.
I built the model using sklearn in python.
Why the new features did not get weight 0 and left the accuracy as it was ? | 0 | 1 | 2,477 |
0 | 55,993,920 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-05T08:46:00.000 | 0 | 2 | 0 | additional of features decrease the accuracy- random forest | 55,990,255 | 0 | python,machine-learning,random-forest | More data does not always make the model more accurate. Random forest is a traditional machine learning method where the programmer has to do the feature selection. If the model is given a lot of data but it is bad, then the model will try to make sense out of that bad data too and will end up messing things up. More data is better for neural networks as those networks select the best possible features out of the data on their own.
Also, 175 features is too much and you should definitely look into dimensionality reduction techniques and select the features which are highly correlated with the target. there are several methods in sklearn to do that. You can try PCA if your data is numerical or RFE to remove bad features, etc. | I am using sklearn's random forests module to predict a binary target variable based on 166 features.
When I increase the number of dimensions to 175 the accuracy of the model decreases (from accuracy = 0.86 to 0.81 and from recall = 0.37 to 0.32) .
I would expect more data to only make the model more accurate, especially when the added features were with business value.
I built the model using sklearn in python.
Why the new features did not get weight 0 and left the accuracy as it was ? | 0 | 1 | 2,477 |
0 | 56,006,150 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-05T22:56:00.000 | 0 | 1 | 0 | Is there a way to download "Responses In Progress" survey from Qualtrics? | 55,997,128 | 0 | python,qualtrics | Not through the API. You can do it manually through the Qualtrics interface.
If you need to use the API and the survey is invite only, an alternative would be to download the distribution history for all the distributions. That will tell you the status of each invitee. | I'm looking for a way to download surveys that are still open on Qualtrics so that I can create a report on how many surveys are completed and how many are still in progress. I was able to follow their API documentation to download the completed surveys to a csv file but I couldn't find a way to do the same for the In Progress surveys. Thanks in advance for your help. | 0 | 1 | 156 |
0 | 67,363,684 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2019-05-06T23:44:00.000 | 1 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 0.049958 | python,keras,lstm,cross-entropy,mean-square-error | The answer is right there in your question. Value of binary cross entropy loss is higher than rmse loss.
Case 1 (Large Error):
Lets say your model predicted 1e-7 and the actual label is 1.
Binary Cross Entropy loss will be -log(1e-7) = 16.11.
Root mean square error will be sqrt((1 - 1e-7)^2) ≈ 1.0.
Case 2 (Small Error)
Lets say your model predicted 0.94 and the actual label is 1.
Binary Cross Entropy loss will be -log(0.94) = 0.06.
Root mean square error will be sqrt((1 - 0.94)^2) = 0.06.
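These numbers are easy to verify with a quick NumPy check:
import numpy as np
for p in (1e-7, 0.94):                        # predicted probability, true label is 1
    bce = -np.log(p)                          # binary cross-entropy
    rmse = np.sqrt((1.0 - p) ** 2)            # root of the squared error
    print(round(bce, 2), round(rmse, 2))      # prints 16.12 1.0, then 0.06 0.06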
In Case 1 when prediction is far off from reality, BCELoss has larger value compared to RMSE. When you have large value of loss you'll have large value of gradients, thus optimizer will take a larger step in direction opposite to gradient. Which will result in relatively more reduction in loss. | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use the binary cross-entropy as loss function, the loss value for training and testing is relatively high as compared to using the mean squared error (MSE) function.
Upon research, I came across justifications that binary cross-entropy should be used for classification problems and MSE for the regression problem. However, in my case, I am getting better accuracies and lesser loss value with MSE for binary classification.
I am not sure how to justify these obtained results. Why not use mean squared error for classification problems? | 0 | 1 | 8,441 |
0 | 58,903,890 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2019-05-06T23:44:00.000 | 6 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 1 | python,keras,lstm,cross-entropy,mean-square-error | I would like to show it using an example.
Assume a 6 class classification problem.
Assume,
True probabilities = [1, 0, 0, 0, 0, 0]
Case 1:
Predicted probabilities = [0.2, 0.16, 0.16, 0.16, 0.16, 0.16]
Case 2:
Predicted probabilities = [0.4, 0.5, 0.1, 0, 0, 0]
The MSE in Case 1 and Case 2 is 0.128 and 0.1033 respectively.
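Checking those values with NumPy:
import numpy as np
y_true = np.array([1, 0, 0, 0, 0, 0])
case1 = np.array([0.2, 0.16, 0.16, 0.16, 0.16, 0.16])
case2 = np.array([0.4, 0.5, 0.1, 0.0, 0.0, 0.0])
print(np.mean((y_true - case1) ** 2))   # 0.128
print(np.mean((y_true - case2) ** 2))   # ~0.1033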
Although, Case 1 is correctly predicting class 1 for the instance, the loss in Case 1 is higher than the loss in Case 2. | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use the binary cross-entropy as loss function, the loss value for training and testing is relatively high as compared to using the mean squared error (MSE) function.
Upon research, I came across justifications that binary cross-entropy should be used for classification problems and MSE for the regression problem. However, in my case, I am getting better accuracies and lesser loss value with MSE for binary classification.
I am not sure how to justify these obtained results. Why not use mean squared error for classification problems? | 0 | 1 | 8,441 |
0 | 56,045,324 | 0 | 0 | 0 | 0 | 3 | true | 9 | 2019-05-06T23:44:00.000 | -1 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 1.2 | python,keras,lstm,cross-entropy,mean-square-error | I'd like to share my understanding of the MSE and binary cross-entropy functions.
In the case of classification, we take the argmax of the probability of each training instance.
Now, consider an example of a binary classifier where model predicts the probability as [0.49, 0.51]. In this case, the model will return 1 as the prediction.
Now, assume that the actual label is also 1.
In such a case, if MSE is computed on the argmaxed prediction (1 against the true label 1), it will return 0 as a loss value, whereas binary cross-entropy, computed on the predicted probability 0.51, will return some "tangible" value.
And, if somehow the trained model predicts a similar kind of probability for all data samples, then binary cross-entropy effectively returns a big accumulated loss value, whereas MSE will return 0.
According to the MSE, it's a perfect model, but, actually, it's not that good model, that's why we should not use MSE for classification. | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use the binary cross-entropy as loss function, the loss value for training and testing is relatively high as compared to using the mean squared error (MSE) function.
Upon research, I came across justifications that binary cross-entropy should be used for classification problems and MSE for the regression problem. However, in my case, I am getting better accuracies and lesser loss value with MSE for binary classification.
I am not sure how to justify these obtained results. Why not use mean squared error for classification problems? | 0 | 1 | 8,441 |
0 | 56,014,229 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-07T00:57:00.000 | 0 | 1 | 0 | Determine the language for UDF creation in Hive | 56,014,157 | 0 | java,python,hive,user-defined-functions | This question probably isnt within guidelines because you are asking for an opinion.
Having said that, I would propose that:
A) you pick a language that you know.
B) if you know both, then pick based upon the features you need.
C) consider performance - I believe (but cannot confirm) that a compiled Java JAR will run without launching a Java runtime just to support that Java module (it will run inside the Hive Java instance). To run a Python module, a new Python interpreter will need to be instantiated and data transferred via interprocess communication. Thus Java is possibly slightly more performant, especially if the algorithm is simple. However, unless you are processing huge data sets you probably would not even notice.
Finally, you could probably do all of the functions you asked with Hive query language. | Summary : Concern is related to UDF creation in Hive.
Dear friends, As I am new in creating UDFs in Hive (I have read about this via google but not getting very clear idea), my first thing here is to determine which would be the best possible way like Java/Python or any other to write hive UDFs.
Another thing is on what basis I should analyse? What all parameter I should look for ?
Please not that I have few functions as given below for which UDFs needs to be written.
1. To select and group by clauses required for another function when "no aggregation" is needed.
2. To return the select and group by clauses required when "aggregation" is needed.
3. For vector_indexes are SUM, LISTAGG strings for the data collection query
4. To return the WHERE clause used by other function.
5 To return the nth item in a comma separated string.
6. Percentile Value function for Narrow data.
7. To calculates percentile for a given counter name. Along with the percentile, it also outputs the number of samples used in the calculation, the peak and average.
Thank you very much in advance, | 0 | 1 | 38 |
0 | 56,063,874 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-07T15:18:00.000 | 1 | 1 | 0 | DQN behaves differently on different computers | 56,025,783 | 0.197375 | python,python-3.x,tensorflow,keras,reinforcement-learning | I assume you run a certain version of your code with a given hyper-parameter values. Then, you need to fix random seed in the beginning of your code for tensorflow (e.g. tf.set_random_seed(1)), for numpy (e.g. np.random.seed(1)) and for random, if you use it.
Additionally, you have to have the same version of tensorflow on all your machines. I have had the experience that even the forward pass between 1.3 and 1.8 resulted in two different outputs. The same check is required for gym.
Finally, you have to check with either cpu or gpu. You cannot compare the results of a cpu run with a gpu run.
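Putting the seeding advice above into code (the TensorFlow 1.x API is assumed, matching the versions in the question):
import random
import numpy as np
import tensorflow as tf

random.seed(1)
np.random.seed(1)
tf.set_random_seed(1)   # TF 1.x; TF 2.x uses tf.random.set_seed(1)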
If neither of these checks worked, I can check your colab code if you want to share it. | I have a more or less standard implementation of DQN solving the Atari "Breakout" (from Coursera Reinforcement learning course), that behaves totally different on different computers:
on my Laptop it converges each time I run it
on Coursera and Google Colab servers it never converges!
I use
Python3
Tensorflow
Keras (only for Conv2D, Flatten and Dense layers)
I already spend some two weeks on the issue without any progress :(
I already checked:
The versions:
Python: same (3.6.7)
Tensorflow: same (tested with 1.4.0 and 1.5.0)
numpy: same up to the bugfix number (1.16.2 vs 1.16.3)
Random seeds
float32 vs float64: I always pass dtype=np.float32 to each np.array and tf.placeholder call.
CPU/GPU
My laptop, that converges, uses old CPU (that limits Tensorflow to <= 1.5.0)
On Coursera server, that never converges: CPU?
On Google Colab server, that never converges: GPU
My questions here are:
What may be the cause of the different behavior?
How do such problems get debugged?
What can I also do/check to finally find the problem?
Update: All the code (incl. hyperparameters, env, ...) is exactly the same. | 0 | 1 | 98 |
0 | 56,027,872 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-07T16:50:00.000 | 0 | 1 | 0 | Getting OpenCV to work with python after compiling from source | 56,027,199 | 1.2 | python,python-3.x,opencv | The solution ended up being both simpler and sloppier than I would have liked. I just installed the regular distribution using pip install opencv-contrib-python, then went into the cv2 folder in Lib/site-packages, replaced the python extension (cv2.cp36-win32.pyd in my case. may be different for others) with the .pyd file from my CMake build (build/lib/python3/Release) and copied everything from build/bin/Release into the Lib/site-packages/cv2 folder. It doesn't look pretty or organized but python can find everything now. If anyone has a cleaner way to do this I'd love to hear it. | I am having an issue getting OpenCV to work with python. I compiled from source using CMake in order to gain access to the SIFT module. Whenever I try to use openCV however, python returns the "No module named 'cv2'" error. It works fine when I install using pip but then I have no SIFT. My build directory is set as an environment variable and my bin directory is in my system path. There were no build issues and the applications that came with the build run fine. Is there another step that I have to perform, such as installing from the compiled project using pip? How do I get my openCV library, compiled from source, to be importable by python? | 0 | 1 | 407 |
0 | 56,039,845 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-08T11:29:00.000 | 0 | 1 | 0 | how to extract line from a word2vec file? | 56,039,771 | 1.2 | python,pycharm | glove_model["Activity"] should get you its vector representation from the loaded model. This is because glove_model is an object of type KeyedVectors and you can use key value to index into it. | I have created a word2vec file and I want to extract only the line at position [0]
this is the word2vec file
`36 16
Activity 0.013954502 0.009596351 -0.0002082094 -0.029975398 -0.0244055 -0.001624907 0.01995442 0.0050479663 -0.011549354 -0.020344704 -0.0113901375 -0.010574887 0.02007604 -0.008582828 0.030914625 -0.009170294
DATABASED%GWC%5 0.022193532 0.011890317 -0.018219836 0.02621059 0.0029900416 0.01779779 -0.026217759 0.0070709535 -0.021979155 0.02609082 0.009237218 -0.0065825963 -0.019650755 0.024096865 -0.022521153 0.014374277
DATABASED%GWC%7 0.021235622 -0.00062567473 -0.0045315344 0.028400827 0.016763352 0.02893731 -0.013499333 -0.0037113864 -0.016281538 0.004078895 0.015604254 -0.029257657 0.026601797 0.013721668 0.016954066 -0.026421601` | 0 | 1 | 12 |
0 | 56,047,915 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-08T19:28:00.000 | 0 | 1 | 0 | How would I go about image labeling/Classification? | 56,047,785 | 0 | python,machine-learning,deep-learning,classification | There's two routes you can take, one where you have labeled data (or you want to label data yourseld), and one where you don't have that.
Let's start with the latter. Say you have an image of a passport. You want to detect where the text in the image is, and what that text says. You can achieve this using a library called pytesseract. It is a Python wrapper around the Tesseract OCR engine that does exactly this for you. It works well because it has been trained on a lot of other images, so it is good at detecting text in any image.
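A minimal sketch of that route; the file name is a placeholder and the Tesseract binary itself must be installed on the machine:
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("passport_scan.png"))
print(text)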
If you have labels you might be able to improve on the model you could make with pytesseract, but this is a lot harder. If you want to learn it anyway, I would recommend learning TensorFlow and using "transfer learning" to improve your model. | Let's say I have a set of images of passports. I am working on a project where I have to identify the name on each passport and eventually transform that object into text.
For the very first part of labeling (or classification (I think. beginner here)) where the name is on each passport, how would I go about that?
What techniques / software can I use to accomplish this?
In great detail or any links would be great. I'm trying to figure out how this is done exactly so I can begin coding.
I know training a model is involved possibly but I'm just not sure
I'm using Python if that matters.
thanks | 0 | 1 | 91 |
0 | 56,048,584 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-08T20:10:00.000 | 2 | 1 | 0 | Is there a way to use the "read_csv" method to read the csv files in order they are listed in a directory? | 56,048,345 | 1.2 | python,pandas,csv,matplotlib,python-3.7 | you could use os.listdir() to get all the files in the folder and then sort them out in a certain way, for example by name(it would be enough using the python built in sorted() ). Instead if you want more fancy ordering you could retrieve both the name and last modified date and store them in a dictionary, order the keys and retrieve the values. So as @Fausto Morales said it all only depends on which order you would like them to be sorted. | I am plotting plots on one figure using matplotlib from csv files however, I want the plots in order. I want to somehow use the read_csv method to read the csv files from a directory in the order they are listed in so that they are outputted in the same fashion.
I want the plots listed under each other the same way the csv files are listed in the directory. | 0 | 1 | 47 |
0 | 56,049,286 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-08T21:06:00.000 | 0 | 2 | 0 | How do I group similar categories? | 56,049,055 | 0 | python,python-3.x,nlp,classification,text-classification | Use a pre trained model to generate embeddings, and from there you can cluster the embeddings using a clustering algorithm like t-SNE or UMAP. I recommend fasttext or spacy, with spacey being the easiest to use. | I have about 1200 tv show categories .. like Drama, News, Sports, Sports-non event, Drama Medical, Drama Crime.. etc
How do I use NLP so that I get groups such that Drama, Drama medical and Drama Crime group together and Sports, Sports-non event etc group together and so on... basically the end goal is to reduce the 1200 categories to very few broad categories.
Till now I have used bag of words to build a dictionary with 146 words.. | 0 | 1 | 204 |
0 | 56,053,083 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-09T04:27:00.000 | 1 | 2 | 0 | Keras tf backend predict speed slow for batch size of 1 | 56,052,206 | 0.099668 | python,performance,keras | The batch size controls parallelism when predicting, so it is expected that increasing the batch size will have better performance, as you can use more cores and use GPU more efficiently.
You cannot really work around it; there is nothing really to work around, since using a batch size of one is the worst case for performance. Maybe you should look into a smaller network that is faster to predict, or predict on the CPU if your experiments are done on a GPU, to minimize overhead due to transfer.
Don't forget that model.predict does a full forward pass of the network, so its speed completely depends on the network architecture. | I am combining a Monte-Carlo Tree Search with a convolutional neural network as the rollout policy. I've identified the Keras model.predict function as being very slow. After experimentation, I found that surprisingly model parameter size and prediction sample size don't affect the speed significantly. For reference:
0.00135549 s for 3 samples with batch_size = 3
0.00303991 s for 3 samples with batch_size = 1
0.00115528 s for 1 sample with batch_size = 1
0.00136132 s for 10 samples with batch_size = 10
as you can see I can predict 10 samples at about the same speed as 1 sample. The change is also very minimal though noticeable if I decrease parameter size by 100X but I'd rather not change parameter size by that much anyway. In addition, the predict function is very slow the first time run through (~0.2s) though I don't think that's the problem here since the same model is predicting multiple times.
I wonder if there is some workaround because clearly the 10 samples can be evaluated very quickly, all I want to be able to do is predict the samples at different times and not all at once since I need to update the Tree Search before making a new prediction. Perhaps should I work with tensorflow instead? | 0 | 1 | 2,440 |
0 | 56,055,689 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-09T08:46:00.000 | 0 | 1 | 0 | cannot reshape array of size (x,) into shape (x,y,z,1) | 56,055,571 | 1.2 | python,numpy,reshape,shapes,numpy-ndarray | I found the very simple solution:
x_train_left = np.stack(x_train_left)
and then when I try:
x_train_left.shape prints (2200, 250, 250, 1) | I'm trying to convert a numpy ndarray with a shape of (2200,) to numpy ndarray with a shape of (2200,250,250,1). every single row contains an image (shape: 250,250,1)
This is my object:
type(x_train_left) prints numpy.ndarray
x_train_left.shape prints (2200,)
type(x_train_left[0]) prints numpy.ndarray
x_train_left[0].shape prints (250, 250, 1)
But for some reason when i try to reshape x_train_left to (2200,250,250,1) i get the following error:
ValueError: cannot reshape array of size 2200 into shape (2200,250,250,1)
Thanks for any help. I've searched for duplicate subjects, but they all have different problems. | 0 | 1 | 1,841
0 | 56,058,663 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-09T08:57:00.000 | 0 | 1 | 0 | Load multiple Keras models in different processes | 56,055,769 | 0 | python,tensorflow,keras,multiprocessing,python-multiprocessing | I still don't know the exact cause of the problem. However, I found out that my main process was loading a keras model and removing that solved my problem. I can now have multiple models running in parallel. | I have several trained Keras models, weights stored in h5 files using keras.models.save_model. They do not have the same architecture.
My goal is to load all of them in separate processes and be able to predict. I currently try doing this using a class which stores a TensorFlow session and graph object. I then use with statements at loading time and prediction time to prevent interference with any global variables.
I can create my (empty) Keras Sequential model without problems, but when I call its load_weights function, the process just freezes.
Setups with Graph and Session objects I tried:
specific Graph and Session -> process freezes on load_weights
specific Graph only -> "TypeError: Cannot interpret feed_dict key as Tensor"
specific Session only -> process freezes on load_weights
I have been through most of the related answers on SO but have not been able to find a solution or even someone with the same problem.
Thanks for your help! | 0 | 1 | 713 |
0 | 56,082,079 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-10T01:43:00.000 | 2 | 1 | 0 | ML with imbalanced binary dataset | 56,069,657 | 0.379949 | python,scikit-learn,dataset,resampling,oversampling | Your assumption is correct. Your machine learning model is basically overfitting on your training data which has the same pattern repeated for one class and thus, the model learns that pattern and misses the rest of the patterns, that is there in test data. This means that the model will not perform well in the wild world.
If SMOTE is not working, you can experiment by testing different machine learning models. Random forest generally performs well on this type of dataset, so try to tune your RF model by pruning it or tuning the hyperparameters. Another way is to assign class weights when training the model. You can also try penalized models, which impose an additional cost on the model when it misclassifies the minority class.
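A minimal sketch of the class-weight suggestion with scikit-learn, assuming your existing (un-resampled) split X_train / y_train; the number of trees is an arbitrary choice:
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)   # "balanced" up-weights the minority class during training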
You can also try undersampling since you have already tested oversampling. But most probably your undersampling will also suffer from the same problem. Please try simple oversampling as well instead of SMOTE to see how your results change.
Another more advanced method that you should experiment is batching. Take all of your minority class and an equal number of entries from the majority class and train a model. Keep doing this for all the batches of your majority class and in the end you will have multiple machine learning models, which you can then use together to vote. | I have a problem I am trying to solve:
- imbalanced dataset with 2 classes
- one class dwarfs the other one (923 vs 38)
- f1_macro score when the dataset is used as-is to train RandomForestClassifier stays for TRAIN and TEST in 0.6 - 0.65 range
While doing research on the topic yesterday, I educated myself in resampling and especially SMOTE algorithm. It seems to have worked wonders for my TRAIN score, as after balancing the dataset with them, my score went from ~0.6 up to ~0.97. The way that I have applied it was as follows:
I have split my TEST set away from the rest of the data in the beginning (10% of the whole data)
I have applied SMOTE on TRAIN set only (class balance 618 vs 618)
I have trained a RandomForestClassifier on TRAIN set, and achieved f1_macro = 0.97
when testing with TEST set, f1_macro score remained in ~0.6 - 0.65 range
What I would assume happened, is that the holdout data in TEST set held observations, which were vastly different from pre-SMOTE observations of the minority class in TRAIN set, which ended up teaching the model to recognize cases in TRAIN set really well, but threw the model off-balance with these few outliers in the TEST set.
What are the common strategies to deal with this problem? Common sense would dictate that I should try and capture a very representative sample of minority class in the TRAIN set, but I do not think that sklearn has any automated tools which allow that to happen? | 0 | 1 | 162 |
0 | 56,090,218 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-11T12:02:00.000 | 2 | 1 | 0 | Minimize a piecewise linear, convex function with scipy | 56,090,155 | 1.2 | python,scipy | If the function is piecewise linear and convex, the minimum must be at one of the points where the linear pieces are connected. There is no need for a derivative, you should be able to use a binary search. | I want to find the minimum of a function which is piecewise linear, convex and differentiable at all but a finite number of points. What scipy.optimize.minimize method is appropriate to find a fast solution to my problem? | 0 | 1 | 146 |
0 | 56,091,221 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-11T13:53:00.000 | 0 | 1 | 0 | Pandas Merge Dataframes Sequentially on Conditions | 56,090,979 | 0 | python,pandas,dataframe | I don't think that there is one line code to do this. So follow the steps.
1) First, create a list:
dfs = []
2) Merge for each condition on dataframe:
dfs.append(pd.merge(df1, df2, left_on='col1', right_on='col1', how='outer').dropna())
dfs.append(pd.merge(df1, df2, left_on='col1', right_on='col2', how='outer').dropna())
dfs.append(pd.merge(df1, df2, left_on='col1', right_on='col3', how='outer').dropna())
^ repeat
3) Now concatenate them:
pd.concat(dfs) | Suppose I have 2 dataframe:
DF1:
Col1 | Col2 | Col3
XCN000370/17-18C | XCN0003711718C | 0003971718
DF2
Col1 | Col2 | Col3
XCN0003711718C | XCN0003711718C | 0003971718
I want them to merge like this:
First Match Col1 (DF1) and Col1 (DF2)
In Remaining Unmatched, Match Col1 (DF1) with Col2 (DF2)
In remaining Unmatched, Match Col1 (DF1) with Col3 (DF2)
Now repeat this by exchanging DF1 and DF2 with remaining unmatched
In Remaining Unmatched Match Col1 (DF2) and Col1 (DF1)
In Remaining Unmatched, Match Col1 (DF2) with Col2 (DF1)
In remaining Unmatched, Match Col1 (DF2) with Col3 (DF1)
Any ideas? | 0 | 1 | 147 |
0 | 56,092,947 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-05-11T17:53:00.000 | 4 | 2 | 0 | How to get list of rows of pandas dataframe in python? | 56,092,914 | 1.2 | python-3.x,pandas | This should work. df.index.values
This returns the index as a NumPy array (numpy.ndarray); run type(df.index.values) to check. | How do I get a list of row labels in pandas?
I have a table with column labels and row labels. To return the column labels I use the dataframe "columns" attribute.
It is possible to return the list of column labels with the attribute columns, but I couldn't find similar attributes for rows. | 0 | 1 | 10,189
0 | 56,096,067 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-11T22:33:00.000 | 1 | 1 | 0 | How to evaluate/improve the accuracy from the prediction from a neural network with an unbalanced dataset? | 56,094,779 | 1.2 | python,machine-learning,scikit-learn,neural-network,classification | It all depends on your dataset. Neural network are not magical tools that can learn everything and also they require a lot of data compared to traditional machine learning models. In case of MLP, making a model extremely complex by adding a lot of layers is never a good idea as it makes the model more complex, slow and can lead to overfitting as well. Learning rate is an important factor as it is used to find the best solution for the model. A model makes mistakes and learns from it and the speed of learning is controlled by learning rate. If learning rate is too small, your model will take a long time to reach the best possible stage but if it is too high the model might just skip the best stage. The choice of activation function is again dependent on the use case and the data but for simpler datasets, activation function will not make a huge differnece.
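For reference, a small scikit-learn grid over such knobs; X_train / y_train and the grid values are placeholders, not recommendations:
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

param_grid = {
    "hidden_layer_sizes": [(64,), (128, 64)],
    "alpha": [1e-4, 1e-3, 1e-2],            # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
    "early_stopping": [True],
}
search = GridSearchCV(MLPClassifier(max_iter=500), param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)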
In traditional deep learning models, a neural network is build up of several layers which might not always be dense. All the layers in MLP as dense i.e. feed forward. To improve your model, you can try a combination of dense layers along with cnn, rnn, lstm, gru or other layers. Which layer to use depends completely on your dataset. If you are using a very simple dataset for a school project, then experiment with traditional machine learning methods like random forest as you might get better results.
If you want to stick to neural nets, read about other types of layers, dropout, regularization, pooling, etc. | I used gridsearchcv to determine which hyperparameters in the mlpclassifier can make the accuracy from my neural network higher. I figured out that the amount of layers and nodes makes a difference but I'm trying to figure out which other configurations can make a difference in accuracy (F1 score actualy). But from my experience it lookes like parameters like "activation", "learning_rate", "solver" don't really change anything.
I need to do a research on which other hyperparameters can make a difference in the accuracy from predictions via the neural network.
Does someone have some tips/ideas on which parameters different from the amount of layers / nodes that can make a difference in the accuracy from my neural network predictions? | 0 | 1 | 490 |
0 | 63,011,554 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-12T00:28:00.000 | 0 | 2 | 0 | MNIST training time in CPU | 56,095,288 | 0 | python,tensorflow,neural-network,mnist | am not really clear about the benchmark you're looking for, there is it performance from training perspective, or accuracy? for accuracy, there are some tools that can do the comparison between the predictions and actuals so you can measure the performance | I have created a simple feed forward Neural Network library in Java - and I need a benchmark to compare and troubleshoot my library.
Computer specs:
AMD Ryzen 7 2700X Eight-Core Processor
RAM 16.0 GB
WINDOWS 10 OS
JVM args: -Xms1024m -Xmx8192m
Note that I am not using a GPU.
Please list the following specs:
Computer specs?
GPU or CPU (CPU is preferred but GPU is good info)
Number of inputs 784 (this is fixed)
For each layer:
How many nodes?
What activation function?
Output layer:
How many nodes? (10 if classification or 1 as regression)
What activation function?
What loss function?
What gradient descent algorithm (i.e.: vanilla)
What batch size?
How many epochs? (not iterations)
And finally, what is the training time and accuracy?
Thank you so much
Edit
Just to give an idea of what I am dealing with. I created a network with
784 input nodes
784 in hidden layer 0
256 in hidden layer 1
128 in hidden layer 2
1 output nodes
mini-batch size 5
16 threads for backprop
And it has been training for ~8 hours and has only completed 694 iterations - that is not even 20% of one epoch.
How is this done in minutes as I've seen some claims? | 0 | 1 | 1,145 |
0 | 56,117,031 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-13T06:29:00.000 | 0 | 1 | 0 | gensim doc2vec Model doesn't learn some words | 56,106,821 | 0 | python,gensim,doc2vec | If a word you expected to be learned in the model isn't in the model, the most likely causes are:
it wasn't really there, in the version the model saw, perhaps because your tokenization/preprocessing is broken. Enable logging at INFO level, and examine your corpus as presented to the model, to ensure it's tokenized as intended
it wasn't part of the surviving vocabulary after the 1st vocabulary-survey of the corpus. The default min_count=5 discards words with fewer than 5 occurrences, as such words both fail to get good vectors for themselves, and effectively serve as 'noise' interfering with the improvement of other vectors.
You can set min_count=1 to retain all words, but it's more likely to hurt than help your overall vector quality. Word2Vec & Doc2Vec require large, varied corpuses – if you want a good vector for a word, find more diverse examples of its usage in an expanded corpus.
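A minimal sketch of controlling this when building the model; tokenized_texts is a placeholder for your own list of token lists, and the vocabulary check shown is the gensim 3.x form:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words, [i]) for i, words in enumerate(tokenized_texts)]
model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=20)   # a lower min_count keeps rarer words
print("WORD" in model.wv.vocab)   # gensim 3.x; gensim 4.x uses model.wv.key_to_index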
(Also note: one of the simple & fast Doc2Vec modes, that's also often a top-performer, especially on shorter texts, is plain PV-DBOW mode: dm=0. This mode will allocate/randomly-initialize word-vectors, but then ignores them for training, only training the doc-vectors. If you use that mode, you can still request word-vectors from the model at the end – but they'll just be random nonsense.) | I'm currently learning gensim doc2model in Python3.6 to see similarity between sentences.
I created a model but it returns KeyError: "word 'WORD' not in vocabulary" when I input a word which obviously exists in the training dataset, to find a similar word/sentence.
Does it automatically skip some words not very important to define sentences? or is that simply a bug or something?
Very appreciated if I could have any way out to cover all the appearing words in the dataset. thanks. | 0 | 1 | 358 |
0 | 56,430,302 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-13T11:47:00.000 | 0 | 1 | 0 | Identify what group KNN classified a sample in | 56,111,669 | 0 | python,machine-learning,knn,nearest-neighbor | So as I'm understanding the question right you have the true group classification for your data.
In that case you can predict your whole dataset with your trained model and identify the outliers. | I want to be able to find which of my samples were wrongly classified by KNN, or which weren't classified at all.
I have used scikit-learn to run KNN. I have a df that has ~280000 samples split into four groups, and I have 13 features by which to classify. My precision per group ranges from 0.30-0.90.
I expect the output to say which group each sample belongs to and which group it was classified into.
thanks! | 0 | 1 | 43 |
0 | 56,113,906 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-05-13T13:48:00.000 | 1 | 1 | 0 | Is there a way to solve yB = c without computing the right inverse? | 56,113,772 | 1.2 | python,numpy | You can transpose the equation and then use linalg.solve. | I would like to solve an equation of the form yB = c, where y is my unknown (possibly a matrix). However the B matrix is not well conditioned, and I would like to have a method similar to numpy.linalg.solve in order to maintain the numerical accuracy of the solution.
I have tried to simply use the inverse of B, with numpy.linalg.inv, to find the solution y = cB^-1, as well as using the pseudo-inverse (numpy.linalg.pinv), but they proved to be not accurate enough...
I have also looked into the QR decomposition, since numpy provides the method for it, in order to adapt it to the right inverse case, but here I struggle with the algebra.
Is there an accurate way to solve this equation ? Or is there an equivalent to numpy.linalg.solve for the right inverse ? | 0 | 1 | 104 |
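A short NumPy sketch of the transposition suggested in the answer above, where B and c are the matrices from the question:
import numpy as np
# yB = c  is equivalent to  B.T @ y.T = c.T, the "left" form numpy solves directly
y = np.linalg.solve(B.T, c.T).T
# for an ill-conditioned B, a least-squares solve is usually more stable:
y_ls, *_ = np.linalg.lstsq(B.T, c.T, rcond=None)
y_ls = y_ls.T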
0 | 56,161,443 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-14T09:28:00.000 | 0 | 1 | 0 | Algorithms for constrained clustering on attributed graphs with some cluster-level constraints on their attributes | 56,127,111 | 0 | python,constraints,cluster-analysis,graph-theory | First of all, the problem is most likely NP-hard, so the best you can do is some greedy optimization. It will definitely help to first break the graph into subsets that cannot be connected ever (remove links of nodes that are not similar enough, then compute the connected components). Then for each component (which hopefully are much smaller than 250k, otherwise tough luck!) run a classic optimizer that allows you to specify the cost function. It is probably a good idea to use an integer linear program, and consider the Lagrange dual version of the problem. | I have a graph with 240k nodes and 550k edges with five attributes per node coming out of an autoencoder from a sparse dataset. I'm looking to partition the graph into n clusters, such that intra-partition attribute similarity is maximized, the partitions are connected, and the sum of one of the attributes doesn't exceed a threshold for any given cluster.
I've tried poking around with an autoencoder but had issues making a loss function that would get the results I needed. I've also looked at hierarchical clustering with connectivity constraints but can't find a way to enforce my sum constraint optimally. Same issue with community detection algorithms on graphs like Louvain.
If anyone knows of any approaches to solving this I'd love to hear it, ideally something implemented in Python already but I can probably implement whatever algorithm I need should it not be. Thanks! | 0 | 1 | 350 |
0 | 56,189,877 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-14T10:00:00.000 | 0 | 2 | 0 | Extracting a particular type of data from unstructured text namely Institutes | 56,127,781 | 0 | python,nlp,information-extraction | The problem you face is solved by specialized text search and text analysis tools. Using phonetic analysis and indexes.
One of the popular text analysis tools is Elasticsearch.
You index your documents and search them, using REST api.
Google also provides such tools for text analysis and indexing.
Also, modern RDBMS tools like Oracle and PostgreSQL provide such features.
Good luck. | I need to extract the names of Institutes from the given data. Institues names will look similar ( Anna University, Mashsa Institute of Techology , Banglore School of Engineering, Model Engineering College). It will be a lot of similar data. I want to extract these from text. How can I create a model to extract these names from data(I need to extract from resumes-C.V)
I tried adding a new NER in spacy but even after training, the loss doesn't decrease and predictions are wrong. That is why I want to make a new model just for this. | 0 | 1 | 472
0 | 56,129,689 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-14T11:08:00.000 | 0 | 1 | 0 | Python comparing millions of rows and hundreds of columns between two tables from relational DB | 56,129,032 | 0 | python,python-3.x,pandas,pandasql | For handling this kind of data I would recommend using something like Hadoop rather than pandas/Python. This isn't much of an answer, but I can't comment yet. | Currently our system is in a live proving phase, so we need to check whether the set of tables populated in production matches the tables populated in the sandbox (test). At the moment we have written a query for each table comparison and then run it in a SQL client to check it. There will be a few more tables to check in the future. I thought of automating the process in Python by supplying the table names to a function which can then load the two tables into dataframes and do a comparison that highlights the differences.
Some of the tables have 2.7 million rows for a day and are wide, having 400 columns. When I tried to load the data (2.7 M rows * 400 columns) into a dataframe, I got an out-of-memory error, since I run my query in Jupyter where I have only a 20 GB limit. What are the options here? Are pandas dataframes the only way to compare this large dataset, or is there another library that can achieve the same? | 0 | 1 | 218 |
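If you do want to stay in pandas despite the memory limit, one option is to compare the tables key range by key range rather than loading 2.7 M x 400 cells at once. A rough sketch, assuming both databases are reachable through SQLAlchemy engines and share an integer primary key column named "id" (both assumptions, not from the question):
import pandas as pd

def compare_table(table, prod_engine, test_engine, key="id", batch=100_000):
    bounds = pd.read_sql(f"SELECT MIN({key}), MAX({key}) FROM {table}", prod_engine)
    lo, hi = int(bounds.iloc[0, 0]), int(bounds.iloc[0, 1])
    diffs = []
    for start in range(lo, hi + 1, batch):
        q = f"SELECT * FROM {table} WHERE {key} >= {start} AND {key} < {start + batch}"
        prod = pd.read_sql(q, prod_engine)
        test = pd.read_sql(q, test_engine)
        # Rows that are not identical in both environments fall outside "both".
        merged = prod.merge(test, how="outer", indicator=True)
        diffs.append(merged[merged["_merge"] != "both"])
    return pd.concat(diffs, ignore_index=True)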
0 | 56,263,416 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-14T21:14:00.000 | 0 | 2 | 0 | Simple example on using BuildingsPy with Dymola | 56,138,688 | 0 | python,dymola | Thank you for your explanation; it's really clear and it helped me a lot. I tested one of my models, but when I launch the code, Dymola opens and it neither loads the library nor finds my model. This is the message I got:
Error: Simulation failed in 'C:\Temp\tmp-simulator-wwuvls\BEE'
Exception: File C:\Temp\tmp-simulator-wwuvls\BEE\simulator.log does not exist.
You need to delete the directory manually. | I would like to use Python to call my Modelica models using Dymola and BuildingsPy. I read the BuildingsPy tutorial and understand in general how it goes, but I admit that the examples are not too intuitive for me. Could someone help me with a simple example, using, for instance, an existing model from the Modelica library?
Thank you | 0 | 1 | 404 |
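Regarding the original question, a minimal end-to-end sketch is shown below. It assumes a BuildingsPy 2.x-era API (the Simulator class under buildingspy.simulate.Simulator, taking the simulator name 'dymola') and uses an example model that ships with the Modelica Standard Library; check the exact class path and constructor arguments against your installed BuildingsPy version, since they have changed between releases.
from buildingspy.simulate.Simulator import Simulator

# Simulate an example model from the Modelica Standard Library.
s = Simulator('Modelica.Blocks.Examples.PID_Controller', 'dymola',
              outputDirectory='pid_case')
s.setStopTime(4)   # seconds
s.simulate()       # starts Dymola, translates and runs the model
# Results can afterwards be read with buildingspy.io.outputfile.Reader.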
0 | 56,141,111 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-05-15T02:42:00.000 | 0 | 2 | 0 | How to persist a python dictionary? | 56,141,069 | 0 | python,pandas | I would recommend persisting as json using pandas and reloading again as needed with pandas. Pandas will make the reading and writing really easy for you. This allows you to have the superset of columns in the dataframe with nulls in the spots that are missing data.
This also saves you from needing a separate key/value-pair storage scheme.
Plus, it looks like your data is already in a format close to json. Once you have it loaded back into the dataframe, querying as you need to will also be simple. | I have a python program that takes in a list of objects of different types, and for each type, the program will output a dictionary of key/value attributes where the key is some property of the given object's type, and value is its computed result.
To make it more concrete, my program takes in a list of 2000 objects, of 3 unique types: Cars, Planes, Ships. And for a car, it produces:
{"ID": , "Horsepower":120.5, "Fuel Efficiency": 19,
"Turning Radius":20, "Weight":500}
For ship, it's
{"ID": , "Displacement": 1000.5, "Fuel Efficiency": 8,
"Weight": 2000}
For plane, it's
{"ID": , "Engine Size": 200.5, "Fuel Efficiency": 8,
"Weight": 2000}
So you can see, for each type, the number and content of its dictionary output is different, while they may all share some common fields such as "ID" (unique across different objects), "Weight", etc.
And tomorrow there could be a new type that needs to be supported by the program with a similar output structure.
The question is: what is the best way to persist these outputs for easy querying/aggregation later on, such as "give me all planes with a weight >= 1000" or "give me the weights of all cars whose horsepower is between 200 and 300"?
Let's say we use a pandas dataframe as our storage format; I am faced with 2 choices:
Take a union of all keys of all product types and create a pandas df with those keys as columns, where each row represents one product's output and may have None in a given column depending on the product. This essentially creates a sparse matrix, and the column names can grow because new product types can have new keys as outputs.
Create a pandas df with 3 columns: ID, Key, Value.
Which one do you recommend or is there an obvious third option I'm missing? | 0 | 1 | 94 |
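To make the first answer concrete, here is a small sketch of choice 1: when pandas is given a list of dicts it builds the union of keys as columns automatically and fills the gaps with NaN, and the example queries from the question become one-liners. The records below are invented illustrations.
import pandas as pd

records = [
    {"ID": 1, "Type": "Car", "Horsepower": 250.0, "Fuel Efficiency": 19, "Weight": 1200},
    {"ID": 2, "Type": "Ship", "Displacement": 1000.5, "Fuel Efficiency": 8, "Weight": 2000},
    {"ID": 3, "Type": "Plane", "Engine Size": 200.5, "Fuel Efficiency": 8, "Weight": 2000},
]
df = pd.DataFrame(records)                  # missing keys become NaN automatically

df.to_json("products.json", orient="records")         # persist
df = pd.read_json("products.json", orient="records")  # reload later

heavy_planes = df[(df["Type"] == "Plane") & (df["Weight"] >= 1000)]
car_weights = df.loc[(df["Type"] == "Car") & df["Horsepower"].between(200, 300), "Weight"]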
0 | 56,141,206 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-05-15T02:42:00.000 | 0 | 2 | 0 | How to persist a python dictionary? | 56,141,069 | 0 | python,pandas | json.dumps and json.loads are your friends. To translate your structure to be persisted, dumps creates a unique string that can be written to any file-like object and loads can reload it from a string-like object. Hope that helps! | I have a python program that takes in a list of objects of different types, and for each type, the program will output a dictionary of key/value attributes where the key is some property of the given object's type, and value is its computed result.
To make it more concrete, my program takes in a list of 2000 objects, of 3 unique types: Cars, Planes, Ships. And for a car, it produces:
{"ID": , "Horsepower":120.5, "Fuel Efficiency": 19,
"Turning Radius":20, "Weight":500}
For ship, it's
{"ID": , "Displacement": 1000.5, "Fuel Efficiency": 8,
"Weight": 2000}
For plane, it's
{"ID": , "Engine Size": 200.5, "Fuel Efficiency": 8,
"Weight": 2000}
So you can see, for each type, the number and content of its dictionary output is different, while they may all share some common fields such as "ID" (unique across different objects), "Weight", etc.
And tomorrow there could be a new type that needs to be supported by the program with a similar output structure.
The question is: what is the best way to persist these outputs for easy querying/aggregation later on, such as "give me all planes with a weight >= 1000" or "give me the weights of all cars whose horsepower is between 200 and 300"?
Let's say we use a pandas dataframe as our storage format; I am faced with 2 choices:
Take a union of all keys of all product types and create a pandas df with those keys as columns, where each row represents one product's output and may have None in a given column depending on the product. This essentially creates a sparse matrix, and the column names can grow because new product types can have new keys as outputs.
Create a pandas df with 3 columns: ID, Key, Value.
Which one do you recommend or is there an obvious third option I'm missing? | 0 | 1 | 94 |
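And a minimal sketch of the json.dumps/json.loads route from the second answer; the file name is arbitrary.
import json

records = [{"ID": 1, "Horsepower": 120.5, "Weight": 500},
           {"ID": 2, "Displacement": 1000.5, "Weight": 2000}]

with open("products.json", "w") as f:
    f.write(json.dumps(records))      # serialize to a plain string

with open("products.json") as f:
    restored = json.loads(f.read())   # rebuild the original structure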
0 | 56,158,265 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-15T11:17:00.000 | 1 | 1 | 0 | classificator using sample of a population: scaling the population and then sampling / scaling the sample / scaling the X_TRAIN split of the sample? | 56,148,094 | 0.197375 | python,data-science,sampling | Wonderful question. I had similar questions in my mind when I started out a few years ago. Let me try to give my two cents on this.
I suggest creating a scaler on X_train, storing that scaler, and using it to transform X_test. By the central limit theorem, if you have done random sampling, you should have a mean and variance similar to the population's. The scaler works based on these two parameters in most cases. If they are representative of the population parameters, then as long as the test data comes from the same population, the scaler should work. If it is not working, you need more training samples or another sampling attempt to make X_train representative of the population.
By doing this, you can be sure the model is going to work on new samples too, as long as they are generated by the same process. After all, the model is not built just to be tested; it is meant to be in production doing some useful work.
My recommendation would be to go with 3): scale X_train and use that scaler to transform X_test. | I am building a logistic regression classifier.
I start from a set of 500,000 records and I want to use only a sample of them.
What do you recommend:
1) scaling the population and then sampling
2) scaling the sample
3) scaling just the X_TRAIN split of the sample?
and why?
my considerations are:
1) this may make sense if the sample is representative of the population (should I test that?)
2) this is not convincing because I would draw several samples in order to see the generalization level of the classifier, and having a slightly different scaler every time does not sound good. Plus, it will bias the X_train, X_test split
3) this will not bias the X_train, X_test split but raises the same doubt as point 2)
What would you recommend and why? | 0 | 1 | 48 |
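A compact sketch of option 3) as recommended in the answer above, assuming scikit-learn and an already drawn sample (X_sample and y_sample are placeholders for your sampled features and labels):
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_train, X_test, y_train, y_test = train_test_split(
    X_sample, y_sample, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)               # fit on the training split only
clf = LogisticRegression().fit(scaler.transform(X_train), y_train)
print(clf.score(scaler.transform(X_test), y_test))   # reuse the same scaler on X_test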
0 | 56,169,770 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-15T12:00:00.000 | 2 | 1 | 0 | How to read the label(annotation) file from Synthia Dataset? | 56,148,891 | 1.2 | python,deep-learning,dataset,semantic-segmentation | I found the right way to read it as below:
label = np.asarray(imageio.imread(label_path, format='PNG-FI'))[:,:,0] | I am new to the Synthia dataset. I would like to read the label file from this dataset. I expect a one-channel matrix with the size of my RGB image, but when I load the data I get a 3x760x1280 array and it is full of zeros.
I tried to read it as below:
label = np.asarray(imread(label_path))
Can anyone help me read these label files correctly? | 0 | 1 | 335 |
0 | 56,224,394 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-16T05:21:00.000 | 0 | 1 | 0 | Jupyter Notebook is showing No pyspark kernel upon startup | 56,161,339 | 0 | pyspark,jupyter-notebook,kernel,ipython | The issue was resolved only by reconfiguring the Jupyter notebook. | I am running pyspark scripts in a Jupyter notebook, but the kernel is not starting. Upon selecting pyspark from the dropdown, the kernel loads and remains busy for some time and then shows "no kernel".
Can someone help me?
Note: upon running "jupyter kernelspec list" I can see the pyspark kernel in the list. | 0 | 1 | 113 |
0 | 71,233,630 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2019-05-16T10:12:00.000 | 0 | 2 | 0 | Setting seed on train_test_split sklearn python | 56,166,130 | 0 | python-3.x,scikit-learn,jupyter-notebook,train-test-split | Simply specify the parameter random_state=some_number_you_want_to_use in train_test_split, for example random_state=42. | Is there any way to set the seed on train_test_split in Python sklearn? I have set the parameter random_state to an integer, but I still cannot reproduce the result.
Thanks in advance. | 0 | 1 | 12,708 |
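For reference, a minimal runnable illustration of how a fixed random_state makes the split reproducible (the toy data comes from make_classification and is not from the question):
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100, random_state=0)
X_tr1, X_te1, y_tr1, y_te1 = train_test_split(X, y, test_size=0.2, random_state=42)
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, test_size=0.2, random_state=42)
assert (X_tr1 == X_tr2).all() and (y_te1 == y_te2).all()   # identical splits every run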
0 | 56,202,424 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-16T10:29:00.000 | 0 | 1 | 0 | the clustering of mixed data using python | 56,166,439 | 0 | python,cluster-analysis | There is not one optimal number of clusters, but dozens. Every heuristic will suggest a different "optimal" number based on some poorly defined notion of what is "optimal" that likely has no relevance to the problem you are trying to solve in the first place.
Rather than being overly concerned with "optimality", explore and experiment more. Study what you are actually trying to achieve and how to put this into mathematical form, so you can compute what solves your problem rather than what solves someone else's... | I am trying to cluster a data set containing mixed data (nominal and ordinal) using k-prototypes clustering based on Huang, Z.: Clustering large data sets with mixed numeric and categorical values.
My question is: how do I find the optimal number of clusters? | 0 | 1 | 121 |
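If a heuristic is still wanted despite the caveats above, a common approach with k-prototypes is to compute the clustering cost over a range of k and look for an elbow. A rough sketch, assuming the third-party kmodes package, a mixed-type array X, and a list cat_idx of the categorical column indices (all assumptions):
from kmodes.kprototypes import KPrototypes

costs = {}
for k in range(2, 10):
    model = KPrototypes(n_clusters=k, init='Huang', random_state=0)
    model.fit_predict(X, categorical=cat_idx)
    costs[k] = model.cost_    # total within-cluster cost for this k
print(costs)                  # look for an elbow; treat it as a guide, not ground truth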