GUI and Desktop Applications (int64, 0..1) | A_Id (int64, 5.3k..72.5M) | Networking and APIs (int64, 0..1) | Python Basics and Environment (int64, 0..1) | Other (int64, 0..1) | Database and SQL (int64, 0..1) | Available Count (int64, 1..13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0..1.72k) | CreationDate (string, length 23) | Users Score (int64, -11..327) | AnswerCount (int64, 1..31) | System Administration and DevOps (int64, 0..1) | Title (string, length 15..149) | Q_Id (int64, 5.14k..60M) | Score (float64, -1..1.2) | Tags (string, length 6..90) | Answer (string, length 18..5.54k) | Question (string, length 49..9.42k) | Web Development (int64, 0..1) | Data Science and Machine Learning (int64, 1..1) | ViewCount (int64, 7..3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 45,957,039 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-08-24T16:02:00.000 | 0 | 2 | 0 | Use the previously trained model for further prediction in catboost | 45,866,292 | 1.2 | python-3.x,machine-learning,data-analysis,catboost | You can run the algorithm for the maximum number of iterations and then use CatBoost.predict() with the ntree_limit parameter, or CatBoost.staged_predict(), to try different numbers of iterations. | I want to find optimal parameters for doing classification using CatBoost.
I have training data and test data. I want to run the algorithm for, say, 500 iterations and then make predictions on the test data. Next, I want to repeat this for 600 iterations, then 700 iterations, and so on. I don't want to start from iteration 0 again. So, is there any way I can do this with the CatBoost algorithm?
Any help is highly appreciated! | 0 | 1 | 970 |
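To make the accepted approach concrete, here is a hedged sketch with synthetic data; the array shapes, iteration count and eval_period are illustrative assumptions, and it assumes the installed catboost version supports staged_predict's eval_period argument.

```python
import numpy as np
from catboost import CatBoostClassifier

X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, 100)
X_test = np.random.rand(20, 4)

# Train once, for the largest iteration count of interest.
model = CatBoostClassifier(iterations=700, verbose=False)
model.fit(X_train, y_train)

# staged_predict re-evaluates the already-trained ensemble at growing tree
# counts, so training never restarts from iteration 0.
for step, preds in enumerate(model.staged_predict(X_test, eval_period=100), start=1):
    print("predictions using", step * 100, "trees:", preds[:5])
```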
0 | 45,873,121 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-25T01:22:00.000 | 0 | 1 | 0 | Tensorflow install--python can import but ipython cannot | 45,873,077 | 0 | python,ubuntu,tensorflow,ipython,anaconda | Check your Anaconda environment: did you launch IPython from the same environment in which you installed TensorFlow? | I just installed tensorflow for Ubuntu 16.04 through Anaconda3. I can run the test tensorflow script with python and have no issues. But when I run it with ipython, it cannot find the modules. How can I link ipython to read the same libraries as the python ones? | 0 | 1 | 50 |
0 | 45,891,349 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-26T02:01:00.000 | 1 | 2 | 0 | Neural network to detect one class of object only | 45,891,271 | 0.099668 | python,neural-network | I'm not sure if I understand your question well.
Single-class training data may not really exist at all. If you want to detect only the sea cucumber, it is a two-class classification problem, right? An image either contains a sea cucumber or it does not; "yes" and "no" are the two classes.
Yes, people do implement neural networks on a Raspberry Pi, but to some extent it is merely possible rather than efficient. A good GPU will speed up training a lot.
A PC is able to train some small NNs. | I'm totally new to neural networks (NN) in Python, and I do not know if a NN can run on a Raspberry Pi 3. I think the problem is that a NN requires good CPU/GPU performance for training, data transfer and calculation.
So is it possible to train a NN with single-class training data, in order to save CPU/GPU resources?
For example, I want the system to detect only the sea cucumber in an image.
A good answer/explanation or link to any example will be very appreciated.
Thank you! | 0 | 1 | 1,350 |
0 | 45,911,480 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-26T21:33:00.000 | 0 | 1 | 0 | How to build multi parameters linear regression in python | 45,899,846 | 0 | python,regression | I am also a rookie for data analysis and modeling.
If I faced this sort of problems, I might consider some questions like:
Whether there is really a significant linear or generalized linear relationship between independent and dependent variables? Should I pre-process or transfer them before regression?
Whether it is necessary to involve interactions among predictor variables?
How the quality of the data set used to train the model? Whether it is good enough for the true underlying relationship between Factors and Responses?
Should I select a more suitable method to create the prediction model? For example, we usually choose Partial Least Squares Regression (PLS), other than Ordinary Least Squares Regression (OLS), to solve multicollinearity in my work area.
Hope these could be helpful for you. | I want to ask about a multi parameters linear regression model.
The question is as follows:
We have data for 100 companies, and for each company I have the values of parameters A, B, C, D for 3 seasons (call them A1, A2, A3, B1, B2, B3, etc.).
We assume that there is some relationship (which we do not know yet and need to find) between A and B, C, D, and now we need to predict A for season 4, i.e. A4...
My method is to estimate the relationship using ordinary least squares, obtaining a final formula of the form A4 = x1*B4 + x2*C4 + x3*D4.
I get B4, C4, D4 by simply doing linear regression on B, C, D.
But the problem is that the A4 I get this way is worse than just doing a linear regression on A directly...
Can someone tell me a better solution for the problem?
Thanks | 0 | 1 | 85 |
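As a hedged illustration of the PLS suggestion above, here is a sketch with made-up data standing in for the 100 companies; the coefficients and noise level are arbitrary assumptions, not values from the question.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.RandomState(0)
BCD = rng.rand(100, 3)                                 # columns: B, C, D for one season
A = BCD.dot([1.5, -0.7, 2.0]) + 0.1 * rng.randn(100)   # response A (synthetic)

# PLS copes with multicollinearity among B, C, D better than plain OLS.
pls = PLSRegression(n_components=2)
pls.fit(BCD, A)
A_pred = pls.predict(BCD)
print(A_pred[:3].ravel())
```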
0 | 45,913,568 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-28T06:52:00.000 | 0 | 1 | 0 | Encountered segmentation fault when calling feature_importance in LightGBM python API | 45,913,233 | 0 | python,lightgbm | This is a bug in LightGBM; 2.0.4 doesn't have this issue. It should be also fixed in LightGBM master. So either downgrade to 2.0.4, wait for a next release, or use LightGBM master.
The problem indeed depends on the training data: feature_importance segfaults only when there are "constant" trees in the trained ensemble, i.e., trees with a single leaf and no splits. | I am using LightGBM 2.0.6 Python API. My training data has around 80K samples and 400 features, and I am training a model with ~2000 iterations, and the model is for multi-class classification (#classes = 10). When the model is trained, and when I called model.feature_importance(), I encountered segmentation fault.
I tried to generate artificial data to test (with the same number of samples, classes, iterations and hyperparameters), and I can successfully obtain the list of feature importance. Therefore I suspect whether the problem occurs depends on the training data.
I would like to see if someone else has encountered this problem and if so how was it overcome. Thank you. | 0 | 1 | 639 |
0 | 45,928,049 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-28T17:50:00.000 | 0 | 2 | 0 | Joining/Combining two models for Transfer Leaning in KERAS | 45,924,742 | 0 | python,tensorflow,deep-learning,keras,pre-trained-model | The issue has been resolved.
I used the model.add() function and then added all the required layers of both Model 1 and Model 2.
The following code adds the first 10 layers of Model 2 just after Model 1:
for i in model2.layers[:10]:
    model.add(i)
I have two models:
model 1 = My Model
model 2 = Trained Model
I can combine these models by using model 2 as the input and then passing its output to model 1, which is the conventional way.
However, I am doing it the other way around: I want to use model 1 as the input and then pass its output to model 2 (i.e., the trained model). | 1 | 1 | 2,275 |
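A sketch of the layer-appending idea under stated assumptions: the layer sizes are hypothetical, and the only requirement is that Model 1's output shape matches the input shape of the first appended layer.

```python
from keras.models import Sequential
from keras.layers import Dense

# Stand-in for the pre-trained Model 2 (hypothetical layer sizes).
model2 = Sequential([Dense(16, activation='relu', input_shape=(16,)),
                     Dense(10, activation='softmax')])

# Model 1: the new front-end whose output feeds the trained model.
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(8,)))

# Append (and optionally freeze) the first layers of the trained model.
for layer in model2.layers[:10]:
    layer.trainable = False
    model.add(layer)
model.summary()
```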
0 | 46,190,984 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-29T06:11:00.000 | 1 | 1 | 0 | Add a criteria to the dataset in clustering | 45,931,835 | 0.197375 | python,vector,machine-learning,k-means,dbscan | Your main problem is how to measure similarity.
I'm surprised you got the algorithms to run at all, because usually they would expect all vectors to have exactly the same length for computing distances. Maybe you had them automatically filled up with 0 values - and that is likely why the long vectors end up being very far away from all others.
Don't use the algorithms as black boxes
You need to understand what they are doing or the result will likely be useless. In your case, they are using a bad distance, so of course the result can't be very good.
So first, you'll need to find a better way of computing the distance between two points of different lengths. How similar should [0,1,2,1,0] and [30,40,50,60,50,40,30] be? To me, this is a highly similar pattern (ramp up, ramp down). | I'm pretty new to ML and data science, so my question may be a little silly.
I have a dataset where each row is a vector [a1, a2, a3, ..., an]. Those vectors differ not only in their measurements but also in their dimension n and their sum A = a1 + a2 + a3 + ... + an.
Most of the vectors have 5-6 dimensions, with some exceptions at 15-20 dimensions. On average, their components often have values of 40-50.
I have tried Kmeans, DBSCAN and GMM to cluster them:
Kmeans overall gives the best result, however, for vectors with 2-3 dimensions and vectors with low A, it often misclassifies.
DBSCAN can only separate vectors with low dimension and low A from the dataset; the rest it treats as noise.
GMM separates the vectors with 5-10 dimensions and low A very well, but performs poorly on the rest.
Now I want to include the information of n and A into the process. Example:
-Vector 1 [0,1,2,1,0] and Vector 2 [0,2,4,5,3,2,1,0] differ in both n and A, so they can't be in the same cluster. Each cluster should only contain vectors with similar (close) values of A and n, before taking their components into account.
I'm using sklearn on Python, I'm glad to hear suggestion and advice on this problem. | 0 | 1 | 91 |
0 | 45,935,693 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-29T09:36:00.000 | 0 | 1 | 0 | Matplolib, put text in front of graphs | 45,935,606 | 0 | python,matplotlib | If you change the command window to white you should be able to see the text. If you are using the command %matplotlib inline, together with a black command window, the text will not always be visible. | I have created some subplots using matplotlib librairies (pyplot and gridspec). I am trying to put some text in front of the graphs, but sometimes they are located below, in the background so I can't see them.
I don't know if I should used plt.text or annonate or rather use methods of sublplots? | 0 | 1 | 207 |
0 | 46,005,947 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-29T17:07:00.000 | 0 | 1 | 0 | Google Datalab and Python Issue | 45,944,697 | 0 | python,google-cloud-datalab | I was loading the data inappropriately. I was using pandas read_csv on my local machine, and BytesIO in Datalab. The comma in the numerical values was throwing off the import of the data. I had to specify that the delimiter is a "," and the thousands separator is also a "," | I have a python script that runs perfectly in my IDE on my local machine, but when I run it on Google Datalab, it throws this error:
ValueError: could not convert string to float: '80,354'
The code is simple, and the graph prints in my Pycharm IDE, but not on GoogleDatalab.
plt.plot(new_df['Volume'])
plt.show()
The error is related to the last line in the data. I'm using the date as an index. Here's what the data looks like. Is there a slash missing somewhere? What am I doing wrong or missing?
' Micro Market Volume\nMonth/Year \n2014-01-01 DALLAS-FT WORTH 63,974\n2014-02-01 DALLAS-FT WORTH 68,482\n2014-03-01 DALLAS-FT WORTH 85,866\n2014-04-01 DALLAS-FT WORTH 79,735\n2014-05-01 DALLAS-FT WORTH 75,339\n2014-06-01 DALLAS-FT WORTH 71,739\n2014-07-01 DALLAS-FT WORTH 85,893\n2014-08-01 DALLAS-FT WORTH 83,694\n2014-09-01 DALLAS-FT WORTH 87,567\n2014-10-01 DALLAS-FT WORTH 87,389\n2014-11-01 DALLAS-FT WORTH 68,340\n2014-12-01 DALLAS-FT WORTH 74,805\n2015-01-01 DALLAS-FT WORTH 68,568\n2015-02-01 DALLAS-FT WORTH 61,924\n2015-03-01 DALLAS-FT WORTH 56,885\n2015-04-01 DALLAS-FT WORTH 68,101\n2015-05-01 DALLAS-FT WORTH 52,806\n2015-06-01 DALLAS-FT WORTH 79,918\n2015-07-01 DALLAS-FT WORTH 92,134\n2015-08-01 DALLAS-FT WORTH 88,047\n2015-09-01 DALLAS-FT WORTH 91,377\n2015-10-01 DALLAS-FT WORTH 91,307\n2015-11-01 DALLAS-FT WORTH 65,415\n2015-12-01 DALLAS-FT WORTH 81,456\n2016-01-01 DALLAS-FT WORTH 82,820\n2016-02-01 DALLAS-FT WORTH 91,688\n2016-03-01 DALLAS-FT WORTH 81,495\n2016-04-01 DALLAS-FT WORTH 87,872\n2016-05-01 DALLAS-FT WORTH 82,031\n2016-06-01 DALLAS-FT WORTH 100,783\n2016-07-01 DALLAS-FT WORTH 99,285\n2016-08-01 DALLAS-FT WORTH 99,179\n2016-09-01 DALLAS-FT WORTH 93,939\n2016-10-01 DALLAS-FT WORTH 99,663\n2016-11-01 DALLAS-FT WORTH 86,751\n2016-12-01 DALLAS-FT WORTH 84,551\n2017-01-01 DALLAS-FT WORTH 81,890\n2017-02-01 DALLAS-FT WORTH 90,212\n2017-03-01 DALLAS-FT WORTH 97,798\n2017-04-01 DALLAS-FT WORTH 89,338\n2017-05-01 DALLAS-FT WORTH 96,891\n2017-06-01 DALLAS-FT WORTH 86,613\n2017-07-01 DALLAS-FT WORTH 80,354' | 0 | 1 | 68 |
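A small sketch of the fix described in the answer above; the pd.read_csv arguments are the point, and the two sample rows are made up to mimic the Dallas data.

```python
import io
import pandas as pd

raw = ('Month/Year,Micro Market,Volume\n'
       '2014-01-01,DALLAS-FT WORTH,"63,974"\n'
       '2014-02-01,DALLAS-FT WORTH,"68,482"\n')

df = pd.read_csv(io.StringIO(raw), thousands=',',
                 index_col='Month/Year', parse_dates=True)
print(df['Volume'].dtype)   # int64: "63,974" was parsed as 63974
```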
0 | 45,950,446 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-30T01:03:00.000 | 0 | 2 | 0 | multiprocessing.map in python with very large read-only object? | 45,950,381 | 0 | python,multithreading,multiprocessing | Every sub-process will have its own resources, so yes, this is implied. More precisely, every sub-process will copy a part of the original dataframe, determined by your implementation.
But will it be faster than a shared one? I'm not sure. Unless your dataframe implements a read/write lock, reading a shared copy and reading separate copies amount to the same thing. But why would a dataframe need to lock read operations? It doesn't make sense. | I have a read-only, very large dataframe and I want to do some calculations, so I do a multiprocessing.map and set the dataframe as a global. However, does this imply that for each process, the program will copy the dataframe separately (so it would be faster than a shared one)? | 0 | 1 | 843 |
0 | 45,985,888 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-31T15:58:00.000 | 2 | 2 | 0 | Numpy resize or Numpy reshape | 45,985,692 | 0.197375 | python,arrays,numpy,resize,reshape | Neither.
Reshape only changes the shape of the data, but not the total size, so you can for example reshape an array of shape 1x9 into one which is 3x3, but not into 2x4.
Resize does a similar thing, but lets you change the total size, in which case it will fill the new space with elements of the array being resized.
You have two choices: write your own function that does the resizing in the manner you want, or use one of the Python image libraries (PIL, Pillow...) to apply common image resizing functions. | I've been scouring the stackexchange archives and can not seem to come across the right answer... should reshape be used, should resize be used, but both fail...
setup: 3 netCDF files of two resolutions... 1 500 meter, 2 1000 meter
need to resize or decrease resolution or reshape or whatever the right word is the higher resolution file :)
using either gdalinfo or "print (np.shape(array))" we know that the higher resolution file has a shape or size of (2907, 2331) and the lower resolution array has the size of (1453, 1166)
So i have tried both np.resize (array, (1453,1166)) and np.reshape (array, (1453,1166)) and receive errors such as:
ValueError: cannot reshape array of size 6776217 into shape (1453,1166)
Surely I'm using the wrong terms / lingo and I apologize for that... on the command line to do what I would need done it would be as simple as gdal_translate -outsize x y -of GTiff infile outfile
Please help! | 0 | 1 | 1,755 |
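A hedged sketch of resampling (rather than reshaping) to the lower-resolution grid. scipy.ndimage.zoom is used here as one option alongside the image libraries mentioned above, and the random array is a stand-in for the netCDF data.

```python
import numpy as np
from scipy.ndimage import zoom

arr = np.random.rand(2907, 2331)         # high-resolution array
target = (1453, 1166)                     # low-resolution shape

# reshape/resize cannot do this: 2907*2331 != 1453*1166.
factors = (target[0] / arr.shape[0], target[1] / arr.shape[1])
small = zoom(arr, factors)                # interpolated downsampling
print(small.shape)                        # (1453, 1166)
```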
0 | 45,992,812 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-01T01:52:00.000 | 0 | 1 | 1 | Does Pickle.dumps and loads used for sending network data require change in byte order? | 45,992,329 | 0 | python,sockets | So I think I figured out the answer.
By default (protocol 0), the pickle data format uses a printable ASCII representation.
Since ASCII is a single-byte representation, the endianness does not matter. (The newer binary pickle protocols are likewise byte-order independent, because the format is self-describing.)
If I know that the next 150 bytes that are received are a pickle object, do I still have to reverse byte-order just in case one machine uses big-endian and the other is little-endian? | 0 | 1 | 358 |
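To make the point concrete, here is a sketch of a common framing pattern: only the length prefix gets a fixed byte order (network order, the struct equivalent of htonl), while the pickle payload is sent unchanged. The helper names are my own, not from any library.

```python
import pickle
import struct

def _recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def send_obj(sock, obj):
    payload = pickle.dumps(obj)
    # '!' = network (big-endian) byte order for the 4-byte length prefix.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_obj(sock):
    (length,) = struct.unpack('!I', _recv_exact(sock, 4))
    return pickle.loads(_recv_exact(sock, length))  # payload needs no byte swap
```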
0 | 50,102,832 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2017-09-01T05:33:00.000 | 9 | 1 | 0 | Get videoStream from RTMP to opencv | 45,993,828 | 1 | python,opencv,video-streaming,rtmp | Just open the address instead of your cam :
myrtmp_addr = "rtmp://myip:1935/myapp/mystream"
cap = cv2.VideoCapture(myrtmp_addr)
frame,err = cap.read()
From there you can handle your frames like when you get it from your cam.
if it still doesn't work, check if you have a valid version of ffmpeg linked with your opencv. You can check it with
print(cv2.getBuildInformation())
Hope I could help | I'm developing a python program to receive a live streaming video from android device via RTMP. I created a server and also I'm capable of transmitting a videoStream from android device. But the problem is I can't access that stream in opencv. Can anyone tell me a way to access it via opencv. It is better if you can post any python code snippets. | 0 | 1 | 9,717 |
0 | 45,997,098 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-01T07:15:00.000 | 1 | 1 | 0 | Is there a way to limit an existing TensorFlow object detection model? | 45,995,076 | 0.197375 | python,machine-learning,tensorflow,object-detection | Either you fine-tune your model on pedestrians with just a few thousand steps (a small training dataset would be enough), or you look in your label definition file (the .pbtxt file), search for the person label, and do whatever you want with the others. | On the tensorflow/models repo on GitHub, they supply five pre-trained models for object detection.
These models are trained on the COCO dataset and can identify 90 different objects.
I need a model to just detect people, and nothing else. I can modify the code to only print labels on people, but it will still look for the other 89 objects, which takes more time than just looking for one object.
I can train my own model, but I would rather be able to use a pre-trained model, instead of spending a lot of time training my own model.
So is there a way, either by modifying the model file, or the TensorFlow or Object Detection API code, so it only looks for a single object? | 0 | 1 | 1,270 |
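A minimal post-filtering sketch, under the assumption that the detector returns boxes/scores/classes arrays and that "person" is class id 1 in the COCO label map; the detector still evaluates all 90 classes, but only person detections are kept.

```python
import numpy as np

boxes = np.array([[0.1, 0.1, 0.5, 0.4],   # hypothetical detections
                  [0.2, 0.6, 0.9, 0.9]])
scores = np.array([0.92, 0.80])
classes = np.array([1, 18])               # 1 = person, 18 = dog in COCO

keep = classes == 1                        # keep only "person"
print(boxes[keep], scores[keep])
```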
0 | 45,996,951 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-09-01T09:04:00.000 | 2 | 1 | 0 | numpy.random and Monte Carlo | 45,996,727 | 1.2 | python,numpy,random,montecarlo,mersenne-twister | I do not think anyone can tell you if this algorithm suffices without knowing how the random numbers are being used.
What I would do is replace the numpy random numbers with something else; there are certainly other modules available that provide different algorithms.
If your simulation results are not affected by the choice of random number generator, it is already a good sign. | I wrote a Monte Carlo (MC) code in Python with a Fortran extension (compiled with f2py). As it is a stochastic integration, the algorithm relies heavily on random numbers, namely I use ~ 10^8 - 10^9 random numbers for a typical run. So far, I didn't really mind the 'quality' of the random numbers - this is, however, something that I want to check out.
My question is: does the Mersenne-Twister used by numpy suffice or are there better random number generators out there that one should (could) use? (better in the sense of runtime as well as quality of the generated sequence)
Any suggestions/experiences are most definitely welcome, thanks! | 0 | 1 | 859 |
0 | 68,411,615 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-09-02T15:38:00.000 | 1 | 2 | 0 | How can I parallelize fitting a gradient boosting model with sklearn? | 46,015,426 | 0.099668 | python,machine-learning,scikit-learn | There are alternatives like XGBoost or LightGBM that are distributed (i.e., can run in parallel). These are well-documented and popular boosting algorithms. | Relatively new to model building with sklearn. I know cross validation can be parallelized via the n_jobs parameter, but if I'm not using CV, how can I utilize my available cores to speed up model fitting? | 0 | 1 | 3,044 |
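As a hedged sketch of the alternatives named above (assuming the xgboost package is installed; older versions call the parameter nthread instead of n_jobs):

```python
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# n_jobs=-1 lets each boosting round build its trees on all available cores.
model = XGBClassifier(n_estimators=200, n_jobs=-1)
model.fit(X, y)
print(model.score(X, y))
```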
0 | 58,381,427 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-09-02T18:14:00.000 | 1 | 4 | 0 | How can I convert '?' to NaN | 46,016,838 | 0.049958 | python,pandas,dataframe | You can also do it like this:
df[df == '?'] = np.nan | I have a dataset and there are missing values which are encoded as ?. My problem is how can I change the missing values, ?, to NaN? So I can drop any row with NaN. Can I just use .replace() ? | 0 | 1 | 8,652 |
0 | 52,961,763 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-09-02T18:14:00.000 | 2 | 4 | 0 | How can I convert '?' to NaN | 46,016,838 | 0.099668 | python,pandas,dataframe | You can also read the data initially by passing
df = pd.read_csv('filename',na_values = '?')
It will automatically replace '?' with NaN | I have a dataset and there are missing values which are encoded as ?. My problem is how can I change the missing values, ?, to NaN? So I can drop any row with NaN. Can I just use .replace() ? | 0 | 1 | 8,652 |
0 | 46,018,254 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-09-02T19:46:00.000 | 0 | 3 | 0 | Most efficient way to store financial data (Python) | 46,017,576 | 0 | python,database,csv,panel | I would also suggest using a DB: it is much more convenient to update tables in a DB than in a CSV file, and if you have a substantial number of observations, you will be able to access and manipulate your data much faster.
Another solution is to keep separate updates in separate .csv files.
You can still keep your major file (the one which is regularly updated), and at the same time create separate files for each update. | The data consists of Date, Open, High, Low, Close and Volume, and it's currently stored in a .csv file. It's updated every minute, so as time goes by the file keeps growing. The problem is that when I need 500 observations from the data, I have to import the whole .csv file, and that is an issue, especially when I need to access the data fast.
In Python I use the data mostly in a data frame or panel. | 0 | 1 | 841 |
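One concrete way to apply the DB suggestion using only the standard library plus pandas; the database, table and column names are illustrative assumptions:

```python
import pandas as pd
import sqlite3

conn = sqlite3.connect('ohlcv.db')

# Append each new minute bar instead of rewriting one ever-growing CSV.
new_bar = pd.DataFrame([{'Date': '2017-09-02 19:46', 'Open': 1.0, 'High': 1.2,
                         'Low': 0.9, 'Close': 1.1, 'Volume': 1000}])
new_bar.to_sql('bars', conn, if_exists='append', index=False)

# Fetch only the latest 500 observations, not the whole history.
last_500 = pd.read_sql('SELECT * FROM bars ORDER BY Date DESC LIMIT 500', conn)
print(len(last_500))
```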
0 | 70,355,115 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-09-02T19:46:00.000 | 0 | 3 | 0 | Most efficient way to store financial data (Python) | 46,017,576 | 0 | python,database,csv,panel | You may want to check out RethinkDB: it gives you speed, reliability and flexible querying, and it has a good Python driver. I also recommend using Docker, because then, regardless of which database you use, you can easily store your DB's data inside a folder and change that folder at any time (say, when you move from a 1 TB drive to a 4 TB drive). Using Docker in your project may be even more important than the choice of DB. | The data consists of Date, Open, High, Low, Close and Volume, and it's currently stored in a .csv file. It's updated every minute, so as time goes by the file keeps growing. The problem is that when I need 500 observations from the data, I have to import the whole .csv file, and that is an issue, especially when I need to access the data fast.
In Python I use the data mostly in a data frame or panel. | 0 | 1 | 841 |
0 | 46,022,740 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-03T10:52:00.000 | 0 | 2 | 0 | Change values of a 1D numpy array based on certain condition | 46,022,688 | 0 | python,numpy,vectorization | I found another answer:
A = np.where(A < 0, A + 5, A) | Very basic question:
Suppose I have a 1D numpy array (A) containing 5 elements:
A = np.array([ -4.0, 5.0, -3.5, 5.4, -5.9])
I need to add 5 to all the elements of A that are less than zero. What is the numpy way to do this without a for loop? | 0 | 1 | 4,001 |
0 | 46,032,724 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-04T07:58:00.000 | 0 | 2 | 0 | Not able to update the version of tensorflow | 46,032,658 | 0 | python,tensorflow | That should work. Check if you are using any environment but you are not updating the tensorflow version within the environment.
Also, please restart the notebook after saving it and run the cells and try. That should work.
Verify in the notebook : run - print(tf.__version__). Please mark the answer if it resolves. | I am facing the below error while running the code for LinearClassifier in tensorflow.
AttributeError: module 'tensorflow.python.estimator.estimator_lib' has no attribute 'LinearRegressor'
My current version of tensorflow is 1.2.1. I tried to update the package from the Anaconda environment; it does not show an available upgrade.
I tried to upgrade it from the command prompt by using the below command; it successfully updates the package, however the change is not reflected in the actual library when I use it.
pip install --upgrade tensorflow==1.3.0
FYI, I am using Jupyter Notebook and have created a separate environment for tensorflow.
Please let me know if I have missed anything. | 0 | 1 | 5,131 |
0 | 46,039,296 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-09-04T13:59:00.000 | 1 | 3 | 0 | how to prepare image dataset for training model? | 46,038,671 | 0.066568 | python,computer-vision,artificial-intelligence,keras,training-data | First detect the cars present in the image, and obtain their size and alignment. Then go for segmentation and labeling of the parking lot by fixing a suitable size and alignment. | I have a project that use Deep CNN to classify parking lot. My idea is to classify every space whether there is a car or not. and my question is, how do i prepare my image dataset to train my model ?
I have downloaded the PKLot dataset for training, which includes negative and positive images.
Should I convert all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have both landscape and portrait images.) Thanks :) | 0 | 1 | 822 |
0 | 46,153,830 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-04T13:59:00.000 | 1 | 3 | 0 | how to prepare image dataset for training model? | 46,038,671 | 0.066568 | python,computer-vision,artificial-intelligence,keras,training-data | As you want to use the PKLot dataset for training your model and then test on real data, the best approach is to make both datasets similar and homologous: they must be normalized, fixed-size, grayscaled and have parameterized shapes. Then you can use the Scale-Invariant Feature Transform (SIFT) as a basic method for image feature extraction. The exact definition often depends on the problem or the type of application. Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. You can use these types of image features, depending on your problem:
Corners / interest points
Edges
Blobs / regions of interest points
Ridges
... | I have a project that uses a deep CNN to classify a parking lot. My idea is to classify every space according to whether there is a car in it or not, and my question is: how do I prepare my image dataset to train my model?
I have downloaded the PKLot dataset for training, which includes negative and positive images.
Should I convert all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have both landscape and portrait images.) Thanks :) | 0 | 1 | 822 |
0 | 46,041,308 | 0 | 0 | 0 | 0 | 1 | true | 16 | 2017-09-04T16:32:00.000 | 14 | 2 | 0 | Pandas corr() vs corrwith() | 46,041,148 | 1.2 | python,pandas | The first one computes correlation with another dataframe:
between rows or columns of two DataFrame objects
The second one computes it with itself
Compute pairwise correlation of columns | What is the reason for Pandas to provide two different correlation functions?
DataFrame.corrwith(other, axis=0, drop=False): Compute pairwise correlation between rows or columns of two DataFrame objects
vs.
DataFrame.corr(method='pearson', min_periods=1): Compute pairwise correlation of columns, excluding NA/null values
(from pandas 0.20.3 documentation) | 0 | 1 | 22,525 |
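A small sketch of the difference on random data:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.rand(10, 3), columns=list('abc'))
df2 = pd.DataFrame(np.random.rand(10, 3), columns=list('abc'))

print(df1.corr())          # pairwise correlation of df1's own columns
print(df1.corrwith(df2))   # column-by-column correlation between two frames
```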
0 | 46,290,419 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-09-05T01:46:00.000 | 1 | 2 | 0 | No Module named "sklearn" | 46,045,916 | 0.099668 | python,python-3.x,scikit-learn | Try starting fresh and reinstall all of the necessary modules through miniconda3. Maybe the scikit-learn included on that install did not work.
You can then try
sudo apt-get install python3-scipy python3-sklearn | I am trying to import 'sklearn' using Python 3.4.3 on a Raspberry Pi 3 running Raspian.
I downloaded Miniconda3, which includes all the necessary modules to use scikit-learn.
However, when I attempt to import 'sklearn' in IDLE, I receive an error stating that there is no module named 'sklearn'. | 0 | 1 | 4,166 |
0 | 46,061,009 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-09-05T13:12:00.000 | 3 | 1 | 0 | How to generate a random covariance matrix in Python? | 46,055,886 | 1.2 | python,r,statistics,covariance | OK, you only need one matrix and randomness isn't important. Here's a way to construct a matrix according to your description. Start with an identity matrix 50 by 50. Assign 10 to the first (upper left) element. Assign a small number (I don't know what's appropriate for your problem, maybe 0.1? 0.01? It's up to you) to all the other elements. Now take that matrix and square it (i.e. compute transpose(X) . X where X is your matrix). Presto! You've squared the eigenvalues so now you have a covariance matrix.
If the small element is small enough, X is already positive definite. But squaring guarantees it (assuming there are no zero eigenvalues, which you can verify by computing the determinant -- if the determinant is nonzero then there are no zero eigenvalues).
I assume you can find Python functions for these operations. | So I would like to generate a 50 X 50 covariance matrix for a random variable X given the following conditions:
one variance is 10 times larger than the others
the parameters of X are only slightly correlated
Is there a way of doing this in Python/R etc? Or is there a covariance matrix that you can think of that might satisfy these requirements?
Thank you for your help! | 0 | 1 | 1,796 |
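A sketch of the construction described above, assuming "all the other elements" means the off-diagonal entries; 0.01 is an arbitrary choice for the small value.

```python
import numpy as np

n = 50
X = np.eye(n)
X[0, 0] = 10.0                       # one dominant variance
X[~np.eye(n, dtype=bool)] = 0.01     # weak coupling everywhere else

cov = X.T.dot(X)                     # squaring forces nonnegative eigenvalues
print(np.all(np.linalg.eigvalsh(cov) > 0))  # True: valid covariance matrix
```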
0 | 57,173,648 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-09-05T19:59:00.000 | 0 | 2 | 0 | Initialize a dask series | 46,062,523 | 0 | python,pandas,dask | dask.dataframe.from_pandas(pandas.Series(my_data), npartitions=n) is what you need. from_pandas accepts both pandas.DataFrame and pandas.Series. | I was trying to add a column to a dask dataframe, but it's not letting me add columns of type list, so I researched a bit and found that it would accept a dask Series. However, I'm unable to convert my list to a dask Series. Can you help me out? | 0 | 1 | 1,854 |
0 | 49,581,528 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-09-05T20:08:00.000 | 1 | 1 | 0 | tensorflow slim concurrent train and evaluation loops; single device | 46,062,649 | 1.2 | python-2.7,tensorflow,tensorflow-gpu,tf-slim | You can partition the GPU usage using the following code.
You can set the fraction of the GPU to be used for training and evaluation separately. The code below means that the process is given 30% of the memory.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3000)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# ...then run your training or evaluation loop with this session. | I am interested in using the tensorflow slim library (tf.contrib.slim) to evaluate model performance on an (entire) test set periodically during training. The documentation is pretty clear that slim.evaluation.evaluation_loop is the way to go, and it looks promising. The issue is that I don't have a second GPU to spare: this model's parameters take up an entire GPU's worth of memory, and I would like to do concurrent evaluation.
For example, if I had 2 GPUs, I could run a python script that terminated with "slim.learning.train()" on the first gpu, and another that terminated with "slim.evaluation.evaluation_loop()" on the second gpu.
Is there an approach that can manage 1 gpu's resources for both tasks? tf.train.Supervisor comes to mind, but I don't honestly know. | 0 | 1 | 215 |
0 | 46,070,137 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-09-06T07:34:00.000 | 1 | 4 | 0 | Shortest root using machine learning/AI | 46,069,364 | 0.049958 | algorithm,python-3.x,machine-learning,artificial-intelligence | This is not a machine learning problem but an optimization problem.
So you need a greedy shortest-path algorithm.
Indeed it could be solved this way, but the challenge is to represent your grid as a graph...
For example, decompose the grid into an n x n matrix. In your shortest-path algorithm, a node is an element of your matrix (so you exclude the elements of the matrix that contain the scattered points) and the weights of the arcs are the distances.
However, n must be small, since the search becomes expensive on large graphs...
Maybe other algorithms exist for this specific problem, but I'm not aware of any. | Assume that I have a set of points scattered on the XY plane, and I have two points, say a start and an end point, anywhere in the XY plane. I want to find the shortest path between the start and end point without touching the scattered points. The path has to maintain a certain offset (i.e., assume the path has some width).
How should one approach this kind of problem in programming? Are there any algorithms in machine learning? | 0 | 1 | 1,861 |
0 | 46,078,672 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-09-06T07:34:00.000 | 0 | 4 | 0 | Shortest root using machine learning/AI | 46,069,364 | 0 | algorithm,python-3.x,machine-learning,artificial-intelligence | Like others already stated: this is not a typical "Artificial Intelligence" problem. It is kind of a path planning problem.
There are different algorithms available. If your path doesn't need to satisfy any constraints (e.g., smoothness), you can use the A* algorithm with distance as the heuristic.
You have to represent your XY space as a graph where each node has a coordinate. Further, you need to take into account that no nodes should lie near the points you want to avoid.
If your path needs to satisfy constraints, this turns into a more complicated path-planning problem where you could apply optimization or RRTs. | Assume that I have a set of points scattered on the XY plane, and I have two points, say a start and an end point, anywhere in the XY plane. I want to find the shortest path between the start and end point without touching the scattered points. The path has to maintain a certain offset (i.e., assume the path has some width).
How should one approach this kind of problem in programming? Are there any algorithms in machine learning? | 0 | 1 | 1,861 |
0 | 52,784,797 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-09-07T01:12:00.000 | 0 | 1 | 0 | How to Plot with Label from scipy.cluster.hierarchy.linkage? | 46,086,401 | 1.2 | python,matplotlib,scipy,cluster-analysis,hierarchical-clustering | fcluster from scipy.cluster.hierarchy will do. | How can I plot a bunch of 2-D points X (say, using matplotlib) with color labels from scipy.cluster.hierarchy.linkage(X, method="single") (say, k = 3)? | 0 | 1 | 340 |
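A minimal sketch with random 2-D points standing in for X:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(30, 2)
Z = linkage(X, method='single')

labels = fcluster(Z, t=3, criterion='maxclust')  # cut the tree into k=3 clusters
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.show()
```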
0 | 59,344,648 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-07T07:53:00.000 | 0 | 1 | 0 | Check Gurobi redundant constraints | 46,090,908 | 0 | python-3.x,gurobi | Your statements are somewhat contradictory: If gurobi was to add redundant constraints in the beginning, the presolve step would remove them again and second them being redundant would mean that they wont influence the solution. Thus, there would be a modelling error on your side.
As sascha mentioned it is extremely unlikely that any constraint-generating routine/presolve step would be falsely implemented - such errors would show up immediately, that is before patching into the published versions.
And from a theoretical side: These methods require mathematical proofs that guarantee they do not cut-off feasible solutions. What is overseen sometimes, I did that only recently, is the default lower bound of 0 on all variables which stems from the LP-notation according to standard form.
Further investigation can be done if you were to post the .lp and .mps files of your initial problem with your question. That way people could investigate themselves. | Is there any way to check whether Gurobi is adding an extra redundant constraint to the model?
I tried model.write() at every iteration and it seems fine. But still, my output is not what I'm expecting.
I want to check whether there are any unwanted constraints added by Gurobi when solving. | 0 | 1 | 249 |
0 | 46,101,630 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-09-07T11:29:00.000 | 1 | 1 | 0 | Output from scikit learn ML algorithms | 46,095,249 | 1.2 | python,scikit-learn,knn | I would recommend looking up this book.
Introduction to Machine Learning with Python A Guide for Data Scientists by Andreas C. Müller, Sarah Guido
The book has code written to visualise various outputs of machine learning algorithms. | I want to know whether there is any way to see "under the hood" of other Python sklearn algorithms. For example, I have created a decision tree classifier using sklearn and have been able to export the exact structure of the tree, but I would like to also be able to do this with other algorithms, for example KNN classification. Is this possible? | 0 | 1 | 60 |
0 | 46,203,664 | 0 | 0 | 0 | 0 | 1 | false | 21 | 2017-09-07T13:42:00.000 | 2 | 2 | 0 | TensorFlow: How to handle void labeled data in image segmentation? | 46,097,968 | 0.197375 | python,tensorflow,neural-network,deep-learning,image-segmentation | If I understand correctly you have a portion of each image with label void in which you are not interested at all. Since there is not a easy way to obtain the real value behind this void spots, why don't you map these points to background label and try to get results for your model? I would try in a preprocessing state to clear the data labels from this void label and substitute them with background label.
Another possible strategy ,if you don's simply want to map void labels to background, is to run a mask (with a continuous motion from top to bottom from right to left) to check the neigthbooring pixels from a void pixel (let's say an area of 5x5 pixels) and assign to the void pixels the most common label besides void.
Also you can always keep a better subset of the data, filtering data where the percentage of void labels is over a threshold. You can keep only images with no void labels, or more likeley you can keep images that have only under a threshold (e.g. 5%) of non-labeled points. In this images you can implement the beforementioned strategies for replacing the void labels. | I was wondering how to handle not labeled parts of an image in image segmentation using TensorFlow. For example, my input is an image of height * width * channels. The labels are too of the size height * width, with one label for every pixel.
Some parts of the image are annotated, other parts are not. I would wish that those parts have no influence on the gradient computation whatsoever. Furthermore, I am not interested in the network predicting this “void” label.
Is there a label or a function for this? At the moment I am using tf.nn.sparse_softmax_cross_entropy_with_logits. | 0 | 1 | 3,236 |
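A hedged TensorFlow 1.x sketch of excluding void pixels from the loss via boolean masking; the void id (255) and the class count (21) are assumptions, not values from the question.

```python
import tensorflow as tf

VOID_LABEL = 255  # hypothetical id for unannotated pixels
logits = tf.placeholder(tf.float32, [None, None, None, 21])  # per-pixel scores
labels = tf.placeholder(tf.int32, [None, None, None])

valid = tf.not_equal(labels, VOID_LABEL)
valid_logits = tf.boolean_mask(logits, valid)   # shape [num_valid, 21]
valid_labels = tf.boolean_mask(labels, valid)   # shape [num_valid]

# Void pixels contribute neither to the loss nor to the gradient.
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=valid_labels, logits=valid_logits))
```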
0 | 47,979,807 | 0 | 1 | 0 | 0 | 2 | false | 9 | 2017-09-07T15:04:00.000 | 5 | 5 | 0 | pyinstaller fails with plotly | 46,099,695 | 0.197375 | python,plotly,pyinstaller | It seems like plotly isn't fully supported by PyInstaller.
I used a workaround that worked for me:
Don't use the one-file option.
Completely copy the plotly package (for me it was Lib\site-packages\plotly) from the Python installation directory into the /dist/{exe name}/ directory. | I am compiling my current program using pyinstaller and it seems to not be able to handle all required files in plotly. It runs fine on its own, and without plotly it can compile and run as well.
It seems to be failing to find a file "default-schema.json" that I cannot even locate anywhere on my drive.
Traceback (most recent call last):
  File "comdty_runtime.py", line 17, in <module>
  [frozen importlib._bootstrap frames omitted]
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "actual_vs_mai.py", line 12, in <module>
  File "site-packages\plotly\__init__.py", line 31, in <module>
  File "site-packages\plotly\graph_objs\__init__.py", line 14, in <module>
  File "site-packages\plotly\graph_objs\graph_objs.py", line 34, in <module>
  File "site-packages\plotly\graph_reference.py", line 578, in <module>
  File "site-packages\plotly\graph_reference.py", line 70, in get_graph_reference
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1215, in resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1457, in get_resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1530, in _get
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 474, in get_data
    with open(path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'H:\Python\Commodity_MAI_Trade_List\Code\dist\comdty_runtime\plotly\package_data\default-schema.json'
Failed to execute script comdty_runtime | 0 | 1 | 6,865 |
0 | 70,137,929 | 0 | 1 | 0 | 0 | 2 | false | 9 | 2017-09-07T15:04:00.000 | 0 | 5 | 0 | pyinstaller fails with plotly | 46,099,695 | 0 | python,plotly,pyinstaller | Before using pyinstaller, make sure that the imports that you are using in your program are specifically listed in the requirements file.
pip freeze->requirements.txt
quickly creates a requirements file in your program.
I am not sure if this is the solution? I had similar issues that seem to go away once I updated the requirements file then making an executable. | I am compiling my current program using pyinstaller and it seems to not be able to handle all required files in plotly. It runs fine on its own, and without plotly it can compile and run as well.
It seems be to failing to find a file "default-schema.json" that I cannot even locate anywhere on my drive.
Traceback (most recent call last):
  File "comdty_runtime.py", line 17, in <module>
  [frozen importlib._bootstrap frames omitted]
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "actual_vs_mai.py", line 12, in <module>
  File "site-packages\plotly\__init__.py", line 31, in <module>
  File "site-packages\plotly\graph_objs\__init__.py", line 14, in <module>
  File "site-packages\plotly\graph_objs\graph_objs.py", line 34, in <module>
  File "site-packages\plotly\graph_reference.py", line 578, in <module>
  File "site-packages\plotly\graph_reference.py", line 70, in get_graph_reference
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1215, in resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1457, in get_resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1530, in _get
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 474, in get_data
    with open(path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'H:\Python\Commodity_MAI_Trade_List\Code\dist\comdty_runtime\plotly\package_data\default-schema.json'
Failed to execute script comdty_runtime | 0 | 1 | 6,865 |
0 | 47,131,499 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2017-09-10T17:22:00.000 | 0 | 3 | 0 | Faster RCNN tensorflow object detection API : dealing with big images | 46,143,492 | 0 | python,tensorflow,size,object-detection,region | I want to know what your min_dimension is; it should be larger than 4000 in your case, otherwise the image will be scaled down.
object_detection -> core -> preprocessor.py
def _compute_new_dynamic_size(image, min_dimension, max_dimension):
  """Compute new dynamic shape for resize_to_range method."""
  image_shape = tf.shape(image)
  orig_height = tf.to_float(image_shape[0])
  orig_width = tf.to_float(image_shape[1])
  orig_min_dim = tf.minimum(orig_height, orig_width)
  # Calculates the larger of the possible sizes
  min_dimension = tf.constant(min_dimension, dtype=tf.float32)
  large_scale_factor = min_dimension / orig_min_dim
  # Scaling orig_(height|width) by large_scale_factor will make the smaller
  # dimension equal to min_dimension, save for floating point rounding errors.
  # For reasonably-sized images, taking the nearest integer will reliably
  # eliminate this error.
  large_height = tf.to_int32(tf.round(orig_height * large_scale_factor))
  large_width = tf.to_int32(tf.round(orig_width * large_scale_factor))
  large_size = tf.stack([large_height, large_width])
  if max_dimension:
    # Calculates the smaller of the possible sizes, use that if the larger
    # is too big.
    orig_max_dim = tf.maximum(orig_height, orig_width)
    max_dimension = tf.constant(max_dimension, dtype=tf.float32)
    small_scale_factor = max_dimension / orig_max_dim
    # Scaling orig_(height|width) by small_scale_factor will make the larger
    # dimension equal to max_dimension, save for floating point rounding
    # errors. For reasonably-sized images, taking the nearest integer will
    # reliably eliminate this error.
    small_height = tf.to_int32(tf.round(orig_height * small_scale_factor))
    small_width = tf.to_int32(tf.round(orig_width * small_scale_factor))
    small_size = tf.stack([small_height, small_width])
    new_size = tf.cond(
        tf.to_float(tf.reduce_max(large_size)) > max_dimension,
        lambda: small_size, lambda: large_size)
  else:
    new_size = large_size
  return new_size
I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fix to 12000)...
Thanks for your help ! | 0 | 1 | 2,633 |
0 | 46,399,506 | 0 | 0 | 0 | 0 | 3 | true | 2 | 2017-09-10T17:22:00.000 | 3 | 3 | 0 | Faster RCNN tensorflow object detection API : dealing with big images | 46,143,492 | 1.2 | python,tensorflow,size,object-detection,region | You need to keep the training images and the images you test on at roughly the same dimensions. If you are using random resizing as data augmentation, you can vary the test images by roughly that factor.
The best way to deal with this problem is to crop the large image into crops of the same dimensions as used in training and then use non-maximum suppression on the crops to merge the predictions.
That way, if your smallest object to detect is of size 50px, you can have training images of size ~500px. | I have images of a big size (6000x4000). I want to train Faster R-CNN to detect quite small objects (typically between 50 and 150 pixels). So for memory reasons I crop the images to 1000x1000. The training is ok. When I test the model on 1000x1000 images the results are really good. When I test the model on images of 6000x4000 the results are really bad...
I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fixed to 12000)...
Thanks for your help! | 0 | 1 | 2,633 |
0 | 46,143,853 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2017-09-10T17:22:00.000 | 1 | 3 | 0 | Faster RCNN tensorflow object detection API : dealing with big images | 46,143,492 | 0.066568 | python,tensorflow,size,object-detection,region | It looks to me like you are training on images with a different aspect ratio than what you are testing on (square vs. not square); this could lead to a significant degradation in quality.
Though to be honest, I'm a bit surprised that the results could be really bad; if you are just visually evaluating, maybe you also have to turn down the score thresholds for visualization. | I have images of a big size (6000x4000). I want to train Faster R-CNN to detect quite small objects (typically between 50 and 150 pixels). So for memory reasons I crop the images to 1000x1000. The training is ok. When I test the model on 1000x1000 images the results are really good. When I test the model on images of 6000x4000 the results are really bad...
I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fixed to 12000)...
Thanks for your help! | 0 | 1 | 2,633 |
0 | 46,158,131 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2017-09-11T01:58:00.000 | 1 | 2 | 0 | Performance issue with setting quadratic objective in CPLEX | 46,147,245 | 1.2 | python,cplex,quadratic | After asking my question in the IBM forum, I received and answer and it works. The fastest way to create the quadratic objective function is to use objective.set_quadratic() with only a list that contains the coefficient values (they can vary and don't need to be all equal to 1.0) | I am solving a large sparse quadratic problem. My objective function has only quadratic terms and the coefficients of all terms are the same and equal to 1 and it includes all of the variables.
I use the objective.set_quadratic_coefficients function in Python to create my objective function. For small problems (10,000 variables), the objective function is generated quickly, but it gets much slower for larger problems (100,000 variables) and does not return anything for my main problem, which has 1,000,000 variables.
Is there an alternative to objective.set_quadratic_coefficients to speed up creating the problem? | 0 | 1 | 119 |
0 | 46,151,523 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2017-09-11T08:40:00.000 | 3 | 2 | 0 | Initializing a matrix to infinity in python | 46,151,461 | 0.291313 | python,infinity | NumPy has an infinity object; you can access it as np.inf. | How do I initialize a matrix to a very large number, say infinity?
Similar to initializing all elements to zero:
sample = np.matrix(np.zeros((50, 50)))
I want to initialize it to infinity instead.
How can I do this in Python? | 0 | 1 | 14,126 |
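For example (a minimal sketch):

```python
import numpy as np

sample = np.full((50, 50), np.inf)   # every entry is +infinity
```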
0 | 46,322,688 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-11T09:44:00.000 | 1 | 2 | 0 | Monte Carlo Marcov Chain with pymc | 46,152,636 | 0.099668 | python,pymc,markov-chains,mcmc | Perhaps, assuming each user behaves the same way in a particular time interval, at each interval t we can get the matrix
[ Pr 0->0 , Pr 0->1 ;
  Pr 1->0 , Pr 1->1 ]
where Pr x->y = (the number of people in interval t+1 who are in state y AND who were in state x in interval t) divided by (the number of people who were in state x in interval t), i.e., the sample-based probability that someone who is in state x (0 or 1) in the given time interval will transition to state y (0 or 1) in the next interval. | I'm trying to build an MCMC model to simulate changing behavior over time. I have to simulate one day with a time interval of 10 minutes. I have several observations of one day from N users in 144 intervals. So I have users U_k = U_1, ..., U_N with k ranging from 1 to N, and for each user I have samples X_i = X_1, ..., X_t. Each user has two possible states, 1 and 0. I have understood that I have to build a transition probability matrix for each time step and then run the MCMC model. Is that right? But I have not understood how to build it in PyMC; can anybody provide a suggestion? | 0 | 1 | 245 |
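A sketch of estimating the per-interval transition matrix empirically, with a random 0/1 array standing in for the observed user states:

```python
import numpy as np

# states[u, t] = state (0/1) of user u in 10-minute interval t (toy data).
states = np.random.randint(0, 2, size=(50, 144))

def transition_matrix(states, t):
    P = np.zeros((2, 2))
    for x in (0, 1):
        from_x = states[:, t] == x
        if from_x.any():
            for y in (0, 1):
                P[x, y] = np.mean(states[from_x, t + 1] == y)
    return P  # P[x, y] = Pr(state y at t+1 | state x at t)

print(transition_matrix(states, 0))
```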
0 | 46,161,737 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-11T15:43:00.000 | 0 | 1 | 0 | Python: LookupError: unknown encoding: cp0 | 46,159,517 | 0 | python,encoding,nltk,spyder,miniconda | I was able to work around this issue, but I had to uninstall Miniconda and Python. I reinstalled Anaconda, launched Spyder from Anaconda Navigator, and it's all working fine now. But I still don't understand the cause of this issue. It would be great if someone could explain it. | I am trying to execute a simple nltk code: nltk.sent_tokenize(text) and am getting error LookupError: unknown encoding: cp0. I tried typing in chcp in my IPython Console and I am getting the same error.
I am working on Windows10 desktop, executing Python code over Miniconda > Spyder IDE. I have Python 2.7 installed. | 0 | 1 | 2,069 |
0 | 46,173,790 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2017-09-12T09:58:00.000 | 4 | 1 | 0 | How to determine object orientation in binary image? (Python, OpenCV) | 46,173,428 | 1.2 | python,opencv,image-processing,computer-vision | I do not know if this works in general, but given your sample images I'd find the midpoint of the short edges of the bounding box and get two rectangles for the two halves of the big bounding box.
I would then compute the sum of the mask pixels in the two separate half-boxes, assuming white is 1 and black is 0. Since the white area is bigger on the half of the rectangle where the "front" of the turbine is, pick the direction according to which of the two half-boxes has the higher sum. | I am supposed to determine the direction a windmill is facing from aerial images (with respect to True North - 0 to 359 degrees).
My question is, how can I determine the correct direction of the windmill and calculate its angle relative to the y-axis? Thanks! | 0 | 1 | 1,499 |
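A sketch of the half-box comparison on a toy binary mask; the box coordinates and the assumption that the long axis runs along the columns are illustrative.

```python
import numpy as np

def facing_half(mask, box):
    r0, c0, r1, c1 = box
    mid = (c0 + c1) // 2
    left = mask[r0:r1, c0:mid].sum()    # white-pixel count, left half
    right = mask[r0:r1, mid:c1].sum()   # white-pixel count, right half
    return 'left' if left > right else 'right'

mask = np.zeros((10, 10), dtype=int)
mask[4:6, 0:8] = 1      # thin body across the box
mask[2:8, 0:3] = 1      # extra bulk ("front") on the left half
print(facing_half(mask, (2, 0, 8, 8)))  # 'left'
```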
0 | 46,176,863 | 0 | 0 | 0 | 0 | 1 | false | 19 | 2017-09-12T12:40:00.000 | 2 | 2 | 0 | Why/How does Pandas use square brackets with .loc and .iloc? | 46,176,656 | 0.197375 | python,pandas,pandas-loc | Underneath the covers, both are using the __setitem__ and __getitem__ functions. | So .loc and .iloc are not your typical functions. They somehow use [ and ] to surround the arguments so that it is comparable to normal array indexing. However, I have never seen this in another library (that I can think of; maybe numpy has something like this that I'm blanking on), and I have no idea how it technically works/is defined in the Python code.
Are the brackets in this case just syntactic sugar for a function call? If so, how would one make an arbitrary function use brackets instead of parentheses? Otherwise, what is special about their use/definition in Pandas? | 0 | 1 | 3,012 |
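A toy sketch of the mechanism: .loc is an attribute holding an object whose class defines __getitem__/__setitem__, so square brackets work on it even though it is not a function. The class names here are made up.

```python
class _Loc:
    def __init__(self, data):
        self._data = data
    def __getitem__(self, key):          # invoked by indexer[key]
        return self._data[key]
    def __setitem__(self, key, value):   # invoked by indexer[key] = value
        self._data[key] = value

class Frame:
    def __init__(self, data):
        self.loc = _Loc(data)            # .loc is an attribute, not a method

f = Frame({'a': 1})
print(f.loc['a'])   # 1 -- brackets call _Loc.__getitem__
f.loc['a'] = 2      # brackets call _Loc.__setitem__
```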
0 | 46,185,735 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-12T14:34:00.000 | 0 | 1 | 0 | Missing method NeuralNet.train_split() in lasagne | 46,179,209 | 0 | python-3.x,theano,lasagne | SOLVED
In previous versions, the input parameter train_split was a number that was used by the same-named method. In nolearn 0.6.0 it is a callable object that can implement its own logic to split the data. So instead of providing a float to the train_split parameter, I have to provide a callable instance (the default one is TrainSplit) that will be executed in each training epoch (a sketch follows after this question). | I am learning to work with Python and Lasagne. I have the following installed on my PC:
python 3.4.3
theano 0.9.0
lasagne 0.2.dev1
and also six, scipy and numpy. I call net.fit(), and the stacktrace tries to call train_split(X, y, self), which, I guess, should split the samples into training set and validation set (both the inputs X as well as the outputs Y).
But there is no method like train_split(X, y, self); there is only a float field train_split, which I assume is the ratio between training and validation set sizes. Then I get the following error:
Traceback (most recent call last):
  File "...\workspaces\python\cnn\dl_tutorial\lasagne\Test.py", line 72, in <module>
    net = net1.fit(X[0:10,:,:,:], y[0:10])
  File "...\Python34\lib\site-packages\nolearn\lasagne\base.py", line 544, in fit
    self.train_loop(X, y, epochs=epochs)
  File "...\Python34\lib\site-packages\nolearn\lasagne\base.py", line 554, in train_loop
    X_train, X_valid, y_train, y_valid = self.train_split(X, y, self)
TypeError: 'float' object is not callable
What could be wrong or missing? Any suggestions? Thank you very much. | 0 | 1 | 47 |
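A sketch of the fix described in the answer, assuming nolearn 0.6.0 where TrainSplit is importable from nolearn.lasagne; the eval_size value is an illustrative assumption, and the layer definitions are elided since only the changed argument matters here:

```python
from nolearn.lasagne import NeuralNet, TrainSplit

net = NeuralNet(
    # ... layers and other parameters as before ...
    train_split=TrainSplit(eval_size=0.2),  # a callable instance, not a float
)
```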
0 | 55,276,764 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-09-12T15:27:00.000 | 1 | 2 | 0 | Python hybrid multiprocessing / MPI with shared memory in the same node | 46,180,292 | 0.099668 | python,multiprocessing,mpi4py | MPI-3 has a shared memory facility for precisely your sort of scenario. And you can use MPI through mpi4py....
Use MPI_Comm_split_type to split your communicator into groups that live on a node. Use MPI_Win_allocate_shared for a window on the node; specify a nonzero size only on one rank. Use MPI_Win_shared_query to get pointers to that window (a sketch follows after this question). | I have a Python application that needs to load the same large array (~4 GB) and do a perfectly parallel function on chunks of this array. The array starts off saved to disk.
I typically run this application on a cluster computer with something like, say, 10 nodes, each node of which has 8 compute cores and a total RAM of around 32 GB.
The easiest approach (which doesn't work) is to do n=80 mpi4py. The reason it doesn't work is that each MPI core will load the 4 GB map, and this will exhaust the 32 GB of RAM, resulting in a MemoryError.
An alternative is that rank=0 is the only process that loads the 4 GB array, and it farms out chunks of the array to the rest of the MPI cores, but this approach is slow because of network bandwidth issues.
The best approach would be if only 1 core in each node loads the 4 GB array and this array is made available as shared memory (through multiprocessing?) for the remaining 7 cores on each node.
How can I achieve this? How can I have MPI be aware of nodes and make it coordinate with multiprocessing? | 0 | 1 | 981
0 | 46,187,418 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-09-13T00:25:00.000 | 2 | 2 | 0 | Compiling model as executable for faster inference? | 46,187,056 | 1.2 | python,machine-learning,tensorflow,inference | How do you set up your server? If you are setting it up with a Python framework like Django, Flask or Tornado, you just need to preload your model, keep it as a global variable, and then use this global variable to predict (a sketch follows after this question).
If you are using some other server, you can also run the Python script you use to predict as a small local server of its own and pass requests and responses between the Python server and the web server.
Edit
I know I can use Tensorflow serving, but don't want to because of the costs associated with it. | 0 | 1 | 372 |
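A minimal sketch of the preloaded-model pattern, assuming Flask and a Keras model saved as model.h5 (both the framework and the file name are assumptions for illustration):

```python
import numpy as np
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)
model = load_model('model.h5')          # loaded once at startup, not per request

@app.route('/predict', methods=['POST'])
def predict():
    x = np.array(request.json['x'])     # interpreter, TF and the model are already warm
    return jsonify(prediction=model.predict(x).tolist())

if __name__ == '__main__':
    app.run()
```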
0 | 46,215,900 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-13T02:39:00.000 | 0 | 1 | 0 | How to generate vector arrows that conforms to a raster slope layer? | 46,187,988 | 0 | r,python-2.7,arcgis,r-raster | This is for display/mapping purposes only? Use a DEM or TIN and display your arrow lines in ArcScene.
EDIT: given your update about your data and the software not working:
Try this:
1) Make a raster surface covering the extent of your data with a cell size of 100m (or smaller or larger if that doesn't suit)
2) Convert that raster to a polygon layer e.g. 'area_grid100m'
3) Do a spatial join and assign all points a polygon cell id from one of the unique id fields in 'area_grid100m'
4) Use Summarize to get the mean lat/long of the start points and the mean lat/long of the end points for each polygon. Summarize on the polygon id field and select mean for both the lat and long fields
5) Add the summary table to ArcMap, right-click and select Display XY Data (set X Field as longitude and Y Field as latitude). Right-click the result and select Data > Export Data to make it permanent. You will now have two points per 'area_grid100m' cell.
6) Recreate your lines using this new file, which will give you one line per cell
If the resolution is not fine enough, make the 'area_grid' cells smaller.
I have created a Fishnet grid of points in ArcGIS and I would like to create a single arrow for each point of a set length that will follow the shape of the slope i.e. follow the path of least resistance, the line will follow progressively small numbers in a 3 x 3 grid.
I think I can generate the vector arrows using vector plot. Is it possible to achieve the lines conforming to the raster?
UPDATE: I have ~200,000 lines that I generated from a grid of points. I am going to turn these into a raster using R and set it to the same resolution as my slope raster.
Any ideas on how to layer the raster lines on the slope so I can get the lines to follow the lowest values of the slope? | 0 | 1 | 322 |
0 | 46,212,289 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-13T09:39:00.000 | 0 | 2 | 0 | How to find degree of fit in Kmeans++ clustering in python | 46,194,025 | 0 | python,machine-learning,cluster-analysis,k-means | There are "soft" variants of k-means that allow this.
In particular, fuzzy-c-means (don't ask me why they use c instead of k...)
But beware that the resulting soft assignment is far from a statistical probability. It's just a number that gives some relative weight based on the squared distance, without any strong statistical model (a sketch of one such weighting follows below). | How can I find the degree of fit in k-means++ clustering, such that it shows to what percentage each input is aligned to each cluster? For instance, input A is in cluster 1 for 0.4 and in cluster 2 for 0.6. | 0 | 1 | 121
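A minimal sketch of one ad hoc soft assignment built from scikit-learn's k-means distances; the inverse-distance weighting is an illustrative choice, not a statistical probability, as cautioned above:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(50, 3)                        # stand-in data
km = KMeans(n_clusters=2, random_state=0).fit(X)

d = km.transform(X)                              # distance of each point to each centre
w = 1.0 / (d + 1e-12)                            # closer centre -> larger weight
membership = w / w.sum(axis=1, keepdims=True)    # rows sum to 1, e.g. [0.4, 0.6]
```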
0 | 46,209,673 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-14T02:16:00.000 | 1 | 1 | 0 | Scatter plot: Decreasing spacing between scatter points/x-axis ticks | 46,209,466 | 0.197375 | python,matplotlib | Assign numeric values to the x-axis, then label the x-axis ticks with the non-numeric information. You can then play with the x-axis scaling and limits to move the plot around (a sketch follows after this question). | I am currently working on a 2x2 subplot figure. In each subplot, I have 3 groups on the x-axis. I want to decrease the spacing between each of these groups on the x-axis. Currently, the last value of the 221 subplot is very close to the first value of the 222 subplot.
I have changed the spacing between subplots but I would like each of the subplots to be more compact by decreasing the spacing between the X-axis values. The variable on the X-axis is non-numeric. | 0 | 1 | 452 |
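A minimal matplotlib sketch of the numeric-positions-plus-text-labels trick; the group names, positions and data are illustrative assumptions:

```python
import matplotlib.pyplot as plt

groups = ['A', 'B', 'C']
x = [0.0, 0.3, 0.6]               # shrink the step to compress group spacing
fig, axes = plt.subplots(2, 2)
for ax in axes.flat:
    ax.scatter(x, [1, 2, 3])
    ax.set_xticks(x)
    ax.set_xticklabels(groups)    # non-numeric labels over numeric positions
    ax.set_xlim(-0.5, 1.1)        # padding keeps neighbouring subplots apart
plt.tight_layout()
plt.show()
```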
0 | 51,272,932 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-09-14T03:57:00.000 | 0 | 1 | 0 | Python, extracting key aspects from consumer reviews | 46,210,282 | 0 | python,text,nltk | Yes, but you need to specify the aspects yourself by choosing the most essential specifications for the product (a sketch of steps 2-5 follows after this question). | I have a data set of consumer reviews. From these reviews I would like to extract the most frequently occurring aspects. The process I am applying includes:
- Step 1: Tokenizing reviews into sentences
- Step 2: Tokenizing sentences into words after basic NLP pre-processing. Pre-processing removes punctuation and English stop words.
- Step 3: Pos_tagging and extracting all words with pos tag of 'NN','NNP','NNS','NNPS'
- Step 4: Combining all the words across all reviews to find the most frequently occuring words
- Step 5: Using top 40 terms as my aspects
Is this a good approach or do you recommend doing something different? | 0 | 1 | 696 |
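A compact NLTK sketch of steps 2 through 5 above; the reviews list is a stand-in for the real data set:

```python
import nltk
from nltk import FreqDist

reviews = ["The battery life is great.", "Poor battery but a sharp screen."]

words = []
for review in reviews:
    for sent in nltk.sent_tokenize(review):                       # step 1
        for word, tag in nltk.pos_tag(nltk.word_tokenize(sent)):  # steps 2-3
            if tag in ('NN', 'NNP', 'NNS', 'NNPS'):
                words.append(word.lower())

aspects = [w for w, _ in FreqDist(words).most_common(40)]         # steps 4-5
```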
0 | 46,853,504 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2017-09-14T15:07:00.000 | 0 | 2 | 0 | Updating pandas to version 0.19 in Azure ML Studio | 46,222,606 | 0 | python,pandas,azure,anaconda,azure-machine-learning-studio | The Azure Machine Learning Workbench allows for much more flexibility with setting up environments using Docker. I moved to using that tool. | I would really like to get access to some of the updated functions in pandas 0.19, but Azure ML studio uses pandas 0.18 as part of the Anaconda 4.0 bundle. Is there a way to update the version that is used within the "Execute Python Script" components? | 0 | 1 | 1,862 |
0 | 53,010,988 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2017-09-14T20:59:00.000 | 0 | 5 | 0 | How to find pyspark dataframe memory usage? | 46,228,138 | 0 | python,apache-spark,dataframe,pyspark | You can persist the dataframe in memory and trigger an action such as df.count(). You will then be able to check the size under the Storage tab of the Spark web UI (a sketch follows after this question). Let me know if it works for you. | For a Python dataframe, the info() function provides memory usage.
Is there any equivalent in pyspark ?
Thanks | 0 | 1 | 18,349 |
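A minimal sketch, where df is an assumed existing PySpark DataFrame:

```python
from pyspark import StorageLevel

df.persist(StorageLevel.MEMORY_ONLY)   # cache the DataFrame in memory
df.count()                             # an action forces materialisation
# the cached size now appears under the "Storage" tab of the Spark web UI
```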
0 | 60,070,654 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2017-09-14T20:59:00.000 | 8 | 5 | 0 | How to find pyspark dataframe memory usage? | 46,228,138 | 1 | python,apache-spark,dataframe,pyspark | I have something in mind; it's just a rough estimate. As far as I know, Spark doesn't have a straightforward way to get a dataframe's memory usage, but a Pandas dataframe does. So what you can do is:
Select a 1% sample: sample = df.sample(fraction=0.01)
pdf = sample.toPandas()
Get the Pandas dataframe's memory usage with pdf.info()
Multiply that value by 100; this should give a rough estimate of your whole Spark dataframe's memory usage (a sketch follows after this question).
Correct me if I am wrong. | For a Python dataframe, the info() function provides memory usage.
Is there any equivalent in pyspark ?
Thanks | 0 | 1 | 18,349 |
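The recipe above as a short sketch (df is an assumed existing Spark DataFrame; memory_usage(deep=True) gives the byte count that pdf.info() prints):

```python
sample = df.sample(fraction=0.01)                 # 1% of the data
pdf = sample.toPandas()
sampled_bytes = pdf.memory_usage(deep=True).sum()
print('rough total: %.1f MB' % (sampled_bytes * 100 / 1e6))
```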
0 | 46,241,767 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2017-09-15T09:00:00.000 | 0 | 1 | 0 | Create high nr of random sequences with min Edit Distance time efficient | 46,235,675 | 0 | python,algorithm,performance,edit-distance | It seems, from Wikipedia, that edit distance is defined by three operations (insertion, deletion, substitution) performed on a starting string. Why not systematically generate all strings up to N edits from a starting string, then stop when you reach your limit?
There would be no need to check the actual edit distance, as the strings would be correct by construction. For randomness, you could generate them and then shuffle (a partial sketch follows after this question). | I need to create a program/script for the creation of a high number of random sequences (20-letter sequences over 4 different letters) with a minimum edit distance between all sequences. "High" here would be a minimum of 100k sequences, but if possible up to 1 million.
I started with a naive approach of just generating random 20 letter sequences, and for each sequence, calculate the edit distance between the sequence and all other sequences already created and stored. If the new sequence pass my threshold value, store it, otherwise discard.
As you understand, this scales very badly for higher number of sequences. Up to 10k is reasonably fine, but trying to get 100k this starts to get troublesome.
I really only need to create the sequences once and store the output, so I'm really not that fussy about speed, but making 1 million at this rate today is just no possible.
Been trying to think of alternatives to speed up the process, like building the sequences is "blocks" of minimal ED and then combining, but haven't come up with any solution.
Wondering, do anyone have any smart idea/method that could be implemented to create such high number of sequences with minimal ED more time efficient?
Cheers,
JB | 0 | 1 | 79 |
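A partial sketch of the systematic-generation idea: enumerating all strings at exactly k substitutions from a seed. Note this covers only one of the three edit operations, and the 4-letter alphabet is an assumption:

```python
from itertools import combinations, product

ALPHABET = 'ABCD'                 # any 4-letter alphabet

def substitution_neighbours(seq, k):
    """Yield every string differing from seq in exactly k positions."""
    for positions in combinations(range(len(seq)), k):
        choices = [[c for c in ALPHABET if c != seq[p]] for p in positions]
        for repl in product(*choices):
            s = list(seq)
            for p, c in zip(positions, repl):
                s[p] = c
            yield ''.join(s)
```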
0 | 48,470,963 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-15T16:34:00.000 | 4 | 1 | 0 | conditional logit for panel data in python | 46,244,095 | 0.664037 | python | I'm the creator of pylogit.
I don't have built-in utilities for estimating conditional logits with fixed effects. However, you can use pylogit to estimate this model. Simply:
Create dummy variables for each decision maker. Be sure to leave out one decision maker for identification.
For each created dummy variable, add the dummy variable's column name to the utility specification (a sketch of the first step follows after this question).
I have found the pylogit library. However, the documentation I could find, explained how to use the conditional logit model for multinomial models with varying choice attributes. This model does not seem to be the same use case as a simple binary panel model.
So my questions are:
Does pylogit allow to estimate conditional logits for panel data?
If so, is there documentation?
If not, are there other libraries that allow you to estimate this type of model?
Any help would be much appreciated. | 0 | 1 | 2,528 |
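A minimal pandas sketch of step 1 (the column names are illustrative assumptions):

```python
import pandas as pd

df = pd.DataFrame({'person_id': [1, 1, 2, 2, 3, 3], 'choice': [0, 1, 1, 0, 0, 1]})

dummies = pd.get_dummies(df['person_id'], prefix='person', drop_first=True)  # one left out
df = pd.concat([df, dummies], axis=1)

fe_columns = list(dummies.columns)   # add each of these names to the utility specification
```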
0 | 49,274,464 | 0 | 0 | 0 | 0 | 1 | false | 21 | 2017-09-16T19:21:00.000 | -1 | 2 | 0 | Scikit-learn: preprocessing.scale() vs preprocessing.StandardScaler() | 46,257,627 | -0.099668 | python,scikit-learn,scale | Both standardize to zero mean and unit variance; the practical difference is that preprocessing.scale is a one-shot function applied to whatever data you pass it, while StandardScaler is an estimator that learns the mean and standard deviation with fit and can then reapply exactly those training statistics to new data with transform (a sketch follows below). | I understand that scaling means centering the mean (mean=0) and making unit variance (variance=1).
But what is the difference between preprocessing.scale(x) and preprocessing.StandardScaler() in scikit-learn? | 0 | 1 | 9,544
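A minimal sketch of the difference with scikit-learn; the toy arrays are illustrative:

```python
import numpy as np
from sklearn import preprocessing

X_train = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])
X_test = np.array([[2.0, 4.0]])

Xs = preprocessing.scale(X_train)                     # one-off, uses X_train's own stats

scaler = preprocessing.StandardScaler().fit(X_train)  # remembers mean/std
Xt = scaler.transform(X_train)                        # same numbers as Xs
Xnew = scaler.transform(X_test)                       # reuses the *training* statistics
```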
0 | 46,259,970 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-09-16T19:47:00.000 | 1 | 1 | 0 | How to extract enumerated labels and corresponding numerical values from a .sav file? | 46,257,848 | 1.2 | python,r,spss | For R, you can perhaps use the haven package. Of course the results will depend on the files being imported, but the package does include functions for dealing with/viewing labels (presuming the labels actually exist). A Python alternative is sketched after this question. | How can I extract a mapping of numbers to labels from a .sav file without access to SPSS?
I am working with a non-profit who uses SPSS, but I don't have access to SPSS (and they are not technical). They've sent me some SPSS files, and I was able to extract these into csv files which have correct information with an R package called foreign.
However for some files the R package extracts textual labels and for other files the R package extracts numbers. The files are for parallel case studies of different individuals, and when I count the labels vs. the numbers they don't even match exactly (say 15 labels vs. 18 enums because the underlying records were made across many years and by different personnel, so I assume the labels probably don't match in any case). So I really need to see the number to label matching in the underlying enum. How can I do this without access to SPSS?
(p.s. I also tried using scipy.io to read the .sav file and got the error Exception: Invalid SIGNATURE: b'$F' when testing on multiple files before giving up so that seems like a non-starter) | 0 | 1 | 102 |
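Since the asker prefers Python, a sketch using the pyreadstat package; the attribute names are as I recall them from its documentation, so treat them as assumptions to verify:

```python
import pyreadstat

df, meta = pyreadstat.read_sav('file.sav')     # hypothetical file name
# dict of column -> {numeric value: text label}, i.e. the enum mapping
print(meta.variable_value_labels)
# e.g. {'status': {1.0: 'employed', 2.0: 'retired'}}
```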
0 | 46,406,790 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-18T14:28:00.000 | 1 | 3 | 0 | No module named PyQt4 in python 3.6 when I use matplotlib.pyplot | 46,281,845 | 0.066568 | python,matplotlib,pyqt4,ubuntu-16.04,python-3.6 | For Python 3.6 (since I had that on my computer), go to the command line and type this:
conda install -c anaconda pyqt=5.6.0
If you are unsure about the Python and PyQt version, then type:
conda info pyqt
This will output the relevant PyQt version, so you can check your PyQt version and install it with the command mentioned first. | When I import matplotlib.pyplot in any Python 3.6 program, I get the following error:
$ python kernel1.py
Traceback (most recent call last):
File "kernel1.py", line 13, in <module>
import matplotlib.pyplot as plt
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 115, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py", line 16, in <module>
from .backend_qt5 import QtCore
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 26, in <module>
import matplotlib.backends.qt_editor.figureoptions as figureoptions
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/figureoptions.py", line 20, in <module>
import matplotlib.backends.qt_editor.formlayout as formlayout
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/formlayout.py", line 56, in <module>
from matplotlib.backends.qt_compat import QtGui, QtWidgets, QtCore
File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_compat.py", line 137, in <module>
from PyQt4 import QtCore, QtGui
ModuleNotFoundError: No module named 'PyQt4'
However, if I use python 3.5, matplotlib.pyplot works perfectly.
I have tried using sudo apt-get install python-qt4. Still I get the same error.
I am using Ubuntu 16.04. | 0 | 1 | 4,436 |
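If installing PyQt is not an option, another common workaround (not from the answer above, just an alternative) is to select a non-Qt matplotlib backend before importing pyplot:

```python
import matplotlib
matplotlib.use('Agg')       # non-interactive backend, no Qt needed
# matplotlib.use('TkAgg')   # or a Tk-based backend for interactive windows
import matplotlib.pyplot as plt
```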
0 | 46,287,762 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-09-18T18:37:00.000 | 0 | 2 | 0 | Error installing Tensorflow in Windows 7 | 46,286,077 | 0 | python,tensorflow | Make sure your TensorFlow folder is somewhere the environment will look, such as [Python install directory]/Lib/site-packages (a quick check is sketched after this question). | I am trying to install TensorFlow on a Windows 7 laptop in order to use a Jupyter notebook to play around with the object detection notebook on GitHub. I am facing this error:
ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
      4 import sys
      5 import tarfile
----> 6 import tensorflow as tf
      7 import zipfile
      8
ImportError: No module named tensorflow
I am getting the above error when I start the Jupyter Notebook from inside the conda environment on Windows 7. I have installed Python 3.5.4, and TensorFlow within the conda environment as well.
I am also getting "... is not recognized as an internal or external command ..." when typing $ python, and sometimes also for pip3. I have included several file paths in Environment Variables. Can you please suggest what to do? I am using the conda env, as I suspect a problem related to having Windows Service Pack 1. | 0 | 1 | 820
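A quick check that the notebook kernel and the pip that installed TensorFlow are the same interpreter (run this inside the notebook):

```python
import sys
print(sys.executable)                                   # which Python the kernel runs
print([p for p in sys.path if 'site-packages' in p])    # where imports are searched
```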
0 | 46,299,601 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-19T11:27:00.000 | 0 | 1 | 0 | Sound digit classification | 46,299,246 | 0 | python-3.x,machine-learning,scikit-learn,sound-recognition | Yes, you can use a 1D convolutional neural network. The convolutional filters can exploit consecutive parts of the signal, so this can work well (a minimal sketch follows after this question).
You can also look into recurrent neural networks, which are more complex. | What I'm trying to do is conceptually similar to the famous MNIST classification example, except that each digit is a computer-generated sound wave.
I'll be adding some background noise to the data in order to improve real world accuracy.
My question is: considering the sequential data, what model is best suited for this? Am I right in assuming a convolutional net would work?
I favour a simpler model in exchange for a few percentage points of performance, and preferably it could be written with the scikit-learn library. | 0 | 1 | 58
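A minimal 1D CNN sketch in Keras (scikit-learn has no convolutional nets, so Keras is assumed here; the 8000-sample waveform length and ten digit classes are illustrative assumptions):

```python
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential([
    Conv1D(16, 9, activation='relu', input_shape=(8000, 1)),  # raw waveform input
    MaxPooling1D(4),
    Conv1D(32, 9, activation='relu'),
    MaxPooling1D(4),
    Flatten(),
    Dense(10, activation='softmax'),                          # ten digit classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```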
0 | 46,303,902 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-19T14:59:00.000 | 0 | 2 | 0 | Most efficient way to compare two near identical CSV's in Python? | 46,303,776 | 0.099668 | python,algorithm,csv,search | An efficient way would be to read each line from the first file (the one with fewer lines) and save the lines in an object like a set or dictionary, which gives O(1) lookups.
Then read lines from the second file and check whether each one exists in the set (a sketch follows after this question). | I have two CSVs, each with about 1M lines and n columns, with identical columns. I want the most efficient way to compare the two files to find where any difference may lie. I would prefer to parse this data with Python rather than use any Excel-related tools. | 0 | 1 | 147
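A minimal sketch of the set-based comparison (the file names are assumptions):

```python
with open('small.csv') as f:
    seen = set(f)                    # whole lines, O(1) membership tests

with open('big.csv') as f:
    differing = [line for line in f if line not in seen]

print(len(differing), 'lines of big.csv not found in small.csv')
```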
0 | 46,308,361 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-09-19T18:54:00.000 | 0 | 2 | 0 | Completely removing Tensorflow, pip and virtualenv | 46,307,874 | 0 | python,tensorflow,pip,anaconda,virtualenv | Yes, after reading a bit into the topic, I simply uninstalled TF using sudo pip uninstall tensorflow within my virtualenv and then deactivated the virtualenv. I don't know how to really uninstall the virtualenv as well, but I guess that is already enough and I can proceed with the installation of Anaconda?
I have also installed some additional packages like matplotlib, IPython etc., but I can keep them as well without problems?
Thanks | I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well but how do I go about it now? Do I have to completely remove the existing installations first and if yes, can someone explain in detail which and how I can do it? | 0 | 1 | 633 |
0 | 46,308,280 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-09-19T18:54:00.000 | 0 | 2 | 0 | Completely removing Tensorflow, pip and virtualenv | 46,307,874 | 0 | python,tensorflow,pip,anaconda,virtualenv | Just install Anaconda; it will take care of everything. Uninstalling the existing installations is up to you; they won't harm anything. | I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well but how do I go about it now? Do I have to completely remove the existing installations first and if yes, can someone explain in detail which and how I can do it? | 0 | 1 | 633
0 | 46,359,347 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-09-20T17:42:00.000 | 0 | 1 | 0 | Encoded categorical features in h2o in python | 46,328,579 | 1.2 | python,h2o,categorical-data | The best way to see inside a model is to export the POJO and look at the Java source code. You should see how it is processing enums.
But, if I understand the rest of your question correctly, it should be fine. As long as the training data contains all possible values of a category, it will work as you expect. If a categorical value not seen in training is presented in production, it will be treated as an NA.
I am implementing holdout stacking where my underlying training data differs for each model. I have a common feature that I want to make sure is encoded the same way across both sets. The feature contains names (str). It is guaranteed that all names that appear in one data set will be appear in the other. | 0 | 1 | 246 |
0 | 51,174,183 | 0 | 1 | 0 | 0 | 3 | false | 15 | 2017-09-20T18:42:00.000 | 1 | 13 | 0 | Use AWS Glue Python with NumPy and Pandas Python Packages | 46,329,561 | 0.015383 | python,pandas,amazon-web-services,aws-lambda,aws-glue | As of now, you can use Python extension modules and libraries with your AWS Glue ETL scripts as long as they are written in pure Python. C-backed libraries such as pandas are not supported at the present time, nor are extensions written in other languages. | What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas.
0 | 46,416,040 | 0 | 1 | 0 | 0 | 3 | false | 15 | 2017-09-20T18:42:00.000 | 2 | 13 | 0 | Use AWS Glue Python with NumPy and Pandas Python Packages | 46,329,561 | 0.03076 | python,pandas,amazon-web-services,aws-lambda,aws-glue | When you click Run job, there is a section Job parameters (optional) that is collapsed by default. When you click on it, you get the following options, which you can use to point to libraries saved in S3; this works for me:
Python library path
s3://bucket-name/folder-name/file-name
Dependent jars path
s3://bucket-name/folder-name/file-name
Referenced files path
s3://bucket-name/folder-name/file-name | What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas. | 0 | 1 | 29,917 |
0 | 46,414,546 | 0 | 1 | 0 | 0 | 3 | false | 15 | 2017-09-20T18:42:00.000 | 1 | 13 | 0 | Use AWS Glue Python with NumPy and Pandas Python Packages | 46,329,561 | 0.015383 | python,pandas,amazon-web-services,aws-lambda,aws-glue | If you go to edit a job (or when you create a new one) there is an optional section that is collapsed called "Script libraries and job parameters (optional)". In there, you can specify an S3 bucket for Python libraries (as well as other things). I haven't tried it out myself for that part yet, but I think that's what you are looking for. | What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas. | 0 | 1 | 29,917 |
0 | 46,645,511 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-09-21T08:34:00.000 | 0 | 2 | 1 | Installing OpenCV for all conda environments | 46,339,134 | 0 | python,opencv,anaconda,conda | You are pointing the Python package and library paths at an environment-specific location. To make OpenCV available beyond that environment, try using the top-level Anaconda bin and lib paths instead (one linking workaround is sketched after this question). I can't post this as a comment due to low reputation. | I have an Ubuntu 16.04 system with an Anaconda installation. I want to compile and install OpenCV 3.3 and also use the Python bindings. I used the following CMake command:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=ON -D WITH_FFMPEG=1 -D WITH_CUBLAS=ON -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules -D BUILD_EXAMPLES=ON -D BUILD_TIFF=ON -D PYTHON_EXECUTABLE=/home/guel/anaconda2/envs/py27/bin/python -D PYTHON2_LIBRARIES=/home/guel/anaconda2/envs/py27/lib/libpython2.7.so -D PYTHON2_PACKAGES_PATH=/home/guel/anaconda2/envs/py27/lib/python2.7/site-packages -D WITH_EIGEN=OFF -D BUILD_opencv_cudalegacy=OFF ..
The command does the job but then, of course, OpenCV is installed only for the specific conda environment that I created. However, I want to be able to use it from different environments as well, without having to go through the compilation for each and every environment. Is there a way to achieve that in a simple way? Since the OpenCV libraries are actually installed in /usr/local, I can imagine that there must be a simple way to link the libraries to each new conda environment, but I couldn't figure out exactly how. | 0 | 1 | 2,609
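A sketch of one way to link a single compiled cv2 into every conda env; all paths here are assumptions based on the question's layout, and it only works for envs whose Python version matches the build:

```python
import glob, os

src = '/usr/local/lib/python2.7/site-packages/cv2.so'   # produced by `make install`
envs = glob.glob('/home/guel/anaconda2/envs/*/lib/python2.7/site-packages')

for site_packages in envs:
    dst = os.path.join(site_packages, 'cv2.so')
    if not os.path.exists(dst):
        os.symlink(src, dst)                             # expose cv2 in this env
```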
0 | 46,347,870 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-21T15:00:00.000 | 0 | 3 | 0 | print_rows(num_rows=m, num_columns=n) in graphlab / turi not working | 46,347,289 | 0 | python-2.7,jupyter-notebook,rows,nearest-neighbor,graphlab | OK, well, it seems like I have to define the number of neighbours with:
tfidf_model.query(Test_AD, k=100).show()
so I can get a list of the first 100 in the canvas. | I am using Jupyter Notebook and GraphLab/Turi for a tfidf nearest-neighbors model, which works fine so far.
However, when I query the model like
tfidf_model.query(Test_AD)
I always just get the head - [5 rows x 4 columns]
I am supposed to use "print_rows(num_rows=m, num_columns=n)" to print more rows and columns like:
tfidf_model.query(Test_AD).print_rows(num_rows=50, num_columns=4)
however, when I use it, I don't get any rows anymore, only the summary field:
Starting pairwise querying.
+--------------+---------+-------------+--------------+
| Query points | # Pairs | % Complete. | Elapsed Time |
+--------------+---------+-------------+--------------+
| 0 | 1 | 0.00519481 | 13.033ms |
| Done | | 100 | 106.281ms |
+--------------+---------+-------------+--------------+
That's it. No error message, nothing. Any ideas how to get all/more rows?
I tried converting to pandas, the .show() command, etc.; it didn't help. | 0 | 1 | 1,900
0 | 46,450,459 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-09-22T03:04:00.000 | 0 | 1 | 0 | Python: Date comparing works in Spyder but not in Console | 46,356,186 | 1.2 | python,pandas,datetime,spyder | The problem was related to having several Python installations on my PC. After removing all of them and installing a single instance, it worked well (a quick check is sketched after this question). Thanks for the tip, Carlos Cordoba! | I have written a little CSV parser based on pandas.
It works like a charm in Spyder 3. Yesterday I tried to put it into production and run it with a .bat file, like:
python my_parser.py
In the console it doesn't work at all. Pandas behaves differently: the read_csv method lost the "quotechar" keyword argument, for example. Especially date comparisons break all the time. I read the dates with pandas as per
pd.read_csv(parse_dates=[col3, col5, col8])
and then try a date calculation by subtracting pd.to_datetime('now').
I tested everything, and as said, in Spyder no failure is thrown; it works and produces results as it should. As soon as I start it in the console, it throws type errors. Most often, one of the two dates is a mere string and the other stays a datetime, so the minus operation fails.
I could now rewrite the code and find a procedure that works in both Spyder and the console. However, I prefer to ask you guys here: what could be a possible reason that Spyder and the console Python behave completely differently from each other? It's really annoying to debug code that does not throw any failures, so I really would like to understand the cause. | 0 | 1 | 164
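A quick diagnostic to run in both Spyder and the console; if the interpreter paths or pandas versions differ, that is the cause:

```python
import sys
import pandas as pd

print(sys.executable)     # which Python interpreter is actually running
print(pd.__version__)     # which pandas that interpreter imports
```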
0 | 46,358,100 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-09-22T04:29:00.000 | 0 | 2 | 0 | Spark: Dangers of using Python | 46,356,893 | 0 | python,scala,apache-spark,pyspark,user-defined-functions | In your driver application, you don't necessarily have to collect a ton of records. Maybe you're just doing a reduce down to some statistics.
This is just typical behavior: Drivers usually deal with statistical results. Your mileage may vary.
On the other hand, Spark applications typically use the executors to read in as much data as their memory allows and process it. So memory management is almost always a concern.
I think this is the distinction the book is getting at. | In the book "Spark: The definitive guide" (currently early release, text might change), the authors advise against the use of Pyspark for user-defined functions in Spark:
"Starting up this Python process is expensive but the real cost is in serializing the data to Python. This is costly for two reasons, it is an expensive computation but also once the data enters Python, Spark cannot manage the memory of the worker. This means that you could potentially cause a worker to fail if it becomes resource constrained (because both the JVM and python are competing for memory on the same machine)."
I understand that the competition for worker node resources between Python and the JVM can be a serious problem. But doesn't that also apply to the driver? In this case, it would be an argument against using Pyspark at all. Could anyone please explain what makes the situation different on the driver? | 0 | 1 | 134 |
0 | 72,120,529 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-09-23T07:49:00.000 | 0 | 3 | 0 | solving Ax =b for a non-square matrix A using python | 46,377,331 | 0 | python,numpy,matrix | You can use:
np.linalg.lstsq(x, y)
np.linalg.pinv(X) @ y
LinearRegression().fit(X, y)
Where 1 and 2 are from numpy and 3 is from sklearn.linear_model.
As a side note, you will need to concatenate a column of ones (use np.ones_like) in both 1 and 2 to represent the bias b from the equation y = ax + b (a sketch of option 1 follows after this question).
I thought of using the tools provided with numpy, however they only work with square matrices. I had the approach of filling in the matrix with some linearly independent vectors to "square" it and then solve, but I could not figure out how to choose those vectors so that they will be linearly independent of the basis vectors, plus I think it's not the only approach and I'm missing something that can make this easier.
Is there a simpler approach than the one I mentioned? If not, how do I choose those vectors that would complete A into a square matrix? | 0 | 1 | 23,050
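A minimal sketch of option 1, with toy shapes standing in for the real n x d basis matrix:

```python
import numpy as np

A = np.linalg.qr(np.random.rand(5, 3))[0]   # 5x3 matrix with orthonormal columns
b = A @ np.array([1.0, -2.0, 0.5])          # b guaranteed to lie in the subspace

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# with orthonormal columns this reduces to x = A.T @ b
```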
0 | 46,385,537 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-23T23:41:00.000 | 1 | 2 | 0 | how to init a array with each element holding the value different from its neighbours | 46,385,250 | 0.099668 | python,arrays,numpy | You can write your own matrix initializer.
Go through array[i][j]; for each i, j pick a random number between 0 and 7.
If the number equals either the left element array[i][j-1] or the upper one array[i-1][j], regenerate it.
With 8 possible values and at most 2 forbidden neighbours, a bad draw happens with probability at most 2/8 = 1/4, twice in a row with at most 1/16, three times in a row with at most 1/64, etc.; the probability drops down very quickly.
The average-case complexity for n elements in a matrix would be O(n) (a sketch follows after this question). | I have a matrix or a multiple array written in Python; each element in the array is an integer ranging from 0 to 7. How would I randomly initialize this matrix or multiple array so that each element holds a value different from the values of its 4 neighbours (left, right, top, bottom)? Can it be implemented in numpy? | 0 | 1 | 62
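A sketch of the initializer; instead of re-drawing on a collision, it samples directly from the values that remain allowed, which is equivalent and always terminates:

```python
import random

def constrained_matrix(rows, cols, k=8):
    a = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            forbidden = set()
            if j > 0:
                forbidden.add(a[i][j - 1])   # left neighbour
            if i > 0:
                forbidden.add(a[i - 1][j])   # upper neighbour
            a[i][j] = random.choice([v for v in range(k) if v not in forbidden])
    return a
```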
0 | 46,430,717 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-09-24T01:24:00.000 | 0 | 1 | 0 | PYTHON: I can't get scipy/sklearn to work. No scipy module | 46,385,689 | 0 | python,python-2.7,numpy,scikit-learn,anaconda | It looks to me like you may have two versions of Python installed. In your original stack trace, you can see that the version of Python that is complaining about scipy is coming from "C:/Python27/". However, your install of Anaconda looks like it's coming from "C:/Users/james/Anaconda2".
I would recommend putting Anaconda's python.exe first in your PATH. | Windows 10
Python 2.7
Anaconda
pip
I am having big problems installing SciKit.
I have tried every installation option I can find.
I tried installing with pip and Anaconda. It says it is successfully installed, but I can't import it in my script; I get this error:
Traceback (most recent call last):
  File "C:/Python27/trash.py", line 1, in <module>
    from sklearn import datasets
  File "C:\Python27\lib\site-packages\sklearn\__init__.py", line 134, in <module>
    from .base import clone
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 10, in <module>
    from scipy import sparse
ImportError: No module named scipy
I have installed numpy, pandas, ipython, sympy, scipy etc .... everything that any post or forum says is needed. My pc says I already have scipy installed. I was told the easiest option was to do it with Anaconda. Anaconda also says it is all already installed.
///////////////////////////////////////////////////////////////////////
If I try install it with pip install scipy or pip -U install scipy I get this error ---
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;file='c:\users\james\appdata\local\temp\pip-build-g1vohj\scipy\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record c:\users\james\appdata\local\temp\pip-xjacl_-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\james\appdata\local\temp\pip-build-g1vohj\scipy\
///////////////////////////////////////////
Using conda install scipy with Anaconda I get:
(C:\Users\james\Anaconda2) C:\Users\james>conda install scipy
Fetching package metadata ...........
Solving package specifications: .
# All requested packages already installed.
# packages in environment at C:\Users\james\Anaconda2:
# scipy 0.19.1 np113py27_0
I get the same response when installing all the stuff that is required like numpy.
//////////////////////////////////////////////////////////
I am trying to get started on machine learning but this is just a nightmare.
please help me... | 0 | 1 | 507 |
0 | 46,392,727 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-09-24T17:04:00.000 | 0 | 5 | 0 | Generating Random Numbers Under some constraints | 46,392,625 | 0 | python,random | If you need random integer values between 0 and c, use random.randint(0, c). For random floating-point values between 0 and c, use random.uniform(0, c). | I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and sum of all ai*yi = 0. Can anyone help me out on how to code this in python? Even pseudocode/logic works, I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anything for the second constraint.
EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting. | 0 | 1 | 1,846 |
0 | 46,440,005 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-09-24T17:04:00.000 | 0 | 5 | 0 | Generating Random Numbers Under some constraints | 46,392,625 | 0 | python,random | I like splitting this problem up. Note that there must be some positive and some negative values of y (otherwise sum(ai*yi) can't equal zero).
Generate random positive coefficients ai for the negative values of y, and construct the sum of ai*yi over only the negative values of y (let's say this sum is -R).
Assuming there are "m" remaining positive values, choose random numbers for the first m-1 of the ai coefficients for positive yi values, according to ai = uniform(R/(m*max(y)).
Use your constraint to determine am = (R-sum(aiyi | yi> 0))/ym. Notice that, by construction, all ai are positive and the sum of aiyi = 0.
Also note that multiplying all ai by the same amount k will also satisfy the constraint. Therefore, find the largest ai (let's call it amax), and if amax is greater than c, multiply all values of ai by c/(amax + epsilon), where epsilon is any number greater than zero.
Did I miss anything? (A sketch of this construction follows after this question.) | I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and the sum of all ai*yi = 0. Can anyone help me out with how to code this in Python? Even pseudocode/logic works; I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anywhere with the second constraint.
EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting. | 0 | 1 | 1,846 |
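A NumPy sketch of the construction above, assuming y contains no zeros and has both signs (entries with yi = 0 could be given any value in [0, c] separately):

```python
import numpy as np

def constrained_coeffs(y, c, eps=1e-9):
    y = np.asarray(y, dtype=float)
    a = np.zeros_like(y)
    neg, pos = y < 0, y > 0

    a[neg] = np.random.uniform(0.0, 1.0, neg.sum())
    R = -np.sum(a[neg] * y[neg])                 # the positive terms must supply R
    ip = np.flatnonzero(pos)
    m = len(ip)
    a[ip[:-1]] = np.random.uniform(0.0, R / (m * y[pos].max()), m - 1)
    a[ip[-1]] = (R - np.sum(a[ip[:-1]] * y[ip[:-1]])) / y[ip[-1]]

    if a.max() > c:                              # rescaling keeps the sum at zero
        a *= c / (a.max() + eps)
    return a

y = [1, -1, 1, -1, 1]
a = constrained_coeffs(y, c=2.0)
print(a, np.dot(a, y))                           # the dot product is ~0
```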
0 | 68,513,454 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-09-24T20:42:00.000 | 0 | 2 | 0 | Does tf.histogram_fixed_width() support back propagation? | 46,394,659 | 0 | python,tensorflow | I had a similar problem. There are two approaches you can try: 1. after the output layer, add an extra (differentiable) layer that produces the histogram; 2. use something like tf.RegisterGradient or tf.custom_gradient to define your own gradients for the operations. | I want to use the histogram of the output from a CNN to compute the loss. I am wondering whether tf.histogram_fixed_width() supports the gradient flowing back to its preceding layer. Only if it does can I add a loss layer after calculating the histogram. | 0 | 1 | 670
0 | 46,404,352 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-09-25T11:35:00.000 | 0 | 2 | 0 | Is pyspark streaming suitable for machine learning/ scientific computing? | 46,404,243 | 0 | python,numpy,pyspark,spark-streaming | PySpark is used to run programs/code/algorithms in Spark that are written in Python.
For machine learning, Spark has the MLlib library packages.
For streaming purposes, Spark has the Spark Streaming packages.
You can explore Storm as well for real-time streaming.
Thx | 0 | 1 | 189 |
0 | 46,480,253 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-09-25T11:35:00.000 | 0 | 2 | 0 | Is pyspark streaming suitable for machine learning/ scientific computing? | 46,404,243 | 0 | python,numpy,pyspark,spark-streaming | Machine learning is the process of learning from data. First you train your model and then use it on top of the data stream.
Data can be processed in mini-batches, micro-batches or even in real time, depending on how much data is generated in a given time.
Flume and Kafka are used to fetch data in real time and store it on HDFS, or the data can be fed to Spark with Spark Streaming pointing to a Flume sink.
Thx | 0 | 1 | 189 |
0 | 53,309,575 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-09-25T13:32:00.000 | 0 | 1 | 0 | PCA for RGB image in python | 46,406,527 | 0 | python,rgb,pca | Separate the three channels, i.e. red, green and blue, and apply PCA to each. After applying PCA to each channel, join them again (a sketch follows after this question). | I'm trying to reduce the dimension of RGB images using PCA in Python. But it seems to me that all the code I found only works on a greyscale image. Is there any way to do PCA on an RGB image using a Python library like sklearn or OpenCV?
Thanks | 0 | 1 | 1,645 |
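A minimal per-channel sketch with scikit-learn, treating image rows as samples (the image size and component count are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

img = np.random.rand(64, 64, 3)                    # stand-in for an RGB image
reconstructed = np.empty_like(img)

for ch in range(3):                                # R, G, B separately
    pca = PCA(n_components=16)
    reduced = pca.fit_transform(img[:, :, ch])     # rows as samples
    reconstructed[:, :, ch] = pca.inverse_transform(reduced)
```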
0 | 46,418,511 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-09-26T05:21:00.000 | 2 | 2 | 0 | How to resize (interpolate) a tensor in Keras? | 46,418,373 | 1.2 | python,tensorflow,deep-learning,keras,keras-2 | I would use Repeat to add one element and implement the interpolation as a new Lambda layer. I don't think there's an existing layer for this in Keras (a sketch of the Lambda approach follows after this question). | I want to resize a tensor (between layers) of size say (None, 2, 7, 512) to (None, 2, 8, 512), by interpolating it (say using nearest neighbor), similar to this function tf.image.resize_nearest_neighbor available in Tensorflow.
Is there any way to do that?
I tried directly using the Tensorflow function tf.image.resize_nearest_neighbor and the pass the tensors to the next Keras layer, but with the next layer this error was thrown:
AttributeError: 'Tensor' object has no attribute '_keras_history'
I believe this is due to some attributes that are missing in Tensorflow tensors, which makes sense as the layer expects Keras tensors to be passed. | 0 | 1 | 5,492 |
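A sketch of wrapping the TF op in a Lambda layer, which also fixes the '_keras_history' error because the layer attaches the missing Keras metadata (x is an assumed upstream Keras tensor; the (2, 8) target size matches the shapes in the question):

```python
import tensorflow as tf
from keras.layers import Lambda

resize = Lambda(lambda t: tf.image.resize_nearest_neighbor(t, (2, 8)))
y = resize(x)   # x: Keras tensor of shape (None, 2, 7, 512) -> y: (None, 2, 8, 512)
```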
0 | 46,461,915 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-26T11:13:00.000 | 0 | 1 | 0 | tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq output projection | 46,424,912 | 0 | python,tensorflow,lstm,embedding,rnn | Alright! So, I have found the answer to the question.
The main source of confusion was in the dimensions [output_size x num_decoder_symbols] of the W matrix itself.
The output_size here doesn't refer to the output size that you want; it is the output_size (the same as the size of the hidden vector) of the LSTM cell. Thus the matrix multiplication u x W results in a vector of size num_decoder_symbols that can be considered the logits for the output symbols. | The official documentation for tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq has the following explanation for the output_projection argument:
output_projection: None or a pair (W, B) of output projection weights and biases; W has shape [output_size x num_decoder_symbols] and B has shape [num_decoder_symbols]; if provided and feed_previous=True, each fed previous output will first be multiplied by W and added B.
I don't understand why the B argument should have the size of [num_decoder_symbols]? Since the output is first multiplied by W and then the biases are added, Shouldn't it be [output_size]? | 0 | 1 | 141 |
0 | 57,285,250 | 0 | 1 | 0 | 0 | 1 | false | 13 | 2017-09-26T12:41:00.000 | 6 | 2 | 0 | Difference between pandas .iloc and .iat? | 46,426,875 | 1 | python,pandas,dataframe | iat and at give only a single scalar value, while iloc and loc can return multiple rows. For example, iloc[1:2, 5:8] is valid, but iat[1:2, 5:8] will throw an error (a short demo follows after this question). | I've recently noticed that a function where I iterate over DataFrame rows using .iloc is very slow. I found out that there's a faster method called .iat that's said to be equivalent to .iloc. I tried it and it cut the run time down by about 75%.
But I'm a little hesitant: why is there an "equivalent" method that's faster? There must be some difference between the inner workings of these two and a reason why they both exist and not just the faster one. I've tried looking everywhere but even the pandas documentation just states that
DataFrame.iat
Fast integer location scalar accessor.
Similarly to iloc, iat provides integer based lookups. You can also set using these indexers.
And that doesn't help.
Are there limits to using .iat? Why is it faster; is it sloppier? Or do I just switch to using .iat and happily forget .iloc ever existed? | 0 | 1 | 5,846
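A short demo of the scalar-only restriction (the DataFrame here is an illustrative stand-in):

```python
import pandas as pd

df = pd.DataFrame({'a': range(10), 'b': range(10, 20)})

print(df.iat[5, 1])       # single scalar: minimal overhead
print(df.iloc[5, 1])      # same scalar, more general machinery
print(df.iloc[5:8, 0:2])  # slices work with iloc...
# df.iat[5:8, 0:2]        # ...but raise an error with iat
```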
0 | 46,443,938 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-09-27T05:01:00.000 | -1 | 1 | 0 | Machine Learning dataset with many discrete features | 46,439,814 | -0.197375 | python,pandas,dataframe,machine-learning | It depends on the purpose of the transformation. Converting categories to numerical labels may not make sense if the ordinal representation does not correspond to the logic of the categories. In this case, the "one-hot" encoding approach you have adopted is the best way to go, if (as I surmise from your post) the intention is to use the generated variables as the input to some sort of regression model. You can achieve what you are looking to do using pandas.get_dummies. | I am working with a medical data set that contains many variables with discrete outputs. For example: type of anesthesia, infection site, Diabetes y/n. And to deal with this I have just been converting them into multiple columns with ones and zeros and then removing one to make sure there is not a direct correlation between them but I was wondering if there was a more efficient way of doing this | 0 | 1 | 89 |
0 | 46,460,039 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-09-28T01:11:00.000 | 3 | 1 | 0 | Intuitive understanding of Numpy nd-array | 46,459,441 | 1.2 | python,arrays,numpy,data-structures | ndarray vs list: both can hold a 1-dimensional collection of elements; however, in an ndarray the elements would usually all be of the same type (e.g., 64-bit floating point numbers), and numpy provides operators (and behind-the-scenes optimizations) for calculations on these vectors. For example, you can (quickly) add elements in nda1 and nda2 via nda3 = nda1 + nda2. With lists, you would need to do lst3 = [a + b for (a, b) in zip(lst1, lst2)]. On the other hand, you can easily insert and remove items in lists. ndarrays are designed for high-performance computations on vectors of numbers; lists are designed for ad hoc operations on arbitrary collections of objects.
ndarray vs dictionary: these are quite dissimilar. Dictionaries allow you to select objects from an arbitrary collection by name; ndarrays usually only hold numbers, and only allow lookup via index number (unless you get into recarrays, which you didn't ask about).
ndarray vs Pandas dataframe: dataframes are somewhat similar to multidimensional ndarrays, in that they are designed to hold similar types of data in each column. However, different columns of a dataframe would often hold different types of data, while all the elements in a multidimensional ndarray would usually be numbers of the same type. In addition, dataframes provide name-based indexing across rows and columns. I like to think of dataframes as something like a dictionary of 1-dimensional ndarrays, i.e., each column of the dataframe is like a 1-dimensional ndarray, and you can retrieve the column by name and then manipulate it. But Pandas provides additional indexing goodness, so you can also give a name to each row, and then pull elements out of the table based on both their row and column names. Pandas also provides operators for element-wise operations (e.g., adding two columns together), similar to numpy. But it generally does this by matching index terms, not row/column numbers. So data manipulations in Pandas are slower but more reliable.
ndarrays vs structured arrays: structured arrays are somewhat like the rows of a Pandas dataframe (you can have different standardized types of data in each column). But the semantics for manipulating them are more like standard numpy operations -- you have to make sure the right data is in the right spot in the array before you operate on it. (Pandas will re-sort the tables so the row-names match if needed.)
ndarray vs sequence of lists: ndarrays are initialized and displayed like sequences of lists, and if you take the nth element of a 2D array, you get a list (the row). But internally, in an ndarray, every element has the same datatype (unlike lists), and the values are packed tightly and uniformly in memory. This allows processors to quickly perform operations on all the values together. Lists are more like pointers to values stored elsewhere, and mathematical computations on lists or lists-of-lists are not optimized. Also, you can't use 2D or 3D indexing with lists-of-lists (you have to say lst[1][2][3], not lst[1, 2, 3]), nor can you easily do elementwise operations (lst1+lst2 does not do elementwise addition like nda1+nda2).
higher dimensions: you can create an ndarray with any number of dimensions. This is sort of similar to a list of lists of lists. e.g., this makes a 3D array: np.array([[[1, 2], [3, 4]], [[5, 6], [7,8]]]) | So I've read the manual - but the structure still comes confusing to me. Specifically, what is the relationship between:
nd-array and Python list?
nd-array and Python dictionary?
nd-array and Pandas DataFrame?
nd-arrays and Numpy "structured arrays"?
Also, is nd-array just like a sequence of lists?
Where does the "n-dimension" come into the picture? Because it looks just like a matrix, which is just two dimensions.
Thanks! | 0 | 1 | 184 |
0 | 46,509,800 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-01T05:47:00.000 | 0 | 1 | 0 | Tensorflow hangs when initializing large matrix with variable.Whats the best solution for handling large matrix multiplication in tensorflow? | 46,509,601 | 0 | python-3.x,matrix,tensorflow,ubuntu-16.04,tensorflow-gpu | You can divide the data set into batches and then process your model batch by batch, or you can use a TensorFlow queue. | I am trying to model a neural network using TensorFlow.
But the matrices are on the order of 800000x300000. When I initialize the variables using the global variable initializer in TensorFlow, the system freezes. How do I deal with this problem?
Would TensorFlow with GPU support be able to handle such a large matrix? | 0 | 1 | 125
0 | 51,921,509 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-01T19:56:00.000 | 0 | 1 | 0 | How to plot a cluster in python prepared using categorical data | 46,516,325 | 0 | python,machine-learning,cluster-analysis,categorical-data | Agreeing with @DIMKOIM, Multiple Correspondence Analysis is your best bet. PCA is mainly used for continuous variables. To visualize your data, you can build a scatter plot from scratch. | I have a high-dimensional dataset which is categorical in nature and I have used Kmodes to identify clusters, I want to visualize the clusters, what would be the best way to do that? PCA doesn't seem to be a recommended method for dimensionality reduction in a categorical dataset, how to visualize in such a scenario? | 0 | 1 | 1,437 |
0 | 46,517,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-01T19:56:00.000 | 0 | 1 | 0 | Stop OpenCV for seventeen seconds? python | 46,516,782 | 1.2 | python,opencv,wait,python-2.x | I was able to work around the problem by putting the code I want to run before resuming (playing the sound) in another script and executing import sound, then breaking out of the loop to stop the program. I can't figure out how to start it again, but for my purposes I can restart it manually. | I have a motion detection program in OpenCV, and I want it to play a sound when it detects motion. I use winsound. However, OpenCV still seems to be gathering frames while the sound is playing, so I want to know a way to stop all OpenCV processes for about 17 seconds. I tried time.sleep and running it with the -u tag. Neither worked. Any ideas? Thanks. | 0 | 1 | 32
0 | 46,517,906 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-10-01T21:33:00.000 | 3 | 1 | 0 | Loss layer on Keras using two input layers and numpy operations | 46,517,118 | 1.2 | python,numpy,keras,loss | No, gradients are needed to perform gradient descent, so if you only have a numerical loss, it cannot be differentiated, in contrast to a symbolic loss that is required by Keras.
Your only chance is to implement your loss using keras.backend functions, or to use another deep learning framework that might let you specify the gradient manually. You would still need to compute the gradient somehow (a minimal symbolic loss is sketched after this question).
Is it possible to convert the input and the output layers to numpy arrays, compute the loss and use it to optimize the network? | 0 | 1 | 228 |
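A minimal symbolic custom loss, built only from keras.backend operations so Keras can differentiate it (the MSE body is just a placeholder for whatever the numpy/OpenCV loss computed):

```python
import keras.backend as K

def custom_loss(y_true, y_pred):
    # every operation stays symbolic, so gradients can flow
    return K.mean(K.square(y_pred - y_true), axis=-1)

# model.compile(optimizer='adam', loss=custom_loss)  # model: an existing Keras model
```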
0 | 46,526,690 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-02T13:08:00.000 | 0 | 1 | 0 | Best strategy for storing timed data and plotting afterwards? | 46,526,112 | 0 | python,bash,datetime,plot,storing-data | Firstly, if you plan on accessing the data structure that holds the URL-(time, price) pairs by specific URL, use a dictionary, since URLs are unique (the URL will be the key in the dictionary).
Otherwise you can keep a list of (URL, (time, price)) tuples.
Secondly, use a list of (time, price) tuples, since you don't need to sort them (they will already be sorted by the order you insert them). A small sketch of option 2 follows after the question.
{} - Dictionary
[] - List
() - tuple
Option 1:
[(URL, [(time, Price)])]
Option 2:
{URL, [(time, Price)]} | I'm trying to learn Python by converting a crude bash script of mine.
It runs every 5 minutes, and basically does the the following:
Loops line-by-line through a .csv file, grabs first element ($URL),
Fetches $URL with wget and extracts $price from the page,
Adds $price to the end of the same line in the .csv file,
Continues looping for the remaining lines.
I use it to keep track of products prices on eBay and similar websites. It notifies me when a lower price is found and plots graphs with the product's price history.
It's simple enough that I could just replicate the algorithm in Python; however, as I'm trying to learn it, it seems there are several types of objects (lists, dicts, etc.) that could do the storing much more efficiently. My plan is to use pickle or even a simple DB solution (like dataset) from the beginning, instead of messing around with .csv files and extracting the data via sketchy string manipulations.
One of the improvements I would also like to make is store the absolute time of each fetch alongside its price, so I can plot a "true timed" graph, instead of assuming each cycle is 5 minutes away from each other (which it never is).
Therefore, my question sums to...
Assuming I need to work with the following data structure:
List of Products, each with its respective
URL
And its list of Time<->Prices pairs
What would be the best strategy in Python for doing so?
Should I use dictionaries, lists, sets, or maybe even create a custom class for products? | 0 | 1 | 34 |
0 | 50,614,889 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-10-03T03:44:00.000 | 0 | 1 | 0 | Tensorflow GPU doesn't work in Pycharm | 46,536,893 | 1.2 | python,ubuntu,tensorflow,pycharm,tensorflow-gpu | Actually the problem was, the python environment for the pycharm project is not the same as which is in run configurations. This issue was fixed by changing the environment in run configurations. | When i'm running my tensorflow training module in pycharm IDE in Ubuntu 16.04, it doesn't show any training with GPU and it trains usually with CPU. But When i run the same python script using terminal it runs using GPU training. I want to know how to configure GPU training in Pycharm IDE. | 0 | 1 | 691 |
0 | 46,578,602 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-03T11:42:00.000 | -1 | 2 | 0 | Equal-width and equal-depth binning, using scipy | 46,543,779 | -0.099668 | python,scipy,data-mining,binning | Don't expect everything to require a library.
Both bombings can be implemented in 1 or 2 lines of Python code if you think about them for a minute. It probably takes you longer to find/install/study a library than to just write this code yourself. | I have wound several examples of equal-mean binning, using scipy, but I wondering if it is possible to use library for equal-width or -depth binning.
Actually, I'm fine using other libraries, not only scipy | 0 | 1 | 1,881 |
0 | 46,546,554 | 0 | 0 | 0 | 1 | 1 | true | 1 | 2017-10-03T13:57:00.000 | 0 | 1 | 0 | How to Skip Columns of CSV file | 46,546,388 | 1.2 | python,csv,google-api,google-bigquery,google-python-api | You can use the pandas library for that.
import pandas as pd
data = pd.read_csv('input_data.csv')
useful_columns = ['col1', 'col2']  # list the columns you need (placeholder names)
data[useful_columns].to_csv('result_data.csv', index=False)  # index=False prevents writing an extra index column
| I am trying to upload data from certain fields in a CSV file to an already existing table.
From my understanding, the way to do this is to create a new table and then append the relevant columns of the newly created table to the corresponding columns of the main table.
How exactly do I append certain columns of data from one table to another?
As in, what specific commands?
I am using the BigQuery API and the Python client library. | 0 | 1 | 4,039 |
0 | 46,559,545 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-04T06:55:00.000 | 0 | 2 | 0 | numpy - create polynomial by its roots | 46,558,735 | 0 | python,numpy | For that purpose you will need to implement the multiplication of polynomials; that is, you need to make sure your code is able to generate the product of
(am * x^m + ... + a0) * (bn * x^n + ... + b0)
If your code is able to do this, then, knowing the roots of
r1, ..., rk
you can write the polynomial as
(x - r1) * ... * (x - rk)
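A hedged numpy sketch of exactly this product (the roots list is hypothetical):
import numpy as np
roots = [1, 2]  # (x - 1)(x - 2) = x^2 - 3x + 2
p = np.poly1d(roots, r=True)  # poly1d can take roots directly instead of coefficients
# or build it manually by repeated multiplication of the (x - r) factors:
q = np.poly1d([1.0])
for r in roots:
    q = np.polymul(q, np.poly1d([1, -r]))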
and you need to repeatedly calculate the product, which is exactly what the loop in the sketch above does. | I'm trying to create a numpy.polynomial from the roots of the polynomial.
I could only find a way to do that from the polynomial's coefficients (the a's).
The way it works now, for the polynomial x^2 - 3x + 2, I can create it like this:
poly1d([1, -3, 2])
I want to create it from its roots, which are 1 and 2 | 0 | 1 | 852 |
0 | 46,573,367 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-04T16:37:00.000 | 3 | 1 | 0 | Does every chosen seed for random.seed() guarantee that random will generate a uniformly distributed sequence? | 46,569,943 | 1.2 | python,random,seed | Let's say I would like to generate n > 10 ^ 20 numbers
Let's say not. If you could generate a billion values per second, that would require 1E20 values / 1E9 values per second / 3600 seconds per hour / 24 hours per day / 365.25 days per year, which is more than 3000 years. Even if you have hardware and energy sources that reliable, you won't be there to see the outcome.
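A quick sanity check of that arithmetic in Python:
>>> 1e20 / 1e9 / 3600 / 24 / 365.25   # roughly 3168.8 years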
using random.seed(SEED) and n subsequent calls to random.random()
The results would be statistically indistinguishable from uniform because the underlying algorithm, Mersenne Twister, is designed to produce that behavior. | Let's say I would like to generate n > 10 ^ 20 numbers using random.seed(SEED) and n subsequent calls to random.random(). Is the generated sequence guaranteed to be uniformly distributed regardless of the chosen value of SEED? | 0 | 1 | 124 |
0 | 46,669,389 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-10-04T23:52:00.000 | 0 | 1 | 0 | Automating excel reporting and graphs - Python xlsxWriter/xlswings or Ruby axlsx/win32ole | 46,575,847 | 0 | python,ruby,excel,xlsxwriter,axlsx | If you already have VBA that works for your project, then translating it to Ruby + WIN32OLE is probably your quickest path to working code. Anything you can do in VBA is doable in Ruby (if you find something you can't do, post here to ask for help).
I prefer working with Excel via OLE since I know the file produced by Excel will work anywhere I open it. I haven't used axlsx but I'm sure it's a fine project; I just wouldn't trust that it would produce working Excel files every time. | I want to create a program that automates Excel reporting, including various graphs in colour. The program needs to be able to read an Excel dataset. Based on this dataset, it then has to create the report pages and graphs and export them to an Excel file as well as a PDF file.
I have done some research and it seems this is possible using Python with pandas plus xlsxwriter or xlwings, as well as the Ruby gems axlsx or win32ole.
Which is the more user-friendly and easier-to-learn alternative? What are the advantages and disadvantages? Are there other options I should consider (I would like to avoid VBA, as this is how the reports are currently produced)?
Any responses and comments are appreciated. Thank you! | 0 | 1 | 388 |
0 | 48,451,660 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2017-10-05T06:34:00.000 | 5 | 1 | 0 | Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow | 46,579,249 | 1.2 | python-3.x,tensorflow | You need a 64-bit version of Python: the path in your command points at a 32-bit install (Python36-32), and TensorFlow only ships 64-bit Windows wheels. | I am using Python 3.6 on my PC (Windows 10).
I wanted to install the TensorFlow package (using pip).
So I opened cmd and typed the following, as specified on the TensorFlow website.
I want to install the CPU package, not the GPU package:
C:\Users\rahul>C:\Windows.old\Users\rahul\AppData\Local\Programs\Python\Python36-32\Scripts\pip3.exe install --upgrade tensorflow
But I get this error:
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow,
How do I overcome this? | 0 | 1 | 7,333 |